According to the article, the paper argues that if humans are able to create conscious robots in the future then there may be push for those robots to receive human rights. I guess they couldn't be accurately called human rights if robots had them, could they?
The Horizon Scan report argues that if "correctly managed", this new world of robots' rights could lead to increased labour output and greater prosperity.

Welfare for robots?
"If granted full rights, states will be obligated to provide full social benefits to them including income support, housing and possibly robo-healthcare to fix the machines over time," it says.
The argument makes sense if you think that consciousness and/or other instrumental properties are what should give us our rights. If those instrumental properties are what make humans valuable and worthy of rights, then it seems to follow that other entities (animals, robots, etc.) with the same level of those properties should be just as valuable and just as worthy of the rights humans have.
I wonder what Peter Singer (who calls people who think humans are special "speciesists") would call someone who doesn't think conscious robots should have rights. An organismist, maybe?