Consciousness is complicated. I've tried to write this essay numerous times without success. I think the struggle stems from the fact that there is little agreement on what consciousness means.
One theory Aaronson references is Integrated Information Theory (IIT), which defines a scale, similar in function to temperature or entropy, that measures the degree of consciousness of a system. Aaronson convincingly shoots down this idea by constructing simple mathematical operators that rank highly on the scale yet are intuitively as non-conscious as the sine function.
However, Aaronson argues that the test of whether a given theory of consciousness is correct is whether it satisfies our intuitive feeling of which systems are conscious and which are not. This fallacy demonstrates precisely what is wrong with scientific approaches to consciousness. A scientific theory of consciousness should conform to physical measurement, not to intuition. And an excellent benchmark is that such a theory should let us predict new properties of the world.
To even entertain scientific questions about consciousness, we need some physical phenomenon that we could find and measure. No one (to my knowledge) supposes the existence of a fundamental particle that imbues an object with consciousness. The physical phenomena that capture our intuitive definition of consciousness are attention and (somewhat) rational behavior.
In line with that thinking, my definition of consciousness is actually extremely simple: a rational agent that reasons about, among other things, itself. If we want to complicate the definition, we can add the requirement of an attention system, as Graziano does. But that addition doesn't change the fundamental conclusion that consciousness is simply a semantic classification - there's no magic going on.
The deeper question I think most people are asking when they ponder consciousness is the nature of feeling. (Dan Dennett would call it qualia.) But I pose the following question: how do you know that I feel anything? You believe I am an agent programmed to sleep every night, eat a few times a day, be productive, form relationships, and so forth. But nothing there says I feel anything. You only believe I am conscious by analogy with yourself. There is no physical phenomenon to measure.
Therefore, feeling is not a scientific phenomenon but a religious one. You can go about your life thinking that nobody feels anything - that there are only quasi-rational agents optimizing for certain objectives - and you'll behave just as well as if you thought everybody 'felt' hungry, 'felt' lonely, and so forth. (Personally, I find the latter easier to comprehend, so I continue to use that model. But a robot might not - it just depends on its programming.)
What does this conclusion mean? To me, it means that it's quixotic to try to explain feeling scientifically - there's nothing physical to explain. It also guides my thinking on the issues raised in, e.g., The Bicentennial Man: when should we grant personhood to robots or computer systems, pay them wages, and so forth? Effectively, we should do so to the extent that they are rational actors with a notion of self, and especially of self-preservation. Every rational being, hypothetical robots included, deserves our empathy.