A recent statement from Google announcing an ethics board "to guide the responsible development of AI" seems a welcome move, but it also raises doubts about whether Google, or any of the big technology companies (Big Tech) in general, can handle ethical issues in a truly honest and open way.
In fact, ethics boards and charters do not change the way companies operate
Following recent debates on AI ethics, technology companies have moved to embrace ethical self-scrutiny. They have formed ethics committees, written charters, and sponsored research on topics such as algorithmic bias. But are those committees and charters really helpful? Do they change the way companies operate, or make companies more accountable in any way?
Academic Ben Wagner, who takes a relatively strict view of this topic, argues that technology companies' enthusiasm is just a strategy to avoid government regulation. It makes them appear to be "doing something", but it is merely a cover, and nothing actually changes. "Most of the current moral principles have no framework or institution; they are not binding," said Wagner. Consider Twitter CEO Jack Dorsey and his repeated assurances that he has been thinking seriously about the platform's problems with abuse, harassment, and neo-Nazis; but thinking is only thinking, and the content on the social media site seems to remain the same.
The unclear role of ethics boards is the weakness that makes Big Tech's efforts on AI problems less effective
Google is not the only company with an ethics board and a charter. Microsoft has its own AI principles and established an AI ethics committee in 2018. Amazon has begun funding research on fairness in artificial intelligence with the help of the National Science Foundation, while Facebook has even co-founded an AI ethics research center in Germany. However, they all share the same weakness: a lack of transparency about how the ethics board operates (what it can change, recommend, or monitor) and which interest groups are involved. When a board intervenes on an ethical question, no one outside knows how its recommendations were made, or why they were accepted or vetoed; only the company knows the reasoning behind those decisions.
A 2018 study even tested whether codes of conduct affect developers' ethical decisions. The researchers presented two groups of developers with a variety of hypothetical issues they might encounter at work. Before answering, one group was asked to consider the code of ethics issued by the Association for Computing Machinery, while the other group was only told that the fictional company they worked for had strong moral principles. The study found that priming test subjects with the code of ethics "had no observed effect" on their answers.
This does not mean that AI moral standards should be removed entirely - we need more aggressive engagement from the authorities
The efforts of the technology giants on AI ethics problems are undeniable. However, given the difficulty of putting principles into practice, we need to stay keenly aware of the potential flaws of technology in this new era. With its strong belief in democracy and in its own ability to self-correct, Silicon Valley sometimes loses faith in government intervention. But the problem here is not technology; it is democracy and governance. Governments need to step in and enforce stricter regulations, which is the only way to ensure real supervision and accountability of technology companies.