AI researchers often say that good machine learning is really more of an art than a science. The same can be said for effective public relations. Choosing the right words to strike a positive tone or reframe the conversation about AI is a delicate task: done well, it can strengthen a company's brand image, but done poorly, it can trigger an even greater backlash.
The tech giants know this. Over the past few years, they have had to learn this art quickly as they have faced growing public distrust of their actions and intensifying criticism of their AI research and technologies.
Now they have developed a new vocabulary to use when they want to assure the public that they care deeply about developing AI responsibly, while making sure they don't invite too much scrutiny. Here is an insider's guide to decoding their language and challenging the assumptions and values baked into it.
accountability (n) – The act of holding someone else responsible for the consequences when your AI system fails.
accuracy (n) – Technical correctness. The most important measure of success in evaluating an AI model's performance. See validation.
adversary (n) – A lone engineer capable of disrupting your powerful revenue-generating AI system. See robustness, security.
alignment (n) – The design challenge of building AI systems that do what we tell them to and value what we value. Purposely abstract. Avoid using real examples of harmful unintended consequences. See safety.
artificial general intelligence (ph) – A hypothetical AI god. Probably far off in the future, but also maybe imminent. Can be really good or really bad, whichever is more rhetorically useful. Obviously you're building the good one. Which is expensive. Therefore, you need more money. See long-term risks.
audit (n) – A review of your company or AI system that you pay someone else to conduct so that you appear more transparent without needing to change anything. See impact assessment.
augment (v) – To increase the productivity of white-collar workers. Side effect: automating away blue-collar jobs. Sad but inevitable.
beneficial (adj) – A blanket descriptor for whatever you are trying to build. Conveniently ill-defined. See value.
by design (ph) – As in "fairness by design" or "accountability by design." A phrase to signal that you are thinking hard about important things from the very beginning.
compliance (n) – The act of following the law. Anything that isn't illegal goes.
data labelers (ph) – The people who allegedly exist behind Amazon's Mechanical Turk interface to do data-cleaning work for cheap. Unsure who they are. Have never met them.
democratize (v) – To scale a technology at all costs. A justification for concentrating resources. See scale.
diversity, equity, and inclusion (ph) – The act of hiring engineers and researchers from marginalized groups so you can parade them around to the public. If they challenge the status quo, fire them.
efficiency (n) – The use of less data, memory, staff, or energy to build an AI system.
ethics board (ph) – A group of advisors without real power, convened to create the appearance that your company is actively listening. Examples: Google's AI ethics board (canceled), Facebook's Oversight Board (still standing).
ethics principles (ph) – A set of truisms used to signal your good intentions. Keep them high-level. The vaguer the language, the better. See responsible AI.
explainable (adj) – For describing an AI system that you, the developer, and the user can all understand. Much harder to achieve for the people it is actually used on. Probably not worth the effort. See interpretable.
fairness (n) – A complicated notion of impartiality used to describe unbiased algorithms. Can be defined in dozens of ways based on your preference.
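That "dozens of ways" is not an exaggeration: common fairness definitions can genuinely contradict one another on the same decisions. A minimal sketch, using hypothetical toy data invented here for illustration, in which a set of approvals satisfies demographic parity (equal approval rates across groups) while violating equal opportunity (equal approval rates among the qualified):

```python
# Hypothetical records: (group, qualified, approved). Invented for illustration.
decisions = [
    ("A", True, True), ("A", True, True), ("A", False, False), ("A", False, False),
    ("B", True, True), ("B", True, False), ("B", False, True), ("B", False, False),
]

def approval_rate(group):
    """Demographic parity compares this rate across groups."""
    rows = [d for d in decisions if d[0] == group]
    return sum(d[2] for d in rows) / len(rows)

def true_positive_rate(group):
    """Equal opportunity compares approval rates among the qualified only."""
    rows = [d for d in decisions if d[0] == group and d[1]]
    return sum(d[2] for d in rows) / len(rows)

# Both groups are approved at the same overall rate (0.5 vs 0.5)...
print(approval_rate("A"), approval_rate("B"))
# ...yet qualified members of group B are approved half as often (1.0 vs 0.5).
print(true_positive_rate("A"), true_positive_rate("B"))
```

Which metric you report is, conveniently, up to you.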
for good (ph) – As in "AI for good" or "data for good." An initiative completely tangential to your core business that helps you generate good publicity.
foresight (n) – The ability to peer into the future. Basically impossible: thus, a perfectly reasonable explanation for why you can't rid your AI system of unintended consequences.
framework (n) – A set of guidelines for making decisions. A good way to appear thoughtful and measured while delaying actual decision-making.
generalizable (adj) – The sign of a good AI model: one that continues to work under changing conditions. See real world.
governance (n) – Bureaucracy.
human-centered design (ph) – A process that uses "personas" to imagine what an average user might want from your AI system. May involve soliciting feedback from actual users. Only if there's time. See stakeholders.
human in the loop (ph) – Any person who is part of an AI system. Responsibilities range from faking the system's capabilities to warding off accusations of automation.
impact assessment (ph) – A review that you conduct yourself of your company or AI system to show your willingness to consider its downsides without changing anything. See audit.
interpretable (adj) – For describing an AI system whose computation you, the developer, can follow step by step to understand how it arrived at its answer. Actually probably just linear regression. "AI" sounds better.
integrity (n) – Issues that undermine the technical performance of your model or your company's ability to scale. Not to be confused with issues that are bad for society. Not to be confused with honesty.
interdisciplinary (adj) – A term used for any team or project involving people who do not code: user researchers, product managers, moral philosophers. Especially moral philosophers.
long-term risks (n) – Bad things that could have catastrophic effects in the far-off future. Probably will never happen, but more important to study and avoid than the immediate harms of existing AI systems.
partners (n) – Other elite groups who share your worldview and can work with you to maintain the status quo. See stakeholders.
privacy trade-off (ph) – The noble sacrifice of individual control over personal information for group benefits, like AI-driven health-care advancements, which also happen to be highly profitable.
progress (n) – Scientific and technological advancement. An inherent good.
real world (ph) – The opposite of the simulated world. A dynamic physical environment filled with unexpected surprises that AI models are trained to survive. Not to be confused with humans and society.
regulation (n) – What you call for in order to shift the responsibility for mitigating harmful AI onto policymakers. Not to be confused with policies that would actually impede your growth.
responsible AI (n) – A moniker for any work at your company that could be construed by the public as a sincere effort to mitigate the harms of your AI systems.
robustness (n) – The ability of an AI model to function consistently and accurately even when malicious actors feed it corrupted data.
safety (n) – The challenge of building AI systems that don't go rogue from their designers' intentions. Not to be confused with building AI systems that don't fail. See alignment.
scale (n) – The de facto end state that any good AI system should strive to achieve.
security (n) – The act of protecting valuable or sensitive data and AI models from being breached by bad actors. See adversary.
stakeholders (n) – Shareholders, regulators, users. The people in power you want to keep happy.
transparency (n) – Revealing your data and code. Bad for proprietary and sensitive information. Thus really hard; quite frankly, even impossible. Not to be confused with clear communication about how your system actually works.
trustworthy (adj) – An assessment of an AI system that can be manufactured with enough coordinated publicity.
universal basic income (ph) – The idea that paying everyone a fixed salary will solve the massive economic upheaval caused when automation leads to widespread job loss. Popularized by 2020 presidential candidate Andrew Yang. See wealth redistribution.
validation (n) – The process of testing an AI model on data other than the data it was trained on, to check that it is still accurate.
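The process itself is mundane. A minimal sketch, with invented toy data and a plain least-squares fit standing in for "the model": the fit uses only the training slice, and the held-out slice supplies the advertised accuracy number.

```python
import random

# Toy dataset of (x, y) pairs lying exactly on y = 2x + 1. Invented for illustration.
random.seed(0)
data = [(float(x), 2.0 * x + 1.0) for x in range(100)]
random.shuffle(data)
train, val = data[:80], data[80:]  # the model never sees `val` during fitting

# Fit y = a*x + b by ordinary least squares on the training slice only.
n = len(train)
mx = sum(x for x, _ in train) / n
my = sum(y for _, y in train) / n
a = sum((x - mx) * (y - my) for x, y in train) / sum((x - mx) ** 2 for x, _ in train)
b = my - a * mx

# Report mean squared error on the held-out points, not the training points.
mse = sum((a * x + b - y) ** 2 for x, y in val) / len(val)
print(a, b, mse)
```

Whether the held-out data resembles the real world, of course, is a separate question. See real world.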
value (n) – An intangible benefit rendered to your users that makes you a lot of money.
values (n) – You have them. Remind people.
wealth redistribution (ph) – A useful idea to dangle around when people scrutinize you for using too many resources and making too much money. How would wealth redistribution work? Universal basic income, of course. Also not something you could figure out yourself. Would require regulation. See regulation.
withhold publication (ph) – The benevolent act of choosing not to open-source your code because it could fall into the hands of a bad actor. Better to limit access to partners who can afford it.