—— In 2019, the AI platform Megvii/Face++ launched a ‘beauty score’ function, and Meitu AI launched a ‘Selfie Beautification’ function. These functions can analyse a face and assign it a score out of 100 based on criteria such as liveliness, attractiveness and symmetry. ——

Face beautification filters and artificial intelligence (AI) algorithms are widely used in plastic surgery apps and in situations where our appearance can be scored and quantified. In a virtual digital space characterized by diversity, dispersion, and contingency, who decides what ‘we’ consider beautiful? Ideology, our democratic society, artificial intelligence? How do algorithms and synthetic organisms look at faces? How do virtual and traditional capital penetrate and consume our daily masked but networked existence? How do organic desires emerge and disappear? What roles do sexism, racism, and ableism play in this context? How should we understand the relationship among the various AI platforms, social media and the facelift industry?

The AI ethics of beautification is no longer merely a sign but a reality that shapes our bodies, faces, scars and desires. As the digital self and virtual reality intervene in the physical world, what measures can be taken to use current and future technologies autonomously and to loosen the restrictions that AI algorithms impose on women’s rights?

Beauty-alpha is a phased research artwork. The diagram explores the relationships among AI platforms, social media and the facelift industry from several angles: racial bias and gender oppression; the aesthetic uniformity and diminished self-esteem of teenagers caused by social media filtering; and the large-scale application of AI algorithms and virtual face-enhancement technologies in the facelift industry. It also examines how the underlying constructs of these systems invisibly shape public aesthetics and generate profit.


Download [beauty-alpha.pdf, 6.93 MB]

Exhibition photo, Phantoms and Other Illusions, KAI10 | Arthena Foundation, Düsseldorf
Photo: Achim Kukulies, Düsseldorf
Exhibition photo, Phantoms and Other Illusions, KAI10 | Arthena Foundation, Düsseldorf
Photo: Achim Kukulies, Düsseldorf

Video unfinished

In order to cope with or express confluences, every individual, every community, forms its own échos-monde, imagined from power or vainglory, from suffering or impatience. Each individual makes this sort of music and each community as well. As does the totality composed of individuals and communities.


— Édouard Glissant



Traceability and migration imply an identity as “the other”. Adapting to two or more different social, cultural, geographical and climatic environments, we are like plankton engulfed by waves and washed ashore, forced to live like migrating, grafted plants.

Immigration or relocation is wrapped in the most immediate changes of geographical environment and geopolitics, and in every internal and external incompatibility. The huge differences in culture; the flow from difference, through splitting and identification, to reintegration. Both our body and spirit are restructured and adapted, and each trace left behind leads us to a new context.


A monthly chart of local pollen dispersal in Germany. Hay fever is a symptom that worsens year by year for many emigrants after they move overseas.



Both indirect and direct contact imply the variability and openness of relationships. Technology has already changed the way we communicate: using the Internet, smartphones and social media; sending text messages; using WeChat; making overseas calls; learning languages; finding communities of the same ethnic group or local people through social media; constantly following Chinese news in the hope of keeping up with what is popular in China.

No matter how the physical distance changes, the tendency to find and build a common language community will not change.

Internet, social media, technology, social system, language, money, same ethnicity, local people, etc.

A Chinese mobile phone card with an unpaid balance.




The Internet allows us to think in fluid time and space, including perceiving the relationship between all “objects” and the online world. As ever larger and more diverse groups of human beings roam the planet far from their traditional homelands, many of them turn to online connections. These forms of communication run through body and data, borders and networks, online and offline, users and platforms, through symbolic and emotional practice, constantly changing, breaking and rebuilding relationships. An independent Internet diaspora may be able to realize emotional belonging through virtual/real communities, even when the physical distance (remoteness) remains variable.

But despite the mantra of participatory culture, neocolonial patterns are present in the way digital communication is structured online: the exploitation of cheap digital labour, surveillance systems that monitor and control “others” (immigrants, refugees, foreigners), racially biased biometrics, and databases collected for the benefit of capitalism have all led to a digital neocolonialism. The digital revolution does not eliminate power relations in the name of democratizing information and access to technological advancement; rather, it impacts the most disenfranchised and vulnerable communities in unequal ways. (See Madianou 2019)

How do we build a cross-contextual virtual community of relations that allows us to break away from the narrow form of “monoculture” without discarding our original cultural characteristics? Echo chambers accelerate the polarization of ideological cognition, and the isolation of language will only ensure that the overlapping of cultures never comes true.

Traceability, diaspora, separation, dialect, mobility, interconnection, assimilation, isolation, etc.

Food photos and the menu of a coffee shop 18,464.45 kilometers away from me, recommended by Microsoft Bing Maps.

The theorist of the poetics of relation, Édouard Glissant, drew inspiration from the new variants of creolization in the Caribbean and proposed an extensible, coherent, dispersive and open poetics: a theory of the archipelago, in which all of us can remain interconnected while preserving our own distinct cultures, without losing the sense of origin. It constructs a place with no fixed origin or root as a way of conceiving multiple coexisting histories and geographies. An attractive community is open, mutable and capable of fusion.

Because of its dispersion and compartmentalization, Internet 2.0 turns every individual into a “point-like body” that seems to interact with others but is in fact powerless and lonely. Anyone can publish an opinion and at the same time be drowned out by everyone else’s, while people’s attention stays fixed on the noisy, the stimulating, the vacuous, even the patently absurd. The deportation of migrants, drawn-out appeals, political refugees, work permits that cannot be approved, legal disputes over family reunification: these are the issues that distracted netizens care about least. There is nothing wrong with this; it is the inevitable outcome of the Internet’s pursuit of maximal speed and efficiency in transmitting messages, in which the explosion of news and information sinks into a vast system and forms an information entropy that cannot be dissolved.




Reference

Édouard Glissant, Poetics of Relation [Poétique de la relation], translated by Betsy Wing, 2010, pp. 93–94.

Poetics of Relation (excerpt), Édouard Glissant, Chinese translation by 林書媺, December 2019, p. 390.

Background: Chocho studio reshaping the face.
Built around two cases and three videos, the work attempts to reflect how information technology, smartphones and social media dominate and control reality.

Chocho virtual clinic is a subsidiary of Chocho studio. Its main customer groups are: Internet residents, virtual faces, people hoping to change their digitized faces, beauty-filter addicts, and the virtual modifier, the doctor.
Chocho’s face is analysed in the first video chapter, “beauty score”: in order to get a higher beauty score, Chocho reshapes her virtual face after consulting the doctor. In the second chapter, “ForgivePo”, her body is scanned and examined as a subject. The viewer is both a doctor and a web surfer. In this virtual space, Chocho interacts with the information environment, and the subject intervenes in the real world through virtualization, perception, thought and behaviour, giving it new explanations and transformations. This transition is an Internet user’s attempt to regain control of the body after her virtual self-image has spun out of control: a shift from being viewed to being a viewer.

The artist’s concern is that social media will rationalize beauty filters even when the algorithm is biased. The habitual repetition of smartphone use blurs the line between consciousness and information, leading users to gradually ignore the harm done to them by invalid information and the loss of privacy. In the digital world, women are losing control and power over their self-portraits.

In the mid-20th century, early cybernetics theorists often regarded the cyborg as a vast form linking machine and body: the consciousness and information connections of the body-modifying cyborg were mostly material, and control over the bodies of Homo sapiens, animals, cyborgs and robots was treated differently on the basis of humanist theories in which humans are clearly distinct from such “non-human” things. In the late 20th century, the postmodern feminist scholar Donna Haraway published her famous A Cyborg Manifesto: Science, Technology, and Socialist-Feminism in the Late 20th Century. In the eyes of cyborg feminists, technology is sexy for women; women and technology can together construct a picture of liberation, and so the boundaries between machine and body, consciousness and information bodies, began to blur.

After that, building on Haraway’s theory, N. Katherine Hayles pointed out that the intersection of technology and narrative is crucial (1): the cyborg, part machine and part human, is at once entity and representation, living organism and narrative structure, thereby emphasizing the concept of the posthuman. One of her important views is that human consciousness is an emergent, almost accidental phenomenon: the informational pattern matters more than its material instantiation, and the posthuman regards the human body as an original prosthesis that one learns to control. On the basis of these views and other methods, the posthuman redefines and reconfigures the structure of the body, enabling it to join seamlessly with intelligent machines. For the posthuman, there is no essential difference and no absolute dividing line between cybernetic mechanism and organism, between robotic teleology and human goals. (2)

However, the facts show that since the birth of digital media and the Internet, the many kinds of social media that have developed into the mainstream means of social communication have not produced this liberated and inclusive picture: women are gradually losing their sense of identification with their self-image and body. In The Birth of the Clinic, Foucault proposes that the doctor’s gaze at the patient is a display of power, so “the viewer’s gaze is the dominator’s gaze” (3). Correspondingly, viewers on social media expand their power without limit through the computer screen or smartphone, where functions such as browsing and commenting correspond to that gaze.

But the difference is that every social media user becomes subject and object of this gaze at the same time. A 2019 Instagram data-analysis report examined more than 3 million Instagram posts and counted the number of online celebrities. According to the results, 84% of the online celebrities posting on Instagram were female, yet the majority of their followers were male; and female online celebrities were mostly called “influencers”, while male ones were mostly called “creators”.

Wendy Hui Kyong Chun, Professor of Modern Culture and Media at Brown University and a digital media scholar and critic, proposes in her book Updating to Remain the Same: Habitual New Media (MIT Press, 2016) that smartphones and new media have become important factors shaping behavioral habits: we grow used to the harm these new technologies do to privacy and identity, and even problems such as racism and sexism become commonplace. These technologies inspire hope as well as anxiety.

Furthermore, from the perspective of embodied cognition, our behavioral habits, living environment and bodily perception influence mind and consciousness. Many users say that the number of likes and the comments they receive directly influence their behavior and emotions when they post selfies on Instagram or WeChat Moments. When the body is linked to the virtual environment through social media, people feel passive and anxious, and begin to doubt their self-image and identity.

As for the role and identity of the artist, the emerging network art, or new media art, has changed the original art ecosystem once again. In a postmodern society where almost everyone holds a smartphone, posthuman aesthetics is revolutionized once more by technological advancement, and the way art presents itself changes: digital art makes the public the subject of aesthetics and further diversifies modes of display; because it can be copied, digital art becomes uncountable; and transmission is no longer linear but interactive. Before the digital era, the places where artworks could be viewed were particular; with the birth of the mobile phone screen, viewing is no longer limited to galleries, art museums or other art institutions, and the sense of distance disappears. Should we even continue to use the concept of the “artist”? Once the creator can appear in virtual form, humans, digital humans and AI characters can all be makers of art. And after new art became connected with politics, the relations between new art and technology, and between technology and humans, have also entered the discussion of contemporary art.

Scientists and network engineers are always devoted to developing the most cutting-edge technologies and hope for new breakthroughs, while big technology companies keep promising us: we will be dedicated to developing programs and artificial intelligence free of prejudice and discrimination. Meanwhile, the users who pay for it fall into habituation to and dependence on intelligent, digital life while enjoying the convenience and novelty these technologies bring. Moreover, a growing number of artists and art museums are trapped in the anxiety and pressure of “daily updates” on social media. Nonetheless, whether digital media and the Internet open new horizons or bring unavoidable anxiety, every resident of cyberspace has their own answer.

Chocho studio reshaping the face – Video screenshot





In 2019, the AI platform Face++ launched its beauty score function: after a user uploads a selfie, an AI algorithm computes the face in the photo and returns a score, also known as a “face value” rating. Plastic surgery apps such as So Young (新氧) are known to use its underlying algorithms and databases. So Young mainly offers consultations on minor cosmetic procedures; in China it has been downloaded by more than three million Android users alone. Surveys of teenagers who use Instagram include cases of users who want to come infinitely close to their virtual social-media selves: in order to make their real faces look like their filtered ones, they choose minor cosmetic surgery. In China, some Internet residents likewise choose cosmetic surgery in order to become influencers and gain more likes and attention.

In 2019, a programmer posting on Weibo as @将记忆深埋 announced the completion of an AI-driven algorithm and released an app called ForgivePo (原谅宝). The algorithm uses facial recognition to identify women in photographs, to judge whether a given woman works in pornography, and to cross-reference her with her presence on social media. The project’s database was scraped from porn sites with many active users (including PornHub and China’s largest adult site, Caoliu) and from social media platforms (including Facebook, TikTok and Weibo), some 100 TB of data in all. The legality of the video sources on these sites cannot be confirmed.

Using 3D scanning, the artist merged her own image with a virtual mechanical body and placed the resulting human-machine hybrid in an informational three-dimensional video space as an avatar named Chocho.

The artist raises three questions. First, is the beautification of faces by social media filters legitimate, and are the algorithms behind them biased? Second, regarding technology’s influence on the body: does the habitual repetition of smartphone use blur the boundary between consciousness and information bodies, producing a link between machine and body that transcends the physical? Third, is women’s control over their self-image in the digital media world gradually being weakened?







(1) N. Katherine Hayles, How We Became Posthuman (Chinese edition: 后人类时代, 时代出版), Chapter 5, “From Hyphen to Splice: Cybernetic Syntax in Limbo”, p. 5.

(2) N. Katherine Hayles, How We Became Posthuman (Chinese edition: 后人类时代, 时代出版), Chapter 1, “Toward Embodied Virtuality”, p. 7.

(3) Michel Foucault, The Birth of the Clinic (Chinese edition, 译林出版社), Chapter 4, “The Old Age of the Clinic”, p. 59.



Chocho studio reshaping the face

Chapter one: Beauty Score
Chapter two: ForgivePo
Chapter three: So Young

Machine learning is expected to help humans evolve, even in the field of plastic surgery. However, plastic surgeons must be aware that artificial intelligence (AI) can create a biased view of patients instead of promoting objectivity.

More and more young people use algorithms to determine whether their face is attractive and AI to score their beauty. But what they really want is the perfect body and the perfect face without the help of social media filters: to take the perfect selfie from every angle, instantly and without software. Their reason for plastic surgery is no longer to look like a star but to come ever closer to their idealized, perfect self, to become as beautiful in real life as an Instagram filter.

Chocho studio uses 3D scanning technologies to reshape the face, making it possible for surgeons and girls to collaborate on creating the perfectly instagrammable face, and so to obtain a higher beauty score and more attention.

Read more 
Exhibition photos in the Kasseler Kunstverein, MINUS GLEICH PLUS

Exhibition photos in the 2021 Chengdu Biennale: SUPER FUSION

The following research is based on the process and content of my graduation project, “Nicely Nicely all the time!”. The objective is to explain the 3D modeling, 3D photogrammetry and face tracking I applied over the course of the project. Beyond this, I intend to discuss in depth the biases and discrepancies in the use of algorithms and data that appeared while utilizing and testing the programs, as well as the impact of discrimination and prejudice on real-life situations and data.

Later in my research I stumbled across Safiya Umoja Noble’s book Algorithms of Oppression. In it, she discusses how “the near-ubiquitous use of algorithmically driven software, both visible and invisible to everyday people, demands a closer inspection of what values are prioritized in such automated decision-making systems,” and she demands “that the misinformation and mischaracterization should stop.”

I wonder if engineers would stop building biased technologies if they realized that the languages and algorithms within their programs should be equally impartial to all of us.





TESTING FACE-TRACKING SOFTWARE FACESHIFT
Faceshift software: facial expression scanning

Bias I

When using FaceShift, the first step is to generate an intermediate model by scanning the role player’s face; after this, the facial expressions are matched. After scanning my face, I found that the eyes of the primary model were generally bigger than those of the real-life model, which meant I had to manually reduce the size and depth of the eyes. Had I not done so, the tracking system would have identified my eyes as barely closed: when I opened my eyes, the role player’s eyes stored in the program remained only partially open. The images on the right show the process of adjusting the eye size of my modelled face; the left eye was captured after adjustment.




Online demo : [visited May 20, 2018]

Bias II

As illustrated here, in this experiment I used a photograph of myself (Asian) seen from the side; the face model produced through 3D face reconstruction is not Asian. At this point, we could say that the computational model is based on a Caucasian outline and bone structure: in general, Asians do not have such a high brow ridge and nose bridge.






Bias III

In the process of adjusting the model, MakeHuman fails to represent personalized images in good detail even though it is capable of producing 3D avatars of different races. For instance, when creating an Asian avatar, I found that the software could adjust the size of the eyes and the depth of the eye sockets only within a limited scope, and the adjustable range of the distance between the eyes and the eyebrows was too limited. Generally speaking, Asian eyes are not as deep-set as Caucasian ones, which is why the distance between their eyebrows and eyes appears wider. MakeHuman, however, fails to account for this. As a result, adjusting the position and height of the eyebrows leaves much to be desired: the space between eyes and eyebrows remains quite narrow even when adjusted to the maximum extent. Accordingly, when creating an Asian figure, one easily ends up with a strange face: Asian facial features transplanted onto a Caucasian face.

blindsaypatten, Difficult to model Asian features. makehumancommunity [visited May 20, 2018]

I am not the only one who has noticed this issue; the following is an excerpt of what another user, blindsaypatten, published on the MakeHuman forum.







Bias IV

At this point, I applied the bones of the Meta-Rig to match the model. As the bones the Meta-Rig provides are Caucasian, I had to adjust the bone structure, such as the nose and the brow ridge, to match the bones to the (Asian) model.
In a sense, since the skeleton automatically fits the plug-in program, only one basic skeleton is required.
But why is it based on the masculine skeleton of a Caucasian?





AN ACCOUNT OF JOY BUOLAMWINI, A GRADUATE STUDENT AT MIT
Screenshot photos from Joy Buolamwini’s TED talk video: [visited May 20, 2018]

Case I
When Buolamwini was working with facial analysis software, she noticed a problem: the software didn’t detect her face. It turned out that the algorithm had never been taught to identify a broad range of skin tones and facial structures. As a result, the machine failed to recognize her face; when she wore a white mask, however, it successfully detected the mask.






A screenshot of New Zealand man Richard Lee’s passport photo rejection notice, supplied to Reuters December 7, 2016. Richard Lee/Handout via REUTERS [visited May 20, 2018]

Case II

A passport robot told an applicant of Asian descent to open his eyes: New Zealand man Richard Lee’s photo was subsequently rejected by the facial recognition software.








Screenshot from the Sun News, Photo from Shandong TV Station. [visited May 20, 2018]

Case III

A news piece on Face ID malfunctions tells the story of a Chinese boy who could unlock his mother’s iPhone using Face ID.


Screenshot of the video “[Asian Face ID FAIL] My Girlfriend Unlocks my iPhone X!!” from Hana Cheong’s YouTube channel [visited May 20, 2018]

Case IV

An Asian man posted an [Asian Face ID FAIL] video on YouTube, showing that his girlfriend can use Face ID to unlock his smartphone. It will even accept online payments through the face recognition payment account.




Artificial intelligence makes more errors when computing data about Asians and Black people, because the developers of this software have not trained their machines to treat every human race equally. It is, however, absolutely crucial to design an unbiased database. Here are some measures I came up with that relate to this problem:

  • Ensure the equality of data classification
  • Ensure the accuracy of the database’s information sources
  • Select unbiased data for AI training
  • Monitor and inspect the database on a regular basis

The idea, however, is merely an attempt to avoid significant errors and deviations. Once a database is not supervised for impartiality, then from an ethical point of view the AI that learns from it is utterly meaningless: it is not equipped with virtuous moral standards and deviates from every principle of fairness, justice and righteousness.
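The monitoring step in the list above can be sketched in code. The following is only a minimal illustration, not part of the project: the record layout, the field name `group` and the alert threshold are all hypothetical assumptions.

```python
from collections import Counter

def audit_distribution(records, field, alert_ratio=0.1):
    """Count how often each category of `field` occurs in a labelled
    dataset, and flag categories holding less than `alert_ratio` of
    an even share -- a crude signal of under-representation."""
    counts = Counter(r[field] for r in records)
    even_share = len(records) / len(counts)  # size of a perfectly equal split
    flagged = [c for c, n in counts.items() if n < alert_ratio * even_share]
    return counts, flagged

# Hypothetical example: a dataset dominated by one group.
records = [{"group": "A"}] * 95 + [{"group": "B"}] * 4 + [{"group": "C"}] * 1
counts, flagged = audit_distribution(records, "group")
print(flagged)  # ['C'] -- group C falls far below an even share
```

Such a check only catches gross imbalance in the labels themselves; it says nothing about the accuracy of the information sources, which still has to be verified separately.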

Below are some existing large-scale databases, which I accessed in order to survey and analyse a sample.




Algorithmic bias I

The artificially intelligent chatbot Tay was launched on Twitter by Microsoft on March 23, 2016. Tay was designed to mimic the way a 19-year-old American woman speaks and to continue learning online through interactions with Twitter users. Only one day after its introduction, Tay began to publish radical remarks, followed by racist and discriminatory slurs: through its exchanges with Twitter users, Tay was strongly affected by some users’ radical comments and started to mimic them. Consequently, Microsoft was forced to temporarily shut down Tay’s account (“Twitter taught Microsoft’s AI chatbot to be a racist in less than a day”).





Access to data

Algorithmic bias II

YouTube-8M is a large-scale labeled video dataset that consists of millions of YouTube video IDs and associated labels from a diverse vocabulary of 4700+ visual entities.

Access to theses

I analyzed the videos using hashtag #street-fashion (183 in all).

Videos of Asians: 44
Videos of black people: 37
Videos of mixed-race people: 9
Videos of Caucasians: 115
Videos of brown people: 7
Analysis failed: 1

In each video, subjects of the same race are counted once as a whole: for example, if there are five Caucasians and one Asian in a video, the statistics record one Caucasian and one Asian.
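This counting rule can be sketched as follows; the per-video annotations here are hypothetical examples, not the actual dataset labels.

```python
from collections import Counter

def tally_once_per_video(videos):
    """Count each race at most once per video: five Caucasians and one
    Asian in the same clip still contribute 1 + 1 to the totals."""
    totals = Counter()
    for people in videos:           # `people`: races visible in one video
        totals.update(set(people))  # set() deduplicates within the video
    return totals

# Hypothetical annotations for three videos.
videos = [
    ["Caucasian"] * 5 + ["Asian"],  # counted as 1 Caucasian, 1 Asian
    ["Asian", "Asian"],             # counted as 1 Asian
    ["Black"],
]
totals = tally_once_per_video(videos)
print(totals["Caucasian"], totals["Asian"], totals["Black"])  # 1 2 1
```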







Screenshots from [visited May 20, 2018]
Access to these
Access to data

Algorithmic bias III

AudioSet consists of an expanding ontology of 632 audio event classes and a collection of 2,084,320 human-labeled 10-second  sound clips drawn from YouTube videos.

This database can be used to train a chatbot.

Among the 86 horse videos, the number of people riding a horse breaks down as follows:

Black people: 4 times
Asian: 4 times
Caucasian: 37 times







Screenshot of the database for #BabyCrawling

REFERENCES
Khurram Soomro, Amir Roshan Zamir and Mubarak Shah, UCF101: A Dataset of 101 Human Action Classes From Videos in The Wild, CRCV-TR-12-01, November 2012. [visited May 20, 2018]


Algorithmic bias IV

The database collects videos featuring 101 kinds of everyday human activities. I picked some of them and calculated the data as follows:

Brushing teeth (131 videos)
Black people: 5 (4%)
Asians: 27 (20%)
Caucasians: 99 (76%)

Box Speedbag (134 videos)
Women:0 (0%)
Men:134 (100%)

Taichi (100 videos)
Asians: 68 (68%)
Black people: 0 (0%)
Caucasians: 32 (32%)

Baby crawling (132 videos)
Asian babies: 12 (10%)
Black babies: 0 (0%)
Caucasian babies: 120 (90%)
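The percentage figures above can be recomputed from the raw counts. A minimal sketch, rounding to the nearest whole percent (which may differ by a point from the rounding used in the figures above):

```python
def percent_shares(counts):
    """Turn absolute counts per group into integer percentage shares."""
    total = sum(counts.values())
    return {group: round(100 * n / total) for group, n in counts.items()}

# Raw counts for the 100 Taichi videos listed above.
taichi = percent_shares({"Asians": 68, "Black people": 0, "Caucasians": 32})
print(taichi)  # {'Asians': 68, 'Black people': 0, 'Caucasians': 32}
```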


Above is my analysis of several major large-scale databases under different hashtags. Clearly, what these databases collect about human behavior, sound and vision comes predominantly from Caucasians.

In the majority of the programs I use, the rigging, skin, texture, hair and body height are all designed to represent Caucasian body types. As a result, it is extremely difficult and demanding to create an Asian, African or brown-skinned avatar. Since the Caucasian 3D avatar is the base of all possible models, I had to choose between the labour-intensive process of modifying this base until it no longer showed through, and making a compromise.
This is the only way to cohesively merge the two appearances into a whole, and it is why a certain incompatibility runs through my work. I think a lot of automatic 3D modeling software carries a degree of racial bias. The human face has the same structure everywhere; the differences lie in the details. If those differences are ignored while the software is engineered, then every avatar it models is built on discrimination and prejudice.







continuing…

Face tracking, 3D- Avatar algorithmic performance, video, interactive installation

3.5 m × 5.5 m, Säulengang, Kunsthochschule Kassel

4 m × 4 m, Examen, Documenta halle, Kassel

3.5 m × 3.5 m, CAA art museum, Hangzhou, China

Full video

Video and installation 



I wish to ask the AI and machine:

  • Could automatic 3D modeling software generate an unbiased human avatar?
  • Can you recognize Asian eyes as open?
  • Do you believe that all Asians look alike?
  • If I were a Black girl, would I be labelled as a gorilla?
  • If someday I were disfigured, could you still recognize me?

Facial tracking software is inherently biased towards Caucasian bone structure, skin color and facial contours. This means that users from other racial backgrounds experience difficulties both when tracking the face and during the creative process of 3D modelling.

In 2015, a Black software developer named Jacky Alciné revealed that the image classifier used by Google Photos was labelling Black people as “gorillas.” In 2016, a passport robot told an applicant of Asian descent to open his eyes, and New Zealand man Richard Lee’s photo was subsequently rejected by facial recognition software. Apple has been accused of being “racist” after a Chinese boy was able to unlock his mother’s iPhone X with his own face.

“Nicely nicely all the time!”, a 3D interactive installation by Echo Can Luo, originally set out to tell the story of minorities in Germany. But early in the process, Luo realized it was impossible to use generative 3D software to model non-Caucasian faces. As a result, the project includes a survey of 3D modelling software and its inherent bias, and finally a video that tells the story of immigrants giving up their (biometric) data.

In the majority of the programs she used, the rigging, skin, texture, hair and body height are all designed to represent Caucasian body types. As a result, it is extremely difficult and demanding to create an Asian, African or brown-skinned avatar. Since the Caucasian 3D avatar is the base of all possible models, she had to choose between the labour-intensive process of modifying this base until it no longer showed through and making a compromise; this was the only way to cohesively merge the two appearances into a whole.

As the project developed further, she realized that the tools and software she used were loaded with biases. One could argue that this is unavoidable at the beginning of a new technology; one might overlook such minor issues, or tell the artist that it merely takes a little more time and energy to create a non-Caucasian 3D avatar. Once she noticed the issue, however, she realized that the computational and operational errors were caused by human beings: specifically, they result from the favoritism of those who collect the machine statistics and data. If the creators realized that the language within the program should be equally impartial to all of us, the biases and discrimination might not appear.

A lot of automated 3D modeling software has a certain degree of racial bias. The human face has the same structure everywhere, but the difference is found in the details. If these differences are ignored by the software while all avatars are modeled, the result is discrimination and prejudice.


4 m × 4 m, Examen, Documenta halle, Kassel

3.5 m × 5.5 m, interactive installation, Säulengang, Kunsthochschule Kassel

3.5 m × 3.5 m, CAA art museum, Hangzhou, China

5.5 m × 5.5 m, interactive installation, Rundgang, Kunsthochschule Kassel


Video screenshot


Click to enter my research on face tracking and 3D modeling bias


THE ALGORITHMS AREN’T BIASED, WE ARE. — Thesis of “Nicely nicely all the time!”

Thesis Download