{"id":8923,"date":"2020-07-20T14:10:36","date_gmt":"2020-07-20T12:10:36","guid":{"rendered":"https:\/\/immune.institute\/?p=8923"},"modified":"2020-07-20T14:10:36","modified_gmt":"2020-07-20T12:10:36","slug":"midiendo-la-distancia-social-en-tiempos-del-covid-19","status":"publish","type":"post","link":"https:\/\/immune.institute\/en\/blog\/midiendo-la-distancia-social-en-tiempos-del-covid-19\/","title":{"rendered":"Measuring social distance in times of Covid-19"},"content":{"rendered":"<h3>Using the TensorFlow Object Detection API to detect pedestrians and measure the social distance between them.<\/h3>\n<p><img decoding=\"async\" class=\"alignnone size-full wp-image-7973\" src=\"https:\/\/immune.institute\/wp-content\/uploads\/2020\/07\/1_4TbSdKWejM2r4WSlZNfI8g.jpeg\" alt=\"\" width=\"700\" height=\"402\" srcset=\"https:\/\/immune.institute\/wp-content\/uploads\/2020\/07\/1_4TbSdKWejM2r4WSlZNfI8g.jpeg 700w, https:\/\/immune.institute\/wp-content\/uploads\/2020\/07\/1_4TbSdKWejM2r4WSlZNfI8g-256x147.jpeg 256w, https:\/\/immune.institute\/wp-content\/uploads\/2020\/07\/1_4TbSdKWejM2r4WSlZNfI8g-512x294.jpeg 512w, https:\/\/immune.institute\/wp-content\/uploads\/2020\/07\/1_4TbSdKWejM2r4WSlZNfI8g-18x10.jpeg 18w\" sizes=\"(max-width: 700px) 100vw, 700px\" \/><\/p>\n<p>Today, unfortunately, everyone is familiar with the term \u201csocial distancing\u201d. It is something we will have to live with for a while, until everything returns to normal. 
At&nbsp;<a href=\"https:\/\/immune.institute\/en\/?utm_campaign=IMMUNE&amp;utm_source=Embajador\"><strong>Immune Technology Institute<\/strong><\/a>&nbsp;we have developed an application that uses the&nbsp;<strong>TensorFlow Object Detection API<\/strong>&nbsp;to identify pedestrians and measure the social distance between them in real time.<\/p>\n<h2><strong>Wait a moment\u2026 What is the TensorFlow Object Detection API?<\/strong><\/h2>\n<p>The <strong>TensorFlow Object Detection API<\/strong>&nbsp;is a framework for building neural networks aimed at object detection problems. It includes several models pre-trained on different datasets, ready to be used in a variety of use cases.<\/p>\n<p><img decoding=\"async\" class=\"size-full wp-image-7974 alignleft\" src=\"https:\/\/principal.immune.institute\/wp-content\/uploads\/2020\/07\/0_X6Lm9b8_SRsCzFkV-213x300-1.jpeg\" sizes=\"(max-width: 213px) 100vw, 213px\" srcset=\"https:\/\/principal.immune.institute\/wp-content\/uploads\/2020\/07\/0_X6Lm9b8_SRsCzFkV-213x300-1.jpeg 213w, https:\/\/principal.immune.institute\/wp-content\/uploads\/2020\/07\/0_X6Lm9b8_SRsCzFkV-213x300-1-9x12.jpeg 9w\" alt=\"\" width=\"213\" height=\"300\"><\/p>\n<p>This framework can also be used to apply&nbsp;<strong>transfer learning<\/strong>&nbsp;to the pre-trained models, which lets us adapt them to predict other objects. For example, we could apply&nbsp;<strong>transfer learning<\/strong>&nbsp;to a model and use it to identify whether a person is wearing a face mask or not.<\/p>\n<p>The idea behind&nbsp;<a href=\"https:\/\/www.tensorflow.org\/tutorials\/images\/transfer_learning\" target=\"_blank\" rel=\"noopener\">transfer learning<\/a>&nbsp;is that a model trained on a sufficiently large and general dataset can serve as a generic model. 
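<\/p>\n<p>As a rough, generic sketch of this idea (independent of the pedestrian pipeline; the input size and layers below are arbitrary choices), transfer learning in Keras usually means freezing a pre-trained base and training only a small new head. In practice you would pass weights='imagenet'; weights=None is used here only to keep the snippet self-contained:<\/p>\n\n\n<pre class=\"wp-block-code\"><code>import tensorflow as tf\n\n# Pre-trained backbone, frozen so its learned features are reused as-is\nbase = tf.keras.applications.MobileNetV2(input_shape=(96, 96, 3), include_top=False, weights=None)\nbase.trainable = False\n\n# Small new head trained for our own task, e.g. mask \/ no-mask (binary)\nmodel = tf.keras.Sequential(&#91;\n    base,\n    tf.keras.layers.GlobalAveragePooling2D(),\n    tf.keras.layers.Dense(1, activation='sigmoid'),\n])\nmodel.compile(optimizer='adam', loss='binary_crossentropy', metrics=&#91;'accuracy'])<\/code><\/pre>\n\n\n<p>Only the Dense head is trained; the frozen base keeps its generic features. 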
Its learned features can then be reused for other use cases without starting from scratch or training a new model on a new dataset, which would take a lot of time and resources.<\/p>\n<p>In this case we do not need to apply&nbsp;<strong>transfer learning<\/strong>, because we want to identify pedestrians and there are already models trained for that. Specifically, we used the&nbsp;<strong>ssd_mobilenet_v2_coco_2018_03_29&nbsp;<\/strong>model, which has been trained to infer these objects:<\/p>\n\n\n<pre class=\"wp-block-code\"><code>{\n{'id': 1, 'name': u'person'},\n{'id': 2, 'name': u'bicycle'},\n{'id': 3, 'name': u'car'},\n{'id': 4, 'name': u'motorcycle'},\n{'id': 5, 'name': u'airplane'},\n{'id': 6, 'name': u'bus'},\n{'id': 7, 'name': u'train'},\n{'id': 8, 'name': u'truck'},\n{'id': 9, 'name': u'boat'},\n{'id': 10, 'name': u'traffic light'},\n...\n{'id': 90, 'name': u'toothbrush'}\n}<\/code><\/pre>\n\n\n<p>For this use case we only need to identify and display pedestrians, so we will create a function that filters the predictions by their label and only draws objects labeled person (id=1).<\/p>\n<h3><strong>Let's go! A bit of code<\/strong><\/h3>\n<p>When I wrote this code the&nbsp;<strong>TensorFlow Object Detection API<\/strong>&nbsp;did not yet fully support&nbsp;<strong>TensorFlow 2<\/strong>, but on July 10 Google released a new version adding support for some new features. 
For this experiment I used&nbsp;<strong>TensorFlow 1<\/strong>&nbsp;with version&nbsp;<strong>r1.13.0&nbsp;<\/strong>of the&nbsp;<strong>TF Object Detection API<\/strong>&nbsp;and Google Colab.<\/p>\n<p>You can find the code on my&nbsp;<a href=\"https:\/\/github.com\/alejandrods\/Social-Distance-Using-TensorFlow-API-Object\" target=\"_blank\" rel=\"noopener\">GitHub<\/a>.<\/p>\n<p>First, we need to create the directory Projects\/Pedestrian_Detection in our Drive, where we will clone the&nbsp;<strong>TensorFlow Object Detection API<\/strong>&nbsp;repository. Then we can start a notebook in Google Colab.<\/p>\n<p><img decoding=\"async\" class=\"alignnone size-full wp-image-7976\" src=\"https:\/\/immune.institute\/wp-content\/uploads\/2020\/07\/0_0YsPZGrOb14vQAeP.png\" alt=\"\" width=\"700\" height=\"624\" srcset=\"https:\/\/immune.institute\/wp-content\/uploads\/2020\/07\/0_0YsPZGrOb14vQAeP.png 700w, https:\/\/immune.institute\/wp-content\/uploads\/2020\/07\/0_0YsPZGrOb14vQAeP-256x228.png 256w, https:\/\/immune.institute\/wp-content\/uploads\/2020\/07\/0_0YsPZGrOb14vQAeP-512x456.png 512w, https:\/\/immune.institute\/wp-content\/uploads\/2020\/07\/0_0YsPZGrOb14vQAeP-13x12.png 13w\" sizes=\"(max-width: 700px) 100vw, 700px\" \/><\/p>\n<p>First we must mount our Drive inside the notebook; simply follow the instructions.<\/p>\n<pre>from google.colab import drive\n\ndrive.mount('\/gdrive', force_remount=True)<\/pre>\n<p>Then we can change our path to the root folder Projects\/Pedestrian_Detection and clone the r1.13.0 branch of the models repository:<\/p>\n\n\n<pre class=\"wp-block-code\"><code>%cd \/gdrive\/'My Drive'\/Projects\/Pedestrian_Detection\/\n!git clone -b r1.13.0 https:\/\/github.com\/tensorflow\/models.git<\/code><\/pre>\n\n\n<p><strong>Why did I use this specific version?<\/strong> Basically because I ran into problems when running other versions; this was the first one I tried that worked perfectly.<\/p>\n<p>Once you have cloned the repository you will see a folder called models in the directory Projects\/Pedestrian_Detection.<\/p>\n<p>The TensorFlow team kindly provides a notebook that uses a pre-trained model to detect objects in an image (you can find it at Projects\/Pedestrian_Detection\/models\/research\/object_detection\/colab_tutorials). However, we are going to create our own notebook, because we will learn to implement new functions that improve the visualization of the predicted objects.<\/p>\n<p>Create a new Google Colab notebook in: Projects\/Pedestrian_Detection\/models\/research\/object_detection<\/p>\n<p>The first thing to do is switch&nbsp;<strong>TensorFlow&nbsp;<\/strong>to version&nbsp;<strong>1.x.<\/strong><\/p>\n\n\n<pre class=\"wp-block-code\"><code>%tensorflow_version 1.x\nimport tensorflow as tf<\/code><\/pre>\n\n\n<p>As before, we must mount our Drive inside Google Colab and go to the path Projects\/Pedestrian_Detection\/models\/research\/:<\/p>\n\n\n<pre class=\"wp-block-code\"><code>from google.colab import drive\ndrive.mount('\/gdrive', force_remount=True)\n%cd \/gdrive\/'My Drive'\/Projects\/Pedestrian_Detection\/models\/research\/<\/code><\/pre>\n\n\n<p>We need to install some libraries: Cython, contextlib2, pillow, lxml, matplotlib, pycocotools and the protobuf compiler.<\/p>\n\n\n<pre class=\"wp-block-code\"><code>!apt-get install -qq protobuf-compiler python-pil python-lxml python-tk\n!pip install -qq Cython contextlib2 pillow lxml matplotlib pycocotools\n!pip install tf-slim\n!pip install numpy==1.17.0<\/code><\/pre>\n\n\n<p>Now we must run the protoc compiler, a Google library used by&nbsp;<strong>TensorFlow<\/strong>&nbsp;to serialize structured data: it is like XML, but lighter, faster and simpler.<\/p>\n<p><strong>Protoc&nbsp;<\/strong>is already installed in Google Colab, but if you are using your own machine you can follow&nbsp;<a href=\"https:\/\/github.com\/protocolbuffers\/protobuf\" target=\"_blank\" rel=\"noopener\"><strong><em>these instructions<\/em><\/strong>&nbsp;to install it<\/a>.<\/p>\n\n\n<pre class=\"wp-block-code\"><code>!pwd\n!protoc object_detection\/protos\/*.proto --python_out=.<\/code><\/pre>\n\n\n<p>Wait\u2026 What did&nbsp;<strong>protoc<\/strong>&nbsp;just do? Basically, protoc generated a Python script for each file in \/models\/research\/object_detection\/protos.<\/p>\n<p><strong>Great!&nbsp;<\/strong>We are getting closer \u270c\ufe0f. The next step is to define some environment variables such as PYTHONPATH and install some packages using setup.py.<\/p>\n\n\n<pre class=\"wp-block-code\"><code>import os\nos.environ&#91;'PYTHONPATH'] += '\/gdrive\/My Drive\/Projects\/Pedestrian_Detection\/models\/research\/:\/gdrive\/My Drive\/Projects\/Pedestrian_Detection\/models\/research\/slim\/'\n!python setup.py build\n!python setup.py install\n<\/code><\/pre>\n\n\n<p>At the end of the process the build should finish without errors.<\/p>\n<p><strong>Wait a moment\u2026 We need a model, right?&#8230; Exactly!&nbsp;<\/strong>We need to download a pre-trained model. In this case I chose&nbsp;<strong>ssd_mobilenet_v2_coco_2018_03_29&nbsp;<\/strong>because it is fast (important for real-time applications) and has good accuracy. 
However, you can try&nbsp;<a href=\"https:\/\/github.com\/tensorflow\/models\/blob\/master\/research\/object_detection\/g3doc\/tf1_detection_zoo.md\" target=\"_blank\" rel=\"noopener\">other models<\/a>&nbsp;and compare their performance; the right model depends on the requirements of your application.<\/p>\n\n\n<pre class=\"wp-block-code\"><code>!mkdir \/gdrive\/'My Drive'\/Projects\/Pedestrian_Detection\/models\/research\/pretrained_model\n%cd \/gdrive\/'My Drive'\/Projects\/Pedestrian_Detection\/models\/research\/pretrained_model\n\n!wget http:\/\/download.tensorflow.org\/models\/object_detection\/ssd_mobilenet_v2_coco_2018_03_29.tar.gz\n!tar -xzf ssd_mobilenet_v2_coco_2018_03_29.tar.gz -C .\n\n%cd \/gdrive\/'My Drive'\/Projects\/Pedestrian_Detection\/models\/research\/<\/code><\/pre>\n\n\n<p>If everything worked correctly you can now use the&nbsp;<strong>TF Object Detection API.&nbsp;<\/strong>The next step is to import some libraries.<\/p>\n\n\n<pre class=\"wp-block-code\"><code>import numpy as np\nimport os\nimport six.moves.urllib as urllib\nimport sys\nimport tarfile\nimport tensorflow as tf\nimport zipfile\n\nfrom distutils.version import StrictVersion\nfrom collections import defaultdict\nfrom io import StringIO\nfrom matplotlib import pyplot as plt\nfrom PIL import Image\n\n# This is needed since the notebook is stored in the object_detection folder.\nsys.path.append(\"..\")\nfrom object_detection.utils import ops as utils_ops\n\nif StrictVersion(tf.__version__) &lt; StrictVersion('1.9.0'):\n  raise ImportError('Please upgrade your TensorFlow installation to v1.9.* or later!')\n\n# This is needed to display the images.\n%matplotlib inline\n\nfrom object_detection.utils import label_map_util\nfrom object_detection.utils import visualization_utils as vis_util\n\nimport math\nimport itertools\nfrom itertools import compress\nfrom PIL import Image, ImageDraw<\/code><\/pre>\n\n\n<p>Now we can load our pre-trained model. We simply define the path to the frozen model, PATH_TO_FROZEN_GRAPH, and the path to the labels, PATH_TO_LABELS.<\/p>\n\n\n<pre class=\"wp-block-code\"><code># Model Name\nMODEL_NAME = 'ssd_mobilenet_v2_coco_2018_03_29'\n\n# Path to frozen detection graph. This is the actual model that is used for the object detection.\nPATH_TO_FROZEN_GRAPH = \"\/gdrive\/My Drive\/Projects\/Pedestrian_Detection\/models\/research\/pretrained_model\/ssd_mobilenet_v2_coco_2018_03_29\/frozen_inference_graph.pb\"\n\n# List of the strings that is used to add correct label for each box.\nPATH_TO_LABELS = '\/gdrive\/My Drive\/Projects\/Pedestrian_Detection\/models\/research\/object_detection\/data\/mscoco_label_map.pbtxt'\n\n# Load graph\ndetection_graph = tf.Graph()\nwith detection_graph.as_default():\n  od_graph_def = tf.GraphDef()\n  with tf.gfile.GFile(PATH_TO_FROZEN_GRAPH, 'rb') as fid:\n    serialized_graph = fid.read()\n    od_graph_def.ParseFromString(serialized_graph)\n    tf.import_graph_def(od_graph_def, name='')\n<\/code><\/pre>\n\n\n<p>We must map the&nbsp;<strong>label indices<\/strong>&nbsp;to the names of the objects our model predicts, so that when the model predicts 1 we know it corresponds to the label person. Here we use helper functions provided by&nbsp;<strong>TF<\/strong>, but we could use our own.<\/p>\n\n\n<pre class=\"wp-block-code\"><code>category_index = label_map_util.create_category_index_from_labelmap(PATH_TO_LABELS, use_display_name=True)\n<\/code><\/pre>\n\n\n<p>The function run_inference_for_single_image takes the image and our model as arguments and returns the prediction. 
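<\/p>\n<p>For orientation, the dictionary has roughly the following shape; the values below are made-up, illustrative numbers, not real model output. Note that in the TF Object Detection API each box comes as [ymin, xmin, ymax, xmax], normalized to the [0, 1] range:<\/p>\n\n\n<pre class=\"wp-block-code\"><code>import numpy as np\n\n# Hypothetical output_dict for a frame with two detections (illustrative values only)\noutput_dict = {\n    'num_detections': 2,\n    'detection_boxes': np.array(&#91;&#91;0.42, 0.10, 0.78, 0.22],\n                                &#91;0.40, 0.55, 0.80, 0.68]]),\n    'detection_scores': np.array(&#91;0.91, 0.87]),\n    'detection_classes': np.array(&#91;1, 1], dtype=np.uint8),  # 1 == 'person' in COCO\n}\n\n# Convert the first normalized box to pixel coordinates for a 700x400 image\nwidth, height = 700, 400\nymin, xmin, ymax, xmax = output_dict&#91;'detection_boxes']&#91;0]\nbox_px = (xmin * width, ymin * height, xmax * width, ymax * height)<\/code><\/pre>\n\n\n<p>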
Specifically, it returns a dictionary output_dict containing the coordinates and the label of every object detected in the image.<\/p>\n\n\n<pre class=\"wp-block-code\"><code>def run_inference_for_single_image(image, graph):\n  with graph.as_default():\n    with tf.Session() as sess:\n      # Get handles to input and output tensors\n      ops = tf.get_default_graph().get_operations()\n      all_tensor_names = {output.name for op in ops for output in op.outputs}\n      tensor_dict = {}\n      for key in &#91;\n          'num_detections', 'detection_boxes', 'detection_scores',\n          'detection_classes', 'detection_masks'\n      ]:\n        tensor_name = key + ':0'\n        if tensor_name in all_tensor_names:\n          tensor_dict&#91;key] = tf.get_default_graph().get_tensor_by_name(tensor_name)\n      if 'detection_masks' in tensor_dict:\n        # The following processing is only for single image\n        detection_boxes = tf.squeeze(tensor_dict&#91;'detection_boxes'], &#91;0])\n        detection_masks = tf.squeeze(tensor_dict&#91;'detection_masks'], &#91;0])\n        # Reframe is required to translate mask from box coordinates to image coordinates and fit the image size.\n        real_num_detection = tf.cast(tensor_dict&#91;'num_detections']&#91;0], tf.int32)\n        detection_boxes = tf.slice(detection_boxes, &#91;0, 0], &#91;real_num_detection, -1])\n        detection_masks = tf.slice(detection_masks, &#91;0, 0, 0], &#91;real_num_detection, -1, -1])\n        detection_masks_reframed = utils_ops.reframe_box_masks_to_image_masks(\n            detection_masks, detection_boxes, image.shape&#91;0], image.shape&#91;1])\n        detection_masks_reframed = tf.cast(\n            tf.greater(detection_masks_reframed, 0.5), tf.uint8)\n        # Follow the convention by adding back the batch dimension\n        tensor_dict&#91;'detection_masks'] = tf.expand_dims(\n            detection_masks_reframed, 0)\n      image_tensor = tf.get_default_graph().get_tensor_by_name('image_tensor:0')\n\n      # Run inference\n      output_dict = sess.run(tensor_dict,\n                             feed_dict={image_tensor: np.expand_dims(image, 0)})\n      # all outputs are float32 numpy arrays, so convert types as appropriate\n      output_dict&#91;'num_detections'] = int(output_dict&#91;'num_detections']&#91;0])\n      output_dict&#91;'detection_classes'] = output_dict&#91;'detection_classes']&#91;0].astype(np.uint8)\n      output_dict&#91;'detection_boxes'] = output_dict&#91;'detection_boxes']&#91;0]\n      output_dict&#91;'detection_scores'] = output_dict&#91;'detection_scores']&#91;0]\n\n      if 'detection_masks' in output_dict:\n        output_dict&#91;'detection_masks'] = output_dict&#91;'detection_masks']&#91;0]\n  return output_dict<\/code><\/pre>\n\n\n<p>As mentioned before, our model can predict several kinds of objects at once (you can list them in category_index). However, we only want to display one particular category: person. Therefore, we created a function that filters our predictions based on a&nbsp;<strong><em>threshold&nbsp;<\/em><\/strong>min_score and the&nbsp;<strong>id<\/strong>&nbsp;of the label, categories. In this case, the&nbsp;<strong>id&nbsp;<\/strong>of the&nbsp;<strong>person&nbsp;<\/strong>label is 1.<\/p>\n\n\n<pre class=\"wp-block-code\"><code>def filter_boxes(min_score, boxes, scores, classes, categories):\n  \"\"\"Return boxes with a confidence &gt;= `min_score`\"\"\"\n  n = len(classes)\n  idxs = &#91;]\n  for i in range(n):\n    if classes&#91;i] in categories and scores&#91;i] &gt;= min_score:\n      idxs.append(i)\n  filtered_boxes = boxes&#91;idxs, ...]\n  filtered_scores = scores&#91;idxs, ...]\n  filtered_classes = classes&#91;idxs, ...]\n  return filtered_boxes, filtered_scores, filtered_classes<\/code><\/pre>\n\n\n<h3><strong>Measuring the distance between objects<\/strong><\/h3>\n<p>At this point we have a model ready to run predictions, returning a dictionary output_dict with the coordinates of the detected objects. We want to measure the distance between the objects detected by the model, which is not a trivial problem. 
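<\/p>\n<p>Before tackling distances, a quick sanity check of filter_boxes may help. This toy snippet repeats a compact version of the function above so it runs on its own, with made-up detections:<\/p>\n\n\n<pre class=\"wp-block-code\"><code>import numpy as np\n\n# Compact version of filter_boxes from above, repeated so the snippet is self-contained\ndef filter_boxes(min_score, boxes, scores, classes, categories):\n    idxs = &#91;i for i in range(len(classes))\n            if classes&#91;i] in categories and scores&#91;i] &gt;= min_score]\n    return boxes&#91;idxs, ...], scores&#91;idxs, ...], classes&#91;idxs, ...]\n\nboxes = np.array(&#91;&#91;0.1, 0.1, 0.3, 0.2],   # person, high confidence\n                  &#91;0.2, 0.5, 0.6, 0.7],   # car\n                  &#91;0.4, 0.4, 0.9, 0.6]])  # person, low confidence\nscores = np.array(&#91;0.9, 0.8, 0.3])\nclasses = np.array(&#91;1, 3, 1])\n\nb, s, c = filter_boxes(0.5, boxes, scores, classes, &#91;1])\n# only the first detection survives: label person (1) with score &gt;= 0.5<\/code><\/pre>\n\n\n<p>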
Therefore, we created some functions to compute that distance between the&nbsp;<strong>centroids&nbsp;<\/strong>of the objects. The code proceeds in these steps:<\/p>\n<p><img decoding=\"async\" class=\"alignnone size-full wp-image-7977\" src=\"https:\/\/immune.institute\/wp-content\/uploads\/2020\/07\/1_nV749oNHAeKWowUrSBsO3w.jpeg\" alt=\"\" width=\"700\" height=\"427\" srcset=\"https:\/\/immune.institute\/wp-content\/uploads\/2020\/07\/1_nV749oNHAeKWowUrSBsO3w.jpeg 700w, https:\/\/immune.institute\/wp-content\/uploads\/2020\/07\/1_nV749oNHAeKWowUrSBsO3w-256x156.jpeg 256w, https:\/\/immune.institute\/wp-content\/uploads\/2020\/07\/1_nV749oNHAeKWowUrSBsO3w-512x312.jpeg 512w, https:\/\/immune.institute\/wp-content\/uploads\/2020\/07\/1_nV749oNHAeKWowUrSBsO3w-18x12.jpeg 18w\" sizes=\"(max-width: 700px) 100vw, 700px\" \/><\/p>\n<ul>\n<li>Get the coordinates of each object using the function calculate_coord.<\/li>\n<li>Compute the centroid of each rectangle: calculate_centr.<\/li>\n<\/ul>\n<p><img decoding=\"async\" class=\"size-full wp-image-7978 alignleft\" src=\"https:\/\/principal.immune.institute\/wp-content\/uploads\/2020\/07\/1_ofphZZWhUjTYOAoQG88DTA.jpeg\" sizes=\"(max-width: 500px) 100vw, 500px\" srcset=\"https:\/\/principal.immune.institute\/wp-content\/uploads\/2020\/07\/1_ofphZZWhUjTYOAoQG88DTA.jpeg 500w, https:\/\/principal.immune.institute\/wp-content\/uploads\/2020\/07\/1_ofphZZWhUjTYOAoQG88DTA-300x131.jpeg 300w, https:\/\/principal.immune.institute\/wp-content\/uploads\/2020\/07\/1_ofphZZWhUjTYOAoQG88DTA-18x8.jpeg 18w\" alt=\"\" width=\"500\" height=\"219\"><\/p>\n<ul>\n<li>Compute all the permutations between the centroids using calculate_perm.<\/li>\n<li>Compute the distance between each pair of centroids (for example, between person A and person B) with the function calculate_centr_distances.<\/li>\n<li>Finally, compute the midpoint of each segment in order to display the distance as text on the image: midpoint and calculate_slope.<\/li>\n<\/ul>\n<pre>def calculate_coord(bbox, width, height):\n  \"\"\"Return boxes coordinates\"\"\"\n  xmin = bbox[1] * width\n  ymin = bbox[0] * height\n  xmax = bbox[3] * width\n  ymax = bbox[2] * height\n\n  return [xmin, ymin, xmax - xmin, ymax - ymin]<\/pre>\n\n\n<pre class=\"wp-block-code\"><code>def calculate_centr(coord):\n  \"\"\"Calculate centroid for each box\"\"\"\n  return (coord&#91;0]+(coord&#91;2]\/2), coord&#91;1]+(coord&#91;3]\/2))\n\ndef calculate_centr_distances(centroid_1, centroid_2):\n  \"\"\"Calculate the distance between 2 centroids\"\"\"\n  return math.sqrt((centroid_2&#91;0]-centroid_1&#91;0])**2 + (centroid_2&#91;1]-centroid_1&#91;1])**2)\n\ndef calculate_perm(centroids):\n  \"\"\"Return all combinations of centroids\"\"\"\n  permutations = &#91;]\n  for current_permutation in itertools.permutations(centroids, 2):\n    if current_permutation&#91;::-1] not in permutations:\n      permutations.append(current_permutation)\n  return permutations\n\ndef midpoint(p1, p2):\n  \"\"\"Midpoint between 2 points\"\"\"\n  return ((p1&#91;0] + p2&#91;0])\/2, (p1&#91;1] + p2&#91;1])\/2)\n\ndef calculate_slope(x1, y1, x2, y2):\n  \"\"\"Calculate slope\"\"\"\n  m = (y2-y1)\/(x2-x1)\n  return m<\/code><\/pre>\n\n\n<p>Now that these functions are defined, we can create the main function, show_inference, which runs the prediction on the images and draws the bounding boxes and the distances between pedestrians.<\/p>\n<p>The first part of this function is responsible for reading the image and running the inference. 
We get a dictionary output_dict with the coordinates of the objects detected by the model.<\/p>\n\n\n<pre class=\"wp-block-code\"><code>image = Image.open(image_path)\n\n# the array based representation of the image will be used later\nimage_np = load_image_into_numpy_array(image)\n\n# Expanding dimensions\n# Since the model expects images to have shape: &#91;1, None, None, 3]\nimage_np_expanded = np.expand_dims(image_np, axis=0)\n\n# Actual detection.\noutput_dict = run_inference_for_single_image(image_np, detection_graph)<\/code><\/pre>\n\n\n<p>Next, we define a threshold confidence_cutoff=0.5 to avoid drawing predictions with low probability. We also get the size of our image, and we must define a&nbsp;<strong>\u201cpixels-to-meters\u201d<\/strong>&nbsp;ratio to compute the distance correctly. After analyzing several images, I considered that width - 150px = 7 meters is a reasonable ratio. This part is tricky, because we are not taking the perspective or the camera angle into account; it is a hard problem to generalize to all images, and we encourage you to improve it and share your solution with us.<\/p>\n\n\n<pre class=\"wp-block-code\"><code># Get boxes only for person\nconfidence_cutoff = 0.5\nboxes, scores, classes = filter_boxes(confidence_cutoff, output_dict&#91;'detection_boxes'], output_dict&#91;'detection_scores'], output_dict&#91;'detection_classes'], &#91;1])\n\n# Get width and height\nim = Image.fromarray(image_np)\nwidth, height = im.size\n\n# Pixels per meter - THIS IS A REFERENCE, YOU HAVE TO ADAPT THIS FOR EACH IMAGE\n# In this case, we are considering that (width - 150) is approximately 7 meters\naverage_px_meter = (width-150) \/ 7\n<\/code><\/pre>\n\n\n<p>Finally, we can compute all the centroids of our predictions and generate the combinations between them. 
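<\/p>\n<p>As a toy walk-through of these helpers (the image width, boxes and resulting numbers below are made up, and the helper definitions are repeated so the snippet runs on its own): with a hypothetical width of 850 px, average_px_meter = (850 - 150) \/ 7 = 100 px per meter, so a centroid distance of about 295 px corresponds to roughly 2.95 m:<\/p>\n\n\n<pre class=\"wp-block-code\"><code>import math\nimport itertools\n\ndef calculate_centr(coord):\n    # coord is (xmin, ymin, w, h); the centroid is the center of the box\n    return (coord&#91;0] + coord&#91;2]\/2, coord&#91;1] + coord&#91;3]\/2)\n\ndef calculate_centr_distances(c1, c2):\n    return math.sqrt((c2&#91;0]-c1&#91;0])**2 + (c2&#91;1]-c1&#91;1])**2)\n\ndef calculate_perm(centroids):\n    perms = &#91;]\n    for p in itertools.permutations(centroids, 2):\n        if p&#91;::-1] not in perms:\n            perms.append(p)\n    return perms\n\nwidth = 850                            # hypothetical image width in pixels\naverage_px_meter = (width - 150) \/ 7   # 100 px per meter\n\ncoords = &#91;(100, 200, 50, 150), (400, 210, 40, 140)]   # two toy boxes\ncentroids = &#91;calculate_centr(c) for c in coords]       # (125.0, 275.0) and (420.0, 280.0)\npairs = calculate_perm(centroids)                      # a single unordered pair\n\ndist_px = calculate_centr_distances(pairs&#91;0]&#91;0], pairs&#91;0]&#91;1])\ndist_m = dist_px \/ average_px_meter    # about 2.95 meters<\/code><\/pre>\n\n\n<p>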
Then we can draw the lines that connect the centroids and plot the distance; dx and dy define a small offset perpendicular to each line, used to place the distance label.<\/p>\n\n\n<pre class=\"wp-block-code\"><code># Calculate normalized coordinates for boxes\ncentroids = &#91;]\ncoordinates = &#91;]\nfor box in boxes:\n    coord = calculate_coord(box, width, height)\n    centr = calculate_centr(coord)\n    centroids.append(centr)\n    coordinates.append(coord)\n\n# Calculate all permutations\npermutations = calculate_perm(centroids)\n\n# Display boxes and centroids\nfig, ax = plt.subplots(figsize=(20, 12), dpi=90)\nax.imshow(image, interpolation='nearest')\n\nfor coord, centr in zip(coordinates, centroids):\n    ax.add_patch(patches.Rectangle((coord&#91;0], coord&#91;1]), coord&#91;2], coord&#91;3], linewidth=2, edgecolor='y', facecolor='none', zorder=10))\n    ax.add_patch(patches.Circle((centr&#91;0], centr&#91;1]), 3, color='yellow', zorder=20))\n\n# Display lines between centroids\nfor perm in permutations:\n    dist = calculate_centr_distances(perm&#91;0], perm&#91;1])\n    dist_m = dist\/average_px_meter\n\n    print(\"M meters: \", dist_m)\n    middle = midpoint(perm&#91;0], perm&#91;1])\n    print(\"Middle point\", middle)\n\n    x1 = perm&#91;0]&#91;0]\n    x2 = perm&#91;1]&#91;0]\n    y1 = perm&#91;0]&#91;1]\n    y2 = perm&#91;1]&#91;1]\n\n    slope = calculate_slope(x1, y1, x2, y2)\n    dy = math.sqrt(3**2\/(slope**2+1))\n    dx = -slope*dy<\/code><\/pre>\n\n\n<p>In short, the full function should look like this:<\/p>\n\n\n<pre class=\"wp-block-code\"><code>import matplotlib.pyplot as plt\nimport matplotlib.patches as patches\nfrom random import randrange\n\n# If you want to test the code with your own images, just add their paths to TEST_IMAGE_PATHS.\nPATH_TO_TEST_IMAGES_DIR = '\/gdrive\/My Drive\/Projects\/Pedestrian_Detection\/models\/research\/test_images'\nTEST_IMAGE_PATHS = &#91;os.path.join(PATH_TO_TEST_IMAGES_DIR, 'image{}.jpg'.format(i)) for i in range(8, 9)]\n\n# Size, in inches, of the output images.\nIMAGE_SIZE = (12, 8)\n\ndef show_inference(image_path):\n  image = Image.open(image_path)\n  # the array based representation of the image will be used later in order to prepare the\n  # result image with boxes and labels on it.\n  image_np = load_image_into_numpy_array(image)\n  # Expand dimensions since the model expects images to have shape: &#91;1, None, None, 3]\n  image_np_expanded = np.expand_dims(image_np, axis=0)\n  # Actual detection.\n  output_dict = run_inference_for_single_image(image_np, detection_graph)\n\n  # Get boxes only for person\n  confidence_cutoff = 0.5\n  boxes, scores, classes = filter_boxes(confidence_cutoff, output_dict&#91;'detection_boxes'], output_dict&#91;'detection_scores'], output_dict&#91;'detection_classes'], &#91;1])\n\n  # Get width and height\n  im = Image.fromarray(image_np)\n  width, height = im.size\n\n  # Pixels per meter - THIS IS A REFERENCE, YOU HAVE TO ADAPT THIS FOR EACH IMAGE\n  # In this case, we are considering that (width - 150) is approximately 7 meters\n  average_px_meter = (width-150) \/ 7\n\n  # Calculate normalized coordinates for boxes\n  centroids = &#91;]\n  coordinates = &#91;]\n  for box in boxes:\n    coord = calculate_coord(box, width, height)\n    centr = calculate_centr(coord)\n    centroids.append(centr)\n    coordinates.append(coord)\n\n  # Calculate all permutations\n  permutations = calculate_perm(centroids)\n\n  # Display boxes and centroids\n  fig, ax = plt.subplots(figsize=(20, 12), dpi=90)\n  ax.imshow(image, interpolation='nearest')\n  for coord, centr in zip(coordinates, centroids):\n    ax.add_patch(patches.Rectangle((coord&#91;0], coord&#91;1]), coord&#91;2], coord&#91;3], linewidth=2, edgecolor='y', facecolor='none', zorder=10))\n    ax.add_patch(patches.Circle((centr&#91;0], centr&#91;1]), 3, color='yellow', zorder=20))\n\n  # Display lines between centroids\n  for perm in permutations:\n    dist = calculate_centr_distances(perm&#91;0], perm&#91;1])\n    dist_m = dist\/average_px_meter\n\n    print(\"M meters: \", dist_m)\n    middle = midpoint(perm&#91;0], perm&#91;1])\n    print(\"Middle point\", middle)\n\n    x1 = perm&#91;0]&#91;0]\n    x2 = perm&#91;1]&#91;0]\n    y1 = perm&#91;0]&#91;1]\n    y2 = perm&#91;1]&#91;1]\n\n    slope = calculate_slope(x1, y1, x2, y2)\n    dy = math.sqrt(3**2\/(slope**2+1))\n    dx = -slope*dy\n\n    # Randomly choose the side on which the distance text is displayed\n    if randrange(10) % 2 == 0:\n      Dx = middle&#91;0] - dx*10\n      Dy = middle&#91;1] - dy*10\n    else:\n      Dx = middle&#91;0] + dx*10\n      Dy = middle&#91;1] + dy*10\n\n    if dist_m &lt; 1.5:\n      ax.annotate(\"{}m\".format(round(dist_m, 2)), xy=middle, color='white', xytext=(Dx, Dy), fontsize=10, arrowprops=dict(arrowstyle='-&gt;', lw=1.5, color='yellow'), bbox=dict(facecolor='red', edgecolor='white', boxstyle='round', pad=0.2), zorder=30)\n      ax.plot((perm&#91;0]&#91;0], perm&#91;1]&#91;0]), (perm&#91;0]&#91;1], perm&#91;1]&#91;1]), linewidth=2, color='yellow', zorder=15)\n    elif 1.5 &lt; dist_m &lt; 3.5:\n      ax.annotate(\"{}m\".format(round(dist_m, 2)), xy=middle, color='black', xytext=(Dx, Dy), fontsize=8, arrowprops=dict(arrowstyle='-&gt;', lw=1.5, color='skyblue'), bbox=dict(facecolor='y', edgecolor='white', boxstyle='round', pad=0.2), zorder=30)\n      ax.plot((perm&#91;0]&#91;0], perm&#91;1]&#91;0]), (perm&#91;0]&#91;1], perm&#91;1]&#91;1]), linewidth=2, color='skyblue', zorder=15)\n    else:\n      pass\n\n# Make prediction\nfor file in TEST_IMAGE_PATHS:\n  show_inference(file)<\/code><\/pre>\n\n\n<p>After all this work, we can run our code on images. 
Simply add your images to this folder: ..\/Projects\/Pedestrian_Detection\/models\/research\/test_images<\/p>\n<p><img decoding=\"async\" class=\"alignnone size-full wp-image-7979\" src=\"https:\/\/immune.institute\/wp-content\/uploads\/2020\/07\/1_7coQNS7_14BXFUfCMphJFw.jpeg\" alt=\"\" width=\"517\" height=\"255\" srcset=\"https:\/\/immune.institute\/wp-content\/uploads\/2020\/07\/1_7coQNS7_14BXFUfCMphJFw.jpeg 517w, https:\/\/immune.institute\/wp-content\/uploads\/2020\/07\/1_7coQNS7_14BXFUfCMphJFw-256x126.jpeg 256w, https:\/\/immune.institute\/wp-content\/uploads\/2020\/07\/1_7coQNS7_14BXFUfCMphJFw-512x253.jpeg 512w, https:\/\/immune.institute\/wp-content\/uploads\/2020\/07\/1_7coQNS7_14BXFUfCMphJFw-18x9.jpeg 18w\" sizes=\"(max-width: 517px) 100vw, 517px\" \/><\/p>\n<p><img decoding=\"async\" class=\"alignnone size-full wp-image-7980\" src=\"https:\/\/immune.institute\/wp-content\/uploads\/2020\/07\/1_p7zZ6TwaPH9omVDW0ustww.jpeg\" alt=\"\" width=\"484\" height=\"256\" srcset=\"https:\/\/immune.institute\/wp-content\/uploads\/2020\/07\/1_p7zZ6TwaPH9omVDW0ustww.jpeg 484w, https:\/\/immune.institute\/wp-content\/uploads\/2020\/07\/1_p7zZ6TwaPH9omVDW0ustww-256x135.jpeg 256w, https:\/\/immune.institute\/wp-content\/uploads\/2020\/07\/1_p7zZ6TwaPH9omVDW0ustww-18x10.jpeg 18w\" sizes=\"(max-width: 484px) 100vw, 484px\" \/><\/p>\n<p><strong>Wow!<\/strong>&nbsp;The results look good, and they could look even better if we run this code on videos. 
As\u00ed que, \u00a1vamos all\u00e1!<\/p>\n<h3><strong>Predicci\u00f3n en&nbsp;v\u00eddeos<\/strong><\/h3>\n<p>El c\u00f3digo para ejecutar el modelo en v\u00eddeos es igual que para las im\u00e1genes porque usaremos OpenCV para dividir el video en frames y as\u00ed poder procesar cada frame como im\u00e1genes individuales.<\/p>\n\n\n<pre class=\"wp-block-code\"><code>import cv2\nimport matplotlib\nfrom matplotlib import pyplot as plt\nplt.ioff()matplotlib.use('Agg')\n\nFILE_OUTPUT = '\/gdrive\/My Drive\/Projects\/Pedestrian_Detection\/models\/research\/test_\nimages\/Rail_Station_Predicted2.mp4'\n\n# Playing video from file\ncap = cv2.VideoCapture('\/gdrive\/My Drive\/Projects\/Pedestrian_Detection\/\nmodels\/research\/test_images\/Rail_Station_Converted.mp4')\n\n# Default resolutions of the frame are obtained.The default resolutions are system dependent.\n# We convert the resolutions from float to integer.\nwidth = int(cap.get(3))\nheight = int(cap.get(4))\n\ndim = (width, height)\nprint(dim)\n\ntope = 10\ni = 0\nnew = True\nwith detection_graph.as_default():\n    with tf.Session(graph=detection_graph) as sess:\n        # Definite input and output Tensors for detection_graph\n        image_tensor = detection_graph.get_tensor_by_name('image_tensor:0')\n        # Each box represents a part of the image where a particular object was detected.\n        detection_boxes = detection_graph.get_tensor_by_name('detection_boxes:0')\n\n        # Each score represent how level of confidence for each of the objects.\n        # Score is shown on the result image, together with the class label.\n        detection_scores =\n detection_graph.get_tensor_by_name('detection_scores:0')\n        detection_classes =\n detection_graph.get_tensor_by_name('detection_classes:0')\n        num_detections =\n detection_graph.get_tensor_by_name('num_detections:0')\n\n         i = 0\n        while(cap.isOpened()):\n            # Capture frame-by-frame\n            ret, frame = cap.read()\n       
            if ret:\n                # Correct color: OpenCV reads BGR, the model expects RGB\n                frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)\n\n                # Expand dimensions since the model expects images to have shape: &#91;1, None, None, 3]\n                image_np_expanded = np.expand_dims(frame, axis=0)\n\n                # Actual detection.\n                (boxes, scores, classes, num) = sess.run(\n                    &#91;detection_boxes, detection_scores, detection_classes, num_detections],\n                    feed_dict={image_tensor: image_np_expanded})\n\n                # Filter boxes: keep class 1 ('person') above the confidence cutoff\n                confidence_cutoff = 0.5\n                boxes, scores, classes = filter_boxes(confidence_cutoff, np.squeeze(boxes), np.squeeze(scores), np.squeeze(classes), &#91;1])\n\n                # Calculate pixel coordinates and centroids for the boxes\n                centroids = &#91;]\n                coordinates = &#91;]\n                for box in boxes:\n                    coord = calculate_coord(box, width, height)\n                    centr = calculate_centr(coord)\n                    centroids.append(centr)\n                    coordinates.append(coord)\n\n                # Pixels per meter\n                average_px_meter = (width - 150) \/ 7\n\n                # All pairs of centroids\n                permutations = calculate_perm(centroids)\n\n                # Display boxes and centroids\n                fig, ax = plt.subplots(figsize=(20, 12), dpi=90, frameon=False)\n                ax = fig.add_axes(&#91;0, 0, 1, 1])\n                ax.axis('off')\n                ax.spines&#91;'top'].set_visible(False)\n                ax.spines&#91;'right'].set_visible(False)\n                ax.spines&#91;'bottom'].set_visible(False)\n                ax.spines&#91;'left'].set_visible(False)\n                ax.get_xaxis().set_ticks(&#91;])\n                ax.get_yaxis().set_ticks(&#91;])\n                for coord, centr in zip(coordinates, centroids):\n                    ax.add_patch(patches.Rectangle((coord&#91;0], coord&#91;1]), coord&#91;2], coord&#91;3],
                        linewidth=2, edgecolor='y', facecolor='none', zorder=10))\n                    ax.add_patch(patches.Circle((centr&#91;0], centr&#91;1]), 3, color='yellow', zorder=20))\n\n                # Display lines between centroids\n                for perm in permutations:\n                    dist = calculate_centr_distances(perm&#91;0], perm&#91;1])\n                    dist_m = dist \/ average_px_meter\n\n                    x1 = perm&#91;0]&#91;0]\n                    y1 = perm&#91;0]&#91;1]\n                    x2 = perm&#91;1]&#91;0]\n                    y2 = perm&#91;1]&#91;1]\n\n                    # Calculate middle point\n                    middle = midpoint(perm&#91;0], perm&#91;1])\n\n                    # Calculate the slope and a small perpendicular offset,\n                    # used to place the distance label beside the line\n                    slope = calculate_slope(x1, y1, x2, y2)\n                    dy = math.sqrt(3**2 \/ (slope**2 + 1))\n                    dx = -slope * dy\n\n                    # Put the label on a random side of the line\n                    if randrange(10) % 2 == 0:\n                        Dx = middle&#91;0] - dx * 10\n                        Dy = middle&#91;1] - dy * 10\n                    else:\n                        Dx = middle&#91;0] + dx * 10\n                        Dy = middle&#91;1] + dy * 10\n\n                    if dist_m &lt; 1.5:\n                        # Less than 1.5 m apart: red label, yellow line\n                        ax.annotate(\"{}m\".format(round(dist_m, 2)), xy=middle, color='white', xytext=(Dx, Dy), fontsize=10, arrowprops=dict(arrowstyle='-&gt;', lw=1.5, color='yellow'), bbox=dict(facecolor='red', edgecolor='white', boxstyle='round', pad=0.2), zorder=35)\n                        ax.plot((perm&#91;0]&#91;0], perm&#91;1]&#91;0]), (perm&#91;0]&#91;1], perm&#91;1]&#91;1]), linewidth=2, color='yellow', zorder=15)\n                    elif 1.5 &lt; dist_m &lt; 3.5:\n                        # Between 1.5 m and 3.5 m: yellow label, light blue line\n                        ax.annotate(\"{}m\".format(round(dist_m, 2)), xy=middle, color='black', xytext=(Dx, Dy), fontsize=8, arrowprops=dict(arrowstyle='-&gt;', lw=1.5, color='skyblue'), bbox=dict(facecolor='y', edgecolor='white', boxstyle='round', pad=0.2), zorder=35)\n                        ax.plot((perm&#91;0]&#91;0], perm&#91;1]&#91;0]),
                                (perm&#91;0]&#91;1], perm&#91;1]&#91;1]), linewidth=2, color='skyblue', zorder=15)\n                    else:\n                        # Further than 3.5 m apart: nothing to draw\n                        pass\n\n                ax.imshow(frame, interpolation='nearest')\n\n                # This allows you to save each frame in a folder\n                # fig.savefig(\"\/gdrive\/My Drive\/Projects\/Pedestrian_Detection\/models\/research\/test_images\/TEST_{}.png\".format(i))\n                # i += 1\n\n                # Convert the figure to a numpy array, then to BGR for OpenCV\n                fig.canvas.draw()\n                img = np.frombuffer(fig.canvas.tostring_rgb(), dtype=np.uint8)\n                img = img.reshape(fig.canvas.get_width_height()&#91;::-1] + (3,))\n                img = cv2.cvtColor(img, cv2.COLOR_RGB2BGR)\n                plt.close(fig)  # close the figure, otherwise memory grows on every frame\n\n                if new:\n                    print(\"Define out\")\n                    out = cv2.VideoWriter(FILE_OUTPUT, cv2.VideoWriter_fourcc(*'MP4V'), 20.0, (img.shape&#91;1], img.shape&#91;0]))\n                    new = False\n\n                out.write(img)\n            else:\n                break\n\n    # When everything is done, release the video capture and video writer objects\n    cap.release()\n    out.release()\n\n    # Close all the frames\n    cv2.destroyAllWindows()\n<\/code><\/pre>\n\n\n<p>As we did before, we simply have to copy our video into ..\/Projects\/Pedestrian_Detection\/models\/research\/test_images and update the path in cap = cv2.VideoCapture(\u2026). We can also define a name for FILE_OUTPUT.<\/p>\n\n\n\n","protected":false},"excerpt":{"rendered":"<p>Using the TensorFlow Object Detection API to detect pedestrians and measure the social distance between them. Today, unfortunately, everyone is familiar with the term \u00absocial distancing\u00bb. It is something we will have to live with for a while, until everything goes back to normal. 
At&nbsp;Immune Technology Institute&nbsp;we have developed an application using&nbsp;TensorFlow Object Detection [&hellip;]<\/p>\n","protected":false},"author":3,"featured_media":7981,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"_crdt_document":"","footnotes":""},"categories":[1],"tags":[],"class_list":["post-8923","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-blog"],"acf":[],"_links":{"self":[{"href":"https:\/\/immune.institute\/en\/wp-json\/wp\/v2\/posts\/8923","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/immune.institute\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/immune.institute\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/immune.institute\/en\/wp-json\/wp\/v2\/users\/3"}],"replies":[{"embeddable":true,"href":"https:\/\/immune.institute\/en\/wp-json\/wp\/v2\/comments?post=8923"}],"version-history":[{"count":0,"href":"https:\/\/immune.institute\/en\/wp-json\/wp\/v2\/posts\/8923\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/immune.institute\/en\/wp-json\/wp\/v2\/media\/7981"}],"wp:attachment":[{"href":"https:\/\/immune.institute\/en\/wp-json\/wp\/v2\/media?parent=8923"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/immune.institute\/en\/wp-json\/wp\/v2\/categories?post=8923"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/immune.institute\/en\/wp-json\/wp\/v2\/tags?post=8923"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}