TOP MEDIA

2023 TOP MEDIA AUDITION


Organizer: TOP MEDIA
Recruitment format: Regular audition
Recruitment period: Every Tue/Thu/Sun
Eligibility: Anyone born 2005-2013, regardless of gender; no nationality restrictions
Recruitment fields: Etc.
One-click application: Not available



2023 TOP MEDIA AUDITION
Face To Face
Every Tue/Thu/Sun from 5 p.m. to 8 p.m., come have a FaceTalk with us!

Qualification
Anyone born between 2005 and 2013, regardless of gender; no nationality restrictions

How to apply
① Add the KakaoTalk ID topaudition as a friend
② When you're ready, start a FaceTalk call
③ Audition with our casting manager over the call

Audition procedure
STEP 1 FaceTalk with the Artist Development team (closes Feb. 28, 2023)
STEP 2 Private audition at TOP MEDIA
*Only applicants who pass will be notified individually
STEP 3 Final Audition

Benefits
Opportunity to sign on as a TOP MEDIA trainee

*Only applicants who pass the audition will be contacted individually
*Communication may occasionally be difficult due to Internet connection problems
*Overseas applicants must be able to train in Korea after signing a TOP MEDIA trainee contract
*The audition schedule may change depending on circumstances, and there may be no successful applicants

※Copyright in any content created during the audition belongs to TOP MEDIA

 






  import numpy as np
  import matplotlib.pyplot as plt

  # Input data: the four XOR input pairs
  X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
  # Target labels: [XOR(x1, x2), NOT XOR(x1, x2)] for the two output units
  y = np.array([[0, 1], [1, 0], [1, 0], [0, 1]])

  # Sigmoid activation function
  def sigmoid(x):
      return 1 / (1 + np.exp(-x))

  # Derivative of the sigmoid, written in terms of the sigmoid's output
  def sigmoid_derivative(x):
      return x * (1 - x)

  # Neural network class
  class NeuralNetwork:
      def __init__(self, input_size, hidden_size, output_size):
          # Weight initialization (fixed starting values from the exercise)
          self.U = np.array([[0.2, 0.2, 0.3], [0.3, 0.1, 0.2]])
          self.W = np.array([[0.3, 0.2], [0.1, 0.4], [0.2, 0.3]])
          # Bias initialization
          self.b1 = np.zeros((1, hidden_size))
          self.b2 = np.zeros((1, output_size))
          # Lists for recording the epoch number and mean error
          self.epochs = []
          self.errors = []

      def forward(self, X):
          # Hidden-layer activations
          self.hidden_layer = sigmoid(np.dot(X, self.U) + self.b1)
          # Output-layer activations
          self.output_layer = sigmoid(np.dot(self.hidden_layer, self.W) + self.b2)

      def backward(self, X, y, learning_rate):
          # Output-layer error and delta
          output_error = y - self.output_layer
          output_delta = output_error * sigmoid_derivative(self.output_layer)

          # Hidden-layer error and delta
          hidden_error = np.dot(output_delta, self.W.T)
          hidden_delta = hidden_error * sigmoid_derivative(self.hidden_layer)

          # Update the weights and biases
          self.U += learning_rate * np.dot(X.T, hidden_delta)
          self.W += learning_rate * np.dot(self.hidden_layer.T, output_delta)
          self.b1 += learning_rate * np.sum(hidden_delta, axis=0)
          self.b2 += learning_rate * np.sum(output_delta, axis=0)

      def train(self, X, y, epochs, learning_rate):
          for epoch in range(epochs):
              self.forward(X)
              self.backward(X, y, learning_rate)
              self.epochs.append(epoch + 1)
              self.errors.append(np.mean(np.abs(y - self.output_layer)))

      def predict(self, X):
          self.forward(X)
          return self.output_layer

  # Create the network
  input_size = 2
  hidden_size = 3
  output_size = 2
  learning_rate = 1.0
  epochs = 1000

  model = NeuralNetwork(input_size, hidden_size, output_size)
  # Train the model
  model.train(X, y, epochs, learning_rate)

  # Print the predictions
  predictions = model.predict(X)
  for x, y_pred in zip(X, predictions):
      print(f"x1: {x[0]}, x2: {x[1]}, y1: {y_pred[0]:.4f}, y2: {1 - y_pred[0]:.4f}")

  # Plot the mean absolute error per epoch
  plt.plot(model.epochs, model.errors)
  plt.xlabel('epoch')
  plt.ylabel('Error')
  plt.show()
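
  # A quick sanity check (a sketch, assuming training converged from the fixed
  # initial weights above): the rounded first output should match the XOR table
  for x, p in zip(X, model.predict(X)):
      print(x, "->", round(float(p[0])), "(expected:", int(x[0]) ^ int(x[1]), ")")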




  import pandas as pd

  # Sigmoid activation and its derivative (in terms of the sigmoid's output)
  def sigmoid(x):
      return 1 / (1 + np.exp(-x))

  def sigmoid_derivative(x):
      return x * (1 - x)

  # Mean squared error loss
  def mse_loss(y_true, y_pred):
      return ((y_true - y_pred) ** 2).mean()

  # Forward pass: keep every layer's activations for use in backpropagation
  def forward_propagation(X, weights, biases):
      layers = [X]
      for weight, bias in zip(weights, biases):
          layers.append(sigmoid(np.dot(layers[-1], weight) + bias))
      return layers

  # Backward pass: propagate the error from the output layer toward the input,
  # updating each layer's weights and biases as it goes
  def back_propagation(y_true, layers, weights, biases, learning_rate=1.0):
      error = y_true - layers[-1]
      for i in reversed(range(len(weights))):
          delta = error * sigmoid_derivative(layers[i+1])
          error = np.dot(delta, weights[i].T)  # error for the layer below, using the pre-update weights
          weights[i] += learning_rate * np.dot(layers[i].T, delta)
          biases[i] += learning_rate * np.sum(delta, axis=0)
      return weights, biases

  np.random.seed(0)

  # Random weights and zero biases for a 1-6-4-1 network
  weights = [
      np.random.uniform(0, 1, (1, 6)),
      np.random.uniform(0, 1, (6, 4)),
      np.random.uniform(0, 1, (4, 1))
  ]
  biases = [
      np.zeros((1, 6)),
      np.zeros((1, 4)),
      np.zeros((1, 1))
  ]
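
  # NOTE: 'nonlinear.csv' (read below) is not included with this text. A minimal
  # sketch that writes a comparable stand-in file -- the sine shape, noise level,
  # and column names 'x'/'y' are assumptions, not the original dataset:
  x_synth = np.random.uniform(0, 1, 200)
  y_synth = 0.4 * np.sin(2 * np.pi * x_synth) + 0.5 + np.random.normal(0, 0.03, 200)
  pd.DataFrame({'x': x_synth, 'y': y_synth}).to_csv('nonlinear.csv', index=False)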

  # Load the data: single input column 'x', single target column 'y'
  df = pd.read_csv('nonlinear.csv')
  X = df['x'].to_numpy().reshape(-1, 1)
  y_true = df['y'].to_numpy().reshape(-1, 1)

  # Online (per-sample) training: 100 passes over the data
  for _ in range(100):
      for i in range(X.shape[0]):
          layers = forward_propagation(X[i:i+1], weights, biases)
          weights, biases = back_propagation(y_true[i:i+1], layers, weights, biases)

  # Evaluate the fitted network over [0, 1] and plot it against the data
  domain = np.linspace(0, 1, 100).reshape(-1, 1)
  y_hat = forward_propagation(domain, weights, biases)[-1]

  plt.scatter(df['x'], df['y'])
  plt.scatter(domain, y_hat, color='r')
  plt.show()


  from tensorflow import keras

  # The same regression with a small Keras MLP (tanh activations throughout)
  model = keras.models.Sequential([
      keras.layers.Dense(32, activation='tanh', input_shape=(1,)),
      keras.layers.Dense(16, activation='tanh'),
      keras.layers.Dense(8, activation='tanh'),
      keras.layers.Dense(4, activation='tanh'),
      keras.layers.Dense(1, activation='tanh'),
  ])

  optimizer = keras.optimizers.SGD(learning_rate=0.1)
  model.compile(optimizer=optimizer, loss='mse')

  df = pd.read_csv('nonlinear.csv')
  X = df['x'].to_numpy()
  X = X.reshape(-1, 1)
  y_label = df['y'].to_numpy()

  model.fit(X, y_label, epochs=100)

  # Evaluate the fit over [0, 1] and plot it against the data
  domain = np.linspace(0, 1, 100).reshape(-1, 1)
  y_hat = model.predict(domain)
  plt.scatter(df['x'], df['y'])
  plt.scatter(domain, y_hat, color='r')
  plt.show()
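
  # Optional check (a sketch): report the final training MSE; with loss='mse'
  # and no extra metrics, model.evaluate returns the scalar loss
  final_mse = model.evaluate(X, y_label, verbose=0)
  print(f"final training MSE: {final_mse:.5f}")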


  # Build a deep neural network that classifies the three Iris species
  # (Setosa, Versicolor, Virginica) from the Iris data, under the conditions below
  import numpy as np
  import pandas as pd
  import tensorflow as tf
  from sklearn.model_selection import train_test_split
  from sklearn.preprocessing import StandardScaler
  import matplotlib.pyplot as plt

  # Load the Iris data
  url = "https://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data"
  df = pd.read_csv(url, header=None)

  # Separate the input features and the output labels
  X = df.iloc[:, :-1].values
  y = df.iloc[:, -1].values

  # Map the string labels to integers
  label_mapping = {
      'Iris-setosa': 0,
      'Iris-versicolor': 1,
      'Iris-virginica': 2
  }
  y = np.array([label_mapping[label] for label in y])

  # Split into training and test sets (stratify preserves the class ratios)
  X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, stratify=y, random_state=42)

  # Feature scaling
  scaler = StandardScaler()
  X_train = scaler.fit_transform(X_train)
  X_test = scaler.transform(X_test)

  # Build the network
  model = tf.keras.models.Sequential()
  model.add(tf.keras.layers.Dense(64, activation='relu', input_shape=(4,)))
  model.add(tf.keras.layers.Dropout(0.2))
  model.add(tf.keras.layers.Dense(32, activation='relu'))
  model.add(tf.keras.layers.Dense(10, activation='relu'))
  model.add(tf.keras.layers.Dense(3, activation='softmax'))

  # Compile the model
  model.compile(optimizer='adam',
                loss='sparse_categorical_crossentropy',
                metrics=['accuracy'])

  # Train the model
  history = model.fit(X_train, y_train, batch_size=5, epochs=30, verbose=0)

  # Evaluate accuracy on the test data
  _, test_accuracy = model.evaluate(X_test, y_test)

  # Plot the training loss and accuracy
  plt.plot(history.history['loss'], label='Loss')
  plt.plot(history.history['accuracy'], label='Accuracy')
  plt.title('Model Loss and Accuracy')
  plt.xlabel('Epoch')
  plt.legend()
  plt.show()

  print("Test Accuracy:", test_accuracy)


  # Build a neural network on the Fashion MNIST data that satisfies the
  # conditions below, then display the classified images
  import tensorflow as tf
  from tensorflow import keras
  import matplotlib.pyplot as plt
  import numpy as np

  # Load the Fashion MNIST data
  fashion_mnist = keras.datasets.fashion_mnist
  (train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()

  # Preprocessing: scale the pixel values to the 0-1 range
  train_images = train_images / 255.0
  test_images = test_images / 255.0

  # Build the model
  model = keras.Sequential([
      keras.layers.Flatten(input_shape=(28, 28)),
      keras.layers.Dense(128, activation='relu'),
      keras.layers.Dropout(0.2),
      keras.layers.Dense(32, activation='relu'),
      keras.layers.Dense(10, activation='softmax')
  ])

  # Compile the model
  model.compile(optimizer='adam',
                loss='sparse_categorical_crossentropy',
                metrics=['accuracy'])

  # Train the model
  history = model.fit(train_images, train_labels, batch_size=64, epochs=10, validation_split=0.25)

  # Evaluate accuracy on the test data
  test_loss, test_accuracy = model.evaluate(test_images, test_labels)

  # Plot the curves
  plt.figure(figsize=(12, 5))

  # Loss subplot
  plt.subplot(1, 2, 1)
  plt.plot(history.history['loss'], label='Training Loss')
  plt.plot(history.history['val_loss'], label='Validation Loss', linestyle='dashed')
  plt.title('Model Loss')
  plt.xlabel('Epoch')
  plt.ylabel('Loss')
  plt.legend()

  # Accuracy subplot
  plt.subplot(1, 2, 2)
  plt.plot(history.history['accuracy'], label='Training Accuracy')
  plt.plot(history.history['val_accuracy'], label='Validation Accuracy', linestyle='dashed')
  plt.title('Model Accuracy')
  plt.xlabel('Epoch')
  plt.ylabel('Accuracy')
  plt.legend()

  plt.tight_layout()
  plt.show()


  # Take the first 25 test images
  test_images_subset = test_images[:25]
  test_labels_subset = test_labels[:25]

  # Classify the images and display them
  predictions = model.predict(test_images_subset)
  predicted_labels = np.argmax(predictions, axis=1)

  class_labels = {
      0: "T-shirt/top",
      1: "Trouser",
      2: "Pullover",
      3: "Dress",
      4: "Coat",
      5: "Sandal",
      6: "Shirt",
      7: "Sneaker",
      8: "Bag",
      9: "Ankle boot"
  }

  plt.figure(figsize=(10, 10))
  for i in range(25):
      plt.subplot(5, 5, i + 1)
      plt.imshow(test_images_subset[i])
      plt.title(class_labels[predicted_labels[i]])
      plt.axis('off')
  plt.tight_layout()
  plt.show()

  print("Test Accuracy:", test_accuracy)




  # Train a convolutional neural network on the MNIST data provided by Keras:
  # fit on the 60,000 training samples and verify accuracy on the 10,000 test samples
  import tensorflow as tf
  from tensorflow import keras
  from tensorflow.keras import layers

  # Fix the random seed for reproducibility
  tf.random.set_seed(0)

  # MNIST data (reshaped to add the channel dimension that Conv2D expects)
  (x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
  x_train = x_train.reshape(-1, 28, 28, 1).astype("float32") / 255.0
  x_test = x_test.reshape(-1, 28, 28, 1).astype("float32") / 255.0

  # Assignment 11-1-1
  # CNN: a single small convolution plus pooling
  model = keras.Sequential()
  model.add(layers.Conv2D(4, (2, 2), strides=(2, 2), padding="same", activation="relu", input_shape=(28, 28, 1)))
  model.add(layers.MaxPooling2D((2, 2)))

  # Flatten
  model.add(layers.Flatten())
  model.add(layers.Dense(32, activation="relu"))
  model.add(layers.Dense(10, activation="softmax"))

  model.compile(
      optimizer=keras.optimizers.Adam(learning_rate=0.001),
      loss=keras.losses.SparseCategoricalCrossentropy(),
      metrics=["accuracy"],
  )

  # Train the model
  model.fit(x_train, y_train, epochs=1, batch_size=32)
  test_loss, test_acc = model.evaluate(x_test, y_test)

  print("테스트 데이터의 손실값 : {:.2f}".format(test_loss))
  print("테스트 데이터의 정확도 : {:.2f}".format(test_acc))

  # Assignment 11-1-2: get the test accuracy to at least 98%
  # CNN: two convolution/pooling stages and a larger dense layer
  model = keras.Sequential()
  model.add(layers.Conv2D(8, (3, 3), strides=(1, 1), padding="same", activation="relu", input_shape=(28, 28, 1)))
  model.add(layers.MaxPooling2D((2, 2)))
  model.add(layers.Conv2D(16, (3, 3), strides=(1, 1), padding="same", activation="relu"))
  model.add(layers.MaxPooling2D((2, 2)))

  # Flatten
  model.add(layers.Flatten())
  model.add(layers.Dense(128, activation="relu"))
  model.add(layers.Dense(10, activation="softmax"))

  model.compile(
      optimizer=keras.optimizers.Adam(learning_rate=0.001),
      loss=keras.losses.SparseCategoricalCrossentropy(),
      metrics=["accuracy"],
  )

  # Train the model
  model.fit(x_train, y_train, epochs=1, batch_size=32)
  test_loss, test_acc = model.evaluate(x_test, y_test)

  print("테스트 데이터의 손실값 : {:.2f}, 테스트 데이터의 정확도 : {:.2f}".format(test_loss, test_acc))

  # Load the Fashion MNIST data from Keras, build the network below, and
  # compute its accuracy on the test data
  import tensorflow as tf
  from tensorflow import keras
  from tensorflow.keras import layers
  import matplotlib.pyplot as plt

  # Load Fashion MNIST: add the channel dimension and one-hot encode the labels
  (x_train, y_train), (x_test, y_test) = keras.datasets.fashion_mnist.load_data()
  x_train = x_train.reshape(-1, 28, 28, 1).astype("float32") / 255.0
  x_test = x_test.reshape(-1, 28, 28, 1).astype("float32") / 255.0
  y_train = keras.utils.to_categorical(y_train, num_classes=10)
  y_test = keras.utils.to_categorical(y_test, num_classes=10)

  # Build the model
  model = keras.Sequential()
  model.add(layers.Conv2D(32, (3, 3), activation="relu", input_shape=(28, 28, 1)))
  model.add(layers.MaxPooling2D((2, 2)))
  model.add(layers.Conv2D(64, (3, 3), activation="relu"))
  model.add(layers.MaxPooling2D((2, 2)))
  model.add(layers.Conv2D(32, (3, 3), activation="relu"))
  model.add(layers.Flatten())

  # DENSE
  model.add(layers.Dense(1568, activation="relu"))
  model.add(layers.Dense(128, activation="relu"))
  model.add(layers.Dense(32, activation="relu"))

  # Output layer
  model.add(layers.Dense(10, activation="softmax"))

  # Assignment 11-2-1
  # Compile, train, and evaluate the model
  model.compile(
      optimizer=keras.optimizers.Adam(),
      loss=keras.losses.CategoricalCrossentropy(),
      metrics=["accuracy"],
  )
  model.fit(x_train, y_train, epochs=1)
  _, test_acc = model.evaluate(x_test, y_test)
  print("테스트 정확도: {}".format(test_acc))

  # Assignment 11-2-2
  # Classify test images and display them
  x_test_subset = x_test[:25]
  y_test_subset = y_test[:25]
  predictions = model.predict(x_test_subset)
  predicted_labels = tf.argmax(predictions, axis=1).numpy()
  class_labels = [
      "T-shirt/top",
      "Trouser",
      "Pullover",
      "Dress",
      "Coat",
      "Sandal",
      "Shirt",
      "Sneaker",
      "Bag",
      "Ankle boot"
  ]

  plt.figure(figsize=(10, 10))
  for i in range(25):
      plt.subplot(5, 5, i + 1)
      plt.imshow(x_test_subset[i].reshape(28, 28))
      plt.title(class_labels[predicted_labels[i]])
      plt.axis('off')
  plt.tight_layout()
  plt.show()
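
  # A quick check (a sketch): recover the true classes from the one-hot labels
  # and count how many of the 25 displayed predictions are correct
  true_labels = tf.argmax(y_test_subset, axis=1).numpy()
  print("correct in grid:", int((predicted_labels == true_labels).sum()), "/ 25")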



  # Generate 100 sine-wave sequences of 51 points each, build models with an
  # RNN, an LSTM, and a GRU that predict the last (51st) value of each
  # sequence, and plot the results
  import numpy as np
  import matplotlib.pyplot as plt
  from tensorflow.keras.models import Sequential
  from tensorflow.keras.layers import SimpleRNN, LSTM, GRU, Dense
  from sklearn.metrics import mean_squared_error

  # Generate the data: 51 points per curve with random integer phase offsets
  x = np.arange(0, 10.2, 0.2)
  num_samples = 100
  num_sequences = 51
  start_points = np.random.choice(100, num_samples)
  dataset = []
  for start in start_points:
      y = np.sin(x + start)
      dataset.append(y)
  dataset = np.array(dataset)
  X = dataset[:, :50]
  y = dataset[:, -1]

  # Build and compare the three recurrent layer types
  model_list = [SimpleRNN, LSTM, GRU]


  for i, model_name in enumerate(model_list):

      X_train, X_test, y_train, y_test = X[:80], X[80:], y[:80], y[80:]
      model = Sequential()
      model.add(model_name(10, input_shape=(50, 1)))
      model.add(Dense(1))
      model.compile(optimizer='adam', loss='mean_squared_error')

      # Train the model
      history = model.fit(X_train[:, :, np.newaxis], y_train, epochs=50, verbose=0)
      y_train_pred = model.predict(X_train[:, :, np.newaxis])
      y_test_pred = model.predict(X_test[:, :, np.newaxis])
      train_mse = mean_squared_error(y_train, y_train_pred)
      test_mse = mean_squared_error(y_test, y_test_pred)

      # Plots: loss curve, train/test fits, and predicted-vs-actual scatter
      fig, axs = plt.subplots(2, 2, figsize=(10, 8))
      fig.suptitle(model_name.__name__)
      axs[0, 0].plot(history.history['loss'])
      axs[0, 0].set_xlabel('epoch')
      axs[0, 0].set_ylabel('loss')

      axs[1, 0].plot(y_train, label='Train Actual', color='black', linewidth=2.0)
      axs[1, 0].plot(y_train_pred, label='Train Predicted', color='red')
      axs[1, 0].set_title('Train')
      axs[1, 0].legend(loc='upper right')

      axs[0, 1].scatter(y_train, y_train_pred, label='Train', marker='o')
      axs[0, 1].scatter(y_test, y_test_pred, label='Test', color='orange', marker='o')
      axs[0, 1].set_xlabel('y')
      axs[0, 1].set_ylabel('y_hat')
      axs[0, 1].legend(loc='upper left')

      axs[1, 1].plot(y_test, label='Test Actual', color='black', linewidth=2.0)
      axs[1, 1].plot(y_test_pred, label='Test Predicted', color='red')
      axs[1, 1].set_title('Test')
      axs[1, 1].legend(loc='upper right')

      axs[1, 0].text(0.05, 0.9, f'MSE: {train_mse:.5f}', transform=axs[1, 0].transAxes)
      axs[1, 1].text(0.05, 0.9, f'MSE: {test_mse:.5f}', transform=axs[1, 1].transAxes)

      plt.tight_layout()
      plt.show()
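
  # A short extrapolation sketch using the last model trained above (the GRU):
  # feed each one-step prediction back into the input window a few times
  seq = X_test[0].copy()
  for _ in range(5):
      nxt = model.predict(seq[np.newaxis, :, np.newaxis], verbose=0)[0, 0]
      seq = np.append(seq[1:], nxt)
  print("5 extrapolated values:", seq[-5:])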