Jan 13 20:06:53.247531 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083] Jan 13 20:06:53.250655 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Mon Jan 13 18:57:23 -00 2025 Jan 13 20:06:53.250692 kernel: KASLR disabled due to lack of seed Jan 13 20:06:53.250709 kernel: efi: EFI v2.7 by EDK II Jan 13 20:06:53.250725 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7b003a98 MEMRESERVE=0x78503d98 Jan 13 20:06:53.250740 kernel: secureboot: Secure boot disabled Jan 13 20:06:53.250758 kernel: ACPI: Early table checksum verification disabled Jan 13 20:06:53.250773 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON) Jan 13 20:06:53.250789 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013) Jan 13 20:06:53.250804 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001) Jan 13 20:06:53.250824 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527) Jan 13 20:06:53.250840 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001) Jan 13 20:06:53.250855 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001) Jan 13 20:06:53.250871 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001) Jan 13 20:06:53.250890 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001) Jan 13 20:06:53.250910 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Jan 13 20:06:53.250927 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001) Jan 13 20:06:53.250944 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001) Jan 13 20:06:53.250960 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200 Jan 13 20:06:53.250976 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200') Jan 13 20:06:53.250992 kernel: printk: bootconsole [uart0] enabled Jan 13 20:06:53.251009 kernel: NUMA: Failed to initialise from firmware Jan 13 20:06:53.251026 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff] Jan 13 20:06:53.251042 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff] Jan 13 20:06:53.251058 kernel: Zone ranges: Jan 13 20:06:53.251074 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] Jan 13 20:06:53.251095 kernel: DMA32 empty Jan 13 20:06:53.251111 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff] Jan 13 20:06:53.251127 kernel: Movable zone start for each node Jan 13 20:06:53.251143 kernel: Early memory node ranges Jan 13 20:06:53.251159 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff] Jan 13 20:06:53.251175 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff] Jan 13 20:06:53.251190 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff] Jan 13 20:06:53.251207 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff] Jan 13 20:06:53.251222 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff] Jan 13 20:06:53.251239 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff] Jan 13 20:06:53.251255 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff] Jan 13 20:06:53.251271 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff] Jan 13 20:06:53.251290 kernel: Initmem setup node 0 [mem 
0x0000000040000000-0x00000004b5ffffff] Jan 13 20:06:53.251308 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges Jan 13 20:06:53.251330 kernel: psci: probing for conduit method from ACPI. Jan 13 20:06:53.251348 kernel: psci: PSCIv1.0 detected in firmware. Jan 13 20:06:53.251365 kernel: psci: Using standard PSCI v0.2 function IDs Jan 13 20:06:53.251386 kernel: psci: Trusted OS migration not required Jan 13 20:06:53.251403 kernel: psci: SMC Calling Convention v1.1 Jan 13 20:06:53.251420 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Jan 13 20:06:53.251437 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Jan 13 20:06:53.251455 kernel: pcpu-alloc: [0] 0 [0] 1 Jan 13 20:06:53.251472 kernel: Detected PIPT I-cache on CPU0 Jan 13 20:06:53.251490 kernel: CPU features: detected: GIC system register CPU interface Jan 13 20:06:53.251507 kernel: CPU features: detected: Spectre-v2 Jan 13 20:06:53.251524 kernel: CPU features: detected: Spectre-v3a Jan 13 20:06:53.251542 kernel: CPU features: detected: Spectre-BHB Jan 13 20:06:53.251578 kernel: CPU features: detected: ARM erratum 1742098 Jan 13 20:06:53.251601 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923 Jan 13 20:06:53.251625 kernel: alternatives: applying boot alternatives Jan 13 20:06:53.251644 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=6ba5f90349644346e4f5fa9305ab5a05339928ee9f4f137665e797727c1fc436 Jan 13 20:06:53.251663 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 13 20:06:53.251680 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 13 20:06:53.251697 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 13 20:06:53.251714 kernel: Fallback order for Node 0: 0 Jan 13 20:06:53.251731 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872 Jan 13 20:06:53.251748 kernel: Policy zone: Normal Jan 13 20:06:53.251765 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 13 20:06:53.251782 kernel: software IO TLB: area num 2. Jan 13 20:06:53.251804 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB) Jan 13 20:06:53.251822 kernel: Memory: 3819960K/4030464K available (10240K kernel code, 2184K rwdata, 8096K rodata, 39680K init, 897K bss, 210504K reserved, 0K cma-reserved) Jan 13 20:06:53.251839 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 13 20:06:53.251872 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 13 20:06:53.251895 kernel: rcu: RCU event tracing is enabled. Jan 13 20:06:53.251914 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 13 20:06:53.251932 kernel: Trampoline variant of Tasks RCU enabled. Jan 13 20:06:53.251950 kernel: Tracing variant of Tasks RCU enabled. Jan 13 20:06:53.251967 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Jan 13 20:06:53.251984 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 13 20:06:53.252001 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jan 13 20:06:53.252024 kernel: GICv3: 96 SPIs implemented Jan 13 20:06:53.252041 kernel: GICv3: 0 Extended SPIs implemented Jan 13 20:06:53.252058 kernel: Root IRQ handler: gic_handle_irq Jan 13 20:06:53.252075 kernel: GICv3: GICv3 features: 16 PPIs Jan 13 20:06:53.252092 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000 Jan 13 20:06:53.252109 kernel: ITS [mem 0x10080000-0x1009ffff] Jan 13 20:06:53.252126 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1) Jan 13 20:06:53.252143 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1) Jan 13 20:06:53.252160 kernel: GICv3: using LPI property table @0x00000004000d0000 Jan 13 20:06:53.252177 kernel: ITS: Using hypervisor restricted LPI range [128] Jan 13 20:06:53.252194 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000 Jan 13 20:06:53.252212 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 13 20:06:53.252234 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt). Jan 13 20:06:53.252251 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns Jan 13 20:06:53.252270 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns Jan 13 20:06:53.252290 kernel: Console: colour dummy device 80x25 Jan 13 20:06:53.252309 kernel: printk: console [tty1] enabled Jan 13 20:06:53.252328 kernel: ACPI: Core revision 20230628 Jan 13 20:06:53.252348 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333) Jan 13 20:06:53.252366 kernel: pid_max: default: 32768 minimum: 301 Jan 13 20:06:53.252384 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 13 20:06:53.252406 kernel: landlock: Up and running. Jan 13 20:06:53.252423 kernel: SELinux: Initializing. Jan 13 20:06:53.252441 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 13 20:06:53.252459 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 13 20:06:53.252477 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 13 20:06:53.252495 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 13 20:06:53.252513 kernel: rcu: Hierarchical SRCU implementation. Jan 13 20:06:53.252531 kernel: rcu: Max phase no-delay instances is 400. Jan 13 20:06:53.254072 kernel: Platform MSI: ITS@0x10080000 domain created Jan 13 20:06:53.254116 kernel: PCI/MSI: ITS@0x10080000 domain created Jan 13 20:06:53.254135 kernel: Remapping and enabling EFI services. Jan 13 20:06:53.254153 kernel: smp: Bringing up secondary CPUs ... Jan 13 20:06:53.254171 kernel: Detected PIPT I-cache on CPU1 Jan 13 20:06:53.254190 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000 Jan 13 20:06:53.254207 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000 Jan 13 20:06:53.254226 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083] Jan 13 20:06:53.254243 kernel: smp: Brought up 1 node, 2 CPUs Jan 13 20:06:53.254261 kernel: SMP: Total of 2 processors activated. 
Jan 13 20:06:53.254283 kernel: CPU features: detected: 32-bit EL0 Support Jan 13 20:06:53.254301 kernel: CPU features: detected: 32-bit EL1 Support Jan 13 20:06:53.254318 kernel: CPU features: detected: CRC32 instructions Jan 13 20:06:53.254348 kernel: CPU: All CPU(s) started at EL1 Jan 13 20:06:53.254370 kernel: alternatives: applying system-wide alternatives Jan 13 20:06:53.254389 kernel: devtmpfs: initialized Jan 13 20:06:53.254407 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 13 20:06:53.254426 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 13 20:06:53.254444 kernel: pinctrl core: initialized pinctrl subsystem Jan 13 20:06:53.254463 kernel: SMBIOS 3.0.0 present. Jan 13 20:06:53.254485 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018 Jan 13 20:06:53.254504 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 13 20:06:53.254523 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jan 13 20:06:53.254542 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jan 13 20:06:53.254618 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jan 13 20:06:53.254642 kernel: audit: initializing netlink subsys (disabled) Jan 13 20:06:53.254661 kernel: audit: type=2000 audit(0.236:1): state=initialized audit_enabled=0 res=1 Jan 13 20:06:53.254688 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 13 20:06:53.254707 kernel: cpuidle: using governor menu Jan 13 20:06:53.254726 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Jan 13 20:06:53.254745 kernel: ASID allocator initialised with 65536 entries Jan 13 20:06:53.254764 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 13 20:06:53.254783 kernel: Serial: AMBA PL011 UART driver Jan 13 20:06:53.254801 kernel: Modules: 17440 pages in range for non-PLT usage Jan 13 20:06:53.254821 kernel: Modules: 508960 pages in range for PLT usage Jan 13 20:06:53.254840 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 13 20:06:53.254863 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jan 13 20:06:53.254882 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jan 13 20:06:53.254901 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jan 13 20:06:53.254919 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 13 20:06:53.254938 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jan 13 20:06:53.254957 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Jan 13 20:06:53.254975 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jan 13 20:06:53.254994 kernel: ACPI: Added _OSI(Module Device) Jan 13 20:06:53.255012 kernel: ACPI: Added _OSI(Processor Device) Jan 13 20:06:53.255035 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 13 20:06:53.255054 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 13 20:06:53.255074 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 13 20:06:53.255093 kernel: ACPI: Interpreter enabled Jan 13 20:06:53.255111 kernel: ACPI: Using GIC for interrupt routing Jan 13 20:06:53.255130 kernel: ACPI: MCFG table detected, 1 entries Jan 13 20:06:53.255149 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f]) Jan 13 20:06:53.255518 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 13 20:06:53.258050 kernel: acpi 
PNP0A08:00: _OSC: platform does not support [LTR] Jan 13 20:06:53.258286 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Jan 13 20:06:53.258501 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00 Jan 13 20:06:53.258743 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f] Jan 13 20:06:53.258772 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window] Jan 13 20:06:53.258791 kernel: acpiphp: Slot [1] registered Jan 13 20:06:53.258810 kernel: acpiphp: Slot [2] registered Jan 13 20:06:53.258829 kernel: acpiphp: Slot [3] registered Jan 13 20:06:53.258856 kernel: acpiphp: Slot [4] registered Jan 13 20:06:53.258875 kernel: acpiphp: Slot [5] registered Jan 13 20:06:53.258893 kernel: acpiphp: Slot [6] registered Jan 13 20:06:53.258912 kernel: acpiphp: Slot [7] registered Jan 13 20:06:53.258931 kernel: acpiphp: Slot [8] registered Jan 13 20:06:53.258949 kernel: acpiphp: Slot [9] registered Jan 13 20:06:53.258968 kernel: acpiphp: Slot [10] registered Jan 13 20:06:53.258987 kernel: acpiphp: Slot [11] registered Jan 13 20:06:53.259005 kernel: acpiphp: Slot [12] registered Jan 13 20:06:53.259024 kernel: acpiphp: Slot [13] registered Jan 13 20:06:53.259049 kernel: acpiphp: Slot [14] registered Jan 13 20:06:53.259068 kernel: acpiphp: Slot [15] registered Jan 13 20:06:53.259087 kernel: acpiphp: Slot [16] registered Jan 13 20:06:53.259105 kernel: acpiphp: Slot [17] registered Jan 13 20:06:53.259123 kernel: acpiphp: Slot [18] registered Jan 13 20:06:53.259141 kernel: acpiphp: Slot [19] registered Jan 13 20:06:53.259159 kernel: acpiphp: Slot [20] registered Jan 13 20:06:53.259177 kernel: acpiphp: Slot [21] registered Jan 13 20:06:53.259195 kernel: acpiphp: Slot [22] registered Jan 13 20:06:53.259218 kernel: acpiphp: Slot [23] registered Jan 13 20:06:53.259236 kernel: acpiphp: Slot [24] registered Jan 13 20:06:53.259255 kernel: acpiphp: Slot [25] registered Jan 13 20:06:53.259273 kernel: acpiphp: Slot [26] registered Jan 13 20:06:53.259292 kernel: acpiphp: Slot [27] registered Jan 13 20:06:53.259310 kernel: acpiphp: Slot [28] registered Jan 13 20:06:53.259329 kernel: acpiphp: Slot [29] registered Jan 13 20:06:53.259347 kernel: acpiphp: Slot [30] registered Jan 13 20:06:53.259366 kernel: acpiphp: Slot [31] registered Jan 13 20:06:53.259384 kernel: PCI host bridge to bus 0000:00 Jan 13 20:06:53.261749 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window] Jan 13 20:06:53.261990 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Jan 13 20:06:53.262190 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window] Jan 13 20:06:53.262377 kernel: pci_bus 0000:00: root bus resource [bus 00-0f] Jan 13 20:06:53.262641 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000 Jan 13 20:06:53.262873 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003 Jan 13 20:06:53.263123 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff] Jan 13 20:06:53.263407 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 Jan 13 20:06:53.266546 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff] Jan 13 20:06:53.266850 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold Jan 13 20:06:53.267089 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 Jan 13 20:06:53.269773 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff] Jan 13 20:06:53.270036 kernel: pci 0000:00:05.0: reg 0x18: [mem 
0x80000000-0x800fffff pref] Jan 13 20:06:53.270244 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff] Jan 13 20:06:53.270451 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold Jan 13 20:06:53.270715 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref] Jan 13 20:06:53.270937 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff] Jan 13 20:06:53.271153 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff] Jan 13 20:06:53.271368 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff] Jan 13 20:06:53.274194 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff] Jan 13 20:06:53.274452 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window] Jan 13 20:06:53.274679 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Jan 13 20:06:53.274871 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window] Jan 13 20:06:53.274897 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Jan 13 20:06:53.274916 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Jan 13 20:06:53.274936 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Jan 13 20:06:53.274954 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Jan 13 20:06:53.274973 kernel: iommu: Default domain type: Translated Jan 13 20:06:53.275002 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jan 13 20:06:53.275021 kernel: efivars: Registered efivars operations Jan 13 20:06:53.275039 kernel: vgaarb: loaded Jan 13 20:06:53.275059 kernel: clocksource: Switched to clocksource arch_sys_counter Jan 13 20:06:53.275077 kernel: VFS: Disk quotas dquot_6.6.0 Jan 13 20:06:53.275095 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 13 20:06:53.275114 kernel: pnp: PnP ACPI init Jan 13 20:06:53.275343 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved Jan 13 20:06:53.275386 kernel: pnp: PnP ACPI: found 1 devices Jan 13 20:06:53.275406 kernel: NET: Registered PF_INET protocol family Jan 13 20:06:53.275427 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 13 20:06:53.275446 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 13 20:06:53.275466 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 13 20:06:53.275485 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 13 20:06:53.275504 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 13 20:06:53.275523 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 13 20:06:53.275543 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 13 20:06:53.275598 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 13 20:06:53.275620 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 13 20:06:53.275639 kernel: PCI: CLS 0 bytes, default 64 Jan 13 20:06:53.275658 kernel: kvm [1]: HYP mode not available Jan 13 20:06:53.275676 kernel: Initialise system trusted keyrings Jan 13 20:06:53.275695 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 13 20:06:53.275713 kernel: Key type asymmetric registered Jan 13 20:06:53.275731 kernel: Asymmetric key parser 'x509' registered Jan 13 20:06:53.275749 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jan 13 20:06:53.275775 kernel: io scheduler mq-deadline registered Jan 13 
20:06:53.275793 kernel: io scheduler kyber registered Jan 13 20:06:53.275811 kernel: io scheduler bfq registered Jan 13 20:06:53.276112 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered Jan 13 20:06:53.276145 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Jan 13 20:06:53.276165 kernel: ACPI: button: Power Button [PWRB] Jan 13 20:06:53.276185 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1 Jan 13 20:06:53.276203 kernel: ACPI: button: Sleep Button [SLPB] Jan 13 20:06:53.276230 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 13 20:06:53.276250 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Jan 13 20:06:53.276515 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012) Jan 13 20:06:53.276547 kernel: printk: console [ttyS0] disabled Jan 13 20:06:53.276591 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A Jan 13 20:06:53.276644 kernel: printk: console [ttyS0] enabled Jan 13 20:06:53.276666 kernel: printk: bootconsole [uart0] disabled Jan 13 20:06:53.276685 kernel: thunder_xcv, ver 1.0 Jan 13 20:06:53.276704 kernel: thunder_bgx, ver 1.0 Jan 13 20:06:53.276731 kernel: nicpf, ver 1.0 Jan 13 20:06:53.276750 kernel: nicvf, ver 1.0 Jan 13 20:06:53.277010 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jan 13 20:06:53.277246 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-01-13T20:06:52 UTC (1736798812) Jan 13 20:06:53.277284 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 13 20:06:53.279386 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available Jan 13 20:06:53.279417 kernel: watchdog: Delayed init of the lockup detector failed: -19 Jan 13 20:06:53.279436 kernel: watchdog: Hard watchdog permanently disabled Jan 13 20:06:53.279468 kernel: NET: Registered PF_INET6 protocol family Jan 13 20:06:53.279489 kernel: Segment Routing with IPv6 Jan 13 20:06:53.279507 kernel: In-situ OAM (IOAM) with IPv6 Jan 13 20:06:53.279526 kernel: NET: Registered PF_PACKET protocol family Jan 13 20:06:53.279546 kernel: Key type dns_resolver registered Jan 13 20:06:53.279588 kernel: registered taskstats version 1 Jan 13 20:06:53.279610 kernel: Loading compiled-in X.509 certificates Jan 13 20:06:53.279630 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: a9edf9d44b1b82dedf7830d1843430df7c4d16cb' Jan 13 20:06:53.279649 kernel: Key type .fscrypt registered Jan 13 20:06:53.279704 kernel: Key type fscrypt-provisioning registered Jan 13 20:06:53.279725 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jan 13 20:06:53.279745 kernel: ima: Allocated hash algorithm: sha1 Jan 13 20:06:53.279764 kernel: ima: No architecture policies found Jan 13 20:06:53.279784 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jan 13 20:06:53.279802 kernel: clk: Disabling unused clocks Jan 13 20:06:53.279821 kernel: Freeing unused kernel memory: 39680K Jan 13 20:06:53.279840 kernel: Run /init as init process Jan 13 20:06:53.279878 kernel: with arguments: Jan 13 20:06:53.279902 kernel: /init Jan 13 20:06:53.279932 kernel: with environment: Jan 13 20:06:53.279951 kernel: HOME=/ Jan 13 20:06:53.279971 kernel: TERM=linux Jan 13 20:06:53.279989 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 13 20:06:53.280014 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 13 20:06:53.280039 systemd[1]: Detected virtualization amazon. Jan 13 20:06:53.280060 systemd[1]: Detected architecture arm64. Jan 13 20:06:53.280087 systemd[1]: Running in initrd. Jan 13 20:06:53.280108 systemd[1]: No hostname configured, using default hostname. Jan 13 20:06:53.280128 systemd[1]: Hostname set to . Jan 13 20:06:53.280150 systemd[1]: Initializing machine ID from VM UUID. Jan 13 20:06:53.280171 systemd[1]: Queued start job for default target initrd.target. Jan 13 20:06:53.280192 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 20:06:53.280214 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 20:06:53.280237 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 13 20:06:53.280266 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 13 20:06:53.280288 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 13 20:06:53.280310 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 13 20:06:53.280335 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 13 20:06:53.280356 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 13 20:06:53.280379 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 20:06:53.280401 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 13 20:06:53.280428 systemd[1]: Reached target paths.target - Path Units. Jan 13 20:06:53.280449 systemd[1]: Reached target slices.target - Slice Units. Jan 13 20:06:53.280471 systemd[1]: Reached target swap.target - Swaps. Jan 13 20:06:53.280494 systemd[1]: Reached target timers.target - Timer Units. Jan 13 20:06:53.280516 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 13 20:06:53.280537 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 13 20:06:53.280646 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 13 20:06:53.280679 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 13 20:06:53.280702 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Jan 13 20:06:53.280734 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 13 20:06:53.280755 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 20:06:53.280776 systemd[1]: Reached target sockets.target - Socket Units. Jan 13 20:06:53.280798 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 13 20:06:53.280818 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 13 20:06:53.280839 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 13 20:06:53.280859 systemd[1]: Starting systemd-fsck-usr.service... Jan 13 20:06:53.280880 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 13 20:06:53.280905 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 13 20:06:53.280926 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 20:06:53.280947 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 13 20:06:53.280967 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 20:06:53.280989 systemd[1]: Finished systemd-fsck-usr.service. Jan 13 20:06:53.281011 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 13 20:06:53.282711 systemd-journald[251]: Collecting audit messages is disabled. Jan 13 20:06:53.282759 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 20:06:53.282781 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 20:06:53.282813 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 13 20:06:53.282835 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 13 20:06:53.282855 kernel: Bridge firewalling registered Jan 13 20:06:53.282875 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 13 20:06:53.282896 systemd-journald[251]: Journal started Jan 13 20:06:53.282938 systemd-journald[251]: Runtime Journal (/run/log/journal/ec20928cafa7864a22ab6cc46abbaa56) is 8.0M, max 75.3M, 67.3M free. Jan 13 20:06:53.231706 systemd-modules-load[252]: Inserted module 'overlay' Jan 13 20:06:53.288332 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 20:06:53.270481 systemd-modules-load[252]: Inserted module 'br_netfilter' Jan 13 20:06:53.310648 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 13 20:06:53.318629 systemd[1]: Started systemd-journald.service - Journal Service. Jan 13 20:06:53.337866 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 13 20:06:53.343488 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 20:06:53.349335 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 13 20:06:53.354179 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 20:06:53.371201 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 13 20:06:53.377642 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 20:06:53.403869 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Jan 13 20:06:53.429602 dracut-cmdline[284]: dracut-dracut-053 Jan 13 20:06:53.434409 dracut-cmdline[284]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=6ba5f90349644346e4f5fa9305ab5a05339928ee9f4f137665e797727c1fc436 Jan 13 20:06:53.494932 systemd-resolved[287]: Positive Trust Anchors: Jan 13 20:06:53.496819 systemd-resolved[287]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 13 20:06:53.496889 systemd-resolved[287]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 13 20:06:53.573609 kernel: SCSI subsystem initialized Jan 13 20:06:53.581598 kernel: Loading iSCSI transport class v2.0-870. Jan 13 20:06:53.594614 kernel: iscsi: registered transport (tcp) Jan 13 20:06:53.618767 kernel: iscsi: registered transport (qla4xxx) Jan 13 20:06:53.618847 kernel: QLogic iSCSI HBA Driver Jan 13 20:06:53.723599 kernel: random: crng init done Jan 13 20:06:53.721930 systemd-resolved[287]: Defaulting to hostname 'linux'. Jan 13 20:06:53.726484 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 13 20:06:53.728993 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 13 20:06:53.758047 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 13 20:06:53.767923 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 13 20:06:53.817713 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 13 20:06:53.817827 kernel: device-mapper: uevent: version 1.0.3 Jan 13 20:06:53.819542 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 13 20:06:53.890652 kernel: raid6: neonx8 gen() 6615 MB/s Jan 13 20:06:53.907622 kernel: raid6: neonx4 gen() 6446 MB/s Jan 13 20:06:53.924619 kernel: raid6: neonx2 gen() 5346 MB/s Jan 13 20:06:53.941616 kernel: raid6: neonx1 gen() 3889 MB/s Jan 13 20:06:53.958622 kernel: raid6: int64x8 gen() 3723 MB/s Jan 13 20:06:53.975623 kernel: raid6: int64x4 gen() 3651 MB/s Jan 13 20:06:53.992629 kernel: raid6: int64x2 gen() 3555 MB/s Jan 13 20:06:54.010637 kernel: raid6: int64x1 gen() 2731 MB/s Jan 13 20:06:54.010724 kernel: raid6: using algorithm neonx8 gen() 6615 MB/s Jan 13 20:06:54.029626 kernel: raid6: .... 
xor() 4691 MB/s, rmw enabled Jan 13 20:06:54.029713 kernel: raid6: using neon recovery algorithm Jan 13 20:06:54.037623 kernel: xor: measuring software checksum speed Jan 13 20:06:54.039883 kernel: 8regs : 9550 MB/sec Jan 13 20:06:54.039976 kernel: 32regs : 11908 MB/sec Jan 13 20:06:54.041118 kernel: arm64_neon : 9194 MB/sec Jan 13 20:06:54.041183 kernel: xor: using function: 32regs (11908 MB/sec) Jan 13 20:06:54.130624 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 13 20:06:54.153948 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 13 20:06:54.167914 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 20:06:54.206983 systemd-udevd[469]: Using default interface naming scheme 'v255'. Jan 13 20:06:54.217425 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 20:06:54.228859 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 13 20:06:54.271927 dracut-pre-trigger[473]: rd.md=0: removing MD RAID activation Jan 13 20:06:54.336745 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 13 20:06:54.352118 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 13 20:06:54.471497 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 20:06:54.486891 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 13 20:06:54.533356 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 13 20:06:54.552396 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 13 20:06:54.555975 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 20:06:54.565383 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 13 20:06:54.591941 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 13 20:06:54.642713 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 13 20:06:54.699608 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Jan 13 20:06:54.699681 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012) Jan 13 20:06:54.719514 kernel: ena 0000:00:05.0: ENA device version: 0.10 Jan 13 20:06:54.719873 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Jan 13 20:06:54.720159 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:10:d4:97:38:d1 Jan 13 20:06:54.724261 (udev-worker)[525]: Network interface NamePolicy= disabled on kernel command line. Jan 13 20:06:54.737217 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 13 20:06:54.739318 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 20:06:54.757177 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 20:06:54.763492 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 13 20:06:54.772763 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Jan 13 20:06:54.772809 kernel: nvme nvme0: pci function 0000:00:04.0 Jan 13 20:06:54.763880 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 20:06:54.777647 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... 
Jan 13 20:06:54.791628 kernel: nvme nvme0: 2/0/0 default/read/poll queues Jan 13 20:06:54.796308 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 20:06:54.806168 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 13 20:06:54.806236 kernel: GPT:9289727 != 16777215 Jan 13 20:06:54.806273 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 13 20:06:54.808233 kernel: GPT:9289727 != 16777215 Jan 13 20:06:54.808309 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 13 20:06:54.810009 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 13 20:06:54.831706 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 20:06:54.847044 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 20:06:54.891442 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 20:06:54.964644 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by (udev-worker) (533) Jan 13 20:06:54.974656 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Jan 13 20:06:54.994627 kernel: BTRFS: device fsid 8e09fced-e016-4c4f-bac5-4013d13dfd78 devid 1 transid 38 /dev/nvme0n1p3 scanned by (udev-worker) (521) Jan 13 20:06:55.085722 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Jan 13 20:06:55.121340 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jan 13 20:06:55.138164 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Jan 13 20:06:55.140917 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Jan 13 20:06:55.156919 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 13 20:06:55.180740 disk-uuid[660]: Primary Header is updated. Jan 13 20:06:55.180740 disk-uuid[660]: Secondary Entries is updated. Jan 13 20:06:55.180740 disk-uuid[660]: Secondary Header is updated. Jan 13 20:06:55.194705 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 13 20:06:55.208614 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 13 20:06:56.210060 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 13 20:06:56.210740 disk-uuid[661]: The operation has completed successfully. Jan 13 20:06:56.390929 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 13 20:06:56.391614 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 13 20:06:56.450884 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 13 20:06:56.458747 sh[921]: Success Jan 13 20:06:56.477627 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jan 13 20:06:56.582353 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 13 20:06:56.606893 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 13 20:06:56.617623 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jan 13 20:06:56.636631 kernel: BTRFS info (device dm-0): first mount of filesystem 8e09fced-e016-4c4f-bac5-4013d13dfd78 Jan 13 20:06:56.636722 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jan 13 20:06:56.636750 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 13 20:06:56.639109 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 13 20:06:56.639177 kernel: BTRFS info (device dm-0): using free space tree Jan 13 20:06:56.722608 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 13 20:06:56.755411 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 13 20:06:56.758541 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 13 20:06:56.776977 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 13 20:06:56.784850 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 13 20:06:56.817466 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem cd0b9c1b-856d-4823-9d4d-1660845d57c6 Jan 13 20:06:56.817599 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jan 13 20:06:56.820073 kernel: BTRFS info (device nvme0n1p6): using free space tree Jan 13 20:06:56.825615 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 13 20:06:56.843345 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 13 20:06:56.846815 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem cd0b9c1b-856d-4823-9d4d-1660845d57c6 Jan 13 20:06:56.868851 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 13 20:06:56.881682 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 13 20:06:56.995460 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 13 20:06:57.012825 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 13 20:06:57.061387 systemd-networkd[1113]: lo: Link UP Jan 13 20:06:57.061417 systemd-networkd[1113]: lo: Gained carrier Jan 13 20:06:57.066673 systemd-networkd[1113]: Enumeration completed Jan 13 20:06:57.066847 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 13 20:06:57.069545 systemd[1]: Reached target network.target - Network. Jan 13 20:06:57.069819 systemd-networkd[1113]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 20:06:57.069893 systemd-networkd[1113]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 13 20:06:57.074863 systemd-networkd[1113]: eth0: Link UP Jan 13 20:06:57.074872 systemd-networkd[1113]: eth0: Gained carrier Jan 13 20:06:57.074891 systemd-networkd[1113]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jan 13 20:06:57.103682 systemd-networkd[1113]: eth0: DHCPv4 address 172.31.21.202/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jan 13 20:06:57.277631 ignition[1030]: Ignition 2.20.0 Jan 13 20:06:57.277672 ignition[1030]: Stage: fetch-offline Jan 13 20:06:57.278265 ignition[1030]: no configs at "/usr/lib/ignition/base.d" Jan 13 20:06:57.279405 ignition[1030]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 13 20:06:57.283008 ignition[1030]: Ignition finished successfully Jan 13 20:06:57.288779 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 13 20:06:57.302893 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 13 20:06:57.338171 ignition[1124]: Ignition 2.20.0 Jan 13 20:06:57.338209 ignition[1124]: Stage: fetch Jan 13 20:06:57.340029 ignition[1124]: no configs at "/usr/lib/ignition/base.d" Jan 13 20:06:57.340063 ignition[1124]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 13 20:06:57.341263 ignition[1124]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 13 20:06:57.359198 ignition[1124]: PUT result: OK Jan 13 20:06:57.362739 ignition[1124]: parsed url from cmdline: "" Jan 13 20:06:57.362773 ignition[1124]: no config URL provided Jan 13 20:06:57.362790 ignition[1124]: reading system config file "/usr/lib/ignition/user.ign" Jan 13 20:06:57.362851 ignition[1124]: no config at "/usr/lib/ignition/user.ign" Jan 13 20:06:57.362889 ignition[1124]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 13 20:06:57.366732 ignition[1124]: PUT result: OK Jan 13 20:06:57.366875 ignition[1124]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Jan 13 20:06:57.369211 ignition[1124]: GET result: OK Jan 13 20:06:57.369373 ignition[1124]: parsing config with SHA512: 23d38fcce661f880f9bfad5ae889338ac8a7e630cf4a9d76472cad24eeb045306024baeeae483f8e5a825e2a740628c06e64baddca71f0475a75a4698d9f83c2 Jan 13 20:06:57.387938 unknown[1124]: fetched base config from "system" Jan 13 20:06:57.387967 unknown[1124]: fetched base config from "system" Jan 13 20:06:57.387982 unknown[1124]: fetched user config from "aws" Jan 13 20:06:57.391788 ignition[1124]: fetch: fetch complete Jan 13 20:06:57.391804 ignition[1124]: fetch: fetch passed Jan 13 20:06:57.391961 ignition[1124]: Ignition finished successfully Jan 13 20:06:57.400447 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 13 20:06:57.415874 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 13 20:06:57.441393 ignition[1130]: Ignition 2.20.0 Jan 13 20:06:57.441416 ignition[1130]: Stage: kargs Jan 13 20:06:57.442161 ignition[1130]: no configs at "/usr/lib/ignition/base.d" Jan 13 20:06:57.442919 ignition[1130]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 13 20:06:57.443107 ignition[1130]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 13 20:06:57.445151 ignition[1130]: PUT result: OK Jan 13 20:06:57.455972 ignition[1130]: kargs: kargs passed Jan 13 20:06:57.456095 ignition[1130]: Ignition finished successfully Jan 13 20:06:57.459968 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 13 20:06:57.469946 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Jan 13 20:06:57.504337 ignition[1136]: Ignition 2.20.0 Jan 13 20:06:57.504373 ignition[1136]: Stage: disks Jan 13 20:06:57.505358 ignition[1136]: no configs at "/usr/lib/ignition/base.d" Jan 13 20:06:57.505388 ignition[1136]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 13 20:06:57.505673 ignition[1136]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 13 20:06:57.508206 ignition[1136]: PUT result: OK Jan 13 20:06:57.519186 ignition[1136]: disks: disks passed Jan 13 20:06:57.519412 ignition[1136]: Ignition finished successfully Jan 13 20:06:57.524313 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 13 20:06:57.527180 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 13 20:06:57.530487 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 13 20:06:57.532974 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 13 20:06:57.536832 systemd[1]: Reached target sysinit.target - System Initialization. Jan 13 20:06:57.540930 systemd[1]: Reached target basic.target - Basic System. Jan 13 20:06:57.565040 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 13 20:06:57.612905 systemd-fsck[1144]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 13 20:06:57.620398 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 13 20:06:57.630776 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 13 20:06:57.721605 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 8fd847fb-a6be-44f6-9adf-0a0a79b9fa94 r/w with ordered data mode. Quota mode: none. Jan 13 20:06:57.722796 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 13 20:06:57.723653 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 13 20:06:57.740798 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 13 20:06:57.752945 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 13 20:06:57.757183 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 13 20:06:57.761612 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 13 20:06:57.765974 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 13 20:06:57.771525 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 13 20:06:57.783640 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1163) Jan 13 20:06:57.787820 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem cd0b9c1b-856d-4823-9d4d-1660845d57c6 Jan 13 20:06:57.787909 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jan 13 20:06:57.787936 kernel: BTRFS info (device nvme0n1p6): using free space tree Jan 13 20:06:57.789225 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 13 20:06:57.803603 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 13 20:06:57.805378 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 13 20:06:58.142078 systemd-networkd[1113]: eth0: Gained IPv6LL Jan 13 20:06:58.240778 initrd-setup-root[1187]: cut: /sysroot/etc/passwd: No such file or directory Jan 13 20:06:58.264639 initrd-setup-root[1194]: cut: /sysroot/etc/group: No such file or directory Jan 13 20:06:58.272187 initrd-setup-root[1201]: cut: /sysroot/etc/shadow: No such file or directory Jan 13 20:06:58.280348 initrd-setup-root[1208]: cut: /sysroot/etc/gshadow: No such file or directory Jan 13 20:06:58.680443 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 13 20:06:58.688774 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 13 20:06:58.706837 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 13 20:06:58.721738 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 13 20:06:58.724175 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem cd0b9c1b-856d-4823-9d4d-1660845d57c6 Jan 13 20:06:58.754386 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 13 20:06:58.768745 ignition[1276]: INFO : Ignition 2.20.0 Jan 13 20:06:58.768745 ignition[1276]: INFO : Stage: mount Jan 13 20:06:58.772097 ignition[1276]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 20:06:58.772097 ignition[1276]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 13 20:06:58.772097 ignition[1276]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 13 20:06:58.778611 ignition[1276]: INFO : PUT result: OK Jan 13 20:06:58.783277 ignition[1276]: INFO : mount: mount passed Jan 13 20:06:58.784781 ignition[1276]: INFO : Ignition finished successfully Jan 13 20:06:58.789981 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 13 20:06:58.802719 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 13 20:06:58.841986 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 13 20:06:58.858592 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 scanned by mount (1287) Jan 13 20:06:58.863673 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem cd0b9c1b-856d-4823-9d4d-1660845d57c6 Jan 13 20:06:58.863722 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jan 13 20:06:58.863749 kernel: BTRFS info (device nvme0n1p6): using free space tree Jan 13 20:06:58.868583 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 13 20:06:58.872016 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 13 20:06:58.914154 ignition[1304]: INFO : Ignition 2.20.0 Jan 13 20:06:58.914154 ignition[1304]: INFO : Stage: files Jan 13 20:06:58.917471 ignition[1304]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 20:06:58.917471 ignition[1304]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 13 20:06:58.917471 ignition[1304]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 13 20:06:58.924651 ignition[1304]: INFO : PUT result: OK Jan 13 20:06:58.930112 ignition[1304]: DEBUG : files: compiled without relabeling support, skipping Jan 13 20:06:58.933218 ignition[1304]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 13 20:06:58.933218 ignition[1304]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 13 20:06:58.941827 ignition[1304]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 13 20:06:58.944905 ignition[1304]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 13 20:06:58.948147 unknown[1304]: wrote ssh authorized keys file for user: core Jan 13 20:06:58.950390 ignition[1304]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 13 20:06:58.969445 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 13 20:06:58.973664 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 13 20:06:58.973664 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jan 13 20:06:58.973664 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Jan 13 20:06:59.087761 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 13 20:06:59.272034 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jan 13 20:06:59.272034 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 13 20:06:59.279370 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Jan 13 20:06:59.737256 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Jan 13 20:06:59.880308 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 13 20:06:59.880308 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh" Jan 13 20:06:59.887497 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh" Jan 13 20:06:59.887497 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 13 20:06:59.887497 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 13 20:06:59.887497 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 13 
20:06:59.887497 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 13 20:06:59.887497 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 13 20:06:59.887497 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 13 20:06:59.910927 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 13 20:06:59.910927 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 13 20:06:59.910927 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Jan 13 20:06:59.910927 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Jan 13 20:06:59.910927 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Jan 13 20:06:59.910927 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-arm64.raw: attempt #1 Jan 13 20:07:00.169240 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK Jan 13 20:07:00.498413 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Jan 13 20:07:00.502532 ignition[1304]: INFO : files: op(d): [started] processing unit "containerd.service" Jan 13 20:07:00.514904 ignition[1304]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 13 20:07:00.519139 ignition[1304]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 13 20:07:00.519139 ignition[1304]: INFO : files: op(d): [finished] processing unit "containerd.service" Jan 13 20:07:00.519139 ignition[1304]: INFO : files: op(f): [started] processing unit "prepare-helm.service" Jan 13 20:07:00.519139 ignition[1304]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 13 20:07:00.519139 ignition[1304]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 13 20:07:00.519139 ignition[1304]: INFO : files: op(f): [finished] processing unit "prepare-helm.service" Jan 13 20:07:00.519139 ignition[1304]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jan 13 20:07:00.519139 ignition[1304]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jan 13 20:07:00.519139 ignition[1304]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 13 20:07:00.519139 ignition[1304]: INFO : files: createResultFile: 
createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 13 20:07:00.519139 ignition[1304]: INFO : files: files passed Jan 13 20:07:00.519139 ignition[1304]: INFO : Ignition finished successfully Jan 13 20:07:00.532621 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 13 20:07:00.574026 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 13 20:07:00.580896 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 13 20:07:00.590994 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 13 20:07:00.593290 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 13 20:07:00.611703 initrd-setup-root-after-ignition[1333]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 13 20:07:00.615096 initrd-setup-root-after-ignition[1337]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 13 20:07:00.618945 initrd-setup-root-after-ignition[1333]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 13 20:07:00.622514 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 13 20:07:00.626645 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 13 20:07:00.648946 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 13 20:07:00.707987 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 13 20:07:00.709172 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 13 20:07:00.713001 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 13 20:07:00.715124 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 13 20:07:00.722821 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 13 20:07:00.736592 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 13 20:07:00.764999 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 13 20:07:00.782982 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 13 20:07:00.807454 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 13 20:07:00.811912 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 20:07:00.816137 systemd[1]: Stopped target timers.target - Timer Units. Jan 13 20:07:00.817997 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 13 20:07:00.818226 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 13 20:07:00.826524 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 13 20:07:00.828652 systemd[1]: Stopped target basic.target - Basic System. Jan 13 20:07:00.830924 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 13 20:07:00.835882 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 13 20:07:00.838146 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 13 20:07:00.841043 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 13 20:07:00.849990 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 13 20:07:00.853143 systemd[1]: Stopped target sysinit.target - System Initialization. 
Jan 13 20:07:00.857628 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 13 20:07:00.863551 systemd[1]: Stopped target swap.target - Swaps. Jan 13 20:07:00.865381 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 13 20:07:00.865974 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 13 20:07:00.869466 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 13 20:07:00.874767 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 20:07:00.881314 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 13 20:07:00.881533 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 20:07:00.885882 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 13 20:07:00.886113 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 13 20:07:00.894784 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 13 20:07:00.895247 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 13 20:07:00.902153 systemd[1]: ignition-files.service: Deactivated successfully. Jan 13 20:07:00.902586 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 13 20:07:00.922954 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 13 20:07:00.928476 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 13 20:07:00.930248 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 13 20:07:00.934910 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 20:07:00.941806 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 13 20:07:00.944169 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 13 20:07:00.968608 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 13 20:07:00.969374 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 13 20:07:00.985612 ignition[1357]: INFO : Ignition 2.20.0 Jan 13 20:07:00.985612 ignition[1357]: INFO : Stage: umount Jan 13 20:07:00.985612 ignition[1357]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 20:07:00.985612 ignition[1357]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 13 20:07:00.985612 ignition[1357]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 13 20:07:00.999248 ignition[1357]: INFO : PUT result: OK Jan 13 20:07:00.991033 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 13 20:07:01.004925 ignition[1357]: INFO : umount: umount passed Jan 13 20:07:01.006702 ignition[1357]: INFO : Ignition finished successfully Jan 13 20:07:01.010216 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 13 20:07:01.011389 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 13 20:07:01.016690 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 13 20:07:01.017879 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 13 20:07:01.023545 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 13 20:07:01.023759 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 13 20:07:01.028476 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 13 20:07:01.028641 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 13 20:07:01.035756 systemd[1]: ignition-fetch.service: Deactivated successfully. 
Jan 13 20:07:01.035884 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 13 20:07:01.037767 systemd[1]: Stopped target network.target - Network. Jan 13 20:07:01.039333 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 13 20:07:01.039415 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 13 20:07:01.041530 systemd[1]: Stopped target paths.target - Path Units. Jan 13 20:07:01.043101 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 13 20:07:01.048267 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 20:07:01.050918 systemd[1]: Stopped target slices.target - Slice Units. Jan 13 20:07:01.054421 systemd[1]: Stopped target sockets.target - Socket Units. Jan 13 20:07:01.058186 systemd[1]: iscsid.socket: Deactivated successfully. Jan 13 20:07:01.058273 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 13 20:07:01.060270 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 13 20:07:01.060416 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 13 20:07:01.064781 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 13 20:07:01.064891 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 13 20:07:01.067618 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 13 20:07:01.067709 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 13 20:07:01.069746 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 13 20:07:01.069828 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 13 20:07:01.072605 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 13 20:07:01.079494 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 13 20:07:01.083635 systemd-networkd[1113]: eth0: DHCPv6 lease lost Jan 13 20:07:01.090382 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 13 20:07:01.090606 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 13 20:07:01.094180 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 13 20:07:01.094448 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 13 20:07:01.100575 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 13 20:07:01.101579 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 13 20:07:01.126870 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 13 20:07:01.136660 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 13 20:07:01.136777 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 13 20:07:01.139128 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 13 20:07:01.139216 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 13 20:07:01.141237 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 13 20:07:01.141314 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 13 20:07:01.144177 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 13 20:07:01.144256 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 20:07:01.157392 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Jan 13 20:07:01.185436 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 13 20:07:01.187417 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 20:07:01.192713 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 13 20:07:01.192935 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 13 20:07:01.199446 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 13 20:07:01.199609 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 13 20:07:01.204688 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 13 20:07:01.204765 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 20:07:01.207048 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 13 20:07:01.207139 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 13 20:07:01.210475 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 13 20:07:01.210615 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 13 20:07:01.217090 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 13 20:07:01.217174 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 20:07:01.242928 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 13 20:07:01.247798 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 13 20:07:01.247945 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 20:07:01.253446 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 13 20:07:01.253549 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 13 20:07:01.260809 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 13 20:07:01.260912 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 20:07:01.274023 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 13 20:07:01.276155 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 20:07:01.282815 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 13 20:07:01.283153 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 13 20:07:01.288013 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 13 20:07:01.298949 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 13 20:07:01.363311 systemd[1]: Switching root. Jan 13 20:07:01.405900 systemd-journald[251]: Journal stopped Jan 13 20:07:05.085729 systemd-journald[251]: Received SIGTERM from PID 1 (systemd). 
Jan 13 20:07:05.085867 kernel: SELinux: policy capability network_peer_controls=1 Jan 13 20:07:05.085909 kernel: SELinux: policy capability open_perms=1 Jan 13 20:07:05.085939 kernel: SELinux: policy capability extended_socket_class=1 Jan 13 20:07:05.085993 kernel: SELinux: policy capability always_check_network=0 Jan 13 20:07:05.086033 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 13 20:07:05.086064 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 13 20:07:05.086093 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 13 20:07:05.086122 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 13 20:07:05.086165 kernel: audit: type=1403 audit(1736798823.268:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 13 20:07:05.086201 systemd[1]: Successfully loaded SELinux policy in 48.183ms. Jan 13 20:07:05.086244 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 22.511ms. Jan 13 20:07:05.086281 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 13 20:07:05.086313 systemd[1]: Detected virtualization amazon. Jan 13 20:07:05.086344 systemd[1]: Detected architecture arm64. Jan 13 20:07:05.086374 systemd[1]: Detected first boot. Jan 13 20:07:05.086406 systemd[1]: Initializing machine ID from VM UUID. Jan 13 20:07:05.086436 zram_generator::config[1417]: No configuration found. Jan 13 20:07:05.086470 systemd[1]: Populated /etc with preset unit settings. Jan 13 20:07:05.086501 systemd[1]: Queued start job for default target multi-user.target. Jan 13 20:07:05.086531 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jan 13 20:07:05.090965 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 13 20:07:05.091047 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 13 20:07:05.091081 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 13 20:07:05.091117 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 13 20:07:05.091154 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 13 20:07:05.091187 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 13 20:07:05.091221 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 13 20:07:05.091258 systemd[1]: Created slice user.slice - User and Session Slice. Jan 13 20:07:05.091298 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 20:07:05.091333 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 20:07:05.091367 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 13 20:07:05.091402 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 13 20:07:05.091437 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 13 20:07:05.091471 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
Jan 13 20:07:05.091516 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 13 20:07:05.091549 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 20:07:05.091697 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 13 20:07:05.091743 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 20:07:05.091779 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 13 20:07:05.091829 systemd[1]: Reached target slices.target - Slice Units. Jan 13 20:07:05.091867 systemd[1]: Reached target swap.target - Swaps. Jan 13 20:07:05.091903 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 13 20:07:05.091935 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 13 20:07:05.091965 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 13 20:07:05.091995 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 13 20:07:05.092040 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 13 20:07:05.092077 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 13 20:07:05.092106 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 20:07:05.092137 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 13 20:07:05.092166 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 13 20:07:05.092197 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 13 20:07:05.092231 systemd[1]: Mounting media.mount - External Media Directory... Jan 13 20:07:05.092263 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 13 20:07:05.092295 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 13 20:07:05.092332 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 13 20:07:05.092363 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 13 20:07:05.092394 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 20:07:05.092435 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 13 20:07:05.092466 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 13 20:07:05.092496 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 20:07:05.092526 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 13 20:07:05.092555 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 20:07:05.098336 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 13 20:07:05.098392 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 20:07:05.098423 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 13 20:07:05.098454 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Jan 13 20:07:05.098488 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Jan 13 20:07:05.098516 systemd[1]: Starting systemd-journald.service - Journal Service... 
Jan 13 20:07:05.098545 kernel: fuse: init (API version 7.39) Jan 13 20:07:05.098678 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 13 20:07:05.098712 kernel: loop: module loaded Jan 13 20:07:05.098741 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 13 20:07:05.098776 kernel: ACPI: bus type drm_connector registered Jan 13 20:07:05.098806 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 13 20:07:05.098836 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 13 20:07:05.098868 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 13 20:07:05.098948 systemd-journald[1517]: Collecting audit messages is disabled. Jan 13 20:07:05.099002 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 13 20:07:05.099033 systemd[1]: Mounted media.mount - External Media Directory. Jan 13 20:07:05.099068 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 13 20:07:05.099099 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 13 20:07:05.099130 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 13 20:07:05.099159 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 20:07:05.099188 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 13 20:07:05.099216 systemd-journald[1517]: Journal started Jan 13 20:07:05.099267 systemd-journald[1517]: Runtime Journal (/run/log/journal/ec20928cafa7864a22ab6cc46abbaa56) is 8.0M, max 75.3M, 67.3M free. Jan 13 20:07:05.101606 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 13 20:07:05.106770 systemd[1]: Started systemd-journald.service - Journal Service. Jan 13 20:07:05.112963 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 13 20:07:05.117376 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 20:07:05.118121 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 20:07:05.121187 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 13 20:07:05.121537 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 13 20:07:05.125518 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 20:07:05.125962 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 20:07:05.129427 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 13 20:07:05.130344 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 13 20:07:05.133428 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 20:07:05.134243 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 20:07:05.137305 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 13 20:07:05.140631 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 13 20:07:05.145088 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 13 20:07:05.168759 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 13 20:07:05.179797 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 13 20:07:05.190770 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... 
Jan 13 20:07:05.195706 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 13 20:07:05.210083 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 13 20:07:05.222899 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 13 20:07:05.225421 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 13 20:07:05.234831 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 13 20:07:05.237235 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 13 20:07:05.247936 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 20:07:05.259772 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 13 20:07:05.271591 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 13 20:07:05.274173 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 13 20:07:05.301263 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 13 20:07:05.305015 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 13 20:07:05.310399 systemd-journald[1517]: Time spent on flushing to /var/log/journal/ec20928cafa7864a22ab6cc46abbaa56 is 49.575ms for 902 entries. Jan 13 20:07:05.310399 systemd-journald[1517]: System Journal (/var/log/journal/ec20928cafa7864a22ab6cc46abbaa56) is 8.0M, max 195.6M, 187.6M free. Jan 13 20:07:05.374265 systemd-journald[1517]: Received client request to flush runtime journal. Jan 13 20:07:05.379289 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 13 20:07:05.392711 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 13 20:07:05.401446 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 20:07:05.405393 systemd-tmpfiles[1570]: ACLs are not supported, ignoring. Jan 13 20:07:05.406012 systemd-tmpfiles[1570]: ACLs are not supported, ignoring. Jan 13 20:07:05.418146 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 13 20:07:05.431271 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 13 20:07:05.443943 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 13 20:07:05.467970 udevadm[1585]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 13 20:07:05.524320 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 13 20:07:05.535957 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 13 20:07:05.577982 systemd-tmpfiles[1592]: ACLs are not supported, ignoring. Jan 13 20:07:05.578023 systemd-tmpfiles[1592]: ACLs are not supported, ignoring. Jan 13 20:07:05.586496 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 20:07:06.305425 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 13 20:07:06.314896 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Jan 13 20:07:06.375254 systemd-udevd[1598]: Using default interface naming scheme 'v255'. Jan 13 20:07:06.420750 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 20:07:06.440157 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 13 20:07:06.479162 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 13 20:07:06.527347 (udev-worker)[1601]: Network interface NamePolicy= disabled on kernel command line. Jan 13 20:07:06.590351 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Jan 13 20:07:06.637256 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 13 20:07:06.806073 systemd-networkd[1604]: lo: Link UP Jan 13 20:07:06.806092 systemd-networkd[1604]: lo: Gained carrier Jan 13 20:07:06.809257 systemd-networkd[1604]: Enumeration completed Jan 13 20:07:06.812240 systemd-networkd[1604]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 20:07:06.812262 systemd-networkd[1604]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 13 20:07:06.812479 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 13 20:07:06.816890 systemd-networkd[1604]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 20:07:06.817076 systemd-networkd[1604]: eth0: Link UP Jan 13 20:07:06.817494 systemd-networkd[1604]: eth0: Gained carrier Jan 13 20:07:06.817653 systemd-networkd[1604]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 20:07:06.820895 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 13 20:07:06.833692 systemd-networkd[1604]: eth0: DHCPv4 address 172.31.21.202/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jan 13 20:07:06.836288 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 20:07:06.879623 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (1605) Jan 13 20:07:07.059888 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 20:07:07.065179 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 13 20:07:07.118615 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jan 13 20:07:07.131834 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 13 20:07:07.163314 lvm[1727]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 13 20:07:07.198251 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 13 20:07:07.201938 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 13 20:07:07.211882 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 13 20:07:07.233243 lvm[1730]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 13 20:07:07.272062 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 13 20:07:07.274971 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. 
Jan 13 20:07:07.277918 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 13 20:07:07.278072 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 13 20:07:07.280892 systemd[1]: Reached target machines.target - Containers. Jan 13 20:07:07.284730 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 13 20:07:07.293874 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 13 20:07:07.305916 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 13 20:07:07.308756 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 20:07:07.314302 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 13 20:07:07.331855 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 13 20:07:07.340015 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 13 20:07:07.346372 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 13 20:07:07.381366 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 13 20:07:07.394605 kernel: loop0: detected capacity change from 0 to 116808 Jan 13 20:07:07.420478 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 13 20:07:07.424080 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 13 20:07:07.482681 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 13 20:07:07.502683 kernel: loop1: detected capacity change from 0 to 113536 Jan 13 20:07:07.614658 kernel: loop2: detected capacity change from 0 to 53784 Jan 13 20:07:07.735608 kernel: loop3: detected capacity change from 0 to 194512 Jan 13 20:07:07.778596 kernel: loop4: detected capacity change from 0 to 116808 Jan 13 20:07:07.791656 kernel: loop5: detected capacity change from 0 to 113536 Jan 13 20:07:07.803798 kernel: loop6: detected capacity change from 0 to 53784 Jan 13 20:07:07.815606 kernel: loop7: detected capacity change from 0 to 194512 Jan 13 20:07:07.830809 (sd-merge)[1751]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Jan 13 20:07:07.831844 (sd-merge)[1751]: Merged extensions into '/usr'. Jan 13 20:07:07.840105 systemd[1]: Reloading requested from client PID 1738 ('systemd-sysext') (unit systemd-sysext.service)... Jan 13 20:07:07.840137 systemd[1]: Reloading... Jan 13 20:07:07.956633 zram_generator::config[1780]: No configuration found. Jan 13 20:07:08.212432 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 20:07:08.351659 systemd[1]: Reloading finished in 510 ms. Jan 13 20:07:08.378947 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 13 20:07:08.383684 systemd-networkd[1604]: eth0: Gained IPv6LL Jan 13 20:07:08.403051 systemd[1]: Starting ensure-sysext.service... Jan 13 20:07:08.410860 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
Jan 13 20:07:08.417145 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 13 20:07:08.431218 systemd[1]: Reloading requested from client PID 1837 ('systemctl') (unit ensure-sysext.service)... Jan 13 20:07:08.431256 systemd[1]: Reloading... Jan 13 20:07:08.458051 systemd-tmpfiles[1838]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 13 20:07:08.460887 systemd-tmpfiles[1838]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 13 20:07:08.463775 systemd-tmpfiles[1838]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 13 20:07:08.464617 systemd-tmpfiles[1838]: ACLs are not supported, ignoring. Jan 13 20:07:08.464943 systemd-tmpfiles[1838]: ACLs are not supported, ignoring. Jan 13 20:07:08.472089 systemd-tmpfiles[1838]: Detected autofs mount point /boot during canonicalization of boot. Jan 13 20:07:08.472654 systemd-tmpfiles[1838]: Skipping /boot Jan 13 20:07:08.494186 systemd-tmpfiles[1838]: Detected autofs mount point /boot during canonicalization of boot. Jan 13 20:07:08.494696 systemd-tmpfiles[1838]: Skipping /boot Jan 13 20:07:08.613596 zram_generator::config[1871]: No configuration found. Jan 13 20:07:08.806652 ldconfig[1734]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 13 20:07:08.859071 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 20:07:08.999045 systemd[1]: Reloading finished in 567 ms. Jan 13 20:07:09.032503 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 13 20:07:09.044748 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 20:07:09.070848 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 13 20:07:09.081170 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 13 20:07:09.095884 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 13 20:07:09.105099 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 13 20:07:09.120457 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 13 20:07:09.138905 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 20:07:09.147843 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 20:07:09.169998 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 20:07:09.181708 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 20:07:09.191337 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 20:07:09.198938 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 13 20:07:09.220089 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 20:07:09.220448 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 20:07:09.229256 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 20:07:09.229683 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Jan 13 20:07:09.248190 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 20:07:09.264754 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 20:07:09.277693 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 20:07:09.280486 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 20:07:09.298997 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 13 20:07:09.309016 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 13 20:07:09.322132 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 13 20:07:09.329039 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 20:07:09.329408 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 20:07:09.332605 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 20:07:09.332956 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 20:07:09.350179 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 20:07:09.355787 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 20:07:09.370064 augenrules[1975]: No rules Jan 13 20:07:09.380124 systemd[1]: audit-rules.service: Deactivated successfully. Jan 13 20:07:09.380686 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 13 20:07:09.395679 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 13 20:07:09.412127 systemd[1]: Finished ensure-sysext.service. Jan 13 20:07:09.422608 systemd-resolved[1934]: Positive Trust Anchors: Jan 13 20:07:09.422960 systemd-resolved[1934]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 13 20:07:09.423024 systemd-resolved[1934]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 13 20:07:09.426016 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 13 20:07:09.428073 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 20:07:09.432772 systemd-resolved[1934]: Defaulting to hostname 'linux'. Jan 13 20:07:09.434028 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 20:07:09.445813 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 13 20:07:09.461838 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 20:07:09.474924 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 20:07:09.477262 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 20:07:09.477355 systemd[1]: Reached target time-set.target - System Time Set. 
Jan 13 20:07:09.479725 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 13 20:07:09.480023 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 13 20:07:09.484088 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 20:07:09.484455 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 20:07:09.490281 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 13 20:07:09.490706 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 13 20:07:09.502225 systemd[1]: Reached target network.target - Network. Jan 13 20:07:09.504542 systemd[1]: Reached target network-online.target - Network is Online. Jan 13 20:07:09.507035 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 13 20:07:09.521346 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 20:07:09.525872 augenrules[1989]: /sbin/augenrules: No change Jan 13 20:07:09.521814 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 20:07:09.527461 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 20:07:09.529535 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 20:07:09.532159 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 13 20:07:09.532217 systemd[1]: Reached target sysinit.target - System Initialization. Jan 13 20:07:09.534917 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 13 20:07:09.539913 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 13 20:07:09.542979 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 13 20:07:09.545355 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 13 20:07:09.547787 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 13 20:07:09.550150 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 13 20:07:09.550201 systemd[1]: Reached target paths.target - Path Units. Jan 13 20:07:09.551871 systemd[1]: Reached target timers.target - Timer Units. Jan 13 20:07:09.555547 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 13 20:07:09.561353 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 13 20:07:09.564276 augenrules[2019]: No rules Jan 13 20:07:09.568002 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 13 20:07:09.571010 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 13 20:07:09.572176 systemd[1]: audit-rules.service: Deactivated successfully. Jan 13 20:07:09.573077 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 13 20:07:09.576109 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 13 20:07:09.581340 systemd[1]: Reached target sockets.target - Socket Units. Jan 13 20:07:09.584542 systemd[1]: Reached target basic.target - Basic System. 
Jan 13 20:07:09.586681 systemd[1]: System is tainted: cgroupsv1 Jan 13 20:07:09.586770 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 13 20:07:09.586818 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 13 20:07:09.599817 systemd[1]: Starting containerd.service - containerd container runtime... Jan 13 20:07:09.606859 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 13 20:07:09.618883 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 13 20:07:09.627253 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 13 20:07:09.640876 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 13 20:07:09.643784 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 13 20:07:09.650850 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:07:09.664963 jq[2032]: false Jan 13 20:07:09.666923 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 13 20:07:09.687016 systemd[1]: Started ntpd.service - Network Time Service. Jan 13 20:07:09.701803 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 13 20:07:09.716976 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 13 20:07:09.759719 systemd[1]: Starting setup-oem.service - Setup OEM... Jan 13 20:07:09.771915 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 13 20:07:09.799808 extend-filesystems[2033]: Found loop4 Jan 13 20:07:09.799808 extend-filesystems[2033]: Found loop5 Jan 13 20:07:09.799808 extend-filesystems[2033]: Found loop6 Jan 13 20:07:09.799808 extend-filesystems[2033]: Found loop7 Jan 13 20:07:09.803643 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 13 20:07:09.814016 ntpd[2038]: ntpd 4.2.8p17@1.4004-o Mon Jan 13 18:29:07 UTC 2025 (1): Starting Jan 13 20:07:09.816431 dbus-daemon[2031]: [system] SELinux support is enabled Jan 13 20:07:09.822706 extend-filesystems[2033]: Found nvme0n1 Jan 13 20:07:09.822706 extend-filesystems[2033]: Found nvme0n1p1 Jan 13 20:07:09.822706 extend-filesystems[2033]: Found nvme0n1p2 Jan 13 20:07:09.822706 extend-filesystems[2033]: Found nvme0n1p3 Jan 13 20:07:09.831872 ntpd[2038]: 13 Jan 20:07:09 ntpd[2038]: ntpd 4.2.8p17@1.4004-o Mon Jan 13 18:29:07 UTC 2025 (1): Starting Jan 13 20:07:09.831872 ntpd[2038]: 13 Jan 20:07:09 ntpd[2038]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 13 20:07:09.831872 ntpd[2038]: 13 Jan 20:07:09 ntpd[2038]: ---------------------------------------------------- Jan 13 20:07:09.831872 ntpd[2038]: 13 Jan 20:07:09 ntpd[2038]: ntp-4 is maintained by Network Time Foundation, Jan 13 20:07:09.831872 ntpd[2038]: 13 Jan 20:07:09 ntpd[2038]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 13 20:07:09.831872 ntpd[2038]: 13 Jan 20:07:09 ntpd[2038]: corporation. 
Support and training for ntp-4 are Jan 13 20:07:09.831872 ntpd[2038]: 13 Jan 20:07:09 ntpd[2038]: available at https://www.nwtime.org/support Jan 13 20:07:09.831872 ntpd[2038]: 13 Jan 20:07:09 ntpd[2038]: ---------------------------------------------------- Jan 13 20:07:09.831872 ntpd[2038]: 13 Jan 20:07:09 ntpd[2038]: proto: precision = 0.108 usec (-23) Jan 13 20:07:09.817403 ntpd[2038]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 13 20:07:09.833470 extend-filesystems[2033]: Found usr Jan 13 20:07:09.833470 extend-filesystems[2033]: Found nvme0n1p4 Jan 13 20:07:09.833470 extend-filesystems[2033]: Found nvme0n1p6 Jan 13 20:07:09.833470 extend-filesystems[2033]: Found nvme0n1p7 Jan 13 20:07:09.833470 extend-filesystems[2033]: Found nvme0n1p9 Jan 13 20:07:09.817424 ntpd[2038]: ---------------------------------------------------- Jan 13 20:07:09.846884 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 13 20:07:09.847377 ntpd[2038]: 13 Jan 20:07:09 ntpd[2038]: basedate set to 2025-01-01 Jan 13 20:07:09.847377 ntpd[2038]: 13 Jan 20:07:09 ntpd[2038]: gps base set to 2025-01-05 (week 2348) Jan 13 20:07:09.847377 ntpd[2038]: 13 Jan 20:07:09 ntpd[2038]: Listen and drop on 0 v6wildcard [::]:123 Jan 13 20:07:09.847377 ntpd[2038]: 13 Jan 20:07:09 ntpd[2038]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 13 20:07:09.847377 ntpd[2038]: 13 Jan 20:07:09 ntpd[2038]: Listen normally on 2 lo 127.0.0.1:123 Jan 13 20:07:09.847377 ntpd[2038]: 13 Jan 20:07:09 ntpd[2038]: Listen normally on 3 eth0 172.31.21.202:123 Jan 13 20:07:09.847377 ntpd[2038]: 13 Jan 20:07:09 ntpd[2038]: Listen normally on 4 lo [::1]:123 Jan 13 20:07:09.847377 ntpd[2038]: 13 Jan 20:07:09 ntpd[2038]: Listen normally on 5 eth0 [fe80::410:d4ff:fe97:38d1%2]:123 Jan 13 20:07:09.847377 ntpd[2038]: 13 Jan 20:07:09 ntpd[2038]: Listening on routing socket on fd #22 for interface updates Jan 13 20:07:09.817445 ntpd[2038]: ntp-4 is maintained by Network Time Foundation, Jan 13 20:07:09.817464 ntpd[2038]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 13 20:07:09.817483 ntpd[2038]: corporation. Support and training for ntp-4 are Jan 13 20:07:09.817502 ntpd[2038]: available at https://www.nwtime.org/support Jan 13 20:07:09.817520 ntpd[2038]: ---------------------------------------------------- Jan 13 20:07:09.824409 dbus-daemon[2031]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1604 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 13 20:07:09.831326 ntpd[2038]: proto: precision = 0.108 usec (-23) Jan 13 20:07:09.834358 ntpd[2038]: basedate set to 2025-01-01 Jan 13 20:07:09.834800 ntpd[2038]: gps base set to 2025-01-05 (week 2348) Jan 13 20:07:09.845516 ntpd[2038]: Listen and drop on 0 v6wildcard [::]:123 Jan 13 20:07:09.845614 ntpd[2038]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 13 20:07:09.845891 ntpd[2038]: Listen normally on 2 lo 127.0.0.1:123 Jan 13 20:07:09.845965 ntpd[2038]: Listen normally on 3 eth0 172.31.21.202:123 Jan 13 20:07:09.846037 ntpd[2038]: Listen normally on 4 lo [::1]:123 Jan 13 20:07:09.846121 ntpd[2038]: Listen normally on 5 eth0 [fe80::410:d4ff:fe97:38d1%2]:123 Jan 13 20:07:09.846191 ntpd[2038]: Listening on routing socket on fd #22 for interface updates Jan 13 20:07:09.854054 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
Jan 13 20:07:09.856905 extend-filesystems[2033]: Checking size of /dev/nvme0n1p9 Jan 13 20:07:09.862187 ntpd[2038]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 13 20:07:09.863358 ntpd[2038]: 13 Jan 20:07:09 ntpd[2038]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 13 20:07:09.863358 ntpd[2038]: 13 Jan 20:07:09 ntpd[2038]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 13 20:07:09.862252 ntpd[2038]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 13 20:07:09.877976 systemd[1]: Starting update-engine.service - Update Engine... Jan 13 20:07:09.890839 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 13 20:07:09.902433 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 13 20:07:09.923367 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 13 20:07:09.924048 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 13 20:07:09.930597 systemd[1]: motdgen.service: Deactivated successfully. Jan 13 20:07:09.931227 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 13 20:07:09.955894 extend-filesystems[2033]: Resized partition /dev/nvme0n1p9 Jan 13 20:07:09.974768 jq[2064]: true Jan 13 20:07:10.040777 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Jan 13 20:07:10.040990 extend-filesystems[2077]: resize2fs 1.47.1 (20-May-2024) Jan 13 20:07:10.043417 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 13 20:07:10.044051 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 13 20:07:10.095713 update_engine[2062]: I20250113 20:07:10.094995 2062 main.cc:92] Flatcar Update Engine starting Jan 13 20:07:10.097707 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 13 20:07:10.121602 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Jan 13 20:07:10.131609 update_engine[2062]: I20250113 20:07:10.129926 2062 update_check_scheduler.cc:74] Next update check in 10m29s Jan 13 20:07:10.135600 jq[2081]: true Jan 13 20:07:10.162920 (ntainerd)[2089]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 13 20:07:10.179151 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 13 20:07:10.200094 extend-filesystems[2077]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jan 13 20:07:10.200094 extend-filesystems[2077]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 13 20:07:10.200094 extend-filesystems[2077]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Jan 13 20:07:10.234035 tar[2074]: linux-arm64/helm Jan 13 20:07:10.179209 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
Jan 13 20:07:10.234842 extend-filesystems[2033]: Resized filesystem in /dev/nvme0n1p9 Jan 13 20:07:10.252080 coreos-metadata[2030]: Jan 13 20:07:10.206 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 13 20:07:10.252080 coreos-metadata[2030]: Jan 13 20:07:10.236 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Jan 13 20:07:10.252080 coreos-metadata[2030]: Jan 13 20:07:10.237 INFO Fetch successful Jan 13 20:07:10.252080 coreos-metadata[2030]: Jan 13 20:07:10.237 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Jan 13 20:07:10.252080 coreos-metadata[2030]: Jan 13 20:07:10.238 INFO Fetch successful Jan 13 20:07:10.252080 coreos-metadata[2030]: Jan 13 20:07:10.239 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Jan 13 20:07:10.252080 coreos-metadata[2030]: Jan 13 20:07:10.240 INFO Fetch successful Jan 13 20:07:10.252080 coreos-metadata[2030]: Jan 13 20:07:10.240 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Jan 13 20:07:10.252080 coreos-metadata[2030]: Jan 13 20:07:10.242 INFO Fetch successful Jan 13 20:07:10.252080 coreos-metadata[2030]: Jan 13 20:07:10.242 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Jan 13 20:07:10.252080 coreos-metadata[2030]: Jan 13 20:07:10.244 INFO Fetch failed with 404: resource not found Jan 13 20:07:10.252080 coreos-metadata[2030]: Jan 13 20:07:10.244 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Jan 13 20:07:10.252080 coreos-metadata[2030]: Jan 13 20:07:10.244 INFO Fetch successful Jan 13 20:07:10.252080 coreos-metadata[2030]: Jan 13 20:07:10.245 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Jan 13 20:07:10.252080 coreos-metadata[2030]: Jan 13 20:07:10.247 INFO Fetch successful Jan 13 20:07:10.252080 coreos-metadata[2030]: Jan 13 20:07:10.247 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Jan 13 20:07:10.252080 coreos-metadata[2030]: Jan 13 20:07:10.249 INFO Fetch successful Jan 13 20:07:10.252080 coreos-metadata[2030]: Jan 13 20:07:10.249 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Jan 13 20:07:10.252080 coreos-metadata[2030]: Jan 13 20:07:10.251 INFO Fetch successful Jan 13 20:07:10.252080 coreos-metadata[2030]: Jan 13 20:07:10.254 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Jan 13 20:07:10.213637 dbus-daemon[2031]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 13 20:07:10.181859 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 13 20:07:10.181896 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 13 20:07:10.196421 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 13 20:07:10.196977 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 13 20:07:10.232792 systemd[1]: Started update-engine.service - Update Engine. Jan 13 20:07:10.258774 coreos-metadata[2030]: Jan 13 20:07:10.256 INFO Fetch successful Jan 13 20:07:10.297928 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... 
Jan 13 20:07:10.301409 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 13 20:07:10.323293 systemd-logind[2058]: Watching system buttons on /dev/input/event0 (Power Button) Jan 13 20:07:10.323351 systemd-logind[2058]: Watching system buttons on /dev/input/event1 (Sleep Button) Jan 13 20:07:10.326847 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 13 20:07:10.342778 systemd-logind[2058]: New seat seat0. Jan 13 20:07:10.382284 systemd[1]: Started systemd-logind.service - User Login Management. Jan 13 20:07:10.416226 systemd[1]: Finished setup-oem.service - Setup OEM. Jan 13 20:07:10.438579 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Jan 13 20:07:10.508595 bash[2141]: Updated "/home/core/.ssh/authorized_keys" Jan 13 20:07:10.530305 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 13 20:07:10.553348 systemd[1]: Starting sshkeys.service... Jan 13 20:07:10.609518 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 13 20:07:10.616766 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 13 20:07:10.632445 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 13 20:07:10.643033 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 13 20:07:10.681821 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (2138) Jan 13 20:07:10.684536 amazon-ssm-agent[2150]: Initializing new seelog logger Jan 13 20:07:10.694884 amazon-ssm-agent[2150]: New Seelog Logger Creation Complete Jan 13 20:07:10.695072 amazon-ssm-agent[2150]: 2025/01/13 20:07:10 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 20:07:10.695072 amazon-ssm-agent[2150]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 20:07:10.695817 amazon-ssm-agent[2150]: 2025/01/13 20:07:10 processing appconfig overrides Jan 13 20:07:10.705741 amazon-ssm-agent[2150]: 2025/01/13 20:07:10 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 20:07:10.705741 amazon-ssm-agent[2150]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 20:07:10.705948 amazon-ssm-agent[2150]: 2025/01/13 20:07:10 processing appconfig overrides Jan 13 20:07:10.706256 amazon-ssm-agent[2150]: 2025/01/13 20:07:10 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 20:07:10.706256 amazon-ssm-agent[2150]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 20:07:10.706406 amazon-ssm-agent[2150]: 2025/01/13 20:07:10 processing appconfig overrides Jan 13 20:07:10.707293 amazon-ssm-agent[2150]: 2025-01-13 20:07:10 INFO Proxy environment variables: Jan 13 20:07:10.724681 amazon-ssm-agent[2150]: 2025/01/13 20:07:10 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 20:07:10.724827 amazon-ssm-agent[2150]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Jan 13 20:07:10.725129 amazon-ssm-agent[2150]: 2025/01/13 20:07:10 processing appconfig overrides Jan 13 20:07:10.811616 amazon-ssm-agent[2150]: 2025-01-13 20:07:10 INFO https_proxy: Jan 13 20:07:10.868202 locksmithd[2122]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 13 20:07:10.916601 amazon-ssm-agent[2150]: 2025-01-13 20:07:10 INFO http_proxy: Jan 13 20:07:11.011437 amazon-ssm-agent[2150]: 2025-01-13 20:07:10 INFO no_proxy: Jan 13 20:07:11.022601 coreos-metadata[2164]: Jan 13 20:07:11.019 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 13 20:07:11.023170 coreos-metadata[2164]: Jan 13 20:07:11.022 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jan 13 20:07:11.032781 coreos-metadata[2164]: Jan 13 20:07:11.024 INFO Fetch successful Jan 13 20:07:11.032781 coreos-metadata[2164]: Jan 13 20:07:11.024 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 13 20:07:11.032781 coreos-metadata[2164]: Jan 13 20:07:11.026 INFO Fetch successful Jan 13 20:07:11.032789 unknown[2164]: wrote ssh authorized keys file for user: core Jan 13 20:07:11.117043 amazon-ssm-agent[2150]: 2025-01-13 20:07:10 INFO Checking if agent identity type OnPrem can be assumed Jan 13 20:07:11.172758 update-ssh-keys[2246]: Updated "/home/core/.ssh/authorized_keys" Jan 13 20:07:11.178007 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 13 20:07:11.201436 systemd[1]: Finished sshkeys.service. Jan 13 20:07:11.217849 amazon-ssm-agent[2150]: 2025-01-13 20:07:10 INFO Checking if agent identity type EC2 can be assumed Jan 13 20:07:11.293608 containerd[2089]: time="2025-01-13T20:07:11.292649341Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jan 13 20:07:11.321496 amazon-ssm-agent[2150]: 2025-01-13 20:07:11 INFO Agent will take identity from EC2 Jan 13 20:07:11.427624 amazon-ssm-agent[2150]: 2025-01-13 20:07:11 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 13 20:07:11.428369 containerd[2089]: time="2025-01-13T20:07:11.427955954Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 13 20:07:11.437847 containerd[2089]: time="2025-01-13T20:07:11.437200562Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:07:11.437847 containerd[2089]: time="2025-01-13T20:07:11.437264942Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 13 20:07:11.437847 containerd[2089]: time="2025-01-13T20:07:11.437298638Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 13 20:07:11.437847 containerd[2089]: time="2025-01-13T20:07:11.437639222Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 13 20:07:11.437847 containerd[2089]: time="2025-01-13T20:07:11.437674286Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 13 20:07:11.437847 containerd[2089]: time="2025-01-13T20:07:11.437793722Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:07:11.439340 containerd[2089]: time="2025-01-13T20:07:11.439292678Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 13 20:07:11.442003 containerd[2089]: time="2025-01-13T20:07:11.439936766Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:07:11.442003 containerd[2089]: time="2025-01-13T20:07:11.439982378Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 13 20:07:11.442003 containerd[2089]: time="2025-01-13T20:07:11.440014430Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:07:11.442003 containerd[2089]: time="2025-01-13T20:07:11.440055302Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 13 20:07:11.442003 containerd[2089]: time="2025-01-13T20:07:11.440235530Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 13 20:07:11.442003 containerd[2089]: time="2025-01-13T20:07:11.440662934Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 13 20:07:11.442003 containerd[2089]: time="2025-01-13T20:07:11.440928638Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:07:11.442003 containerd[2089]: time="2025-01-13T20:07:11.440960222Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 13 20:07:11.442003 containerd[2089]: time="2025-01-13T20:07:11.441146666Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 13 20:07:11.442003 containerd[2089]: time="2025-01-13T20:07:11.441241118Z" level=info msg="metadata content store policy set" policy=shared Jan 13 20:07:11.446090 dbus-daemon[2031]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 13 20:07:11.446499 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jan 13 20:07:11.452161 dbus-daemon[2031]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=2120 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 13 20:07:11.465095 systemd[1]: Starting polkit.service - Authorization Manager... Jan 13 20:07:11.480393 containerd[2089]: time="2025-01-13T20:07:11.477273662Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 13 20:07:11.480393 containerd[2089]: time="2025-01-13T20:07:11.477422594Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 13 20:07:11.480393 containerd[2089]: time="2025-01-13T20:07:11.477459290Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." 
type=io.containerd.lease.v1 Jan 13 20:07:11.480393 containerd[2089]: time="2025-01-13T20:07:11.477512102Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 13 20:07:11.480393 containerd[2089]: time="2025-01-13T20:07:11.477546398Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 13 20:07:11.480393 containerd[2089]: time="2025-01-13T20:07:11.477834506Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 13 20:07:11.480393 containerd[2089]: time="2025-01-13T20:07:11.478369202Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 13 20:07:11.480393 containerd[2089]: time="2025-01-13T20:07:11.478536518Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 13 20:07:11.480393 containerd[2089]: time="2025-01-13T20:07:11.478598786Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 13 20:07:11.480393 containerd[2089]: time="2025-01-13T20:07:11.478635806Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 13 20:07:11.480393 containerd[2089]: time="2025-01-13T20:07:11.478668446Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 13 20:07:11.480393 containerd[2089]: time="2025-01-13T20:07:11.478709810Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 13 20:07:11.480393 containerd[2089]: time="2025-01-13T20:07:11.478740134Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 13 20:07:11.480393 containerd[2089]: time="2025-01-13T20:07:11.478775930Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 13 20:07:11.481186 containerd[2089]: time="2025-01-13T20:07:11.478819730Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 13 20:07:11.481186 containerd[2089]: time="2025-01-13T20:07:11.478853522Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 13 20:07:11.481186 containerd[2089]: time="2025-01-13T20:07:11.478881878Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 13 20:07:11.481186 containerd[2089]: time="2025-01-13T20:07:11.478910006Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 13 20:07:11.481186 containerd[2089]: time="2025-01-13T20:07:11.478949162Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 13 20:07:11.481186 containerd[2089]: time="2025-01-13T20:07:11.478979894Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 13 20:07:11.481186 containerd[2089]: time="2025-01-13T20:07:11.479008958Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 13 20:07:11.481186 containerd[2089]: time="2025-01-13T20:07:11.479039582Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 Jan 13 20:07:11.481186 containerd[2089]: time="2025-01-13T20:07:11.479068010Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 13 20:07:11.481186 containerd[2089]: time="2025-01-13T20:07:11.479097698Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 13 20:07:11.481186 containerd[2089]: time="2025-01-13T20:07:11.479126822Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 13 20:07:11.481186 containerd[2089]: time="2025-01-13T20:07:11.479156750Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 13 20:07:11.481186 containerd[2089]: time="2025-01-13T20:07:11.479186294Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 13 20:07:11.481186 containerd[2089]: time="2025-01-13T20:07:11.479220302Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 13 20:07:11.481782 containerd[2089]: time="2025-01-13T20:07:11.479256578Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 13 20:07:11.481782 containerd[2089]: time="2025-01-13T20:07:11.479286194Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 13 20:07:11.481782 containerd[2089]: time="2025-01-13T20:07:11.479323022Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 13 20:07:11.481782 containerd[2089]: time="2025-01-13T20:07:11.479359274Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 13 20:07:11.481782 containerd[2089]: time="2025-01-13T20:07:11.479407718Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 13 20:07:11.481782 containerd[2089]: time="2025-01-13T20:07:11.479439590Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 13 20:07:11.481782 containerd[2089]: time="2025-01-13T20:07:11.479467778Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 13 20:07:11.481782 containerd[2089]: time="2025-01-13T20:07:11.481358750Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 13 20:07:11.487398 containerd[2089]: time="2025-01-13T20:07:11.483597698Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 13 20:07:11.487398 containerd[2089]: time="2025-01-13T20:07:11.483679790Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 13 20:07:11.487398 containerd[2089]: time="2025-01-13T20:07:11.483729062Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 13 20:07:11.487398 containerd[2089]: time="2025-01-13T20:07:11.483800342Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 13 20:07:11.487398 containerd[2089]: time="2025-01-13T20:07:11.483870098Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." 
type=io.containerd.nri.v1 Jan 13 20:07:11.487398 containerd[2089]: time="2025-01-13T20:07:11.483897458Z" level=info msg="NRI interface is disabled by configuration." Jan 13 20:07:11.487398 containerd[2089]: time="2025-01-13T20:07:11.483946106Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 13 20:07:11.487940 containerd[2089]: time="2025-01-13T20:07:11.486445430Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 13 20:07:11.487940 containerd[2089]: time="2025-01-13T20:07:11.486615026Z" level=info msg="Connect containerd service" Jan 13 20:07:11.487940 containerd[2089]: time="2025-01-13T20:07:11.486713846Z" level=info msg="using legacy CRI server" Jan 13 20:07:11.487940 containerd[2089]: time="2025-01-13T20:07:11.486733922Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 13 20:07:11.487940 containerd[2089]: time="2025-01-13T20:07:11.487070414Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 13 20:07:11.494630 
containerd[2089]: time="2025-01-13T20:07:11.492979178Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 13 20:07:11.498407 containerd[2089]: time="2025-01-13T20:07:11.494441462Z" level=info msg="Start subscribing containerd event" Jan 13 20:07:11.498407 containerd[2089]: time="2025-01-13T20:07:11.495133454Z" level=info msg="Start recovering state" Jan 13 20:07:11.500577 containerd[2089]: time="2025-01-13T20:07:11.498767354Z" level=info msg="Start event monitor" Jan 13 20:07:11.500577 containerd[2089]: time="2025-01-13T20:07:11.498932930Z" level=info msg="Start snapshots syncer" Jan 13 20:07:11.500577 containerd[2089]: time="2025-01-13T20:07:11.498994298Z" level=info msg="Start cni network conf syncer for default" Jan 13 20:07:11.500577 containerd[2089]: time="2025-01-13T20:07:11.499015046Z" level=info msg="Start streaming server" Jan 13 20:07:11.500577 containerd[2089]: time="2025-01-13T20:07:11.498840062Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 13 20:07:11.500816 containerd[2089]: time="2025-01-13T20:07:11.499448966Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 13 20:07:11.507086 containerd[2089]: time="2025-01-13T20:07:11.506696918Z" level=info msg="containerd successfully booted in 0.222638s" Jan 13 20:07:11.506873 systemd[1]: Started containerd.service - containerd container runtime. Jan 13 20:07:11.524539 polkitd[2274]: Started polkitd version 121 Jan 13 20:07:11.537971 amazon-ssm-agent[2150]: 2025-01-13 20:07:11 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 13 20:07:11.559355 polkitd[2274]: Loading rules from directory /etc/polkit-1/rules.d Jan 13 20:07:11.559471 polkitd[2274]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 13 20:07:11.562617 polkitd[2274]: Finished loading, compiling and executing 2 rules Jan 13 20:07:11.567205 dbus-daemon[2031]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 13 20:07:11.567493 systemd[1]: Started polkit.service - Authorization Manager. Jan 13 20:07:11.571649 polkitd[2274]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 13 20:07:11.606636 systemd-resolved[1934]: System hostname changed to 'ip-172-31-21-202'. Jan 13 20:07:11.607209 systemd-hostnamed[2120]: Hostname set to (transient) Jan 13 20:07:11.638217 amazon-ssm-agent[2150]: 2025-01-13 20:07:11 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 13 20:07:11.738310 amazon-ssm-agent[2150]: 2025-01-13 20:07:11 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Jan 13 20:07:11.840527 amazon-ssm-agent[2150]: 2025-01-13 20:07:11 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Jan 13 20:07:11.936273 amazon-ssm-agent[2150]: 2025-01-13 20:07:11 INFO [amazon-ssm-agent] Starting Core Agent Jan 13 20:07:12.036731 amazon-ssm-agent[2150]: 2025-01-13 20:07:11 INFO [amazon-ssm-agent] registrar detected. Attempting registration Jan 13 20:07:12.103431 sshd_keygen[2078]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 13 20:07:12.138468 amazon-ssm-agent[2150]: 2025-01-13 20:07:11 INFO [Registrar] Starting registrar module Jan 13 20:07:12.224536 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. 
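The containerd warning earlier in this boot ("no network config found in /etc/cni/net.d: cni plugin not initialized") is expected on a node that has not joined a cluster yet: the CRI plugin keeps a conf syncer running ("Start cni network conf syncer for default") and picks up a network configuration once something writes one into that directory. For illustration only, a placeholder conflist could be staged as in the sketch below; the network name, bridge device and subnet are invented values, and on a real cluster this file comes from the network add-on rather than being written by hand:

    import json, os

    # Invented placeholder values; a real CNI add-on (flannel, calico, ...) writes this file.
    conf = {
        "cniVersion": "0.4.0",
        "name": "example-net",
        "plugins": [
            {
                "type": "bridge",
                "bridge": "cni0",
                "isGateway": True,
                "ipMasq": True,
                "ipam": {"type": "host-local", "subnet": "10.88.0.0/16"},
            },
            {"type": "portmap", "capabilities": {"portMappings": True}},
        ],
    }
    os.makedirs("/etc/cni/net.d", exist_ok=True)
    with open("/etc/cni/net.d/10-example.conflist", "w") as f:
        json.dump(conf, f, indent=2)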
Jan 13 20:07:12.239349 amazon-ssm-agent[2150]: 2025-01-13 20:07:11 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Jan 13 20:07:12.247187 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 13 20:07:12.273188 systemd[1]: issuegen.service: Deactivated successfully. Jan 13 20:07:12.273715 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 13 20:07:12.292315 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 13 20:07:12.321906 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 13 20:07:12.342169 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 13 20:07:12.358083 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 13 20:07:12.361009 systemd[1]: Reached target getty.target - Login Prompts. Jan 13 20:07:12.497602 tar[2074]: linux-arm64/LICENSE Jan 13 20:07:12.497602 tar[2074]: linux-arm64/README.md Jan 13 20:07:12.519861 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:07:12.537184 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 13 20:07:12.540181 (kubelet)[2314]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:07:12.541519 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 13 20:07:12.544656 systemd[1]: Startup finished in 11.704s (kernel) + 9.322s (userspace) = 21.027s. Jan 13 20:07:13.055193 amazon-ssm-agent[2150]: 2025-01-13 20:07:13 INFO [EC2Identity] EC2 registration was successful. Jan 13 20:07:13.090469 amazon-ssm-agent[2150]: 2025-01-13 20:07:13 INFO [CredentialRefresher] credentialRefresher has started Jan 13 20:07:13.090469 amazon-ssm-agent[2150]: 2025-01-13 20:07:13 INFO [CredentialRefresher] Starting credentials refresher loop Jan 13 20:07:13.090469 amazon-ssm-agent[2150]: 2025-01-13 20:07:13 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jan 13 20:07:13.156110 amazon-ssm-agent[2150]: 2025-01-13 20:07:13 INFO [CredentialRefresher] Next credential rotation will be in 32.19165964996667 minutes Jan 13 20:07:13.329454 kubelet[2314]: E0113 20:07:13.329260 2314 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:07:13.334873 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:07:13.335275 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:07:14.115659 amazon-ssm-agent[2150]: 2025-01-13 20:07:14 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jan 13 20:07:14.216271 amazon-ssm-agent[2150]: 2025-01-13 20:07:14 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2330) started Jan 13 20:07:14.316669 amazon-ssm-agent[2150]: 2025-01-13 20:07:14 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jan 13 20:07:16.431663 systemd-resolved[1934]: Clock change detected. Flushing caches. Jan 13 20:07:16.767156 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
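The kubelet failure above (and its repeats later in this log) is the service exiting because /var/lib/kubelet/config.yaml does not exist yet; on a kubeadm-provisioned node that file is written during kubeadm init/join, so the crash-loop before the node is bootstrapped is expected rather than a fault. Purely as an illustration of what lives at that path, the sketch below stages a minimal KubeletConfiguration; the field values are guesses informed by later lines in this log (cgroupfs driver, /etc/kubernetes/manifests static-pod path), not the file this host eventually received, and since YAML accepts JSON the json output is a valid config file:

    import json, os

    cfg = {
        "apiVersion": "kubelet.config.k8s.io/v1beta1",
        "kind": "KubeletConfiguration",
        "cgroupDriver": "cgroupfs",                    # matches CgroupDriver seen later in the log
        "staticPodPath": "/etc/kubernetes/manifests",  # matches the static pod path seen later
    }
    os.makedirs("/var/lib/kubelet", exist_ok=True)
    with open("/var/lib/kubelet/config.yaml", "w") as f:
        json.dump(cfg, f, indent=2)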
Jan 13 20:07:16.779218 systemd[1]: Started sshd@0-172.31.21.202:22-139.178.68.195:54782.service - OpenSSH per-connection server daemon (139.178.68.195:54782). Jan 13 20:07:17.021659 sshd[2340]: Accepted publickey for core from 139.178.68.195 port 54782 ssh2: RSA SHA256:dyUqkUNC7j9I+iFvInUHdwtzQ+aO/14TB1/ljGdoY9k Jan 13 20:07:17.024747 sshd-session[2340]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:07:17.045430 systemd-logind[2058]: New session 1 of user core. Jan 13 20:07:17.046853 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 13 20:07:17.053263 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 13 20:07:17.079172 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 13 20:07:17.090472 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 13 20:07:17.104105 (systemd)[2346]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 13 20:07:17.306882 systemd[2346]: Queued start job for default target default.target. Jan 13 20:07:17.307573 systemd[2346]: Created slice app.slice - User Application Slice. Jan 13 20:07:17.307629 systemd[2346]: Reached target paths.target - Paths. Jan 13 20:07:17.307659 systemd[2346]: Reached target timers.target - Timers. Jan 13 20:07:17.323962 systemd[2346]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 13 20:07:17.336151 systemd[2346]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 13 20:07:17.337389 systemd[2346]: Reached target sockets.target - Sockets. Jan 13 20:07:17.337426 systemd[2346]: Reached target basic.target - Basic System. Jan 13 20:07:17.337537 systemd[2346]: Reached target default.target - Main User Target. Jan 13 20:07:17.337602 systemd[2346]: Startup finished in 222ms. Jan 13 20:07:17.337786 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 13 20:07:17.344673 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 13 20:07:17.493416 systemd[1]: Started sshd@1-172.31.21.202:22-139.178.68.195:54796.service - OpenSSH per-connection server daemon (139.178.68.195:54796). Jan 13 20:07:17.685725 sshd[2358]: Accepted publickey for core from 139.178.68.195 port 54796 ssh2: RSA SHA256:dyUqkUNC7j9I+iFvInUHdwtzQ+aO/14TB1/ljGdoY9k Jan 13 20:07:17.688142 sshd-session[2358]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:07:17.696823 systemd-logind[2058]: New session 2 of user core. Jan 13 20:07:17.703390 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 13 20:07:17.830593 sshd[2361]: Connection closed by 139.178.68.195 port 54796 Jan 13 20:07:17.831444 sshd-session[2358]: pam_unix(sshd:session): session closed for user core Jan 13 20:07:17.838437 systemd[1]: sshd@1-172.31.21.202:22-139.178.68.195:54796.service: Deactivated successfully. Jan 13 20:07:17.841095 systemd-logind[2058]: Session 2 logged out. Waiting for processes to exit. Jan 13 20:07:17.844392 systemd[1]: session-2.scope: Deactivated successfully. Jan 13 20:07:17.846371 systemd-logind[2058]: Removed session 2. Jan 13 20:07:17.864266 systemd[1]: Started sshd@2-172.31.21.202:22-139.178.68.195:54804.service - OpenSSH per-connection server daemon (139.178.68.195:54804). 
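The "SHA256:dyUq..." strings in the Accepted publickey entries are OpenSSH key fingerprints: the unpadded base64 encoding of a SHA-256 digest over the raw public-key blob. The derivation is the same for client and host keys; the sketch below reads one of the host keys generated by sshd-keygen earlier in this boot (the path is the conventional location, an assumption rather than something stated in this log):

    import base64, hashlib

    with open("/etc/ssh/ssh_host_ed25519_key.pub") as f:
        blob = base64.b64decode(f.read().split()[1])   # second field is the base64 key blob
    digest = hashlib.sha256(blob).digest()
    print("SHA256:" + base64.b64encode(digest).decode().rstrip("="))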
Jan 13 20:07:18.041011 sshd[2366]: Accepted publickey for core from 139.178.68.195 port 54804 ssh2: RSA SHA256:dyUqkUNC7j9I+iFvInUHdwtzQ+aO/14TB1/ljGdoY9k Jan 13 20:07:18.043525 sshd-session[2366]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:07:18.051934 systemd-logind[2058]: New session 3 of user core. Jan 13 20:07:18.064297 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 13 20:07:18.183962 sshd[2369]: Connection closed by 139.178.68.195 port 54804 Jan 13 20:07:18.184750 sshd-session[2366]: pam_unix(sshd:session): session closed for user core Jan 13 20:07:18.191467 systemd[1]: sshd@2-172.31.21.202:22-139.178.68.195:54804.service: Deactivated successfully. Jan 13 20:07:18.193125 systemd-logind[2058]: Session 3 logged out. Waiting for processes to exit. Jan 13 20:07:18.198176 systemd[1]: session-3.scope: Deactivated successfully. Jan 13 20:07:18.199227 systemd-logind[2058]: Removed session 3. Jan 13 20:07:18.215291 systemd[1]: Started sshd@3-172.31.21.202:22-139.178.68.195:54808.service - OpenSSH per-connection server daemon (139.178.68.195:54808). Jan 13 20:07:18.397164 sshd[2374]: Accepted publickey for core from 139.178.68.195 port 54808 ssh2: RSA SHA256:dyUqkUNC7j9I+iFvInUHdwtzQ+aO/14TB1/ljGdoY9k Jan 13 20:07:18.400079 sshd-session[2374]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:07:18.408347 systemd-logind[2058]: New session 4 of user core. Jan 13 20:07:18.415307 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 13 20:07:18.542324 sshd[2377]: Connection closed by 139.178.68.195 port 54808 Jan 13 20:07:18.543162 sshd-session[2374]: pam_unix(sshd:session): session closed for user core Jan 13 20:07:18.547791 systemd[1]: sshd@3-172.31.21.202:22-139.178.68.195:54808.service: Deactivated successfully. Jan 13 20:07:18.554441 systemd[1]: session-4.scope: Deactivated successfully. Jan 13 20:07:18.556884 systemd-logind[2058]: Session 4 logged out. Waiting for processes to exit. Jan 13 20:07:18.558479 systemd-logind[2058]: Removed session 4. Jan 13 20:07:18.572324 systemd[1]: Started sshd@4-172.31.21.202:22-139.178.68.195:54810.service - OpenSSH per-connection server daemon (139.178.68.195:54810). Jan 13 20:07:18.759925 sshd[2382]: Accepted publickey for core from 139.178.68.195 port 54810 ssh2: RSA SHA256:dyUqkUNC7j9I+iFvInUHdwtzQ+aO/14TB1/ljGdoY9k Jan 13 20:07:18.762448 sshd-session[2382]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:07:18.769901 systemd-logind[2058]: New session 5 of user core. Jan 13 20:07:18.781341 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 13 20:07:18.897955 sudo[2386]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 13 20:07:18.899385 sudo[2386]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:07:18.914038 sudo[2386]: pam_unix(sudo:session): session closed for user root Jan 13 20:07:18.936731 sshd[2385]: Connection closed by 139.178.68.195 port 54810 Jan 13 20:07:18.938026 sshd-session[2382]: pam_unix(sshd:session): session closed for user core Jan 13 20:07:18.945379 systemd[1]: sshd@4-172.31.21.202:22-139.178.68.195:54810.service: Deactivated successfully. Jan 13 20:07:18.952557 systemd[1]: session-5.scope: Deactivated successfully. Jan 13 20:07:18.954305 systemd-logind[2058]: Session 5 logged out. Waiting for processes to exit. Jan 13 20:07:18.956394 systemd-logind[2058]: Removed session 5. 
Jan 13 20:07:18.964246 systemd[1]: Started sshd@5-172.31.21.202:22-139.178.68.195:54822.service - OpenSSH per-connection server daemon (139.178.68.195:54822). Jan 13 20:07:19.152663 sshd[2391]: Accepted publickey for core from 139.178.68.195 port 54822 ssh2: RSA SHA256:dyUqkUNC7j9I+iFvInUHdwtzQ+aO/14TB1/ljGdoY9k Jan 13 20:07:19.155156 sshd-session[2391]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:07:19.163422 systemd-logind[2058]: New session 6 of user core. Jan 13 20:07:19.173414 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 13 20:07:19.278849 sudo[2396]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 13 20:07:19.280026 sudo[2396]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:07:19.286302 sudo[2396]: pam_unix(sudo:session): session closed for user root Jan 13 20:07:19.296287 sudo[2395]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 13 20:07:19.297041 sudo[2395]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:07:19.323354 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 13 20:07:19.373192 augenrules[2418]: No rules Jan 13 20:07:19.376410 systemd[1]: audit-rules.service: Deactivated successfully. Jan 13 20:07:19.377254 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 13 20:07:19.381436 sudo[2395]: pam_unix(sudo:session): session closed for user root Jan 13 20:07:19.403983 sshd[2394]: Connection closed by 139.178.68.195 port 54822 Jan 13 20:07:19.405928 sshd-session[2391]: pam_unix(sshd:session): session closed for user core Jan 13 20:07:19.413350 systemd[1]: sshd@5-172.31.21.202:22-139.178.68.195:54822.service: Deactivated successfully. Jan 13 20:07:19.417929 systemd[1]: session-6.scope: Deactivated successfully. Jan 13 20:07:19.419313 systemd-logind[2058]: Session 6 logged out. Waiting for processes to exit. Jan 13 20:07:19.421540 systemd-logind[2058]: Removed session 6. Jan 13 20:07:19.435337 systemd[1]: Started sshd@6-172.31.21.202:22-139.178.68.195:54836.service - OpenSSH per-connection server daemon (139.178.68.195:54836). Jan 13 20:07:19.628850 sshd[2427]: Accepted publickey for core from 139.178.68.195 port 54836 ssh2: RSA SHA256:dyUqkUNC7j9I+iFvInUHdwtzQ+aO/14TB1/ljGdoY9k Jan 13 20:07:19.631215 sshd-session[2427]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:07:19.639892 systemd-logind[2058]: New session 7 of user core. Jan 13 20:07:19.651424 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 13 20:07:19.756135 sudo[2431]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 13 20:07:19.756771 sudo[2431]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:07:20.427224 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 13 20:07:20.428485 (dockerd)[2450]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 13 20:07:20.861614 dockerd[2450]: time="2025-01-13T20:07:20.860881066Z" level=info msg="Starting up" Jan 13 20:07:21.001987 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3482771016-merged.mount: Deactivated successfully. 
Jan 13 20:07:21.839613 dockerd[2450]: time="2025-01-13T20:07:21.839233510Z" level=info msg="Loading containers: start." Jan 13 20:07:22.143032 kernel: Initializing XFRM netlink socket Jan 13 20:07:22.199686 (udev-worker)[2472]: Network interface NamePolicy= disabled on kernel command line. Jan 13 20:07:22.292150 systemd-networkd[1604]: docker0: Link UP Jan 13 20:07:22.335110 dockerd[2450]: time="2025-01-13T20:07:22.335059893Z" level=info msg="Loading containers: done." Jan 13 20:07:22.358930 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2703052767-merged.mount: Deactivated successfully. Jan 13 20:07:22.369489 dockerd[2450]: time="2025-01-13T20:07:22.369420921Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 13 20:07:22.369686 dockerd[2450]: time="2025-01-13T20:07:22.369558237Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1 Jan 13 20:07:22.369786 dockerd[2450]: time="2025-01-13T20:07:22.369747765Z" level=info msg="Daemon has completed initialization" Jan 13 20:07:22.430432 dockerd[2450]: time="2025-01-13T20:07:22.430283997Z" level=info msg="API listen on /run/docker.sock" Jan 13 20:07:22.432274 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 13 20:07:23.198921 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 13 20:07:23.217180 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:07:23.672207 containerd[2089]: time="2025-01-13T20:07:23.672060239Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\"" Jan 13 20:07:24.209218 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:07:24.225528 (kubelet)[2655]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:07:24.313908 kubelet[2655]: E0113 20:07:24.313750 2655 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:07:24.324614 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:07:24.326203 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:07:24.638366 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1059857273.mount: Deactivated successfully. 
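Once dockerd reports "API listen on /run/docker.sock" above, the engine can be health-checked over that unix socket. The /_ping endpoint used below is the Engine API's standard liveness path rather than anything mentioned in this log; a minimal dependency-free probe:

    import socket

    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    s.connect("/run/docker.sock")
    s.sendall(b"GET /_ping HTTP/1.1\r\nHost: localhost\r\nConnection: close\r\n\r\n")
    reply = b""
    while chunk := s.recv(4096):
        reply += chunk
    s.close()
    print(reply.decode(errors="replace"))   # HTTP headers followed by the body "OK"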
Jan 13 20:07:26.484604 containerd[2089]: time="2025-01-13T20:07:26.484523989Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:07:26.486714 containerd[2089]: time="2025-01-13T20:07:26.486630529Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.12: active requests=0, bytes read=32201250" Jan 13 20:07:26.488846 containerd[2089]: time="2025-01-13T20:07:26.487999729Z" level=info msg="ImageCreate event name:\"sha256:50c86b7f73fdd28bacd4abf45260c9d3abc3b57eb038fa61fc45b5d0f2763e6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:07:26.493755 containerd[2089]: time="2025-01-13T20:07:26.493701541Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:07:26.496108 containerd[2089]: time="2025-01-13T20:07:26.496047913Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.12\" with image id \"sha256:50c86b7f73fdd28bacd4abf45260c9d3abc3b57eb038fa61fc45b5d0f2763e6f\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\", size \"32198050\" in 2.82392747s" Jan 13 20:07:26.496223 containerd[2089]: time="2025-01-13T20:07:26.496107937Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\" returns image reference \"sha256:50c86b7f73fdd28bacd4abf45260c9d3abc3b57eb038fa61fc45b5d0f2763e6f\"" Jan 13 20:07:26.535584 containerd[2089]: time="2025-01-13T20:07:26.535527542Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\"" Jan 13 20:07:28.417862 containerd[2089]: time="2025-01-13T20:07:28.417641631Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:07:28.419845 containerd[2089]: time="2025-01-13T20:07:28.419733387Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.12: active requests=0, bytes read=29381297" Jan 13 20:07:28.421276 containerd[2089]: time="2025-01-13T20:07:28.421199343Z" level=info msg="ImageCreate event name:\"sha256:2d47abaa6ccc533f84ef74fff6d509de10bb040317351b45afe95a8021a1ddf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:07:28.426845 containerd[2089]: time="2025-01-13T20:07:28.426765579Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:07:28.429627 containerd[2089]: time="2025-01-13T20:07:28.429364503Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.12\" with image id \"sha256:2d47abaa6ccc533f84ef74fff6d509de10bb040317351b45afe95a8021a1ddf7\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\", size \"30783618\" in 1.893434445s" Jan 13 20:07:28.429627 containerd[2089]: time="2025-01-13T20:07:28.429423195Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\" returns image reference \"sha256:2d47abaa6ccc533f84ef74fff6d509de10bb040317351b45afe95a8021a1ddf7\"" Jan 13 
20:07:28.474294 containerd[2089]: time="2025-01-13T20:07:28.473998659Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\"" Jan 13 20:07:29.706931 containerd[2089]: time="2025-01-13T20:07:29.706841465Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:07:29.709105 containerd[2089]: time="2025-01-13T20:07:29.709032125Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.12: active requests=0, bytes read=15765640" Jan 13 20:07:29.711017 containerd[2089]: time="2025-01-13T20:07:29.710938865Z" level=info msg="ImageCreate event name:\"sha256:ae633c52a23907b58f7a7867d2cccf3d3f5ebd8977beb6788e20fbecd3f446db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:07:29.717164 containerd[2089]: time="2025-01-13T20:07:29.717111041Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:07:29.719849 containerd[2089]: time="2025-01-13T20:07:29.719450082Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.12\" with image id \"sha256:ae633c52a23907b58f7a7867d2cccf3d3f5ebd8977beb6788e20fbecd3f446db\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\", size \"17167979\" in 1.245392935s" Jan 13 20:07:29.719849 containerd[2089]: time="2025-01-13T20:07:29.719509134Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\" returns image reference \"sha256:ae633c52a23907b58f7a7867d2cccf3d3f5ebd8977beb6788e20fbecd3f446db\"" Jan 13 20:07:29.758701 containerd[2089]: time="2025-01-13T20:07:29.758646390Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\"" Jan 13 20:07:31.098229 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4068242079.mount: Deactivated successfully. 
Jan 13 20:07:31.662529 containerd[2089]: time="2025-01-13T20:07:31.662444839Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:07:31.664216 containerd[2089]: time="2025-01-13T20:07:31.664135795Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.12: active requests=0, bytes read=25273977" Jan 13 20:07:31.666064 containerd[2089]: time="2025-01-13T20:07:31.665981815Z" level=info msg="ImageCreate event name:\"sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:07:31.670207 containerd[2089]: time="2025-01-13T20:07:31.670130563Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:07:31.671595 containerd[2089]: time="2025-01-13T20:07:31.671550499Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.12\" with image id \"sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06\", repo tag \"registry.k8s.io/kube-proxy:v1.29.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\", size \"25272996\" in 1.912844997s" Jan 13 20:07:31.671894 containerd[2089]: time="2025-01-13T20:07:31.671730211Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06\"" Jan 13 20:07:31.713567 containerd[2089]: time="2025-01-13T20:07:31.713441251Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 13 20:07:32.312335 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1294507347.mount: Deactivated successfully. Jan 13 20:07:34.575386 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 13 20:07:34.586251 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 13 20:07:35.191963 containerd[2089]: time="2025-01-13T20:07:35.191882817Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:07:35.209932 containerd[2089]: time="2025-01-13T20:07:35.209839137Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381" Jan 13 20:07:35.215971 containerd[2089]: time="2025-01-13T20:07:35.215852289Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:07:35.236213 containerd[2089]: time="2025-01-13T20:07:35.236108793Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:07:35.239097 containerd[2089]: time="2025-01-13T20:07:35.238564185Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 3.525062478s" Jan 13 20:07:35.239097 containerd[2089]: time="2025-01-13T20:07:35.238620609Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Jan 13 20:07:35.282572 containerd[2089]: time="2025-01-13T20:07:35.282439557Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 13 20:07:35.703167 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:07:35.709666 (kubelet)[2811]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:07:35.797145 kubelet[2811]: E0113 20:07:35.797046 2811 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:07:35.803111 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:07:35.804247 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:07:35.951698 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2435174253.mount: Deactivated successfully. 
Jan 13 20:07:35.962433 containerd[2089]: time="2025-01-13T20:07:35.962260237Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:07:35.964180 containerd[2089]: time="2025-01-13T20:07:35.964110289Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268821" Jan 13 20:07:35.965923 containerd[2089]: time="2025-01-13T20:07:35.965836777Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:07:35.971837 containerd[2089]: time="2025-01-13T20:07:35.971734021Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:07:35.974874 containerd[2089]: time="2025-01-13T20:07:35.973446457Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 690.947632ms" Jan 13 20:07:35.974874 containerd[2089]: time="2025-01-13T20:07:35.973499041Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Jan 13 20:07:36.012081 containerd[2089]: time="2025-01-13T20:07:36.012005997Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Jan 13 20:07:36.703903 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1369707688.mount: Deactivated successfully. Jan 13 20:07:38.907830 containerd[2089]: time="2025-01-13T20:07:38.907738515Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:07:38.911429 containerd[2089]: time="2025-01-13T20:07:38.911319591Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=65200786" Jan 13 20:07:38.912396 containerd[2089]: time="2025-01-13T20:07:38.912331527Z" level=info msg="ImageCreate event name:\"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:07:38.918538 containerd[2089]: time="2025-01-13T20:07:38.918439947Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:07:38.921309 containerd[2089]: time="2025-01-13T20:07:38.921084999Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"65198393\" in 2.908807826s" Jan 13 20:07:38.921309 containerd[2089]: time="2025-01-13T20:07:38.921145131Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\"" Jan 13 20:07:41.230720 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
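The containerd pull entries in this stretch of the log record both the image size in bytes and the wall-clock pull time (e.g. size "65198393" in 2.908807826s for etcd, and millisecond durations for the small pause image), which is enough to estimate pull throughput. A small parsing sketch over that line format; the sample line is abbreviated from the log above, and the optional backslash in the regex tolerates the journal's escaped quotes:

    import re

    line = 'Pulled image "registry.k8s.io/etcd:3.5.10-0" ... size "65198393" in 2.908807826s'
    m = re.search(r'size \\?"(\d+)\\?" in ([0-9.]+)(ms|s)', line)
    size_bytes = int(m.group(1))
    seconds = float(m.group(2)) / (1000 if m.group(3) == "ms" else 1)
    print(f"{size_bytes / seconds / 2**20:.1f} MiB/s")   # about 21.4 MiB/s for the etcd pull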
Jan 13 20:07:46.054768 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 13 20:07:46.063355 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:07:46.563102 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:07:46.585509 (kubelet)[2946]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:07:46.676089 kubelet[2946]: E0113 20:07:46.676016 2946 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:07:46.681189 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:07:46.682608 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:07:47.001049 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:07:47.014286 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:07:47.063957 systemd[1]: Reloading requested from client PID 2962 ('systemctl') (unit session-7.scope)... Jan 13 20:07:47.064417 systemd[1]: Reloading... Jan 13 20:07:47.261914 zram_generator::config[3007]: No configuration found. Jan 13 20:07:47.507424 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 20:07:47.660048 systemd[1]: Reloading finished in 594 ms. Jan 13 20:07:47.750519 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 13 20:07:47.750736 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 13 20:07:47.751414 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:07:47.760449 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:07:48.249243 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:07:48.265495 (kubelet)[3077]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 20:07:48.351591 kubelet[3077]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 20:07:48.351591 kubelet[3077]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 13 20:07:48.351591 kubelet[3077]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 13 20:07:48.354321 kubelet[3077]: I0113 20:07:48.351796 3077 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 20:07:50.146831 kubelet[3077]: I0113 20:07:50.146125 3077 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jan 13 20:07:50.146831 kubelet[3077]: I0113 20:07:50.146167 3077 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 20:07:50.146831 kubelet[3077]: I0113 20:07:50.146471 3077 server.go:919] "Client rotation is on, will bootstrap in background" Jan 13 20:07:50.189514 kubelet[3077]: I0113 20:07:50.189455 3077 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 20:07:50.190856 kubelet[3077]: E0113 20:07:50.190792 3077 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.21.202:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.21.202:6443: connect: connection refused Jan 13 20:07:50.205033 kubelet[3077]: I0113 20:07:50.204988 3077 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 13 20:07:50.207410 kubelet[3077]: I0113 20:07:50.207364 3077 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 20:07:50.207727 kubelet[3077]: I0113 20:07:50.207686 3077 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 13 20:07:50.207944 kubelet[3077]: I0113 20:07:50.207735 3077 topology_manager.go:138] "Creating topology manager with none policy" Jan 13 20:07:50.207944 kubelet[3077]: I0113 20:07:50.207759 3077 container_manager_linux.go:301] "Creating device plugin manager" Jan 13 20:07:50.210302 kubelet[3077]: I0113 20:07:50.210255 3077 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:07:50.214909 kubelet[3077]: I0113 20:07:50.214557 3077 kubelet.go:396] "Attempting to sync node with API server" Jan 13 20:07:50.214909 
kubelet[3077]: I0113 20:07:50.214605 3077 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 20:07:50.214909 kubelet[3077]: I0113 20:07:50.214648 3077 kubelet.go:312] "Adding apiserver pod source" Jan 13 20:07:50.214909 kubelet[3077]: I0113 20:07:50.214680 3077 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 20:07:50.218848 kubelet[3077]: W0113 20:07:50.218682 3077 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.31.21.202:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-21-202&limit=500&resourceVersion=0": dial tcp 172.31.21.202:6443: connect: connection refused Jan 13 20:07:50.218848 kubelet[3077]: E0113 20:07:50.218772 3077 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.21.202:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-21-202&limit=500&resourceVersion=0": dial tcp 172.31.21.202:6443: connect: connection refused Jan 13 20:07:50.219240 kubelet[3077]: W0113 20:07:50.219149 3077 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.31.21.202:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.21.202:6443: connect: connection refused Jan 13 20:07:50.219240 kubelet[3077]: E0113 20:07:50.219213 3077 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.21.202:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.21.202:6443: connect: connection refused Jan 13 20:07:50.219997 kubelet[3077]: I0113 20:07:50.219964 3077 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 13 20:07:50.220879 kubelet[3077]: I0113 20:07:50.220606 3077 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 20:07:50.222861 kubelet[3077]: W0113 20:07:50.221716 3077 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
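The reflector warnings above come from the kubelet's informers retrying list/watch calls against the API server at 172.31.21.202:6443, which are refused because the kube-apiserver static pod has not started yet. A sketch of the equivalent list call with client-go follows; the kubeconfig path is the conventional kubeadm location and is assumed here, not taken from the log.

package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig path; kubeadm writes this for the kubelet.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// Same shape as the informer's failing request:
	// GET /api/v1/nodes?fieldSelector=metadata.name=ip-172-31-21-202
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{
		FieldSelector: "metadata.name=ip-172-31-21-202",
	})
	if err != nil {
		log.Fatal(err) // "connection refused" until the apiserver pod is up
	}
	fmt.Println("nodes:", len(nodes.Items))
}

The retries stop succeeding only after the kube-apiserver container started further down in the log becomes reachable.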
Jan 13 20:07:50.222861 kubelet[3077]: I0113 20:07:50.222783 3077 server.go:1256] "Started kubelet" Jan 13 20:07:50.224657 kubelet[3077]: I0113 20:07:50.224613 3077 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 20:07:50.226036 kubelet[3077]: I0113 20:07:50.225991 3077 server.go:461] "Adding debug handlers to kubelet server" Jan 13 20:07:50.228523 kubelet[3077]: I0113 20:07:50.228481 3077 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 20:07:50.229134 kubelet[3077]: I0113 20:07:50.229106 3077 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 20:07:50.232179 kubelet[3077]: I0113 20:07:50.232119 3077 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 20:07:50.233657 kubelet[3077]: E0113 20:07:50.233338 3077 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.21.202:6443/api/v1/namespaces/default/events\": dial tcp 172.31.21.202:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-21-202.181a595ed9ed373f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-21-202,UID:ip-172-31-21-202,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-21-202,},FirstTimestamp:2025-01-13 20:07:50.222747455 +0000 UTC m=+1.950381910,LastTimestamp:2025-01-13 20:07:50.222747455 +0000 UTC m=+1.950381910,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-21-202,}" Jan 13 20:07:50.240765 kubelet[3077]: I0113 20:07:50.240721 3077 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 13 20:07:50.242954 kubelet[3077]: I0113 20:07:50.242920 3077 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jan 13 20:07:50.246700 kubelet[3077]: I0113 20:07:50.245370 3077 reconciler_new.go:29] "Reconciler: start to sync state" Jan 13 20:07:50.247107 kubelet[3077]: W0113 20:07:50.247029 3077 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.31.21.202:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.21.202:6443: connect: connection refused Jan 13 20:07:50.247221 kubelet[3077]: E0113 20:07:50.247122 3077 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.21.202:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.21.202:6443: connect: connection refused Jan 13 20:07:50.248717 kubelet[3077]: I0113 20:07:50.248640 3077 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 20:07:50.250879 kubelet[3077]: E0113 20:07:50.250777 3077 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.21.202:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-202?timeout=10s\": dial tcp 172.31.21.202:6443: connect: connection refused" interval="200ms" Jan 13 20:07:50.252924 kubelet[3077]: E0113 20:07:50.252849 3077 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 13 20:07:50.254927 kubelet[3077]: I0113 20:07:50.254884 3077 factory.go:221] Registration of the containerd container factory successfully Jan 13 20:07:50.254927 kubelet[3077]: I0113 20:07:50.254918 3077 factory.go:221] Registration of the systemd container factory successfully Jan 13 20:07:50.283234 kubelet[3077]: I0113 20:07:50.283157 3077 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 20:07:50.290189 kubelet[3077]: I0113 20:07:50.290135 3077 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 13 20:07:50.290343 kubelet[3077]: I0113 20:07:50.290185 3077 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 13 20:07:50.290343 kubelet[3077]: I0113 20:07:50.290242 3077 kubelet.go:2329] "Starting kubelet main sync loop" Jan 13 20:07:50.290453 kubelet[3077]: E0113 20:07:50.290352 3077 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 20:07:50.297850 kubelet[3077]: W0113 20:07:50.297743 3077 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.31.21.202:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.21.202:6443: connect: connection refused Jan 13 20:07:50.298078 kubelet[3077]: E0113 20:07:50.298057 3077 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.21.202:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.21.202:6443: connect: connection refused Jan 13 20:07:50.315360 kubelet[3077]: I0113 20:07:50.315306 3077 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 13 20:07:50.315360 kubelet[3077]: I0113 20:07:50.315347 3077 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 13 20:07:50.315546 kubelet[3077]: I0113 20:07:50.315381 3077 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:07:50.318179 kubelet[3077]: I0113 20:07:50.318132 3077 policy_none.go:49] "None policy: Start" Jan 13 20:07:50.319334 kubelet[3077]: I0113 20:07:50.319295 3077 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 13 20:07:50.319446 kubelet[3077]: I0113 20:07:50.319370 3077 state_mem.go:35] "Initializing new in-memory state store" Jan 13 20:07:50.331491 kubelet[3077]: I0113 20:07:50.331422 3077 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 20:07:50.333829 kubelet[3077]: I0113 20:07:50.332323 3077 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 20:07:50.338715 kubelet[3077]: E0113 20:07:50.338671 3077 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-21-202\" not found" Jan 13 20:07:50.342977 kubelet[3077]: I0113 20:07:50.342945 3077 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-21-202" Jan 13 20:07:50.343873 kubelet[3077]: E0113 20:07:50.343845 3077 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.21.202:6443/api/v1/nodes\": dial tcp 172.31.21.202:6443: connect: connection refused" node="ip-172-31-21-202" Jan 13 20:07:50.391342 kubelet[3077]: I0113 20:07:50.391307 3077 topology_manager.go:215] 
"Topology Admit Handler" podUID="ac7e40ab2950001aebd6006b9996ff4a" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-21-202" Jan 13 20:07:50.393575 kubelet[3077]: I0113 20:07:50.393517 3077 topology_manager.go:215] "Topology Admit Handler" podUID="96c0e831fe614bedb7d1dc2af7fa70cf" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-21-202" Jan 13 20:07:50.398599 kubelet[3077]: I0113 20:07:50.396985 3077 topology_manager.go:215] "Topology Admit Handler" podUID="2ec7164b13242a3e97be230f6494520d" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-21-202" Jan 13 20:07:50.451989 kubelet[3077]: E0113 20:07:50.451951 3077 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.21.202:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-202?timeout=10s\": dial tcp 172.31.21.202:6443: connect: connection refused" interval="400ms" Jan 13 20:07:50.545983 kubelet[3077]: I0113 20:07:50.545911 3077 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-21-202" Jan 13 20:07:50.546687 kubelet[3077]: E0113 20:07:50.546374 3077 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.21.202:6443/api/v1/nodes\": dial tcp 172.31.21.202:6443: connect: connection refused" node="ip-172-31-21-202" Jan 13 20:07:50.546687 kubelet[3077]: I0113 20:07:50.546512 3077 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ac7e40ab2950001aebd6006b9996ff4a-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-21-202\" (UID: \"ac7e40ab2950001aebd6006b9996ff4a\") " pod="kube-system/kube-apiserver-ip-172-31-21-202" Jan 13 20:07:50.546687 kubelet[3077]: I0113 20:07:50.546561 3077 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/96c0e831fe614bedb7d1dc2af7fa70cf-ca-certs\") pod \"kube-controller-manager-ip-172-31-21-202\" (UID: \"96c0e831fe614bedb7d1dc2af7fa70cf\") " pod="kube-system/kube-controller-manager-ip-172-31-21-202" Jan 13 20:07:50.546687 kubelet[3077]: I0113 20:07:50.546607 3077 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/96c0e831fe614bedb7d1dc2af7fa70cf-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-21-202\" (UID: \"96c0e831fe614bedb7d1dc2af7fa70cf\") " pod="kube-system/kube-controller-manager-ip-172-31-21-202" Jan 13 20:07:50.546687 kubelet[3077]: I0113 20:07:50.546672 3077 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/96c0e831fe614bedb7d1dc2af7fa70cf-k8s-certs\") pod \"kube-controller-manager-ip-172-31-21-202\" (UID: \"96c0e831fe614bedb7d1dc2af7fa70cf\") " pod="kube-system/kube-controller-manager-ip-172-31-21-202" Jan 13 20:07:50.547162 kubelet[3077]: I0113 20:07:50.546726 3077 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/96c0e831fe614bedb7d1dc2af7fa70cf-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-21-202\" (UID: \"96c0e831fe614bedb7d1dc2af7fa70cf\") " pod="kube-system/kube-controller-manager-ip-172-31-21-202" Jan 13 20:07:50.547162 kubelet[3077]: I0113 20:07:50.546769 3077 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ac7e40ab2950001aebd6006b9996ff4a-ca-certs\") pod \"kube-apiserver-ip-172-31-21-202\" (UID: \"ac7e40ab2950001aebd6006b9996ff4a\") " pod="kube-system/kube-apiserver-ip-172-31-21-202" Jan 13 20:07:50.547162 kubelet[3077]: I0113 20:07:50.546854 3077 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/96c0e831fe614bedb7d1dc2af7fa70cf-kubeconfig\") pod \"kube-controller-manager-ip-172-31-21-202\" (UID: \"96c0e831fe614bedb7d1dc2af7fa70cf\") " pod="kube-system/kube-controller-manager-ip-172-31-21-202" Jan 13 20:07:50.547162 kubelet[3077]: I0113 20:07:50.546911 3077 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2ec7164b13242a3e97be230f6494520d-kubeconfig\") pod \"kube-scheduler-ip-172-31-21-202\" (UID: \"2ec7164b13242a3e97be230f6494520d\") " pod="kube-system/kube-scheduler-ip-172-31-21-202" Jan 13 20:07:50.547162 kubelet[3077]: I0113 20:07:50.546956 3077 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ac7e40ab2950001aebd6006b9996ff4a-k8s-certs\") pod \"kube-apiserver-ip-172-31-21-202\" (UID: \"ac7e40ab2950001aebd6006b9996ff4a\") " pod="kube-system/kube-apiserver-ip-172-31-21-202" Jan 13 20:07:50.705960 containerd[2089]: time="2025-01-13T20:07:50.705829490Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-21-202,Uid:ac7e40ab2950001aebd6006b9996ff4a,Namespace:kube-system,Attempt:0,}" Jan 13 20:07:50.711942 containerd[2089]: time="2025-01-13T20:07:50.711575006Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-21-202,Uid:2ec7164b13242a3e97be230f6494520d,Namespace:kube-system,Attempt:0,}" Jan 13 20:07:50.718155 containerd[2089]: time="2025-01-13T20:07:50.717989030Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-21-202,Uid:96c0e831fe614bedb7d1dc2af7fa70cf,Namespace:kube-system,Attempt:0,}" Jan 13 20:07:50.853199 kubelet[3077]: E0113 20:07:50.853161 3077 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.21.202:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-202?timeout=10s\": dial tcp 172.31.21.202:6443: connect: connection refused" interval="800ms" Jan 13 20:07:50.949454 kubelet[3077]: I0113 20:07:50.949168 3077 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-21-202" Jan 13 20:07:50.949899 kubelet[3077]: E0113 20:07:50.949867 3077 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.21.202:6443/api/v1/nodes\": dial tcp 172.31.21.202:6443: connect: connection refused" node="ip-172-31-21-202" Jan 13 20:07:51.177088 kubelet[3077]: W0113 20:07:51.176921 3077 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.31.21.202:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.21.202:6443: connect: connection refused Jan 13 20:07:51.177088 kubelet[3077]: E0113 20:07:51.177014 3077 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get 
"https://172.31.21.202:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.21.202:6443: connect: connection refused Jan 13 20:07:51.226014 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3860601145.mount: Deactivated successfully. Jan 13 20:07:51.232893 containerd[2089]: time="2025-01-13T20:07:51.232479000Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:07:51.243612 containerd[2089]: time="2025-01-13T20:07:51.243481416Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:07:51.247592 containerd[2089]: time="2025-01-13T20:07:51.247455960Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Jan 13 20:07:51.248682 containerd[2089]: time="2025-01-13T20:07:51.248618904Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 20:07:51.255828 containerd[2089]: time="2025-01-13T20:07:51.255134844Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:07:51.257625 containerd[2089]: time="2025-01-13T20:07:51.257579616Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:07:51.258437 containerd[2089]: time="2025-01-13T20:07:51.258368772Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 20:07:51.262711 containerd[2089]: time="2025-01-13T20:07:51.262657141Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:07:51.266984 containerd[2089]: time="2025-01-13T20:07:51.266936233Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 560.981259ms" Jan 13 20:07:51.271074 containerd[2089]: time="2025-01-13T20:07:51.271002085Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 552.910587ms" Jan 13 20:07:51.282754 containerd[2089]: time="2025-01-13T20:07:51.282696997Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 571.019715ms" Jan 13 20:07:51.299708 kubelet[3077]: W0113 20:07:51.299626 3077 
reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.31.21.202:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.21.202:6443: connect: connection refused Jan 13 20:07:51.299708 kubelet[3077]: E0113 20:07:51.299702 3077 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.21.202:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.21.202:6443: connect: connection refused Jan 13 20:07:51.373911 kubelet[3077]: W0113 20:07:51.373788 3077 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.31.21.202:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.21.202:6443: connect: connection refused Jan 13 20:07:51.374081 kubelet[3077]: E0113 20:07:51.373936 3077 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.21.202:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.21.202:6443: connect: connection refused Jan 13 20:07:51.438433 containerd[2089]: time="2025-01-13T20:07:51.437974537Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:07:51.438433 containerd[2089]: time="2025-01-13T20:07:51.438105145Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:07:51.438433 containerd[2089]: time="2025-01-13T20:07:51.438142717Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:07:51.438878 containerd[2089]: time="2025-01-13T20:07:51.438319717Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:07:51.445932 containerd[2089]: time="2025-01-13T20:07:51.445609621Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:07:51.445932 containerd[2089]: time="2025-01-13T20:07:51.445749697Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:07:51.447302 containerd[2089]: time="2025-01-13T20:07:51.446863921Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:07:51.447302 containerd[2089]: time="2025-01-13T20:07:51.446955745Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:07:51.447302 containerd[2089]: time="2025-01-13T20:07:51.446991925Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:07:51.447302 containerd[2089]: time="2025-01-13T20:07:51.445820317Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:07:51.447754 containerd[2089]: time="2025-01-13T20:07:51.447460801Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:07:51.448177 containerd[2089]: time="2025-01-13T20:07:51.448030441Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:07:51.523126 kubelet[3077]: W0113 20:07:51.523038 3077 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.31.21.202:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-21-202&limit=500&resourceVersion=0": dial tcp 172.31.21.202:6443: connect: connection refused Jan 13 20:07:51.523126 kubelet[3077]: E0113 20:07:51.523134 3077 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.21.202:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-21-202&limit=500&resourceVersion=0": dial tcp 172.31.21.202:6443: connect: connection refused Jan 13 20:07:51.601318 containerd[2089]: time="2025-01-13T20:07:51.601135118Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-21-202,Uid:96c0e831fe614bedb7d1dc2af7fa70cf,Namespace:kube-system,Attempt:0,} returns sandbox id \"d47e02ed24d7f18fef721b76e01ab1da0d1e44955a3576b94c9649879517149f\"" Jan 13 20:07:51.610325 containerd[2089]: time="2025-01-13T20:07:51.610178990Z" level=info msg="CreateContainer within sandbox \"d47e02ed24d7f18fef721b76e01ab1da0d1e44955a3576b94c9649879517149f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 13 20:07:51.612067 containerd[2089]: time="2025-01-13T20:07:51.611886122Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-21-202,Uid:ac7e40ab2950001aebd6006b9996ff4a,Namespace:kube-system,Attempt:0,} returns sandbox id \"a895ec305de99fd79461d4e5c2f01e48deefcb84d1c35ea6f73c61f8e4301580\"" Jan 13 20:07:51.623885 containerd[2089]: time="2025-01-13T20:07:51.623157506Z" level=info msg="CreateContainer within sandbox \"a895ec305de99fd79461d4e5c2f01e48deefcb84d1c35ea6f73c61f8e4301580\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 13 20:07:51.629177 containerd[2089]: time="2025-01-13T20:07:51.629067614Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-21-202,Uid:2ec7164b13242a3e97be230f6494520d,Namespace:kube-system,Attempt:0,} returns sandbox id \"df40b16e40783d61869bfe7ae39eeeb9e7a12705c51dcecdc5db3fc975d89396\"" Jan 13 20:07:51.641057 containerd[2089]: time="2025-01-13T20:07:51.640959326Z" level=info msg="CreateContainer within sandbox \"df40b16e40783d61869bfe7ae39eeeb9e7a12705c51dcecdc5db3fc975d89396\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 13 20:07:51.653623 containerd[2089]: time="2025-01-13T20:07:51.653552090Z" level=info msg="CreateContainer within sandbox \"d47e02ed24d7f18fef721b76e01ab1da0d1e44955a3576b94c9649879517149f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"ca76a4c11db3c4baa3930486910eead16bfba0680b7e0fdf86156a8f3f86aade\"" Jan 13 20:07:51.654460 kubelet[3077]: E0113 20:07:51.654317 3077 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.21.202:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-202?timeout=10s\": dial tcp 172.31.21.202:6443: connect: connection refused" interval="1.6s" Jan 13 20:07:51.654832 containerd[2089]: time="2025-01-13T20:07:51.654765470Z" level=info msg="StartContainer for 
\"ca76a4c11db3c4baa3930486910eead16bfba0680b7e0fdf86156a8f3f86aade\"" Jan 13 20:07:51.666699 containerd[2089]: time="2025-01-13T20:07:51.666627087Z" level=info msg="CreateContainer within sandbox \"a895ec305de99fd79461d4e5c2f01e48deefcb84d1c35ea6f73c61f8e4301580\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"d8e4b61b917aaab242e95c10d7e386e225e4ee85cdee1cc98d4929c4bd93f8a6\"" Jan 13 20:07:51.667592 containerd[2089]: time="2025-01-13T20:07:51.667419543Z" level=info msg="StartContainer for \"d8e4b61b917aaab242e95c10d7e386e225e4ee85cdee1cc98d4929c4bd93f8a6\"" Jan 13 20:07:51.682824 containerd[2089]: time="2025-01-13T20:07:51.682729287Z" level=info msg="CreateContainer within sandbox \"df40b16e40783d61869bfe7ae39eeeb9e7a12705c51dcecdc5db3fc975d89396\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"dff87eaea4bb084a1eb3e888a45dc6f8e261b648b4808c6742450116c29b3cc1\"" Jan 13 20:07:51.685639 containerd[2089]: time="2025-01-13T20:07:51.685491675Z" level=info msg="StartContainer for \"dff87eaea4bb084a1eb3e888a45dc6f8e261b648b4808c6742450116c29b3cc1\"" Jan 13 20:07:51.755787 kubelet[3077]: I0113 20:07:51.755050 3077 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-21-202" Jan 13 20:07:51.755787 kubelet[3077]: E0113 20:07:51.755752 3077 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.21.202:6443/api/v1/nodes\": dial tcp 172.31.21.202:6443: connect: connection refused" node="ip-172-31-21-202" Jan 13 20:07:51.826469 containerd[2089]: time="2025-01-13T20:07:51.825483471Z" level=info msg="StartContainer for \"ca76a4c11db3c4baa3930486910eead16bfba0680b7e0fdf86156a8f3f86aade\" returns successfully" Jan 13 20:07:51.891443 containerd[2089]: time="2025-01-13T20:07:51.891377704Z" level=info msg="StartContainer for \"d8e4b61b917aaab242e95c10d7e386e225e4ee85cdee1cc98d4929c4bd93f8a6\" returns successfully" Jan 13 20:07:51.954424 containerd[2089]: time="2025-01-13T20:07:51.953735320Z" level=info msg="StartContainer for \"dff87eaea4bb084a1eb3e888a45dc6f8e261b648b4808c6742450116c29b3cc1\" returns successfully" Jan 13 20:07:53.362494 kubelet[3077]: I0113 20:07:53.360049 3077 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-21-202" Jan 13 20:07:55.146826 update_engine[2062]: I20250113 20:07:55.144840 2062 update_attempter.cc:509] Updating boot flags... Jan 13 20:07:55.462877 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (3369) Jan 13 20:07:55.686943 kubelet[3077]: I0113 20:07:55.686059 3077 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-21-202" Jan 13 20:07:55.908927 kubelet[3077]: E0113 20:07:55.903189 3077 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-node-lease\" not found" interval="3.2s" Jan 13 20:07:56.113884 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (3369) Jan 13 20:07:56.220597 kubelet[3077]: I0113 20:07:56.220553 3077 apiserver.go:52] "Watching apiserver" Jan 13 20:07:56.245780 kubelet[3077]: I0113 20:07:56.245373 3077 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jan 13 20:07:58.449546 systemd[1]: Reloading requested from client PID 3539 ('systemctl') (unit session-7.scope)... Jan 13 20:07:58.450024 systemd[1]: Reloading... Jan 13 20:07:58.632846 zram_generator::config[3582]: No configuration found. 
Jan 13 20:07:58.889874 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 20:07:59.067871 systemd[1]: Reloading finished in 617 ms. Jan 13 20:07:59.129198 kubelet[3077]: I0113 20:07:59.129146 3077 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 20:07:59.129960 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:07:59.144036 systemd[1]: kubelet.service: Deactivated successfully. Jan 13 20:07:59.144976 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:07:59.157564 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:07:59.537234 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:07:59.556609 (kubelet)[3649]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 20:07:59.680895 kubelet[3649]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 20:07:59.680895 kubelet[3649]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 13 20:07:59.680895 kubelet[3649]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 20:07:59.680895 kubelet[3649]: I0113 20:07:59.679964 3649 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 20:07:59.691443 kubelet[3649]: I0113 20:07:59.691388 3649 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jan 13 20:07:59.691443 kubelet[3649]: I0113 20:07:59.691435 3649 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 20:07:59.692096 kubelet[3649]: I0113 20:07:59.692054 3649 server.go:919] "Client rotation is on, will bootstrap in background" Jan 13 20:07:59.695266 kubelet[3649]: I0113 20:07:59.695216 3649 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 13 20:07:59.698714 kubelet[3649]: I0113 20:07:59.698654 3649 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 20:07:59.699203 sudo[3663]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 13 20:07:59.701039 sudo[3663]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 13 20:07:59.720702 kubelet[3649]: I0113 20:07:59.720286 3649 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 13 20:07:59.723873 kubelet[3649]: I0113 20:07:59.723501 3649 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 20:07:59.724885 kubelet[3649]: I0113 20:07:59.724274 3649 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 13 20:07:59.724885 kubelet[3649]: I0113 20:07:59.724337 3649 topology_manager.go:138] "Creating topology manager with none policy" Jan 13 20:07:59.724885 kubelet[3649]: I0113 20:07:59.724358 3649 container_manager_linux.go:301] "Creating device plugin manager" Jan 13 20:07:59.724885 kubelet[3649]: I0113 20:07:59.724414 3649 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:07:59.726110 kubelet[3649]: I0113 20:07:59.726024 3649 kubelet.go:396] "Attempting to sync node with API server" Jan 13 20:07:59.726768 kubelet[3649]: I0113 20:07:59.726710 3649 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 20:07:59.728018 kubelet[3649]: I0113 20:07:59.727958 3649 kubelet.go:312] "Adding apiserver pod source" Jan 13 20:07:59.728149 kubelet[3649]: I0113 20:07:59.728034 3649 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 20:07:59.736202 kubelet[3649]: I0113 20:07:59.735336 3649 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 13 20:07:59.736202 kubelet[3649]: I0113 20:07:59.735665 3649 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 20:07:59.736365 kubelet[3649]: I0113 20:07:59.736324 3649 server.go:1256] "Started kubelet" Jan 13 20:07:59.754823 kubelet[3649]: I0113 20:07:59.750413 3649 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 20:07:59.758704 kubelet[3649]: I0113 20:07:59.758502 3649 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 20:07:59.765538 kubelet[3649]: I0113 20:07:59.765486 3649 server.go:461] "Adding debug handlers to kubelet server" Jan 13 20:07:59.776345 kubelet[3649]: I0113 20:07:59.773718 3649 ratelimit.go:55] "Setting rate limiting for endpoint" 
service="podresources" qps=100 burstTokens=10 Jan 13 20:07:59.778197 kubelet[3649]: I0113 20:07:59.777983 3649 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 20:07:59.799321 kubelet[3649]: I0113 20:07:59.794980 3649 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 13 20:07:59.799321 kubelet[3649]: I0113 20:07:59.795338 3649 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jan 13 20:07:59.799321 kubelet[3649]: I0113 20:07:59.795609 3649 reconciler_new.go:29] "Reconciler: start to sync state" Jan 13 20:07:59.829725 kubelet[3649]: I0113 20:07:59.828557 3649 factory.go:221] Registration of the systemd container factory successfully Jan 13 20:07:59.829725 kubelet[3649]: I0113 20:07:59.828714 3649 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 20:07:59.838984 kubelet[3649]: I0113 20:07:59.838942 3649 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 20:07:59.843275 kubelet[3649]: I0113 20:07:59.843225 3649 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 13 20:07:59.843275 kubelet[3649]: I0113 20:07:59.843270 3649 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 13 20:07:59.843930 kubelet[3649]: I0113 20:07:59.843301 3649 kubelet.go:2329] "Starting kubelet main sync loop" Jan 13 20:07:59.843930 kubelet[3649]: E0113 20:07:59.843391 3649 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 20:07:59.858627 kubelet[3649]: I0113 20:07:59.858574 3649 factory.go:221] Registration of the containerd container factory successfully Jan 13 20:07:59.873474 kubelet[3649]: E0113 20:07:59.873424 3649 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 13 20:07:59.916583 kubelet[3649]: I0113 20:07:59.916532 3649 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-21-202" Jan 13 20:07:59.943648 kubelet[3649]: I0113 20:07:59.942204 3649 kubelet_node_status.go:112] "Node was previously registered" node="ip-172-31-21-202" Jan 13 20:07:59.943648 kubelet[3649]: I0113 20:07:59.942323 3649 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-21-202" Jan 13 20:07:59.945086 kubelet[3649]: E0113 20:07:59.944319 3649 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 13 20:08:00.026178 kubelet[3649]: I0113 20:08:00.026129 3649 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 13 20:08:00.026178 kubelet[3649]: I0113 20:08:00.026170 3649 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 13 20:08:00.026369 kubelet[3649]: I0113 20:08:00.026204 3649 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:08:00.026985 kubelet[3649]: I0113 20:08:00.026436 3649 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 13 20:08:00.026985 kubelet[3649]: I0113 20:08:00.026483 3649 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 13 20:08:00.026985 kubelet[3649]: I0113 20:08:00.026501 3649 policy_none.go:49] "None policy: Start" Jan 13 20:08:00.028924 kubelet[3649]: I0113 20:08:00.028284 3649 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 13 20:08:00.028924 kubelet[3649]: I0113 20:08:00.028337 3649 state_mem.go:35] "Initializing new in-memory state store" Jan 13 20:08:00.028924 kubelet[3649]: I0113 20:08:00.028611 3649 state_mem.go:75] "Updated machine memory state" Jan 13 20:08:00.031371 kubelet[3649]: I0113 20:08:00.031328 3649 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 20:08:00.038991 kubelet[3649]: I0113 20:08:00.038855 3649 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 20:08:00.146299 kubelet[3649]: I0113 20:08:00.145109 3649 topology_manager.go:215] "Topology Admit Handler" podUID="2ec7164b13242a3e97be230f6494520d" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-21-202" Jan 13 20:08:00.146299 kubelet[3649]: I0113 20:08:00.145217 3649 topology_manager.go:215] "Topology Admit Handler" podUID="ac7e40ab2950001aebd6006b9996ff4a" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-21-202" Jan 13 20:08:00.146299 kubelet[3649]: I0113 20:08:00.145310 3649 topology_manager.go:215] "Topology Admit Handler" podUID="96c0e831fe614bedb7d1dc2af7fa70cf" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-21-202" Jan 13 20:08:00.164837 kubelet[3649]: E0113 20:08:00.163632 3649 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ip-172-31-21-202\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-21-202" Jan 13 20:08:00.165126 kubelet[3649]: E0113 20:08:00.163633 3649 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ip-172-31-21-202\" already exists" pod="kube-system/kube-scheduler-ip-172-31-21-202" Jan 13 20:08:00.198727 kubelet[3649]: I0113 20:08:00.198669 3649 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/96c0e831fe614bedb7d1dc2af7fa70cf-k8s-certs\") pod 
\"kube-controller-manager-ip-172-31-21-202\" (UID: \"96c0e831fe614bedb7d1dc2af7fa70cf\") " pod="kube-system/kube-controller-manager-ip-172-31-21-202" Jan 13 20:08:00.198908 kubelet[3649]: I0113 20:08:00.198751 3649 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/96c0e831fe614bedb7d1dc2af7fa70cf-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-21-202\" (UID: \"96c0e831fe614bedb7d1dc2af7fa70cf\") " pod="kube-system/kube-controller-manager-ip-172-31-21-202" Jan 13 20:08:00.198908 kubelet[3649]: I0113 20:08:00.198823 3649 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ac7e40ab2950001aebd6006b9996ff4a-ca-certs\") pod \"kube-apiserver-ip-172-31-21-202\" (UID: \"ac7e40ab2950001aebd6006b9996ff4a\") " pod="kube-system/kube-apiserver-ip-172-31-21-202" Jan 13 20:08:00.198908 kubelet[3649]: I0113 20:08:00.198879 3649 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ac7e40ab2950001aebd6006b9996ff4a-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-21-202\" (UID: \"ac7e40ab2950001aebd6006b9996ff4a\") " pod="kube-system/kube-apiserver-ip-172-31-21-202" Jan 13 20:08:00.199067 kubelet[3649]: I0113 20:08:00.198924 3649 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/96c0e831fe614bedb7d1dc2af7fa70cf-ca-certs\") pod \"kube-controller-manager-ip-172-31-21-202\" (UID: \"96c0e831fe614bedb7d1dc2af7fa70cf\") " pod="kube-system/kube-controller-manager-ip-172-31-21-202" Jan 13 20:08:00.199067 kubelet[3649]: I0113 20:08:00.198969 3649 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/96c0e831fe614bedb7d1dc2af7fa70cf-kubeconfig\") pod \"kube-controller-manager-ip-172-31-21-202\" (UID: \"96c0e831fe614bedb7d1dc2af7fa70cf\") " pod="kube-system/kube-controller-manager-ip-172-31-21-202" Jan 13 20:08:00.199067 kubelet[3649]: I0113 20:08:00.199012 3649 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2ec7164b13242a3e97be230f6494520d-kubeconfig\") pod \"kube-scheduler-ip-172-31-21-202\" (UID: \"2ec7164b13242a3e97be230f6494520d\") " pod="kube-system/kube-scheduler-ip-172-31-21-202" Jan 13 20:08:00.199067 kubelet[3649]: I0113 20:08:00.199057 3649 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ac7e40ab2950001aebd6006b9996ff4a-k8s-certs\") pod \"kube-apiserver-ip-172-31-21-202\" (UID: \"ac7e40ab2950001aebd6006b9996ff4a\") " pod="kube-system/kube-apiserver-ip-172-31-21-202" Jan 13 20:08:00.199251 kubelet[3649]: I0113 20:08:00.199104 3649 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/96c0e831fe614bedb7d1dc2af7fa70cf-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-21-202\" (UID: \"96c0e831fe614bedb7d1dc2af7fa70cf\") " pod="kube-system/kube-controller-manager-ip-172-31-21-202" Jan 13 20:08:00.654057 sudo[3663]: pam_unix(sudo:session): session closed for user root Jan 13 
20:08:00.748000 kubelet[3649]: I0113 20:08:00.747900 3649 apiserver.go:52] "Watching apiserver" Jan 13 20:08:00.796111 kubelet[3649]: I0113 20:08:00.796008 3649 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jan 13 20:08:01.053672 kubelet[3649]: I0113 20:08:01.053614 3649 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-21-202" podStartSLOduration=4.053544657 podStartE2EDuration="4.053544657s" podCreationTimestamp="2025-01-13 20:07:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:08:01.019034385 +0000 UTC m=+1.445819420" watchObservedRunningTime="2025-01-13 20:08:01.053544657 +0000 UTC m=+1.480329764" Jan 13 20:08:01.077599 kubelet[3649]: I0113 20:08:01.077519 3649 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-21-202" podStartSLOduration=4.077442321 podStartE2EDuration="4.077442321s" podCreationTimestamp="2025-01-13 20:07:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:08:01.053964609 +0000 UTC m=+1.480749692" watchObservedRunningTime="2025-01-13 20:08:01.077442321 +0000 UTC m=+1.504227344" Jan 13 20:08:01.097763 kubelet[3649]: I0113 20:08:01.097115 3649 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-21-202" podStartSLOduration=1.097004397 podStartE2EDuration="1.097004397s" podCreationTimestamp="2025-01-13 20:08:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:08:01.077297781 +0000 UTC m=+1.504082816" watchObservedRunningTime="2025-01-13 20:08:01.097004397 +0000 UTC m=+1.523789432" Jan 13 20:08:02.645217 sudo[2431]: pam_unix(sudo:session): session closed for user root Jan 13 20:08:02.668238 sshd[2430]: Connection closed by 139.178.68.195 port 54836 Jan 13 20:08:02.669120 sshd-session[2427]: pam_unix(sshd:session): session closed for user core Jan 13 20:08:02.677359 systemd[1]: sshd@6-172.31.21.202:22-139.178.68.195:54836.service: Deactivated successfully. Jan 13 20:08:02.684400 systemd[1]: session-7.scope: Deactivated successfully. Jan 13 20:08:02.685924 systemd-logind[2058]: Session 7 logged out. Waiting for processes to exit. Jan 13 20:08:02.688459 systemd-logind[2058]: Removed session 7. Jan 13 20:08:10.994100 kubelet[3649]: I0113 20:08:10.993993 3649 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 13 20:08:10.994894 containerd[2089]: time="2025-01-13T20:08:10.994688207Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
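The last two entries show the kubelet pushing the node's pod CIDR (192.168.0.0/24) to containerd over the CRI, and containerd noting that it has no CNI config template to apply it to yet (Cilium installs the CNI config later). A minimal sketch of that CRI call with the upstream cri-api types follows; the endpoint address is the standard containerd socket, and the call is shown for illustration, not as the kubelet's exact code path.

package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Standard containerd CRI endpoint.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)

	// The same runtime-config update the kubelet logs as
	// "Updating runtime config through cri with podcidr".
	_, err = rt.UpdateRuntimeConfig(context.Background(), &runtimeapi.UpdateRuntimeConfigRequest{
		RuntimeConfig: &runtimeapi.RuntimeConfig{
			NetworkConfig: &runtimeapi.NetworkConfig{PodCidr: "192.168.0.0/24"},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	log.Println("runtime config updated")
}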
Jan 13 20:08:10.995616 kubelet[3649]: I0113 20:08:10.995038 3649 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 13 20:08:11.950022 kubelet[3649]: I0113 20:08:11.948119 3649 topology_manager.go:215] "Topology Admit Handler" podUID="3d4450fb-dabe-4799-88b3-b7a3c2ac7361" podNamespace="kube-system" podName="kube-proxy-xnxvk" Jan 13 20:08:11.980425 kubelet[3649]: I0113 20:08:11.978008 3649 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsj4d\" (UniqueName: \"kubernetes.io/projected/3d4450fb-dabe-4799-88b3-b7a3c2ac7361-kube-api-access-fsj4d\") pod \"kube-proxy-xnxvk\" (UID: \"3d4450fb-dabe-4799-88b3-b7a3c2ac7361\") " pod="kube-system/kube-proxy-xnxvk" Jan 13 20:08:11.980425 kubelet[3649]: I0113 20:08:11.978093 3649 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3d4450fb-dabe-4799-88b3-b7a3c2ac7361-kube-proxy\") pod \"kube-proxy-xnxvk\" (UID: \"3d4450fb-dabe-4799-88b3-b7a3c2ac7361\") " pod="kube-system/kube-proxy-xnxvk" Jan 13 20:08:11.980425 kubelet[3649]: I0113 20:08:11.978150 3649 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3d4450fb-dabe-4799-88b3-b7a3c2ac7361-xtables-lock\") pod \"kube-proxy-xnxvk\" (UID: \"3d4450fb-dabe-4799-88b3-b7a3c2ac7361\") " pod="kube-system/kube-proxy-xnxvk" Jan 13 20:08:11.980425 kubelet[3649]: I0113 20:08:11.978201 3649 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3d4450fb-dabe-4799-88b3-b7a3c2ac7361-lib-modules\") pod \"kube-proxy-xnxvk\" (UID: \"3d4450fb-dabe-4799-88b3-b7a3c2ac7361\") " pod="kube-system/kube-proxy-xnxvk" Jan 13 20:08:11.980425 kubelet[3649]: I0113 20:08:11.979402 3649 topology_manager.go:215] "Topology Admit Handler" podUID="cecb4869-cc53-4d7f-9f23-2b1d7002f5e6" podNamespace="kube-system" podName="cilium-xmc4w" Jan 13 20:08:12.043194 kubelet[3649]: I0113 20:08:12.040648 3649 topology_manager.go:215] "Topology Admit Handler" podUID="107e644e-3160-4704-9289-0520c79b86dd" podNamespace="kube-system" podName="cilium-operator-5cc964979-66vzt" Jan 13 20:08:12.078623 kubelet[3649]: I0113 20:08:12.078579 3649 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cecb4869-cc53-4d7f-9f23-2b1d7002f5e6-etc-cni-netd\") pod \"cilium-xmc4w\" (UID: \"cecb4869-cc53-4d7f-9f23-2b1d7002f5e6\") " pod="kube-system/cilium-xmc4w" Jan 13 20:08:12.078899 kubelet[3649]: I0113 20:08:12.078875 3649 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skccg\" (UniqueName: \"kubernetes.io/projected/107e644e-3160-4704-9289-0520c79b86dd-kube-api-access-skccg\") pod \"cilium-operator-5cc964979-66vzt\" (UID: \"107e644e-3160-4704-9289-0520c79b86dd\") " pod="kube-system/cilium-operator-5cc964979-66vzt" Jan 13 20:08:12.080768 kubelet[3649]: I0113 20:08:12.080721 3649 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/cecb4869-cc53-4d7f-9f23-2b1d7002f5e6-cni-path\") pod \"cilium-xmc4w\" (UID: \"cecb4869-cc53-4d7f-9f23-2b1d7002f5e6\") " pod="kube-system/cilium-xmc4w" Jan 13 20:08:12.081096 kubelet[3649]: I0113 
20:08:12.081069 3649 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cecb4869-cc53-4d7f-9f23-2b1d7002f5e6-lib-modules\") pod \"cilium-xmc4w\" (UID: \"cecb4869-cc53-4d7f-9f23-2b1d7002f5e6\") " pod="kube-system/cilium-xmc4w" Jan 13 20:08:12.081827 kubelet[3649]: I0113 20:08:12.081224 3649 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/cecb4869-cc53-4d7f-9f23-2b1d7002f5e6-clustermesh-secrets\") pod \"cilium-xmc4w\" (UID: \"cecb4869-cc53-4d7f-9f23-2b1d7002f5e6\") " pod="kube-system/cilium-xmc4w" Jan 13 20:08:12.081827 kubelet[3649]: I0113 20:08:12.081280 3649 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/cecb4869-cc53-4d7f-9f23-2b1d7002f5e6-host-proc-sys-kernel\") pod \"cilium-xmc4w\" (UID: \"cecb4869-cc53-4d7f-9f23-2b1d7002f5e6\") " pod="kube-system/cilium-xmc4w" Jan 13 20:08:12.081827 kubelet[3649]: I0113 20:08:12.081332 3649 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/cecb4869-cc53-4d7f-9f23-2b1d7002f5e6-bpf-maps\") pod \"cilium-xmc4w\" (UID: \"cecb4869-cc53-4d7f-9f23-2b1d7002f5e6\") " pod="kube-system/cilium-xmc4w" Jan 13 20:08:12.081827 kubelet[3649]: I0113 20:08:12.081375 3649 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/cecb4869-cc53-4d7f-9f23-2b1d7002f5e6-cilium-run\") pod \"cilium-xmc4w\" (UID: \"cecb4869-cc53-4d7f-9f23-2b1d7002f5e6\") " pod="kube-system/cilium-xmc4w" Jan 13 20:08:12.081827 kubelet[3649]: I0113 20:08:12.081441 3649 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/cecb4869-cc53-4d7f-9f23-2b1d7002f5e6-cilium-cgroup\") pod \"cilium-xmc4w\" (UID: \"cecb4869-cc53-4d7f-9f23-2b1d7002f5e6\") " pod="kube-system/cilium-xmc4w" Jan 13 20:08:12.081827 kubelet[3649]: I0113 20:08:12.081491 3649 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cecb4869-cc53-4d7f-9f23-2b1d7002f5e6-xtables-lock\") pod \"cilium-xmc4w\" (UID: \"cecb4869-cc53-4d7f-9f23-2b1d7002f5e6\") " pod="kube-system/cilium-xmc4w" Jan 13 20:08:12.082198 kubelet[3649]: I0113 20:08:12.081536 3649 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/107e644e-3160-4704-9289-0520c79b86dd-cilium-config-path\") pod \"cilium-operator-5cc964979-66vzt\" (UID: \"107e644e-3160-4704-9289-0520c79b86dd\") " pod="kube-system/cilium-operator-5cc964979-66vzt" Jan 13 20:08:12.082198 kubelet[3649]: I0113 20:08:12.081582 3649 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cecb4869-cc53-4d7f-9f23-2b1d7002f5e6-cilium-config-path\") pod \"cilium-xmc4w\" (UID: \"cecb4869-cc53-4d7f-9f23-2b1d7002f5e6\") " pod="kube-system/cilium-xmc4w" Jan 13 20:08:12.082198 kubelet[3649]: I0113 20:08:12.081625 3649 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" 
(UniqueName: \"kubernetes.io/projected/cecb4869-cc53-4d7f-9f23-2b1d7002f5e6-hubble-tls\") pod \"cilium-xmc4w\" (UID: \"cecb4869-cc53-4d7f-9f23-2b1d7002f5e6\") " pod="kube-system/cilium-xmc4w" Jan 13 20:08:12.082198 kubelet[3649]: I0113 20:08:12.081670 3649 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/cecb4869-cc53-4d7f-9f23-2b1d7002f5e6-hostproc\") pod \"cilium-xmc4w\" (UID: \"cecb4869-cc53-4d7f-9f23-2b1d7002f5e6\") " pod="kube-system/cilium-xmc4w" Jan 13 20:08:12.082198 kubelet[3649]: I0113 20:08:12.081753 3649 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9c44\" (UniqueName: \"kubernetes.io/projected/cecb4869-cc53-4d7f-9f23-2b1d7002f5e6-kube-api-access-x9c44\") pod \"cilium-xmc4w\" (UID: \"cecb4869-cc53-4d7f-9f23-2b1d7002f5e6\") " pod="kube-system/cilium-xmc4w" Jan 13 20:08:12.083571 kubelet[3649]: I0113 20:08:12.083432 3649 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/cecb4869-cc53-4d7f-9f23-2b1d7002f5e6-host-proc-sys-net\") pod \"cilium-xmc4w\" (UID: \"cecb4869-cc53-4d7f-9f23-2b1d7002f5e6\") " pod="kube-system/cilium-xmc4w" Jan 13 20:08:12.304389 containerd[2089]: time="2025-01-13T20:08:12.303163773Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xnxvk,Uid:3d4450fb-dabe-4799-88b3-b7a3c2ac7361,Namespace:kube-system,Attempt:0,}" Jan 13 20:08:12.353400 containerd[2089]: time="2025-01-13T20:08:12.353206629Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:08:12.353573 containerd[2089]: time="2025-01-13T20:08:12.353315025Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:08:12.353634 containerd[2089]: time="2025-01-13T20:08:12.353549229Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:08:12.354683 containerd[2089]: time="2025-01-13T20:08:12.354573237Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:08:12.365958 containerd[2089]: time="2025-01-13T20:08:12.365652741Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-66vzt,Uid:107e644e-3160-4704-9289-0520c79b86dd,Namespace:kube-system,Attempt:0,}" Jan 13 20:08:12.431156 containerd[2089]: time="2025-01-13T20:08:12.430970674Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:08:12.431602 containerd[2089]: time="2025-01-13T20:08:12.431091706Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:08:12.431602 containerd[2089]: time="2025-01-13T20:08:12.431386702Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:08:12.432193 containerd[2089]: time="2025-01-13T20:08:12.431980726Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:08:12.438168 containerd[2089]: time="2025-01-13T20:08:12.438036346Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xnxvk,Uid:3d4450fb-dabe-4799-88b3-b7a3c2ac7361,Namespace:kube-system,Attempt:0,} returns sandbox id \"9dc351ebe1102400464379b0d26e0275391ad92454be5aa932dc2918b8ef20f7\"" Jan 13 20:08:12.447323 containerd[2089]: time="2025-01-13T20:08:12.447039850Z" level=info msg="CreateContainer within sandbox \"9dc351ebe1102400464379b0d26e0275391ad92454be5aa932dc2918b8ef20f7\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 13 20:08:12.483656 containerd[2089]: time="2025-01-13T20:08:12.483489118Z" level=info msg="CreateContainer within sandbox \"9dc351ebe1102400464379b0d26e0275391ad92454be5aa932dc2918b8ef20f7\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"4f0b251c87013d9856005191eebe72ede3d678207c905a90744c4af16dab0e42\"" Jan 13 20:08:12.486749 containerd[2089]: time="2025-01-13T20:08:12.486507598Z" level=info msg="StartContainer for \"4f0b251c87013d9856005191eebe72ede3d678207c905a90744c4af16dab0e42\"" Jan 13 20:08:12.544915 containerd[2089]: time="2025-01-13T20:08:12.544864378Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-66vzt,Uid:107e644e-3160-4704-9289-0520c79b86dd,Namespace:kube-system,Attempt:0,} returns sandbox id \"537d853197c791f4609a76ea7850936532adf8764f53005b7579390a962ad241\"" Jan 13 20:08:12.550904 containerd[2089]: time="2025-01-13T20:08:12.550524166Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 13 20:08:12.607955 containerd[2089]: time="2025-01-13T20:08:12.607560815Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xmc4w,Uid:cecb4869-cc53-4d7f-9f23-2b1d7002f5e6,Namespace:kube-system,Attempt:0,}" Jan 13 20:08:12.627528 containerd[2089]: time="2025-01-13T20:08:12.627471431Z" level=info msg="StartContainer for \"4f0b251c87013d9856005191eebe72ede3d678207c905a90744c4af16dab0e42\" returns successfully" Jan 13 20:08:12.668853 containerd[2089]: time="2025-01-13T20:08:12.667656683Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:08:12.668853 containerd[2089]: time="2025-01-13T20:08:12.667902263Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:08:12.668853 containerd[2089]: time="2025-01-13T20:08:12.667985327Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:08:12.670451 containerd[2089]: time="2025-01-13T20:08:12.669965303Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:08:12.766545 containerd[2089]: time="2025-01-13T20:08:12.766470275Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xmc4w,Uid:cecb4869-cc53-4d7f-9f23-2b1d7002f5e6,Namespace:kube-system,Attempt:0,} returns sandbox id \"a739bf7cc08951ac4b3aa4ac88d501499d0e95803535736aa84711ac24e3762c\"" Jan 13 20:08:14.818089 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1953472875.mount: Deactivated successfully. 
Jan 13 20:08:16.432958 containerd[2089]: time="2025-01-13T20:08:16.432883358Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:08:16.435401 containerd[2089]: time="2025-01-13T20:08:16.435312482Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17138326" Jan 13 20:08:16.437196 containerd[2089]: time="2025-01-13T20:08:16.437126198Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:08:16.440018 containerd[2089]: time="2025-01-13T20:08:16.439939178Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 3.889352288s" Jan 13 20:08:16.440018 containerd[2089]: time="2025-01-13T20:08:16.440000558Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jan 13 20:08:16.442451 containerd[2089]: time="2025-01-13T20:08:16.442390838Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 13 20:08:16.445668 containerd[2089]: time="2025-01-13T20:08:16.445387646Z" level=info msg="CreateContainer within sandbox \"537d853197c791f4609a76ea7850936532adf8764f53005b7579390a962ad241\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 13 20:08:16.470446 containerd[2089]: time="2025-01-13T20:08:16.470267834Z" level=info msg="CreateContainer within sandbox \"537d853197c791f4609a76ea7850936532adf8764f53005b7579390a962ad241\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"a1c2334ce0e569a8b8e4441f2fabee3ee34782c199eb06c8a7733b0cce570256\"" Jan 13 20:08:16.472835 containerd[2089]: time="2025-01-13T20:08:16.471305822Z" level=info msg="StartContainer for \"a1c2334ce0e569a8b8e4441f2fabee3ee34782c199eb06c8a7733b0cce570256\"" Jan 13 20:08:16.571343 containerd[2089]: time="2025-01-13T20:08:16.571266758Z" level=info msg="StartContainer for \"a1c2334ce0e569a8b8e4441f2fabee3ee34782c199eb06c8a7733b0cce570256\" returns successfully" Jan 13 20:08:17.100521 kubelet[3649]: I0113 20:08:17.100470 3649 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-xnxvk" podStartSLOduration=6.100412257 podStartE2EDuration="6.100412257s" podCreationTimestamp="2025-01-13 20:08:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:08:13.004890981 +0000 UTC m=+13.431676040" watchObservedRunningTime="2025-01-13 20:08:17.100412257 +0000 UTC m=+17.527197292" Jan 13 20:08:32.003100 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3841157403.mount: Deactivated successfully. 
Jan 13 20:08:34.681741 containerd[2089]: time="2025-01-13T20:08:34.681591728Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:08:34.683420 containerd[2089]: time="2025-01-13T20:08:34.683347124Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157651530" Jan 13 20:08:34.685412 containerd[2089]: time="2025-01-13T20:08:34.685303088Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:08:34.689836 containerd[2089]: time="2025-01-13T20:08:34.688979768Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 18.246530298s" Jan 13 20:08:34.689836 containerd[2089]: time="2025-01-13T20:08:34.689048612Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jan 13 20:08:34.695071 containerd[2089]: time="2025-01-13T20:08:34.695005700Z" level=info msg="CreateContainer within sandbox \"a739bf7cc08951ac4b3aa4ac88d501499d0e95803535736aa84711ac24e3762c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 13 20:08:34.715356 containerd[2089]: time="2025-01-13T20:08:34.715302416Z" level=info msg="CreateContainer within sandbox \"a739bf7cc08951ac4b3aa4ac88d501499d0e95803535736aa84711ac24e3762c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e5a4b92dbd67c6bf76c47b24aec02fe6c1cfdb18d6df083d94794a257320762f\"" Jan 13 20:08:34.716443 containerd[2089]: time="2025-01-13T20:08:34.716318216Z" level=info msg="StartContainer for \"e5a4b92dbd67c6bf76c47b24aec02fe6c1cfdb18d6df083d94794a257320762f\"" Jan 13 20:08:34.772588 systemd[1]: run-containerd-runc-k8s.io-e5a4b92dbd67c6bf76c47b24aec02fe6c1cfdb18d6df083d94794a257320762f-runc.Yy89uU.mount: Deactivated successfully. Jan 13 20:08:34.828349 containerd[2089]: time="2025-01-13T20:08:34.827888061Z" level=info msg="StartContainer for \"e5a4b92dbd67c6bf76c47b24aec02fe6c1cfdb18d6df083d94794a257320762f\" returns successfully" Jan 13 20:08:35.105988 kubelet[3649]: I0113 20:08:35.104269 3649 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-66vzt" podStartSLOduration=19.212723942 podStartE2EDuration="23.10420893s" podCreationTimestamp="2025-01-13 20:08:12 +0000 UTC" firstStartedPulling="2025-01-13 20:08:12.549031234 +0000 UTC m=+12.975816257" lastFinishedPulling="2025-01-13 20:08:16.440516222 +0000 UTC m=+16.867301245" observedRunningTime="2025-01-13 20:08:17.102909841 +0000 UTC m=+17.529694936" watchObservedRunningTime="2025-01-13 20:08:35.10420893 +0000 UTC m=+35.530994049" Jan 13 20:08:35.708061 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e5a4b92dbd67c6bf76c47b24aec02fe6c1cfdb18d6df083d94794a257320762f-rootfs.mount: Deactivated successfully. 
Jan 13 20:08:36.114595 containerd[2089]: time="2025-01-13T20:08:36.114092323Z" level=info msg="shim disconnected" id=e5a4b92dbd67c6bf76c47b24aec02fe6c1cfdb18d6df083d94794a257320762f namespace=k8s.io Jan 13 20:08:36.114595 containerd[2089]: time="2025-01-13T20:08:36.114201859Z" level=warning msg="cleaning up after shim disconnected" id=e5a4b92dbd67c6bf76c47b24aec02fe6c1cfdb18d6df083d94794a257320762f namespace=k8s.io Jan 13 20:08:36.114595 containerd[2089]: time="2025-01-13T20:08:36.114245395Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:08:37.091241 containerd[2089]: time="2025-01-13T20:08:37.091177460Z" level=info msg="CreateContainer within sandbox \"a739bf7cc08951ac4b3aa4ac88d501499d0e95803535736aa84711ac24e3762c\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 13 20:08:37.122629 containerd[2089]: time="2025-01-13T20:08:37.122393924Z" level=info msg="CreateContainer within sandbox \"a739bf7cc08951ac4b3aa4ac88d501499d0e95803535736aa84711ac24e3762c\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"e8f674f45a215422c83aca99f8950aada634f40a53bc48f0e992b7b16cc7ccca\"" Jan 13 20:08:37.125371 containerd[2089]: time="2025-01-13T20:08:37.124587776Z" level=info msg="StartContainer for \"e8f674f45a215422c83aca99f8950aada634f40a53bc48f0e992b7b16cc7ccca\"" Jan 13 20:08:37.190792 systemd[1]: run-containerd-runc-k8s.io-e8f674f45a215422c83aca99f8950aada634f40a53bc48f0e992b7b16cc7ccca-runc.SmKHFj.mount: Deactivated successfully. Jan 13 20:08:37.245994 containerd[2089]: time="2025-01-13T20:08:37.245922813Z" level=info msg="StartContainer for \"e8f674f45a215422c83aca99f8950aada634f40a53bc48f0e992b7b16cc7ccca\" returns successfully" Jan 13 20:08:37.267334 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 13 20:08:37.269031 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 13 20:08:37.269169 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 13 20:08:37.280362 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 20:08:37.333963 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 13 20:08:37.346448 containerd[2089]: time="2025-01-13T20:08:37.344671461Z" level=info msg="shim disconnected" id=e8f674f45a215422c83aca99f8950aada634f40a53bc48f0e992b7b16cc7ccca namespace=k8s.io Jan 13 20:08:37.346448 containerd[2089]: time="2025-01-13T20:08:37.345367905Z" level=warning msg="cleaning up after shim disconnected" id=e8f674f45a215422c83aca99f8950aada634f40a53bc48f0e992b7b16cc7ccca namespace=k8s.io Jan 13 20:08:37.346448 containerd[2089]: time="2025-01-13T20:08:37.345391077Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:08:38.097363 containerd[2089]: time="2025-01-13T20:08:38.096496197Z" level=info msg="CreateContainer within sandbox \"a739bf7cc08951ac4b3aa4ac88d501499d0e95803535736aa84711ac24e3762c\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 13 20:08:38.114591 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e8f674f45a215422c83aca99f8950aada634f40a53bc48f0e992b7b16cc7ccca-rootfs.mount: Deactivated successfully. Jan 13 20:08:38.144437 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3849774275.mount: Deactivated successfully. 
Jan 13 20:08:38.151580 containerd[2089]: time="2025-01-13T20:08:38.151426521Z" level=info msg="CreateContainer within sandbox \"a739bf7cc08951ac4b3aa4ac88d501499d0e95803535736aa84711ac24e3762c\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"cde11d79462c88f4e67edfa42d7938c5e9541873d05d6945271dada8fcf08e44\"" Jan 13 20:08:38.156050 containerd[2089]: time="2025-01-13T20:08:38.152886669Z" level=info msg="StartContainer for \"cde11d79462c88f4e67edfa42d7938c5e9541873d05d6945271dada8fcf08e44\"" Jan 13 20:08:38.269500 containerd[2089]: time="2025-01-13T20:08:38.269420938Z" level=info msg="StartContainer for \"cde11d79462c88f4e67edfa42d7938c5e9541873d05d6945271dada8fcf08e44\" returns successfully" Jan 13 20:08:38.328894 containerd[2089]: time="2025-01-13T20:08:38.328765966Z" level=info msg="shim disconnected" id=cde11d79462c88f4e67edfa42d7938c5e9541873d05d6945271dada8fcf08e44 namespace=k8s.io Jan 13 20:08:38.328894 containerd[2089]: time="2025-01-13T20:08:38.328876594Z" level=warning msg="cleaning up after shim disconnected" id=cde11d79462c88f4e67edfa42d7938c5e9541873d05d6945271dada8fcf08e44 namespace=k8s.io Jan 13 20:08:38.328894 containerd[2089]: time="2025-01-13T20:08:38.328898170Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:08:38.353605 containerd[2089]: time="2025-01-13T20:08:38.352896118Z" level=warning msg="cleanup warnings time=\"2025-01-13T20:08:38Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 13 20:08:39.103338 containerd[2089]: time="2025-01-13T20:08:39.103247254Z" level=info msg="CreateContainer within sandbox \"a739bf7cc08951ac4b3aa4ac88d501499d0e95803535736aa84711ac24e3762c\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 13 20:08:39.112396 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cde11d79462c88f4e67edfa42d7938c5e9541873d05d6945271dada8fcf08e44-rootfs.mount: Deactivated successfully. Jan 13 20:08:39.146340 containerd[2089]: time="2025-01-13T20:08:39.146133754Z" level=info msg="CreateContainer within sandbox \"a739bf7cc08951ac4b3aa4ac88d501499d0e95803535736aa84711ac24e3762c\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"39dca387943ea2ec6df25cd501a9ccb6bf3dfdfb0aba4350f008edf284cf7ee3\"" Jan 13 20:08:39.148607 containerd[2089]: time="2025-01-13T20:08:39.147148918Z" level=info msg="StartContainer for \"39dca387943ea2ec6df25cd501a9ccb6bf3dfdfb0aba4350f008edf284cf7ee3\"" Jan 13 20:08:39.271658 containerd[2089]: time="2025-01-13T20:08:39.271252667Z" level=info msg="StartContainer for \"39dca387943ea2ec6df25cd501a9ccb6bf3dfdfb0aba4350f008edf284cf7ee3\" returns successfully" Jan 13 20:08:39.314329 containerd[2089]: time="2025-01-13T20:08:39.314125187Z" level=info msg="shim disconnected" id=39dca387943ea2ec6df25cd501a9ccb6bf3dfdfb0aba4350f008edf284cf7ee3 namespace=k8s.io Jan 13 20:08:39.314593 containerd[2089]: time="2025-01-13T20:08:39.314336939Z" level=warning msg="cleaning up after shim disconnected" id=39dca387943ea2ec6df25cd501a9ccb6bf3dfdfb0aba4350f008edf284cf7ee3 namespace=k8s.io Jan 13 20:08:39.314593 containerd[2089]: time="2025-01-13T20:08:39.314361719Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:08:40.113460 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-39dca387943ea2ec6df25cd501a9ccb6bf3dfdfb0aba4350f008edf284cf7ee3-rootfs.mount: Deactivated successfully. 
Jan 13 20:08:40.125951 containerd[2089]: time="2025-01-13T20:08:40.123562211Z" level=info msg="CreateContainer within sandbox \"a739bf7cc08951ac4b3aa4ac88d501499d0e95803535736aa84711ac24e3762c\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 13 20:08:40.158763 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2570108500.mount: Deactivated successfully. Jan 13 20:08:40.159614 containerd[2089]: time="2025-01-13T20:08:40.159187835Z" level=info msg="CreateContainer within sandbox \"a739bf7cc08951ac4b3aa4ac88d501499d0e95803535736aa84711ac24e3762c\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"eea70692f6fbc3292891452e81220eeadf939726f10efeb5f91be50a5c62d3b6\"" Jan 13 20:08:40.163645 containerd[2089]: time="2025-01-13T20:08:40.161426615Z" level=info msg="StartContainer for \"eea70692f6fbc3292891452e81220eeadf939726f10efeb5f91be50a5c62d3b6\"" Jan 13 20:08:40.274750 containerd[2089]: time="2025-01-13T20:08:40.274677804Z" level=info msg="StartContainer for \"eea70692f6fbc3292891452e81220eeadf939726f10efeb5f91be50a5c62d3b6\" returns successfully" Jan 13 20:08:40.458991 kubelet[3649]: I0113 20:08:40.458468 3649 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 13 20:08:40.499875 kubelet[3649]: I0113 20:08:40.499196 3649 topology_manager.go:215] "Topology Admit Handler" podUID="062f1694-d12a-4a33-b2a6-b7bb23467a2e" podNamespace="kube-system" podName="coredns-76f75df574-p94db" Jan 13 20:08:40.503708 kubelet[3649]: I0113 20:08:40.503630 3649 topology_manager.go:215] "Topology Admit Handler" podUID="0a7d9249-19a1-4ee4-88b4-198c864038a6" podNamespace="kube-system" podName="coredns-76f75df574-b8n94" Jan 13 20:08:40.696756 kubelet[3649]: I0113 20:08:40.696374 3649 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/062f1694-d12a-4a33-b2a6-b7bb23467a2e-config-volume\") pod \"coredns-76f75df574-p94db\" (UID: \"062f1694-d12a-4a33-b2a6-b7bb23467a2e\") " pod="kube-system/coredns-76f75df574-p94db" Jan 13 20:08:40.696756 kubelet[3649]: I0113 20:08:40.696459 3649 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0a7d9249-19a1-4ee4-88b4-198c864038a6-config-volume\") pod \"coredns-76f75df574-b8n94\" (UID: \"0a7d9249-19a1-4ee4-88b4-198c864038a6\") " pod="kube-system/coredns-76f75df574-b8n94" Jan 13 20:08:40.696756 kubelet[3649]: I0113 20:08:40.696511 3649 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnzf5\" (UniqueName: \"kubernetes.io/projected/062f1694-d12a-4a33-b2a6-b7bb23467a2e-kube-api-access-rnzf5\") pod \"coredns-76f75df574-p94db\" (UID: \"062f1694-d12a-4a33-b2a6-b7bb23467a2e\") " pod="kube-system/coredns-76f75df574-p94db" Jan 13 20:08:40.696756 kubelet[3649]: I0113 20:08:40.696578 3649 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rt7g\" (UniqueName: \"kubernetes.io/projected/0a7d9249-19a1-4ee4-88b4-198c864038a6-kube-api-access-5rt7g\") pod \"coredns-76f75df574-b8n94\" (UID: \"0a7d9249-19a1-4ee4-88b4-198c864038a6\") " pod="kube-system/coredns-76f75df574-b8n94" Jan 13 20:08:40.847118 containerd[2089]: time="2025-01-13T20:08:40.845660175Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-76f75df574-p94db,Uid:062f1694-d12a-4a33-b2a6-b7bb23467a2e,Namespace:kube-system,Attempt:0,}" Jan 13 20:08:40.859893 containerd[2089]: time="2025-01-13T20:08:40.859467915Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-b8n94,Uid:0a7d9249-19a1-4ee4-88b4-198c864038a6,Namespace:kube-system,Attempt:0,}" Jan 13 20:08:42.851273 systemd[1]: Started sshd@7-172.31.21.202:22-139.178.68.195:56642.service - OpenSSH per-connection server daemon (139.178.68.195:56642). Jan 13 20:08:43.040493 (udev-worker)[4439]: Network interface NamePolicy= disabled on kernel command line. Jan 13 20:08:43.043905 systemd-networkd[1604]: cilium_host: Link UP Jan 13 20:08:43.045879 systemd-networkd[1604]: cilium_net: Link UP Jan 13 20:08:43.046024 (udev-worker)[4447]: Network interface NamePolicy= disabled on kernel command line. Jan 13 20:08:43.046243 systemd-networkd[1604]: cilium_net: Gained carrier Jan 13 20:08:43.052939 systemd-networkd[1604]: cilium_host: Gained carrier Jan 13 20:08:43.061571 sshd[4477]: Accepted publickey for core from 139.178.68.195 port 56642 ssh2: RSA SHA256:dyUqkUNC7j9I+iFvInUHdwtzQ+aO/14TB1/ljGdoY9k Jan 13 20:08:43.053494 systemd-networkd[1604]: cilium_net: Gained IPv6LL Jan 13 20:08:43.068702 sshd-session[4477]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:08:43.107450 systemd-logind[2058]: New session 8 of user core. Jan 13 20:08:43.116843 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 13 20:08:43.155396 systemd-networkd[1604]: cilium_host: Gained IPv6LL Jan 13 20:08:43.274128 systemd-networkd[1604]: cilium_vxlan: Link UP Jan 13 20:08:43.274143 systemd-networkd[1604]: cilium_vxlan: Gained carrier Jan 13 20:08:43.473083 sshd[4505]: Connection closed by 139.178.68.195 port 56642 Jan 13 20:08:43.475641 sshd-session[4477]: pam_unix(sshd:session): session closed for user core Jan 13 20:08:43.482626 systemd[1]: sshd@7-172.31.21.202:22-139.178.68.195:56642.service: Deactivated successfully. Jan 13 20:08:43.492775 systemd[1]: session-8.scope: Deactivated successfully. Jan 13 20:08:43.497225 systemd-logind[2058]: Session 8 logged out. Waiting for processes to exit. Jan 13 20:08:43.503225 systemd-logind[2058]: Removed session 8. Jan 13 20:08:43.788882 kernel: NET: Registered PF_ALG protocol family Jan 13 20:08:45.121151 systemd-networkd[1604]: lxc_health: Link UP Jan 13 20:08:45.125753 (udev-worker)[4492]: Network interface NamePolicy= disabled on kernel command line. Jan 13 20:08:45.129772 systemd-networkd[1604]: lxc_health: Gained carrier Jan 13 20:08:45.275071 systemd-networkd[1604]: cilium_vxlan: Gained IPv6LL Jan 13 20:08:45.525944 systemd-networkd[1604]: lxce12f9303fd59: Link UP Jan 13 20:08:45.537989 kernel: eth0: renamed from tmp316ad Jan 13 20:08:45.546263 systemd-networkd[1604]: lxce12f9303fd59: Gained carrier Jan 13 20:08:45.565027 systemd-networkd[1604]: lxce28117fdb551: Link UP Jan 13 20:08:45.577491 kernel: eth0: renamed from tmp569c6 Jan 13 20:08:45.584359 (udev-worker)[4494]: Network interface NamePolicy= disabled on kernel command line. 
Jan 13 20:08:45.585179 systemd-networkd[1604]: lxce28117fdb551: Gained carrier Jan 13 20:08:46.670520 kubelet[3649]: I0113 20:08:46.668965 3649 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-xmc4w" podStartSLOduration=13.750091715 podStartE2EDuration="35.668874872s" podCreationTimestamp="2025-01-13 20:08:11 +0000 UTC" firstStartedPulling="2025-01-13 20:08:12.770609459 +0000 UTC m=+13.197394482" lastFinishedPulling="2025-01-13 20:08:34.689392604 +0000 UTC m=+35.116177639" observedRunningTime="2025-01-13 20:08:41.208216009 +0000 UTC m=+41.635001044" watchObservedRunningTime="2025-01-13 20:08:46.668874872 +0000 UTC m=+47.095659979" Jan 13 20:08:47.003523 systemd-networkd[1604]: lxc_health: Gained IPv6LL Jan 13 20:08:47.259356 systemd-networkd[1604]: lxce12f9303fd59: Gained IPv6LL Jan 13 20:08:47.323349 systemd-networkd[1604]: lxce28117fdb551: Gained IPv6LL Jan 13 20:08:48.515295 systemd[1]: Started sshd@8-172.31.21.202:22-139.178.68.195:43260.service - OpenSSH per-connection server daemon (139.178.68.195:43260). Jan 13 20:08:48.705845 sshd[4860]: Accepted publickey for core from 139.178.68.195 port 43260 ssh2: RSA SHA256:dyUqkUNC7j9I+iFvInUHdwtzQ+aO/14TB1/ljGdoY9k Jan 13 20:08:48.705191 sshd-session[4860]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:08:48.721489 systemd-logind[2058]: New session 9 of user core. Jan 13 20:08:48.729433 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 13 20:08:49.058846 sshd[4863]: Connection closed by 139.178.68.195 port 43260 Jan 13 20:08:49.057002 sshd-session[4860]: pam_unix(sshd:session): session closed for user core Jan 13 20:08:49.065367 systemd-logind[2058]: Session 9 logged out. Waiting for processes to exit. Jan 13 20:08:49.070036 systemd[1]: sshd@8-172.31.21.202:22-139.178.68.195:43260.service: Deactivated successfully. Jan 13 20:08:49.084880 systemd[1]: session-9.scope: Deactivated successfully. Jan 13 20:08:49.089563 systemd-logind[2058]: Removed session 9. 
Jan 13 20:08:49.432254 ntpd[2038]: Listen normally on 6 cilium_host 192.168.0.18:123 Jan 13 20:08:49.433722 ntpd[2038]: 13 Jan 20:08:49 ntpd[2038]: Listen normally on 6 cilium_host 192.168.0.18:123 Jan 13 20:08:49.433722 ntpd[2038]: 13 Jan 20:08:49 ntpd[2038]: Listen normally on 7 cilium_net [fe80::24f2:7cff:fe08:2e1b%4]:123 Jan 13 20:08:49.433722 ntpd[2038]: 13 Jan 20:08:49 ntpd[2038]: Listen normally on 8 cilium_host [fe80::e066:e1ff:fe03:1861%5]:123 Jan 13 20:08:49.433722 ntpd[2038]: 13 Jan 20:08:49 ntpd[2038]: Listen normally on 9 cilium_vxlan [fe80::f876:e1ff:fefc:c543%6]:123 Jan 13 20:08:49.433722 ntpd[2038]: 13 Jan 20:08:49 ntpd[2038]: Listen normally on 10 lxc_health [fe80::282e:4fff:fe4e:83%8]:123 Jan 13 20:08:49.433722 ntpd[2038]: 13 Jan 20:08:49 ntpd[2038]: Listen normally on 11 lxce12f9303fd59 [fe80::642d:58ff:fe95:264a%10]:123 Jan 13 20:08:49.433722 ntpd[2038]: 13 Jan 20:08:49 ntpd[2038]: Listen normally on 12 lxce28117fdb551 [fe80::508d:32ff:fe8e:bd8a%12]:123 Jan 13 20:08:49.432400 ntpd[2038]: Listen normally on 7 cilium_net [fe80::24f2:7cff:fe08:2e1b%4]:123 Jan 13 20:08:49.432481 ntpd[2038]: Listen normally on 8 cilium_host [fe80::e066:e1ff:fe03:1861%5]:123 Jan 13 20:08:49.432573 ntpd[2038]: Listen normally on 9 cilium_vxlan [fe80::f876:e1ff:fefc:c543%6]:123 Jan 13 20:08:49.432641 ntpd[2038]: Listen normally on 10 lxc_health [fe80::282e:4fff:fe4e:83%8]:123 Jan 13 20:08:49.432708 ntpd[2038]: Listen normally on 11 lxce12f9303fd59 [fe80::642d:58ff:fe95:264a%10]:123 Jan 13 20:08:49.432775 ntpd[2038]: Listen normally on 12 lxce28117fdb551 [fe80::508d:32ff:fe8e:bd8a%12]:123 Jan 13 20:08:54.098325 systemd[1]: Started sshd@9-172.31.21.202:22-139.178.68.195:43274.service - OpenSSH per-connection server daemon (139.178.68.195:43274). Jan 13 20:08:54.199670 containerd[2089]: time="2025-01-13T20:08:54.199500313Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:08:54.213893 containerd[2089]: time="2025-01-13T20:08:54.200407801Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:08:54.213893 containerd[2089]: time="2025-01-13T20:08:54.201061621Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:08:54.213893 containerd[2089]: time="2025-01-13T20:08:54.201249997Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:08:54.303861 containerd[2089]: time="2025-01-13T20:08:54.301953338Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:08:54.304239 containerd[2089]: time="2025-01-13T20:08:54.304080446Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:08:54.304399 containerd[2089]: time="2025-01-13T20:08:54.304212122Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:08:54.304990 containerd[2089]: time="2025-01-13T20:08:54.304730666Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:08:54.373394 sshd[4882]: Accepted publickey for core from 139.178.68.195 port 43274 ssh2: RSA SHA256:dyUqkUNC7j9I+iFvInUHdwtzQ+aO/14TB1/ljGdoY9k Jan 13 20:08:54.375908 sshd-session[4882]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:08:54.421183 systemd-logind[2058]: New session 10 of user core. Jan 13 20:08:54.425481 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 13 20:08:54.436338 containerd[2089]: time="2025-01-13T20:08:54.434980262Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-b8n94,Uid:0a7d9249-19a1-4ee4-88b4-198c864038a6,Namespace:kube-system,Attempt:0,} returns sandbox id \"569c68baa1da02c21027ad58e8a8cedc34be3cb2fd1f9ba3e8aed22db8f45301\"" Jan 13 20:08:54.452814 containerd[2089]: time="2025-01-13T20:08:54.452307698Z" level=info msg="CreateContainer within sandbox \"569c68baa1da02c21027ad58e8a8cedc34be3cb2fd1f9ba3e8aed22db8f45301\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 13 20:08:54.510117 containerd[2089]: time="2025-01-13T20:08:54.510048603Z" level=info msg="CreateContainer within sandbox \"569c68baa1da02c21027ad58e8a8cedc34be3cb2fd1f9ba3e8aed22db8f45301\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5e76471e7d84b79a3bfa76c1bbbaadf824d35ae6f2b91800117fbf3c1c80f889\"" Jan 13 20:08:54.531548 containerd[2089]: time="2025-01-13T20:08:54.531153051Z" level=info msg="StartContainer for \"5e76471e7d84b79a3bfa76c1bbbaadf824d35ae6f2b91800117fbf3c1c80f889\"" Jan 13 20:08:54.606699 containerd[2089]: time="2025-01-13T20:08:54.606543039Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-p94db,Uid:062f1694-d12a-4a33-b2a6-b7bb23467a2e,Namespace:kube-system,Attempt:0,} returns sandbox id \"316adb7a66e7621cc80e80414399e952effb8bddcd6cf2bd4b0991ba8a9cbbfd\"" Jan 13 20:08:54.623688 containerd[2089]: time="2025-01-13T20:08:54.623286375Z" level=info msg="CreateContainer within sandbox \"316adb7a66e7621cc80e80414399e952effb8bddcd6cf2bd4b0991ba8a9cbbfd\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 13 20:08:54.661996 containerd[2089]: time="2025-01-13T20:08:54.661734243Z" level=info msg="CreateContainer within sandbox \"316adb7a66e7621cc80e80414399e952effb8bddcd6cf2bd4b0991ba8a9cbbfd\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"56aab8ce3e9d83b4994d4262178bfe562f57a7a82f82b02e9a667d1648ab8355\"" Jan 13 20:08:54.665066 containerd[2089]: time="2025-01-13T20:08:54.664595547Z" level=info msg="StartContainer for \"56aab8ce3e9d83b4994d4262178bfe562f57a7a82f82b02e9a667d1648ab8355\"" Jan 13 20:08:54.731602 containerd[2089]: time="2025-01-13T20:08:54.730960288Z" level=info msg="StartContainer for \"5e76471e7d84b79a3bfa76c1bbbaadf824d35ae6f2b91800117fbf3c1c80f889\" returns successfully" Jan 13 20:08:54.851364 sshd[4961]: Connection closed by 139.178.68.195 port 43274 Jan 13 20:08:54.855362 sshd-session[4882]: pam_unix(sshd:session): session closed for user core Jan 13 20:08:54.875029 systemd[1]: sshd@9-172.31.21.202:22-139.178.68.195:43274.service: Deactivated successfully. Jan 13 20:08:54.879888 containerd[2089]: time="2025-01-13T20:08:54.879790624Z" level=info msg="StartContainer for \"56aab8ce3e9d83b4994d4262178bfe562f57a7a82f82b02e9a667d1648ab8355\" returns successfully" Jan 13 20:08:54.891146 systemd[1]: session-10.scope: Deactivated successfully. Jan 13 20:08:54.896791 systemd-logind[2058]: Session 10 logged out. 
Waiting for processes to exit. Jan 13 20:08:54.901413 systemd-logind[2058]: Removed session 10. Jan 13 20:08:55.211881 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2809005142.mount: Deactivated successfully. Jan 13 20:08:55.261324 kubelet[3649]: I0113 20:08:55.260337 3649 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-p94db" podStartSLOduration=43.260280494 podStartE2EDuration="43.260280494s" podCreationTimestamp="2025-01-13 20:08:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:08:55.260086634 +0000 UTC m=+55.686871681" watchObservedRunningTime="2025-01-13 20:08:55.260280494 +0000 UTC m=+55.687065517" Jan 13 20:08:55.320846 kubelet[3649]: I0113 20:08:55.318555 3649 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-b8n94" podStartSLOduration=43.318501435 podStartE2EDuration="43.318501435s" podCreationTimestamp="2025-01-13 20:08:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:08:55.317013027 +0000 UTC m=+55.743798062" watchObservedRunningTime="2025-01-13 20:08:55.318501435 +0000 UTC m=+55.745286446" Jan 13 20:08:59.885395 systemd[1]: Started sshd@10-172.31.21.202:22-139.178.68.195:40050.service - OpenSSH per-connection server daemon (139.178.68.195:40050). Jan 13 20:09:00.069373 sshd[5064]: Accepted publickey for core from 139.178.68.195 port 40050 ssh2: RSA SHA256:dyUqkUNC7j9I+iFvInUHdwtzQ+aO/14TB1/ljGdoY9k Jan 13 20:09:00.071913 sshd-session[5064]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:09:00.080450 systemd-logind[2058]: New session 11 of user core. Jan 13 20:09:00.086487 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 13 20:09:00.346589 sshd[5067]: Connection closed by 139.178.68.195 port 40050 Jan 13 20:09:00.347480 sshd-session[5064]: pam_unix(sshd:session): session closed for user core Jan 13 20:09:00.354264 systemd-logind[2058]: Session 11 logged out. Waiting for processes to exit. Jan 13 20:09:00.356233 systemd[1]: sshd@10-172.31.21.202:22-139.178.68.195:40050.service: Deactivated successfully. Jan 13 20:09:00.363876 systemd[1]: session-11.scope: Deactivated successfully. Jan 13 20:09:00.366565 systemd-logind[2058]: Removed session 11. Jan 13 20:09:00.376289 systemd[1]: Started sshd@11-172.31.21.202:22-139.178.68.195:40056.service - OpenSSH per-connection server daemon (139.178.68.195:40056). Jan 13 20:09:00.578142 sshd[5079]: Accepted publickey for core from 139.178.68.195 port 40056 ssh2: RSA SHA256:dyUqkUNC7j9I+iFvInUHdwtzQ+aO/14TB1/ljGdoY9k Jan 13 20:09:00.580798 sshd-session[5079]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:09:00.590773 systemd-logind[2058]: New session 12 of user core. Jan 13 20:09:00.598370 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 13 20:09:00.948058 sshd[5082]: Connection closed by 139.178.68.195 port 40056 Jan 13 20:09:00.949040 sshd-session[5079]: pam_unix(sshd:session): session closed for user core Jan 13 20:09:00.970215 systemd[1]: sshd@11-172.31.21.202:22-139.178.68.195:40056.service: Deactivated successfully. Jan 13 20:09:00.988162 systemd[1]: session-12.scope: Deactivated successfully. Jan 13 20:09:00.989314 systemd-logind[2058]: Session 12 logged out. Waiting for processes to exit. 
Jan 13 20:09:00.996483 systemd[1]: Started sshd@12-172.31.21.202:22-139.178.68.195:40068.service - OpenSSH per-connection server daemon (139.178.68.195:40068). Jan 13 20:09:00.999453 systemd-logind[2058]: Removed session 12. Jan 13 20:09:01.199886 sshd[5091]: Accepted publickey for core from 139.178.68.195 port 40068 ssh2: RSA SHA256:dyUqkUNC7j9I+iFvInUHdwtzQ+aO/14TB1/ljGdoY9k Jan 13 20:09:01.202424 sshd-session[5091]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:09:01.210022 systemd-logind[2058]: New session 13 of user core. Jan 13 20:09:01.220460 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 13 20:09:01.462209 sshd[5094]: Connection closed by 139.178.68.195 port 40068 Jan 13 20:09:01.462669 sshd-session[5091]: pam_unix(sshd:session): session closed for user core Jan 13 20:09:01.470795 systemd[1]: sshd@12-172.31.21.202:22-139.178.68.195:40068.service: Deactivated successfully. Jan 13 20:09:01.477652 systemd[1]: session-13.scope: Deactivated successfully. Jan 13 20:09:01.479332 systemd-logind[2058]: Session 13 logged out. Waiting for processes to exit. Jan 13 20:09:01.481263 systemd-logind[2058]: Removed session 13. Jan 13 20:09:06.493260 systemd[1]: Started sshd@13-172.31.21.202:22-139.178.68.195:56020.service - OpenSSH per-connection server daemon (139.178.68.195:56020). Jan 13 20:09:06.686847 sshd[5106]: Accepted publickey for core from 139.178.68.195 port 56020 ssh2: RSA SHA256:dyUqkUNC7j9I+iFvInUHdwtzQ+aO/14TB1/ljGdoY9k Jan 13 20:09:06.690004 sshd-session[5106]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:09:06.697931 systemd-logind[2058]: New session 14 of user core. Jan 13 20:09:06.709282 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 13 20:09:06.959305 sshd[5109]: Connection closed by 139.178.68.195 port 56020 Jan 13 20:09:06.960330 sshd-session[5106]: pam_unix(sshd:session): session closed for user core Jan 13 20:09:06.968669 systemd[1]: sshd@13-172.31.21.202:22-139.178.68.195:56020.service: Deactivated successfully. Jan 13 20:09:06.975249 systemd[1]: session-14.scope: Deactivated successfully. Jan 13 20:09:06.975586 systemd-logind[2058]: Session 14 logged out. Waiting for processes to exit. Jan 13 20:09:06.980426 systemd-logind[2058]: Removed session 14. Jan 13 20:09:11.990276 systemd[1]: Started sshd@14-172.31.21.202:22-139.178.68.195:56028.service - OpenSSH per-connection server daemon (139.178.68.195:56028). Jan 13 20:09:12.175263 sshd[5120]: Accepted publickey for core from 139.178.68.195 port 56028 ssh2: RSA SHA256:dyUqkUNC7j9I+iFvInUHdwtzQ+aO/14TB1/ljGdoY9k Jan 13 20:09:12.177794 sshd-session[5120]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:09:12.186894 systemd-logind[2058]: New session 15 of user core. Jan 13 20:09:12.193627 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 13 20:09:12.442613 sshd[5123]: Connection closed by 139.178.68.195 port 56028 Jan 13 20:09:12.443114 sshd-session[5120]: pam_unix(sshd:session): session closed for user core Jan 13 20:09:12.448484 systemd[1]: sshd@14-172.31.21.202:22-139.178.68.195:56028.service: Deactivated successfully. Jan 13 20:09:12.454984 systemd[1]: session-15.scope: Deactivated successfully. Jan 13 20:09:12.458434 systemd-logind[2058]: Session 15 logged out. Waiting for processes to exit. Jan 13 20:09:12.461336 systemd-logind[2058]: Removed session 15. 
Jan 13 20:09:17.473316 systemd[1]: Started sshd@15-172.31.21.202:22-139.178.68.195:52922.service - OpenSSH per-connection server daemon (139.178.68.195:52922). Jan 13 20:09:17.669853 sshd[5137]: Accepted publickey for core from 139.178.68.195 port 52922 ssh2: RSA SHA256:dyUqkUNC7j9I+iFvInUHdwtzQ+aO/14TB1/ljGdoY9k Jan 13 20:09:17.672477 sshd-session[5137]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:09:17.679837 systemd-logind[2058]: New session 16 of user core. Jan 13 20:09:17.688473 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 13 20:09:17.944981 sshd[5140]: Connection closed by 139.178.68.195 port 52922 Jan 13 20:09:17.945888 sshd-session[5137]: pam_unix(sshd:session): session closed for user core Jan 13 20:09:17.951776 systemd[1]: sshd@15-172.31.21.202:22-139.178.68.195:52922.service: Deactivated successfully. Jan 13 20:09:17.961390 systemd[1]: session-16.scope: Deactivated successfully. Jan 13 20:09:17.961524 systemd-logind[2058]: Session 16 logged out. Waiting for processes to exit. Jan 13 20:09:17.964767 systemd-logind[2058]: Removed session 16. Jan 13 20:09:17.975416 systemd[1]: Started sshd@16-172.31.21.202:22-139.178.68.195:52932.service - OpenSSH per-connection server daemon (139.178.68.195:52932). Jan 13 20:09:18.166897 sshd[5151]: Accepted publickey for core from 139.178.68.195 port 52932 ssh2: RSA SHA256:dyUqkUNC7j9I+iFvInUHdwtzQ+aO/14TB1/ljGdoY9k Jan 13 20:09:18.169436 sshd-session[5151]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:09:18.177413 systemd-logind[2058]: New session 17 of user core. Jan 13 20:09:18.185797 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 13 20:09:18.489901 sshd[5154]: Connection closed by 139.178.68.195 port 52932 Jan 13 20:09:18.490752 sshd-session[5151]: pam_unix(sshd:session): session closed for user core Jan 13 20:09:18.497519 systemd[1]: sshd@16-172.31.21.202:22-139.178.68.195:52932.service: Deactivated successfully. Jan 13 20:09:18.506036 systemd[1]: session-17.scope: Deactivated successfully. Jan 13 20:09:18.508564 systemd-logind[2058]: Session 17 logged out. Waiting for processes to exit. Jan 13 20:09:18.510496 systemd-logind[2058]: Removed session 17. Jan 13 20:09:18.527246 systemd[1]: Started sshd@17-172.31.21.202:22-139.178.68.195:52936.service - OpenSSH per-connection server daemon (139.178.68.195:52936). Jan 13 20:09:18.708002 sshd[5163]: Accepted publickey for core from 139.178.68.195 port 52936 ssh2: RSA SHA256:dyUqkUNC7j9I+iFvInUHdwtzQ+aO/14TB1/ljGdoY9k Jan 13 20:09:18.710479 sshd-session[5163]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:09:18.718140 systemd-logind[2058]: New session 18 of user core. Jan 13 20:09:18.728517 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 13 20:09:21.245835 sshd[5166]: Connection closed by 139.178.68.195 port 52936 Jan 13 20:09:21.245312 sshd-session[5163]: pam_unix(sshd:session): session closed for user core Jan 13 20:09:21.258258 systemd[1]: sshd@17-172.31.21.202:22-139.178.68.195:52936.service: Deactivated successfully. Jan 13 20:09:21.272073 systemd[1]: session-18.scope: Deactivated successfully. Jan 13 20:09:21.275067 systemd-logind[2058]: Session 18 logged out. Waiting for processes to exit. Jan 13 20:09:21.290294 systemd[1]: Started sshd@18-172.31.21.202:22-139.178.68.195:52950.service - OpenSSH per-connection server daemon (139.178.68.195:52950). 
Jan 13 20:09:21.292979 systemd-logind[2058]: Removed session 18. Jan 13 20:09:21.487452 sshd[5182]: Accepted publickey for core from 139.178.68.195 port 52950 ssh2: RSA SHA256:dyUqkUNC7j9I+iFvInUHdwtzQ+aO/14TB1/ljGdoY9k Jan 13 20:09:21.489933 sshd-session[5182]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:09:21.497887 systemd-logind[2058]: New session 19 of user core. Jan 13 20:09:21.507454 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 13 20:09:22.001980 sshd[5185]: Connection closed by 139.178.68.195 port 52950 Jan 13 20:09:22.002441 sshd-session[5182]: pam_unix(sshd:session): session closed for user core Jan 13 20:09:22.010962 systemd[1]: sshd@18-172.31.21.202:22-139.178.68.195:52950.service: Deactivated successfully. Jan 13 20:09:22.016498 systemd-logind[2058]: Session 19 logged out. Waiting for processes to exit. Jan 13 20:09:22.016643 systemd[1]: session-19.scope: Deactivated successfully. Jan 13 20:09:22.020825 systemd-logind[2058]: Removed session 19. Jan 13 20:09:22.036295 systemd[1]: Started sshd@19-172.31.21.202:22-139.178.68.195:52954.service - OpenSSH per-connection server daemon (139.178.68.195:52954). Jan 13 20:09:22.216916 sshd[5193]: Accepted publickey for core from 139.178.68.195 port 52954 ssh2: RSA SHA256:dyUqkUNC7j9I+iFvInUHdwtzQ+aO/14TB1/ljGdoY9k Jan 13 20:09:22.219302 sshd-session[5193]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:09:22.227582 systemd-logind[2058]: New session 20 of user core. Jan 13 20:09:22.233783 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 13 20:09:22.481914 sshd[5196]: Connection closed by 139.178.68.195 port 52954 Jan 13 20:09:22.480935 sshd-session[5193]: pam_unix(sshd:session): session closed for user core Jan 13 20:09:22.485659 systemd[1]: sshd@19-172.31.21.202:22-139.178.68.195:52954.service: Deactivated successfully. Jan 13 20:09:22.493663 systemd[1]: session-20.scope: Deactivated successfully. Jan 13 20:09:22.498306 systemd-logind[2058]: Session 20 logged out. Waiting for processes to exit. Jan 13 20:09:22.500278 systemd-logind[2058]: Removed session 20. Jan 13 20:09:27.512421 systemd[1]: Started sshd@20-172.31.21.202:22-139.178.68.195:60934.service - OpenSSH per-connection server daemon (139.178.68.195:60934). Jan 13 20:09:27.708862 sshd[5208]: Accepted publickey for core from 139.178.68.195 port 60934 ssh2: RSA SHA256:dyUqkUNC7j9I+iFvInUHdwtzQ+aO/14TB1/ljGdoY9k Jan 13 20:09:27.711272 sshd-session[5208]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:09:27.720194 systemd-logind[2058]: New session 21 of user core. Jan 13 20:09:27.725624 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 13 20:09:27.970269 sshd[5211]: Connection closed by 139.178.68.195 port 60934 Jan 13 20:09:27.971190 sshd-session[5208]: pam_unix(sshd:session): session closed for user core Jan 13 20:09:27.977679 systemd[1]: sshd@20-172.31.21.202:22-139.178.68.195:60934.service: Deactivated successfully. Jan 13 20:09:27.986070 systemd[1]: session-21.scope: Deactivated successfully. Jan 13 20:09:27.988980 systemd-logind[2058]: Session 21 logged out. Waiting for processes to exit. Jan 13 20:09:27.990790 systemd-logind[2058]: Removed session 21. Jan 13 20:09:33.009351 systemd[1]: Started sshd@21-172.31.21.202:22-139.178.68.195:60938.service - OpenSSH per-connection server daemon (139.178.68.195:60938). 
Jan 13 20:09:33.191127 sshd[5225]: Accepted publickey for core from 139.178.68.195 port 60938 ssh2: RSA SHA256:dyUqkUNC7j9I+iFvInUHdwtzQ+aO/14TB1/ljGdoY9k Jan 13 20:09:33.194078 sshd-session[5225]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:09:33.203248 systemd-logind[2058]: New session 22 of user core. Jan 13 20:09:33.208499 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 13 20:09:33.450440 sshd[5228]: Connection closed by 139.178.68.195 port 60938 Jan 13 20:09:33.451483 sshd-session[5225]: pam_unix(sshd:session): session closed for user core Jan 13 20:09:33.458038 systemd[1]: sshd@21-172.31.21.202:22-139.178.68.195:60938.service: Deactivated successfully. Jan 13 20:09:33.458340 systemd-logind[2058]: Session 22 logged out. Waiting for processes to exit. Jan 13 20:09:33.466477 systemd[1]: session-22.scope: Deactivated successfully. Jan 13 20:09:33.471425 systemd-logind[2058]: Removed session 22. Jan 13 20:09:38.483320 systemd[1]: Started sshd@22-172.31.21.202:22-139.178.68.195:42826.service - OpenSSH per-connection server daemon (139.178.68.195:42826). Jan 13 20:09:38.676061 sshd[5239]: Accepted publickey for core from 139.178.68.195 port 42826 ssh2: RSA SHA256:dyUqkUNC7j9I+iFvInUHdwtzQ+aO/14TB1/ljGdoY9k Jan 13 20:09:38.678611 sshd-session[5239]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:09:38.687268 systemd-logind[2058]: New session 23 of user core. Jan 13 20:09:38.693477 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 13 20:09:38.948837 sshd[5242]: Connection closed by 139.178.68.195 port 42826 Jan 13 20:09:38.949679 sshd-session[5239]: pam_unix(sshd:session): session closed for user core Jan 13 20:09:38.957500 systemd[1]: sshd@22-172.31.21.202:22-139.178.68.195:42826.service: Deactivated successfully. Jan 13 20:09:38.963607 systemd[1]: session-23.scope: Deactivated successfully. Jan 13 20:09:38.965237 systemd-logind[2058]: Session 23 logged out. Waiting for processes to exit. Jan 13 20:09:38.967046 systemd-logind[2058]: Removed session 23. Jan 13 20:09:43.980290 systemd[1]: Started sshd@23-172.31.21.202:22-139.178.68.195:42832.service - OpenSSH per-connection server daemon (139.178.68.195:42832). Jan 13 20:09:44.171076 sshd[5255]: Accepted publickey for core from 139.178.68.195 port 42832 ssh2: RSA SHA256:dyUqkUNC7j9I+iFvInUHdwtzQ+aO/14TB1/ljGdoY9k Jan 13 20:09:44.173639 sshd-session[5255]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:09:44.182643 systemd-logind[2058]: New session 24 of user core. Jan 13 20:09:44.188333 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 13 20:09:44.435969 sshd[5258]: Connection closed by 139.178.68.195 port 42832 Jan 13 20:09:44.436570 sshd-session[5255]: pam_unix(sshd:session): session closed for user core Jan 13 20:09:44.442338 systemd[1]: sshd@23-172.31.21.202:22-139.178.68.195:42832.service: Deactivated successfully. Jan 13 20:09:44.442680 systemd-logind[2058]: Session 24 logged out. Waiting for processes to exit. Jan 13 20:09:44.449921 systemd[1]: session-24.scope: Deactivated successfully. Jan 13 20:09:44.455045 systemd-logind[2058]: Removed session 24. Jan 13 20:09:44.466389 systemd[1]: Started sshd@24-172.31.21.202:22-139.178.68.195:42840.service - OpenSSH per-connection server daemon (139.178.68.195:42840). 
Jan 13 20:09:44.654027 sshd[5269]: Accepted publickey for core from 139.178.68.195 port 42840 ssh2: RSA SHA256:dyUqkUNC7j9I+iFvInUHdwtzQ+aO/14TB1/ljGdoY9k Jan 13 20:09:44.656507 sshd-session[5269]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:09:44.664341 systemd-logind[2058]: New session 25 of user core. Jan 13 20:09:44.675305 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 13 20:09:47.120729 containerd[2089]: time="2025-01-13T20:09:47.120630124Z" level=info msg="StopContainer for \"a1c2334ce0e569a8b8e4441f2fabee3ee34782c199eb06c8a7733b0cce570256\" with timeout 30 (s)" Jan 13 20:09:47.127363 containerd[2089]: time="2025-01-13T20:09:47.125752576Z" level=info msg="Stop container \"a1c2334ce0e569a8b8e4441f2fabee3ee34782c199eb06c8a7733b0cce570256\" with signal terminated" Jan 13 20:09:47.165154 containerd[2089]: time="2025-01-13T20:09:47.165029956Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 13 20:09:47.187910 containerd[2089]: time="2025-01-13T20:09:47.187660156Z" level=info msg="StopContainer for \"eea70692f6fbc3292891452e81220eeadf939726f10efeb5f91be50a5c62d3b6\" with timeout 2 (s)" Jan 13 20:09:47.188583 containerd[2089]: time="2025-01-13T20:09:47.188381080Z" level=info msg="Stop container \"eea70692f6fbc3292891452e81220eeadf939726f10efeb5f91be50a5c62d3b6\" with signal terminated" Jan 13 20:09:47.218777 systemd-networkd[1604]: lxc_health: Link DOWN Jan 13 20:09:47.218796 systemd-networkd[1604]: lxc_health: Lost carrier Jan 13 20:09:47.231320 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a1c2334ce0e569a8b8e4441f2fabee3ee34782c199eb06c8a7733b0cce570256-rootfs.mount: Deactivated successfully. Jan 13 20:09:47.258334 containerd[2089]: time="2025-01-13T20:09:47.257880785Z" level=info msg="shim disconnected" id=a1c2334ce0e569a8b8e4441f2fabee3ee34782c199eb06c8a7733b0cce570256 namespace=k8s.io Jan 13 20:09:47.259014 containerd[2089]: time="2025-01-13T20:09:47.258298481Z" level=warning msg="cleaning up after shim disconnected" id=a1c2334ce0e569a8b8e4441f2fabee3ee34782c199eb06c8a7733b0cce570256 namespace=k8s.io Jan 13 20:09:47.259014 containerd[2089]: time="2025-01-13T20:09:47.258735389Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:09:47.292110 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-eea70692f6fbc3292891452e81220eeadf939726f10efeb5f91be50a5c62d3b6-rootfs.mount: Deactivated successfully. 
Jan 13 20:09:47.304687 containerd[2089]: time="2025-01-13T20:09:47.304608569Z" level=info msg="shim disconnected" id=eea70692f6fbc3292891452e81220eeadf939726f10efeb5f91be50a5c62d3b6 namespace=k8s.io Jan 13 20:09:47.304687 containerd[2089]: time="2025-01-13T20:09:47.304684049Z" level=warning msg="cleaning up after shim disconnected" id=eea70692f6fbc3292891452e81220eeadf939726f10efeb5f91be50a5c62d3b6 namespace=k8s.io Jan 13 20:09:47.305331 containerd[2089]: time="2025-01-13T20:09:47.304705733Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:09:47.308673 containerd[2089]: time="2025-01-13T20:09:47.308452829Z" level=info msg="StopContainer for \"a1c2334ce0e569a8b8e4441f2fabee3ee34782c199eb06c8a7733b0cce570256\" returns successfully" Jan 13 20:09:47.310210 containerd[2089]: time="2025-01-13T20:09:47.310160345Z" level=info msg="StopPodSandbox for \"537d853197c791f4609a76ea7850936532adf8764f53005b7579390a962ad241\"" Jan 13 20:09:47.310686 containerd[2089]: time="2025-01-13T20:09:47.310442369Z" level=info msg="Container to stop \"a1c2334ce0e569a8b8e4441f2fabee3ee34782c199eb06c8a7733b0cce570256\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 20:09:47.316421 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-537d853197c791f4609a76ea7850936532adf8764f53005b7579390a962ad241-shm.mount: Deactivated successfully. Jan 13 20:09:47.357139 containerd[2089]: time="2025-01-13T20:09:47.356743505Z" level=info msg="StopContainer for \"eea70692f6fbc3292891452e81220eeadf939726f10efeb5f91be50a5c62d3b6\" returns successfully" Jan 13 20:09:47.358448 containerd[2089]: time="2025-01-13T20:09:47.358056485Z" level=info msg="StopPodSandbox for \"a739bf7cc08951ac4b3aa4ac88d501499d0e95803535736aa84711ac24e3762c\"" Jan 13 20:09:47.358448 containerd[2089]: time="2025-01-13T20:09:47.358123349Z" level=info msg="Container to stop \"e5a4b92dbd67c6bf76c47b24aec02fe6c1cfdb18d6df083d94794a257320762f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 20:09:47.358448 containerd[2089]: time="2025-01-13T20:09:47.358149461Z" level=info msg="Container to stop \"e8f674f45a215422c83aca99f8950aada634f40a53bc48f0e992b7b16cc7ccca\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 20:09:47.358448 containerd[2089]: time="2025-01-13T20:09:47.358170821Z" level=info msg="Container to stop \"cde11d79462c88f4e67edfa42d7938c5e9541873d05d6945271dada8fcf08e44\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 20:09:47.358448 containerd[2089]: time="2025-01-13T20:09:47.358192301Z" level=info msg="Container to stop \"39dca387943ea2ec6df25cd501a9ccb6bf3dfdfb0aba4350f008edf284cf7ee3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 20:09:47.358448 containerd[2089]: time="2025-01-13T20:09:47.358212293Z" level=info msg="Container to stop \"eea70692f6fbc3292891452e81220eeadf939726f10efeb5f91be50a5c62d3b6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 20:09:47.364374 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a739bf7cc08951ac4b3aa4ac88d501499d0e95803535736aa84711ac24e3762c-shm.mount: Deactivated successfully. 
Jan 13 20:09:47.413419 containerd[2089]: time="2025-01-13T20:09:47.413123885Z" level=info msg="shim disconnected" id=537d853197c791f4609a76ea7850936532adf8764f53005b7579390a962ad241 namespace=k8s.io Jan 13 20:09:47.415174 containerd[2089]: time="2025-01-13T20:09:47.414860057Z" level=warning msg="cleaning up after shim disconnected" id=537d853197c791f4609a76ea7850936532adf8764f53005b7579390a962ad241 namespace=k8s.io Jan 13 20:09:47.415174 containerd[2089]: time="2025-01-13T20:09:47.415015349Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:09:47.441238 containerd[2089]: time="2025-01-13T20:09:47.441165990Z" level=warning msg="cleanup warnings time=\"2025-01-13T20:09:47Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 13 20:09:47.449110 containerd[2089]: time="2025-01-13T20:09:47.448655802Z" level=info msg="TearDown network for sandbox \"537d853197c791f4609a76ea7850936532adf8764f53005b7579390a962ad241\" successfully" Jan 13 20:09:47.449110 containerd[2089]: time="2025-01-13T20:09:47.448718322Z" level=info msg="StopPodSandbox for \"537d853197c791f4609a76ea7850936532adf8764f53005b7579390a962ad241\" returns successfully" Jan 13 20:09:47.467017 containerd[2089]: time="2025-01-13T20:09:47.466931382Z" level=info msg="shim disconnected" id=a739bf7cc08951ac4b3aa4ac88d501499d0e95803535736aa84711ac24e3762c namespace=k8s.io Jan 13 20:09:47.467017 containerd[2089]: time="2025-01-13T20:09:47.467015262Z" level=warning msg="cleaning up after shim disconnected" id=a739bf7cc08951ac4b3aa4ac88d501499d0e95803535736aa84711ac24e3762c namespace=k8s.io Jan 13 20:09:47.467462 containerd[2089]: time="2025-01-13T20:09:47.467038062Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:09:47.497903 containerd[2089]: time="2025-01-13T20:09:47.497843022Z" level=warning msg="cleanup warnings time=\"2025-01-13T20:09:47Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 13 20:09:47.500241 containerd[2089]: time="2025-01-13T20:09:47.499991022Z" level=info msg="TearDown network for sandbox \"a739bf7cc08951ac4b3aa4ac88d501499d0e95803535736aa84711ac24e3762c\" successfully" Jan 13 20:09:47.500241 containerd[2089]: time="2025-01-13T20:09:47.500055162Z" level=info msg="StopPodSandbox for \"a739bf7cc08951ac4b3aa4ac88d501499d0e95803535736aa84711ac24e3762c\" returns successfully" Jan 13 20:09:47.621652 kubelet[3649]: I0113 20:09:47.620954 3649 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/cecb4869-cc53-4d7f-9f23-2b1d7002f5e6-hubble-tls\") pod \"cecb4869-cc53-4d7f-9f23-2b1d7002f5e6\" (UID: \"cecb4869-cc53-4d7f-9f23-2b1d7002f5e6\") " Jan 13 20:09:47.621652 kubelet[3649]: I0113 20:09:47.621520 3649 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cecb4869-cc53-4d7f-9f23-2b1d7002f5e6-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "cecb4869-cc53-4d7f-9f23-2b1d7002f5e6" (UID: "cecb4869-cc53-4d7f-9f23-2b1d7002f5e6"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:09:47.621652 kubelet[3649]: I0113 20:09:47.621609 3649 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/cecb4869-cc53-4d7f-9f23-2b1d7002f5e6-host-proc-sys-net\") pod \"cecb4869-cc53-4d7f-9f23-2b1d7002f5e6\" (UID: \"cecb4869-cc53-4d7f-9f23-2b1d7002f5e6\") " Jan 13 20:09:47.622414 kubelet[3649]: I0113 20:09:47.621678 3649 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/cecb4869-cc53-4d7f-9f23-2b1d7002f5e6-cilium-run\") pod \"cecb4869-cc53-4d7f-9f23-2b1d7002f5e6\" (UID: \"cecb4869-cc53-4d7f-9f23-2b1d7002f5e6\") " Jan 13 20:09:47.622414 kubelet[3649]: I0113 20:09:47.621768 3649 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/cecb4869-cc53-4d7f-9f23-2b1d7002f5e6-cilium-cgroup\") pod \"cecb4869-cc53-4d7f-9f23-2b1d7002f5e6\" (UID: \"cecb4869-cc53-4d7f-9f23-2b1d7002f5e6\") " Jan 13 20:09:47.622414 kubelet[3649]: I0113 20:09:47.621859 3649 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cecb4869-cc53-4d7f-9f23-2b1d7002f5e6-etc-cni-netd\") pod \"cecb4869-cc53-4d7f-9f23-2b1d7002f5e6\" (UID: \"cecb4869-cc53-4d7f-9f23-2b1d7002f5e6\") " Jan 13 20:09:47.622414 kubelet[3649]: I0113 20:09:47.621985 3649 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/cecb4869-cc53-4d7f-9f23-2b1d7002f5e6-clustermesh-secrets\") pod \"cecb4869-cc53-4d7f-9f23-2b1d7002f5e6\" (UID: \"cecb4869-cc53-4d7f-9f23-2b1d7002f5e6\") " Jan 13 20:09:47.622414 kubelet[3649]: I0113 20:09:47.622219 3649 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cecb4869-cc53-4d7f-9f23-2b1d7002f5e6-lib-modules\") pod \"cecb4869-cc53-4d7f-9f23-2b1d7002f5e6\" (UID: \"cecb4869-cc53-4d7f-9f23-2b1d7002f5e6\") " Jan 13 20:09:47.622414 kubelet[3649]: I0113 20:09:47.622287 3649 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-skccg\" (UniqueName: \"kubernetes.io/projected/107e644e-3160-4704-9289-0520c79b86dd-kube-api-access-skccg\") pod \"107e644e-3160-4704-9289-0520c79b86dd\" (UID: \"107e644e-3160-4704-9289-0520c79b86dd\") " Jan 13 20:09:47.622768 kubelet[3649]: I0113 20:09:47.622514 3649 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/cecb4869-cc53-4d7f-9f23-2b1d7002f5e6-cni-path\") pod \"cecb4869-cc53-4d7f-9f23-2b1d7002f5e6\" (UID: \"cecb4869-cc53-4d7f-9f23-2b1d7002f5e6\") " Jan 13 20:09:47.622768 kubelet[3649]: I0113 20:09:47.622568 3649 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/107e644e-3160-4704-9289-0520c79b86dd-cilium-config-path\") pod \"107e644e-3160-4704-9289-0520c79b86dd\" (UID: \"107e644e-3160-4704-9289-0520c79b86dd\") " Jan 13 20:09:47.622768 kubelet[3649]: I0113 20:09:47.622750 3649 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x9c44\" (UniqueName: \"kubernetes.io/projected/cecb4869-cc53-4d7f-9f23-2b1d7002f5e6-kube-api-access-x9c44\") pod \"cecb4869-cc53-4d7f-9f23-2b1d7002f5e6\" (UID: 
\"cecb4869-cc53-4d7f-9f23-2b1d7002f5e6\") " Jan 13 20:09:47.622971 kubelet[3649]: I0113 20:09:47.622829 3649 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/cecb4869-cc53-4d7f-9f23-2b1d7002f5e6-bpf-maps\") pod \"cecb4869-cc53-4d7f-9f23-2b1d7002f5e6\" (UID: \"cecb4869-cc53-4d7f-9f23-2b1d7002f5e6\") " Jan 13 20:09:47.622971 kubelet[3649]: I0113 20:09:47.622919 3649 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cecb4869-cc53-4d7f-9f23-2b1d7002f5e6-cilium-config-path\") pod \"cecb4869-cc53-4d7f-9f23-2b1d7002f5e6\" (UID: \"cecb4869-cc53-4d7f-9f23-2b1d7002f5e6\") " Jan 13 20:09:47.624309 kubelet[3649]: I0113 20:09:47.623164 3649 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/cecb4869-cc53-4d7f-9f23-2b1d7002f5e6-host-proc-sys-kernel\") pod \"cecb4869-cc53-4d7f-9f23-2b1d7002f5e6\" (UID: \"cecb4869-cc53-4d7f-9f23-2b1d7002f5e6\") " Jan 13 20:09:47.624309 kubelet[3649]: I0113 20:09:47.623239 3649 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cecb4869-cc53-4d7f-9f23-2b1d7002f5e6-xtables-lock\") pod \"cecb4869-cc53-4d7f-9f23-2b1d7002f5e6\" (UID: \"cecb4869-cc53-4d7f-9f23-2b1d7002f5e6\") " Jan 13 20:09:47.624309 kubelet[3649]: I0113 20:09:47.623479 3649 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/cecb4869-cc53-4d7f-9f23-2b1d7002f5e6-hostproc\") pod \"cecb4869-cc53-4d7f-9f23-2b1d7002f5e6\" (UID: \"cecb4869-cc53-4d7f-9f23-2b1d7002f5e6\") " Jan 13 20:09:47.624309 kubelet[3649]: I0113 20:09:47.623720 3649 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/cecb4869-cc53-4d7f-9f23-2b1d7002f5e6-host-proc-sys-net\") on node \"ip-172-31-21-202\" DevicePath \"\"" Jan 13 20:09:47.624623 kubelet[3649]: I0113 20:09:47.624531 3649 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cecb4869-cc53-4d7f-9f23-2b1d7002f5e6-hostproc" (OuterVolumeSpecName: "hostproc") pod "cecb4869-cc53-4d7f-9f23-2b1d7002f5e6" (UID: "cecb4869-cc53-4d7f-9f23-2b1d7002f5e6"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:09:47.624623 kubelet[3649]: I0113 20:09:47.624603 3649 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cecb4869-cc53-4d7f-9f23-2b1d7002f5e6-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "cecb4869-cc53-4d7f-9f23-2b1d7002f5e6" (UID: "cecb4869-cc53-4d7f-9f23-2b1d7002f5e6"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:09:47.624731 kubelet[3649]: I0113 20:09:47.624652 3649 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cecb4869-cc53-4d7f-9f23-2b1d7002f5e6-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "cecb4869-cc53-4d7f-9f23-2b1d7002f5e6" (UID: "cecb4869-cc53-4d7f-9f23-2b1d7002f5e6"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:09:47.624731 kubelet[3649]: I0113 20:09:47.624705 3649 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cecb4869-cc53-4d7f-9f23-2b1d7002f5e6-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "cecb4869-cc53-4d7f-9f23-2b1d7002f5e6" (UID: "cecb4869-cc53-4d7f-9f23-2b1d7002f5e6"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:09:47.626467 kubelet[3649]: I0113 20:09:47.625863 3649 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cecb4869-cc53-4d7f-9f23-2b1d7002f5e6-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "cecb4869-cc53-4d7f-9f23-2b1d7002f5e6" (UID: "cecb4869-cc53-4d7f-9f23-2b1d7002f5e6"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:09:47.627097 kubelet[3649]: I0113 20:09:47.626756 3649 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cecb4869-cc53-4d7f-9f23-2b1d7002f5e6-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "cecb4869-cc53-4d7f-9f23-2b1d7002f5e6" (UID: "cecb4869-cc53-4d7f-9f23-2b1d7002f5e6"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:09:47.628302 kubelet[3649]: I0113 20:09:47.627732 3649 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cecb4869-cc53-4d7f-9f23-2b1d7002f5e6-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "cecb4869-cc53-4d7f-9f23-2b1d7002f5e6" (UID: "cecb4869-cc53-4d7f-9f23-2b1d7002f5e6"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:09:47.628302 kubelet[3649]: I0113 20:09:47.627866 3649 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cecb4869-cc53-4d7f-9f23-2b1d7002f5e6-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "cecb4869-cc53-4d7f-9f23-2b1d7002f5e6" (UID: "cecb4869-cc53-4d7f-9f23-2b1d7002f5e6"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:09:47.635329 kubelet[3649]: I0113 20:09:47.635262 3649 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cecb4869-cc53-4d7f-9f23-2b1d7002f5e6-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "cecb4869-cc53-4d7f-9f23-2b1d7002f5e6" (UID: "cecb4869-cc53-4d7f-9f23-2b1d7002f5e6"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 13 20:09:47.635491 kubelet[3649]: I0113 20:09:47.635380 3649 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cecb4869-cc53-4d7f-9f23-2b1d7002f5e6-cni-path" (OuterVolumeSpecName: "cni-path") pod "cecb4869-cc53-4d7f-9f23-2b1d7002f5e6" (UID: "cecb4869-cc53-4d7f-9f23-2b1d7002f5e6"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:09:47.636617 kubelet[3649]: I0113 20:09:47.636534 3649 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/107e644e-3160-4704-9289-0520c79b86dd-kube-api-access-skccg" (OuterVolumeSpecName: "kube-api-access-skccg") pod "107e644e-3160-4704-9289-0520c79b86dd" (UID: "107e644e-3160-4704-9289-0520c79b86dd"). InnerVolumeSpecName "kube-api-access-skccg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 13 20:09:47.642743 kubelet[3649]: I0113 20:09:47.642618 3649 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cecb4869-cc53-4d7f-9f23-2b1d7002f5e6-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "cecb4869-cc53-4d7f-9f23-2b1d7002f5e6" (UID: "cecb4869-cc53-4d7f-9f23-2b1d7002f5e6"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 13 20:09:47.644849 kubelet[3649]: I0113 20:09:47.643260 3649 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cecb4869-cc53-4d7f-9f23-2b1d7002f5e6-kube-api-access-x9c44" (OuterVolumeSpecName: "kube-api-access-x9c44") pod "cecb4869-cc53-4d7f-9f23-2b1d7002f5e6" (UID: "cecb4869-cc53-4d7f-9f23-2b1d7002f5e6"). InnerVolumeSpecName "kube-api-access-x9c44". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 13 20:09:47.645686 kubelet[3649]: I0113 20:09:47.645635 3649 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/107e644e-3160-4704-9289-0520c79b86dd-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "107e644e-3160-4704-9289-0520c79b86dd" (UID: "107e644e-3160-4704-9289-0520c79b86dd"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 13 20:09:47.653628 kubelet[3649]: I0113 20:09:47.653572 3649 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cecb4869-cc53-4d7f-9f23-2b1d7002f5e6-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "cecb4869-cc53-4d7f-9f23-2b1d7002f5e6" (UID: "cecb4869-cc53-4d7f-9f23-2b1d7002f5e6"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 13 20:09:47.724281 kubelet[3649]: I0113 20:09:47.724219 3649 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/cecb4869-cc53-4d7f-9f23-2b1d7002f5e6-cilium-run\") on node \"ip-172-31-21-202\" DevicePath \"\"" Jan 13 20:09:47.724281 kubelet[3649]: I0113 20:09:47.724284 3649 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/cecb4869-cc53-4d7f-9f23-2b1d7002f5e6-cilium-cgroup\") on node \"ip-172-31-21-202\" DevicePath \"\"" Jan 13 20:09:47.724522 kubelet[3649]: I0113 20:09:47.724319 3649 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cecb4869-cc53-4d7f-9f23-2b1d7002f5e6-etc-cni-netd\") on node \"ip-172-31-21-202\" DevicePath \"\"" Jan 13 20:09:47.724522 kubelet[3649]: I0113 20:09:47.724350 3649 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/cecb4869-cc53-4d7f-9f23-2b1d7002f5e6-clustermesh-secrets\") on node \"ip-172-31-21-202\" DevicePath \"\"" Jan 13 20:09:47.724522 kubelet[3649]: I0113 20:09:47.724375 3649 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cecb4869-cc53-4d7f-9f23-2b1d7002f5e6-lib-modules\") on node \"ip-172-31-21-202\" DevicePath \"\"" Jan 13 20:09:47.724522 kubelet[3649]: I0113 20:09:47.724401 3649 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-skccg\" (UniqueName: \"kubernetes.io/projected/107e644e-3160-4704-9289-0520c79b86dd-kube-api-access-skccg\") on node \"ip-172-31-21-202\" DevicePath \"\"" Jan 13 20:09:47.724522 kubelet[3649]: I0113 20:09:47.724426 3649 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/cecb4869-cc53-4d7f-9f23-2b1d7002f5e6-cni-path\") on node \"ip-172-31-21-202\" DevicePath \"\"" Jan 13 20:09:47.724522 kubelet[3649]: I0113 20:09:47.724452 3649 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/107e644e-3160-4704-9289-0520c79b86dd-cilium-config-path\") on node \"ip-172-31-21-202\" DevicePath \"\"" Jan 13 20:09:47.724522 kubelet[3649]: I0113 20:09:47.724479 3649 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-x9c44\" (UniqueName: \"kubernetes.io/projected/cecb4869-cc53-4d7f-9f23-2b1d7002f5e6-kube-api-access-x9c44\") on node \"ip-172-31-21-202\" DevicePath \"\"" Jan 13 20:09:47.724522 kubelet[3649]: I0113 20:09:47.724502 3649 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/cecb4869-cc53-4d7f-9f23-2b1d7002f5e6-bpf-maps\") on node \"ip-172-31-21-202\" DevicePath \"\"" Jan 13 20:09:47.724972 kubelet[3649]: I0113 20:09:47.724528 3649 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cecb4869-cc53-4d7f-9f23-2b1d7002f5e6-cilium-config-path\") on node \"ip-172-31-21-202\" DevicePath \"\"" Jan 13 20:09:47.724972 kubelet[3649]: I0113 20:09:47.724552 3649 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/cecb4869-cc53-4d7f-9f23-2b1d7002f5e6-host-proc-sys-kernel\") on node \"ip-172-31-21-202\" DevicePath \"\"" Jan 13 20:09:47.724972 kubelet[3649]: I0113 20:09:47.724575 3649 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" 
(UniqueName: \"kubernetes.io/host-path/cecb4869-cc53-4d7f-9f23-2b1d7002f5e6-xtables-lock\") on node \"ip-172-31-21-202\" DevicePath \"\"" Jan 13 20:09:47.724972 kubelet[3649]: I0113 20:09:47.724600 3649 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/cecb4869-cc53-4d7f-9f23-2b1d7002f5e6-hostproc\") on node \"ip-172-31-21-202\" DevicePath \"\"" Jan 13 20:09:47.724972 kubelet[3649]: I0113 20:09:47.724625 3649 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/cecb4869-cc53-4d7f-9f23-2b1d7002f5e6-hubble-tls\") on node \"ip-172-31-21-202\" DevicePath \"\"" Jan 13 20:09:48.130703 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a739bf7cc08951ac4b3aa4ac88d501499d0e95803535736aa84711ac24e3762c-rootfs.mount: Deactivated successfully. Jan 13 20:09:48.131736 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-537d853197c791f4609a76ea7850936532adf8764f53005b7579390a962ad241-rootfs.mount: Deactivated successfully. Jan 13 20:09:48.132046 systemd[1]: var-lib-kubelet-pods-cecb4869\x2dcc53\x2d4d7f\x2d9f23\x2d2b1d7002f5e6-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dx9c44.mount: Deactivated successfully. Jan 13 20:09:48.132315 systemd[1]: var-lib-kubelet-pods-107e644e\x2d3160\x2d4704\x2d9289\x2d0520c79b86dd-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dskccg.mount: Deactivated successfully. Jan 13 20:09:48.132545 systemd[1]: var-lib-kubelet-pods-cecb4869\x2dcc53\x2d4d7f\x2d9f23\x2d2b1d7002f5e6-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 13 20:09:48.132763 systemd[1]: var-lib-kubelet-pods-cecb4869\x2dcc53\x2d4d7f\x2d9f23\x2d2b1d7002f5e6-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Jan 13 20:09:48.405942 kubelet[3649]: I0113 20:09:48.405770 3649 scope.go:117] "RemoveContainer" containerID="eea70692f6fbc3292891452e81220eeadf939726f10efeb5f91be50a5c62d3b6" Jan 13 20:09:48.412549 containerd[2089]: time="2025-01-13T20:09:48.412085430Z" level=info msg="RemoveContainer for \"eea70692f6fbc3292891452e81220eeadf939726f10efeb5f91be50a5c62d3b6\"" Jan 13 20:09:48.422395 containerd[2089]: time="2025-01-13T20:09:48.422339010Z" level=info msg="RemoveContainer for \"eea70692f6fbc3292891452e81220eeadf939726f10efeb5f91be50a5c62d3b6\" returns successfully" Jan 13 20:09:48.423184 kubelet[3649]: I0113 20:09:48.423028 3649 scope.go:117] "RemoveContainer" containerID="39dca387943ea2ec6df25cd501a9ccb6bf3dfdfb0aba4350f008edf284cf7ee3" Jan 13 20:09:48.426169 containerd[2089]: time="2025-01-13T20:09:48.425960634Z" level=info msg="RemoveContainer for \"39dca387943ea2ec6df25cd501a9ccb6bf3dfdfb0aba4350f008edf284cf7ee3\"" Jan 13 20:09:48.437960 containerd[2089]: time="2025-01-13T20:09:48.437760403Z" level=info msg="RemoveContainer for \"39dca387943ea2ec6df25cd501a9ccb6bf3dfdfb0aba4350f008edf284cf7ee3\" returns successfully" Jan 13 20:09:48.438463 kubelet[3649]: I0113 20:09:48.438424 3649 scope.go:117] "RemoveContainer" containerID="cde11d79462c88f4e67edfa42d7938c5e9541873d05d6945271dada8fcf08e44" Jan 13 20:09:48.446010 containerd[2089]: time="2025-01-13T20:09:48.445945051Z" level=info msg="RemoveContainer for \"cde11d79462c88f4e67edfa42d7938c5e9541873d05d6945271dada8fcf08e44\"" Jan 13 20:09:48.459830 containerd[2089]: time="2025-01-13T20:09:48.459692503Z" level=info msg="RemoveContainer for \"cde11d79462c88f4e67edfa42d7938c5e9541873d05d6945271dada8fcf08e44\" returns successfully" Jan 13 20:09:48.461871 kubelet[3649]: I0113 20:09:48.460366 3649 scope.go:117] "RemoveContainer" containerID="e8f674f45a215422c83aca99f8950aada634f40a53bc48f0e992b7b16cc7ccca" Jan 13 20:09:48.463973 containerd[2089]: time="2025-01-13T20:09:48.463926859Z" level=info msg="RemoveContainer for \"e8f674f45a215422c83aca99f8950aada634f40a53bc48f0e992b7b16cc7ccca\"" Jan 13 20:09:48.469204 containerd[2089]: time="2025-01-13T20:09:48.469155595Z" level=info msg="RemoveContainer for \"e8f674f45a215422c83aca99f8950aada634f40a53bc48f0e992b7b16cc7ccca\" returns successfully" Jan 13 20:09:48.469738 kubelet[3649]: I0113 20:09:48.469699 3649 scope.go:117] "RemoveContainer" containerID="e5a4b92dbd67c6bf76c47b24aec02fe6c1cfdb18d6df083d94794a257320762f" Jan 13 20:09:48.471475 containerd[2089]: time="2025-01-13T20:09:48.471432619Z" level=info msg="RemoveContainer for \"e5a4b92dbd67c6bf76c47b24aec02fe6c1cfdb18d6df083d94794a257320762f\"" Jan 13 20:09:48.477109 containerd[2089]: time="2025-01-13T20:09:48.476973835Z" level=info msg="RemoveContainer for \"e5a4b92dbd67c6bf76c47b24aec02fe6c1cfdb18d6df083d94794a257320762f\" returns successfully" Jan 13 20:09:48.477374 kubelet[3649]: I0113 20:09:48.477324 3649 scope.go:117] "RemoveContainer" containerID="a1c2334ce0e569a8b8e4441f2fabee3ee34782c199eb06c8a7733b0cce570256" Jan 13 20:09:48.480461 containerd[2089]: time="2025-01-13T20:09:48.479996251Z" level=info msg="RemoveContainer for \"a1c2334ce0e569a8b8e4441f2fabee3ee34782c199eb06c8a7733b0cce570256\"" Jan 13 20:09:48.484953 containerd[2089]: time="2025-01-13T20:09:48.484906951Z" level=info msg="RemoveContainer for \"a1c2334ce0e569a8b8e4441f2fabee3ee34782c199eb06c8a7733b0cce570256\" returns successfully" Jan 13 20:09:49.028513 sshd[5272]: Connection closed by 139.178.68.195 port 42840 Jan 13 20:09:49.028990 sshd-session[5269]: pam_unix(sshd:session): 
session closed for user core Jan 13 20:09:49.035073 systemd[1]: sshd@24-172.31.21.202:22-139.178.68.195:42840.service: Deactivated successfully. Jan 13 20:09:49.044110 systemd[1]: session-25.scope: Deactivated successfully. Jan 13 20:09:49.046865 systemd-logind[2058]: Session 25 logged out. Waiting for processes to exit. Jan 13 20:09:49.050113 systemd-logind[2058]: Removed session 25. Jan 13 20:09:49.060386 systemd[1]: Started sshd@25-172.31.21.202:22-139.178.68.195:51560.service - OpenSSH per-connection server daemon (139.178.68.195:51560). Jan 13 20:09:49.253482 sshd[5438]: Accepted publickey for core from 139.178.68.195 port 51560 ssh2: RSA SHA256:dyUqkUNC7j9I+iFvInUHdwtzQ+aO/14TB1/ljGdoY9k Jan 13 20:09:49.255952 sshd-session[5438]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:09:49.264335 systemd-logind[2058]: New session 26 of user core. Jan 13 20:09:49.276455 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 13 20:09:49.431629 ntpd[2038]: Deleting interface #10 lxc_health, fe80::282e:4fff:fe4e:83%8#123, interface stats: received=0, sent=0, dropped=0, active_time=60 secs Jan 13 20:09:49.432491 ntpd[2038]: 13 Jan 20:09:49 ntpd[2038]: Deleting interface #10 lxc_health, fe80::282e:4fff:fe4e:83%8#123, interface stats: received=0, sent=0, dropped=0, active_time=60 secs Jan 13 20:09:49.849239 kubelet[3649]: I0113 20:09:49.849191 3649 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="107e644e-3160-4704-9289-0520c79b86dd" path="/var/lib/kubelet/pods/107e644e-3160-4704-9289-0520c79b86dd/volumes" Jan 13 20:09:49.850310 kubelet[3649]: I0113 20:09:49.850271 3649 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="cecb4869-cc53-4d7f-9f23-2b1d7002f5e6" path="/var/lib/kubelet/pods/cecb4869-cc53-4d7f-9f23-2b1d7002f5e6/volumes" Jan 13 20:09:50.069622 kubelet[3649]: E0113 20:09:50.069493 3649 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 13 20:09:50.911896 sshd[5441]: Connection closed by 139.178.68.195 port 51560 Jan 13 20:09:50.916014 sshd-session[5438]: pam_unix(sshd:session): session closed for user core Jan 13 20:09:50.931489 systemd[1]: sshd@25-172.31.21.202:22-139.178.68.195:51560.service: Deactivated successfully. 
Jan 13 20:09:50.936032 kubelet[3649]: I0113 20:09:50.933873 3649 topology_manager.go:215] "Topology Admit Handler" podUID="187960d7-e398-44b3-a5f1-31b18bc60f51" podNamespace="kube-system" podName="cilium-f95rx" Jan 13 20:09:50.936032 kubelet[3649]: E0113 20:09:50.934052 3649 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cecb4869-cc53-4d7f-9f23-2b1d7002f5e6" containerName="mount-cgroup" Jan 13 20:09:50.936032 kubelet[3649]: E0113 20:09:50.934077 3649 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cecb4869-cc53-4d7f-9f23-2b1d7002f5e6" containerName="mount-bpf-fs" Jan 13 20:09:50.936032 kubelet[3649]: E0113 20:09:50.934098 3649 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cecb4869-cc53-4d7f-9f23-2b1d7002f5e6" containerName="cilium-agent" Jan 13 20:09:50.936032 kubelet[3649]: E0113 20:09:50.934154 3649 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="107e644e-3160-4704-9289-0520c79b86dd" containerName="cilium-operator" Jan 13 20:09:50.936032 kubelet[3649]: E0113 20:09:50.934175 3649 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cecb4869-cc53-4d7f-9f23-2b1d7002f5e6" containerName="apply-sysctl-overwrites" Jan 13 20:09:50.936032 kubelet[3649]: E0113 20:09:50.934226 3649 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cecb4869-cc53-4d7f-9f23-2b1d7002f5e6" containerName="clean-cilium-state" Jan 13 20:09:50.936032 kubelet[3649]: I0113 20:09:50.934275 3649 memory_manager.go:354] "RemoveStaleState removing state" podUID="107e644e-3160-4704-9289-0520c79b86dd" containerName="cilium-operator" Jan 13 20:09:50.936032 kubelet[3649]: I0113 20:09:50.934320 3649 memory_manager.go:354] "RemoveStaleState removing state" podUID="cecb4869-cc53-4d7f-9f23-2b1d7002f5e6" containerName="cilium-agent" Jan 13 20:09:50.958103 systemd[1]: session-26.scope: Deactivated successfully. Jan 13 20:09:50.971202 systemd-logind[2058]: Session 26 logged out. Waiting for processes to exit. Jan 13 20:09:50.983276 systemd[1]: Started sshd@26-172.31.21.202:22-139.178.68.195:51566.service - OpenSSH per-connection server daemon (139.178.68.195:51566). Jan 13 20:09:50.987905 systemd-logind[2058]: Removed session 26. 
Jan 13 20:09:51.047825 kubelet[3649]: I0113 20:09:51.047751 3649 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/187960d7-e398-44b3-a5f1-31b18bc60f51-bpf-maps\") pod \"cilium-f95rx\" (UID: \"187960d7-e398-44b3-a5f1-31b18bc60f51\") " pod="kube-system/cilium-f95rx" Jan 13 20:09:51.049657 kubelet[3649]: I0113 20:09:51.048188 3649 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/187960d7-e398-44b3-a5f1-31b18bc60f51-etc-cni-netd\") pod \"cilium-f95rx\" (UID: \"187960d7-e398-44b3-a5f1-31b18bc60f51\") " pod="kube-system/cilium-f95rx" Jan 13 20:09:51.049657 kubelet[3649]: I0113 20:09:51.048264 3649 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/187960d7-e398-44b3-a5f1-31b18bc60f51-host-proc-sys-kernel\") pod \"cilium-f95rx\" (UID: \"187960d7-e398-44b3-a5f1-31b18bc60f51\") " pod="kube-system/cilium-f95rx" Jan 13 20:09:51.049657 kubelet[3649]: I0113 20:09:51.048390 3649 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/187960d7-e398-44b3-a5f1-31b18bc60f51-lib-modules\") pod \"cilium-f95rx\" (UID: \"187960d7-e398-44b3-a5f1-31b18bc60f51\") " pod="kube-system/cilium-f95rx" Jan 13 20:09:51.049657 kubelet[3649]: I0113 20:09:51.048443 3649 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/187960d7-e398-44b3-a5f1-31b18bc60f51-clustermesh-secrets\") pod \"cilium-f95rx\" (UID: \"187960d7-e398-44b3-a5f1-31b18bc60f51\") " pod="kube-system/cilium-f95rx" Jan 13 20:09:51.049657 kubelet[3649]: I0113 20:09:51.048498 3649 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/187960d7-e398-44b3-a5f1-31b18bc60f51-cni-path\") pod \"cilium-f95rx\" (UID: \"187960d7-e398-44b3-a5f1-31b18bc60f51\") " pod="kube-system/cilium-f95rx" Jan 13 20:09:51.049657 kubelet[3649]: I0113 20:09:51.048545 3649 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/187960d7-e398-44b3-a5f1-31b18bc60f51-cilium-config-path\") pod \"cilium-f95rx\" (UID: \"187960d7-e398-44b3-a5f1-31b18bc60f51\") " pod="kube-system/cilium-f95rx" Jan 13 20:09:51.052963 kubelet[3649]: I0113 20:09:51.048588 3649 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/187960d7-e398-44b3-a5f1-31b18bc60f51-cilium-run\") pod \"cilium-f95rx\" (UID: \"187960d7-e398-44b3-a5f1-31b18bc60f51\") " pod="kube-system/cilium-f95rx" Jan 13 20:09:51.052963 kubelet[3649]: I0113 20:09:51.048631 3649 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/187960d7-e398-44b3-a5f1-31b18bc60f51-cilium-cgroup\") pod \"cilium-f95rx\" (UID: \"187960d7-e398-44b3-a5f1-31b18bc60f51\") " pod="kube-system/cilium-f95rx" Jan 13 20:09:51.052963 kubelet[3649]: I0113 20:09:51.048672 3649 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/187960d7-e398-44b3-a5f1-31b18bc60f51-xtables-lock\") pod \"cilium-f95rx\" (UID: \"187960d7-e398-44b3-a5f1-31b18bc60f51\") " pod="kube-system/cilium-f95rx" Jan 13 20:09:51.052963 kubelet[3649]: I0113 20:09:51.048716 3649 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/187960d7-e398-44b3-a5f1-31b18bc60f51-host-proc-sys-net\") pod \"cilium-f95rx\" (UID: \"187960d7-e398-44b3-a5f1-31b18bc60f51\") " pod="kube-system/cilium-f95rx" Jan 13 20:09:51.052963 kubelet[3649]: I0113 20:09:51.048760 3649 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/187960d7-e398-44b3-a5f1-31b18bc60f51-hubble-tls\") pod \"cilium-f95rx\" (UID: \"187960d7-e398-44b3-a5f1-31b18bc60f51\") " pod="kube-system/cilium-f95rx" Jan 13 20:09:51.052963 kubelet[3649]: I0113 20:09:51.048823 3649 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/187960d7-e398-44b3-a5f1-31b18bc60f51-hostproc\") pod \"cilium-f95rx\" (UID: \"187960d7-e398-44b3-a5f1-31b18bc60f51\") " pod="kube-system/cilium-f95rx" Jan 13 20:09:51.053302 kubelet[3649]: I0113 20:09:51.048875 3649 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/187960d7-e398-44b3-a5f1-31b18bc60f51-cilium-ipsec-secrets\") pod \"cilium-f95rx\" (UID: \"187960d7-e398-44b3-a5f1-31b18bc60f51\") " pod="kube-system/cilium-f95rx" Jan 13 20:09:51.053302 kubelet[3649]: I0113 20:09:51.048918 3649 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjwxn\" (UniqueName: \"kubernetes.io/projected/187960d7-e398-44b3-a5f1-31b18bc60f51-kube-api-access-xjwxn\") pod \"cilium-f95rx\" (UID: \"187960d7-e398-44b3-a5f1-31b18bc60f51\") " pod="kube-system/cilium-f95rx" Jan 13 20:09:51.252318 containerd[2089]: time="2025-01-13T20:09:51.252249273Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-f95rx,Uid:187960d7-e398-44b3-a5f1-31b18bc60f51,Namespace:kube-system,Attempt:0,}" Jan 13 20:09:51.254505 sshd[5451]: Accepted publickey for core from 139.178.68.195 port 51566 ssh2: RSA SHA256:dyUqkUNC7j9I+iFvInUHdwtzQ+aO/14TB1/ljGdoY9k Jan 13 20:09:51.257731 sshd-session[5451]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:09:51.271323 systemd-logind[2058]: New session 27 of user core. Jan 13 20:09:51.286041 systemd[1]: Started session-27.scope - Session 27 of User core. Jan 13 20:09:51.307471 containerd[2089]: time="2025-01-13T20:09:51.307299741Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:09:51.307471 containerd[2089]: time="2025-01-13T20:09:51.307416909Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:09:51.308988 containerd[2089]: time="2025-01-13T20:09:51.307455117Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:09:51.308988 containerd[2089]: time="2025-01-13T20:09:51.307616433Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:09:51.382611 containerd[2089]: time="2025-01-13T20:09:51.382533573Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-f95rx,Uid:187960d7-e398-44b3-a5f1-31b18bc60f51,Namespace:kube-system,Attempt:0,} returns sandbox id \"1a348e749538656e902bf3b59432f58665b1cd7263a3f0e5f8cac0f035df0feb\"" Jan 13 20:09:51.391488 containerd[2089]: time="2025-01-13T20:09:51.391411017Z" level=info msg="CreateContainer within sandbox \"1a348e749538656e902bf3b59432f58665b1cd7263a3f0e5f8cac0f035df0feb\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 13 20:09:51.415710 containerd[2089]: time="2025-01-13T20:09:51.415639341Z" level=info msg="CreateContainer within sandbox \"1a348e749538656e902bf3b59432f58665b1cd7263a3f0e5f8cac0f035df0feb\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"1cb34f58a8455f4670f508fcb3bf3cdde4a701799b87729c2e64621fc6353c3b\"" Jan 13 20:09:51.418554 sshd[5467]: Connection closed by 139.178.68.195 port 51566 Jan 13 20:09:51.419386 sshd-session[5451]: pam_unix(sshd:session): session closed for user core Jan 13 20:09:51.420356 containerd[2089]: time="2025-01-13T20:09:51.419384709Z" level=info msg="StartContainer for \"1cb34f58a8455f4670f508fcb3bf3cdde4a701799b87729c2e64621fc6353c3b\"" Jan 13 20:09:51.431145 systemd[1]: sshd@26-172.31.21.202:22-139.178.68.195:51566.service: Deactivated successfully. Jan 13 20:09:51.441282 systemd[1]: session-27.scope: Deactivated successfully. Jan 13 20:09:51.442242 systemd-logind[2058]: Session 27 logged out. Waiting for processes to exit. Jan 13 20:09:51.457597 systemd[1]: Started sshd@27-172.31.21.202:22-139.178.68.195:51568.service - OpenSSH per-connection server daemon (139.178.68.195:51568). Jan 13 20:09:51.459826 systemd-logind[2058]: Removed session 27. Jan 13 20:09:51.554999 containerd[2089]: time="2025-01-13T20:09:51.554698762Z" level=info msg="StartContainer for \"1cb34f58a8455f4670f508fcb3bf3cdde4a701799b87729c2e64621fc6353c3b\" returns successfully" Jan 13 20:09:51.638683 containerd[2089]: time="2025-01-13T20:09:51.638570218Z" level=info msg="shim disconnected" id=1cb34f58a8455f4670f508fcb3bf3cdde4a701799b87729c2e64621fc6353c3b namespace=k8s.io Jan 13 20:09:51.638683 containerd[2089]: time="2025-01-13T20:09:51.638645422Z" level=warning msg="cleaning up after shim disconnected" id=1cb34f58a8455f4670f508fcb3bf3cdde4a701799b87729c2e64621fc6353c3b namespace=k8s.io Jan 13 20:09:51.638683 containerd[2089]: time="2025-01-13T20:09:51.638670370Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:09:51.666501 sshd[5511]: Accepted publickey for core from 139.178.68.195 port 51568 ssh2: RSA SHA256:dyUqkUNC7j9I+iFvInUHdwtzQ+aO/14TB1/ljGdoY9k Jan 13 20:09:51.669146 sshd-session[5511]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:09:51.682070 systemd-logind[2058]: New session 28 of user core. Jan 13 20:09:51.688363 systemd[1]: Started session-28.scope - Session 28 of User core. 
Jan 13 20:09:52.392783 kubelet[3649]: I0113 20:09:52.392724 3649 setters.go:568] "Node became not ready" node="ip-172-31-21-202" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-13T20:09:52Z","lastTransitionTime":"2025-01-13T20:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jan 13 20:09:52.469855 containerd[2089]: time="2025-01-13T20:09:52.467724143Z" level=info msg="CreateContainer within sandbox \"1a348e749538656e902bf3b59432f58665b1cd7263a3f0e5f8cac0f035df0feb\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 13 20:09:52.497638 containerd[2089]: time="2025-01-13T20:09:52.497377871Z" level=info msg="CreateContainer within sandbox \"1a348e749538656e902bf3b59432f58665b1cd7263a3f0e5f8cac0f035df0feb\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"4df18da4cbaba2344c31e4233a7db9f5065f92f374a91b5297154ca3c5fe2fb0\"" Jan 13 20:09:52.500765 containerd[2089]: time="2025-01-13T20:09:52.500247059Z" level=info msg="StartContainer for \"4df18da4cbaba2344c31e4233a7db9f5065f92f374a91b5297154ca3c5fe2fb0\"" Jan 13 20:09:52.600243 containerd[2089]: time="2025-01-13T20:09:52.600055439Z" level=info msg="StartContainer for \"4df18da4cbaba2344c31e4233a7db9f5065f92f374a91b5297154ca3c5fe2fb0\" returns successfully" Jan 13 20:09:52.657325 containerd[2089]: time="2025-01-13T20:09:52.657005735Z" level=info msg="shim disconnected" id=4df18da4cbaba2344c31e4233a7db9f5065f92f374a91b5297154ca3c5fe2fb0 namespace=k8s.io Jan 13 20:09:52.657325 containerd[2089]: time="2025-01-13T20:09:52.657080543Z" level=warning msg="cleaning up after shim disconnected" id=4df18da4cbaba2344c31e4233a7db9f5065f92f374a91b5297154ca3c5fe2fb0 namespace=k8s.io Jan 13 20:09:52.657325 containerd[2089]: time="2025-01-13T20:09:52.657100367Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:09:53.158225 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4df18da4cbaba2344c31e4233a7db9f5065f92f374a91b5297154ca3c5fe2fb0-rootfs.mount: Deactivated successfully. 
Jan 13 20:09:53.479482 containerd[2089]: time="2025-01-13T20:09:53.478758936Z" level=info msg="CreateContainer within sandbox \"1a348e749538656e902bf3b59432f58665b1cd7263a3f0e5f8cac0f035df0feb\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 13 20:09:53.516674 containerd[2089]: time="2025-01-13T20:09:53.516587520Z" level=info msg="CreateContainer within sandbox \"1a348e749538656e902bf3b59432f58665b1cd7263a3f0e5f8cac0f035df0feb\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"9ad0c90f52630209656b8e8c4c51c6799bc8fe4754420275c8fb741011ef56a7\"" Jan 13 20:09:53.518428 containerd[2089]: time="2025-01-13T20:09:53.518072580Z" level=info msg="StartContainer for \"9ad0c90f52630209656b8e8c4c51c6799bc8fe4754420275c8fb741011ef56a7\"" Jan 13 20:09:53.637363 containerd[2089]: time="2025-01-13T20:09:53.637255932Z" level=info msg="StartContainer for \"9ad0c90f52630209656b8e8c4c51c6799bc8fe4754420275c8fb741011ef56a7\" returns successfully" Jan 13 20:09:53.686318 containerd[2089]: time="2025-01-13T20:09:53.686210581Z" level=info msg="shim disconnected" id=9ad0c90f52630209656b8e8c4c51c6799bc8fe4754420275c8fb741011ef56a7 namespace=k8s.io Jan 13 20:09:53.686318 containerd[2089]: time="2025-01-13T20:09:53.686308237Z" level=warning msg="cleaning up after shim disconnected" id=9ad0c90f52630209656b8e8c4c51c6799bc8fe4754420275c8fb741011ef56a7 namespace=k8s.io Jan 13 20:09:53.686611 containerd[2089]: time="2025-01-13T20:09:53.686329237Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:09:54.160306 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9ad0c90f52630209656b8e8c4c51c6799bc8fe4754420275c8fb741011ef56a7-rootfs.mount: Deactivated successfully. Jan 13 20:09:54.484368 containerd[2089]: time="2025-01-13T20:09:54.484303321Z" level=info msg="CreateContainer within sandbox \"1a348e749538656e902bf3b59432f58665b1cd7263a3f0e5f8cac0f035df0feb\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 13 20:09:54.518894 containerd[2089]: time="2025-01-13T20:09:54.515159305Z" level=info msg="CreateContainer within sandbox \"1a348e749538656e902bf3b59432f58665b1cd7263a3f0e5f8cac0f035df0feb\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"d30e722b39acc33a53d8519d10bb95471c0071f217e60fc0043f27444a55d1b4\"" Jan 13 20:09:54.518894 containerd[2089]: time="2025-01-13T20:09:54.518347177Z" level=info msg="StartContainer for \"d30e722b39acc33a53d8519d10bb95471c0071f217e60fc0043f27444a55d1b4\"" Jan 13 20:09:54.621429 containerd[2089]: time="2025-01-13T20:09:54.621355129Z" level=info msg="StartContainer for \"d30e722b39acc33a53d8519d10bb95471c0071f217e60fc0043f27444a55d1b4\" returns successfully" Jan 13 20:09:54.674452 containerd[2089]: time="2025-01-13T20:09:54.674381186Z" level=info msg="shim disconnected" id=d30e722b39acc33a53d8519d10bb95471c0071f217e60fc0043f27444a55d1b4 namespace=k8s.io Jan 13 20:09:54.674962 containerd[2089]: time="2025-01-13T20:09:54.674719274Z" level=warning msg="cleaning up after shim disconnected" id=d30e722b39acc33a53d8519d10bb95471c0071f217e60fc0043f27444a55d1b4 namespace=k8s.io Jan 13 20:09:54.674962 containerd[2089]: time="2025-01-13T20:09:54.674746766Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:09:55.070531 kubelet[3649]: E0113 20:09:55.070468 3649 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 13 
20:09:55.157615 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d30e722b39acc33a53d8519d10bb95471c0071f217e60fc0043f27444a55d1b4-rootfs.mount: Deactivated successfully. Jan 13 20:09:55.508995 containerd[2089]: time="2025-01-13T20:09:55.508929086Z" level=info msg="CreateContainer within sandbox \"1a348e749538656e902bf3b59432f58665b1cd7263a3f0e5f8cac0f035df0feb\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 13 20:09:55.565933 containerd[2089]: time="2025-01-13T20:09:55.565553570Z" level=info msg="CreateContainer within sandbox \"1a348e749538656e902bf3b59432f58665b1cd7263a3f0e5f8cac0f035df0feb\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"d8f4c8e346bc0f9177271c8ebb03f2d7f14ef4af018265beb9f06821e96dbd0f\"" Jan 13 20:09:55.572551 containerd[2089]: time="2025-01-13T20:09:55.572481662Z" level=info msg="StartContainer for \"d8f4c8e346bc0f9177271c8ebb03f2d7f14ef4af018265beb9f06821e96dbd0f\"" Jan 13 20:09:55.723348 containerd[2089]: time="2025-01-13T20:09:55.722991615Z" level=info msg="StartContainer for \"d8f4c8e346bc0f9177271c8ebb03f2d7f14ef4af018265beb9f06821e96dbd0f\" returns successfully" Jan 13 20:09:56.494846 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Jan 13 20:09:56.548358 kubelet[3649]: I0113 20:09:56.548294 3649 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-f95rx" podStartSLOduration=6.548213103 podStartE2EDuration="6.548213103s" podCreationTimestamp="2025-01-13 20:09:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:09:56.547489995 +0000 UTC m=+116.974275018" watchObservedRunningTime="2025-01-13 20:09:56.548213103 +0000 UTC m=+116.974998138" Jan 13 20:09:59.846527 kubelet[3649]: E0113 20:09:59.846456 3649 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-76f75df574-b8n94" podUID="0a7d9249-19a1-4ee4-88b4-198c864038a6" Jan 13 20:09:59.868566 containerd[2089]: time="2025-01-13T20:09:59.868495567Z" level=info msg="StopPodSandbox for \"537d853197c791f4609a76ea7850936532adf8764f53005b7579390a962ad241\"" Jan 13 20:09:59.871076 containerd[2089]: time="2025-01-13T20:09:59.868645435Z" level=info msg="TearDown network for sandbox \"537d853197c791f4609a76ea7850936532adf8764f53005b7579390a962ad241\" successfully" Jan 13 20:09:59.871076 containerd[2089]: time="2025-01-13T20:09:59.868669003Z" level=info msg="StopPodSandbox for \"537d853197c791f4609a76ea7850936532adf8764f53005b7579390a962ad241\" returns successfully" Jan 13 20:09:59.871076 containerd[2089]: time="2025-01-13T20:09:59.870092731Z" level=info msg="RemovePodSandbox for \"537d853197c791f4609a76ea7850936532adf8764f53005b7579390a962ad241\"" Jan 13 20:09:59.871076 containerd[2089]: time="2025-01-13T20:09:59.870251719Z" level=info msg="Forcibly stopping sandbox \"537d853197c791f4609a76ea7850936532adf8764f53005b7579390a962ad241\"" Jan 13 20:09:59.871076 containerd[2089]: time="2025-01-13T20:09:59.870424519Z" level=info msg="TearDown network for sandbox \"537d853197c791f4609a76ea7850936532adf8764f53005b7579390a962ad241\" successfully" Jan 13 20:09:59.876770 containerd[2089]: time="2025-01-13T20:09:59.876634855Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID 
\"537d853197c791f4609a76ea7850936532adf8764f53005b7579390a962ad241\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 13 20:09:59.878150 containerd[2089]: time="2025-01-13T20:09:59.876738655Z" level=info msg="RemovePodSandbox \"537d853197c791f4609a76ea7850936532adf8764f53005b7579390a962ad241\" returns successfully" Jan 13 20:09:59.878150 containerd[2089]: time="2025-01-13T20:09:59.877666495Z" level=info msg="StopPodSandbox for \"a739bf7cc08951ac4b3aa4ac88d501499d0e95803535736aa84711ac24e3762c\"" Jan 13 20:09:59.878150 containerd[2089]: time="2025-01-13T20:09:59.877840651Z" level=info msg="TearDown network for sandbox \"a739bf7cc08951ac4b3aa4ac88d501499d0e95803535736aa84711ac24e3762c\" successfully" Jan 13 20:09:59.878150 containerd[2089]: time="2025-01-13T20:09:59.877868275Z" level=info msg="StopPodSandbox for \"a739bf7cc08951ac4b3aa4ac88d501499d0e95803535736aa84711ac24e3762c\" returns successfully" Jan 13 20:09:59.879331 containerd[2089]: time="2025-01-13T20:09:59.879130927Z" level=info msg="RemovePodSandbox for \"a739bf7cc08951ac4b3aa4ac88d501499d0e95803535736aa84711ac24e3762c\"" Jan 13 20:09:59.879331 containerd[2089]: time="2025-01-13T20:09:59.879184483Z" level=info msg="Forcibly stopping sandbox \"a739bf7cc08951ac4b3aa4ac88d501499d0e95803535736aa84711ac24e3762c\"" Jan 13 20:09:59.879797 containerd[2089]: time="2025-01-13T20:09:59.879692227Z" level=info msg="TearDown network for sandbox \"a739bf7cc08951ac4b3aa4ac88d501499d0e95803535736aa84711ac24e3762c\" successfully" Jan 13 20:09:59.886165 containerd[2089]: time="2025-01-13T20:09:59.885796099Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a739bf7cc08951ac4b3aa4ac88d501499d0e95803535736aa84711ac24e3762c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 13 20:09:59.886165 containerd[2089]: time="2025-01-13T20:09:59.886007767Z" level=info msg="RemovePodSandbox \"a739bf7cc08951ac4b3aa4ac88d501499d0e95803535736aa84711ac24e3762c\" returns successfully" Jan 13 20:10:00.721314 systemd-networkd[1604]: lxc_health: Link UP Jan 13 20:10:00.738318 systemd-networkd[1604]: lxc_health: Gained carrier Jan 13 20:10:00.747451 (udev-worker)[6310]: Network interface NamePolicy= disabled on kernel command line. Jan 13 20:10:01.819173 systemd-networkd[1604]: lxc_health: Gained IPv6LL Jan 13 20:10:04.431683 ntpd[2038]: Listen normally on 13 lxc_health [fe80::b854:a4ff:fe2a:20b0%14]:123 Jan 13 20:10:04.432295 ntpd[2038]: 13 Jan 20:10:04 ntpd[2038]: Listen normally on 13 lxc_health [fe80::b854:a4ff:fe2a:20b0%14]:123 Jan 13 20:10:05.157376 systemd[1]: run-containerd-runc-k8s.io-d8f4c8e346bc0f9177271c8ebb03f2d7f14ef4af018265beb9f06821e96dbd0f-runc.CFWMwv.mount: Deactivated successfully. Jan 13 20:10:07.561356 sshd[5573]: Connection closed by 139.178.68.195 port 51568 Jan 13 20:10:07.562555 sshd-session[5511]: pam_unix(sshd:session): session closed for user core Jan 13 20:10:07.569321 systemd[1]: sshd@27-172.31.21.202:22-139.178.68.195:51568.service: Deactivated successfully. Jan 13 20:10:07.580596 systemd-logind[2058]: Session 28 logged out. Waiting for processes to exit. Jan 13 20:10:07.581696 systemd[1]: session-28.scope: Deactivated successfully. Jan 13 20:10:07.589985 systemd-logind[2058]: Removed session 28. Jan 13 20:10:22.504739 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ca76a4c11db3c4baa3930486910eead16bfba0680b7e0fdf86156a8f3f86aade-rootfs.mount: Deactivated successfully. 
Jan 13 20:10:22.528512 containerd[2089]: time="2025-01-13T20:10:22.528394324Z" level=info msg="shim disconnected" id=ca76a4c11db3c4baa3930486910eead16bfba0680b7e0fdf86156a8f3f86aade namespace=k8s.io Jan 13 20:10:22.530254 containerd[2089]: time="2025-01-13T20:10:22.528544996Z" level=warning msg="cleaning up after shim disconnected" id=ca76a4c11db3c4baa3930486910eead16bfba0680b7e0fdf86156a8f3f86aade namespace=k8s.io Jan 13 20:10:22.530254 containerd[2089]: time="2025-01-13T20:10:22.528568288Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:10:22.596018 kubelet[3649]: I0113 20:10:22.595849 3649 scope.go:117] "RemoveContainer" containerID="ca76a4c11db3c4baa3930486910eead16bfba0680b7e0fdf86156a8f3f86aade" Jan 13 20:10:22.600512 containerd[2089]: time="2025-01-13T20:10:22.600162028Z" level=info msg="CreateContainer within sandbox \"d47e02ed24d7f18fef721b76e01ab1da0d1e44955a3576b94c9649879517149f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Jan 13 20:10:22.626363 containerd[2089]: time="2025-01-13T20:10:22.626182564Z" level=info msg="CreateContainer within sandbox \"d47e02ed24d7f18fef721b76e01ab1da0d1e44955a3576b94c9649879517149f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"919b394b0eaf92477462797bc847140107c5b6a81312fbd3647e5f5696e93aa4\"" Jan 13 20:10:22.627875 containerd[2089]: time="2025-01-13T20:10:22.627003016Z" level=info msg="StartContainer for \"919b394b0eaf92477462797bc847140107c5b6a81312fbd3647e5f5696e93aa4\"" Jan 13 20:10:22.708738 kubelet[3649]: E0113 20:10:22.708388 3649 controller.go:195] "Failed to update lease" err="Put \"https://172.31.21.202:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-202?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 13 20:10:22.745756 containerd[2089]: time="2025-01-13T20:10:22.745675865Z" level=info msg="StartContainer for \"919b394b0eaf92477462797bc847140107c5b6a81312fbd3647e5f5696e93aa4\" returns successfully" Jan 13 20:10:27.406600 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dff87eaea4bb084a1eb3e888a45dc6f8e261b648b4808c6742450116c29b3cc1-rootfs.mount: Deactivated successfully. 
Jan 13 20:10:27.418275 containerd[2089]: time="2025-01-13T20:10:27.417893936Z" level=info msg="shim disconnected" id=dff87eaea4bb084a1eb3e888a45dc6f8e261b648b4808c6742450116c29b3cc1 namespace=k8s.io Jan 13 20:10:27.418275 containerd[2089]: time="2025-01-13T20:10:27.418040192Z" level=warning msg="cleaning up after shim disconnected" id=dff87eaea4bb084a1eb3e888a45dc6f8e261b648b4808c6742450116c29b3cc1 namespace=k8s.io Jan 13 20:10:27.418275 containerd[2089]: time="2025-01-13T20:10:27.418061864Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:10:27.615377 kubelet[3649]: I0113 20:10:27.615252 3649 scope.go:117] "RemoveContainer" containerID="dff87eaea4bb084a1eb3e888a45dc6f8e261b648b4808c6742450116c29b3cc1" Jan 13 20:10:27.620018 containerd[2089]: time="2025-01-13T20:10:27.619795401Z" level=info msg="CreateContainer within sandbox \"df40b16e40783d61869bfe7ae39eeeb9e7a12705c51dcecdc5db3fc975d89396\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Jan 13 20:10:27.651034 containerd[2089]: time="2025-01-13T20:10:27.650955189Z" level=info msg="CreateContainer within sandbox \"df40b16e40783d61869bfe7ae39eeeb9e7a12705c51dcecdc5db3fc975d89396\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"5ef959a43aa433d47bc7519c9ef481f9934da852b96b729d3b3d110ac7eeb7c3\"" Jan 13 20:10:27.652165 containerd[2089]: time="2025-01-13T20:10:27.651943017Z" level=info msg="StartContainer for \"5ef959a43aa433d47bc7519c9ef481f9934da852b96b729d3b3d110ac7eeb7c3\"" Jan 13 20:10:27.767778 containerd[2089]: time="2025-01-13T20:10:27.767605954Z" level=info msg="StartContainer for \"5ef959a43aa433d47bc7519c9ef481f9934da852b96b729d3b3d110ac7eeb7c3\" returns successfully"