May 17 00:05:00.194733 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083] May 17 00:05:00.194778 kernel: Linux version 6.6.90-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri May 16 22:39:35 -00 2025 May 17 00:05:00.194803 kernel: KASLR disabled due to lack of seed May 17 00:05:00.194820 kernel: efi: EFI v2.7 by EDK II May 17 00:05:00.194836 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7b000a98 MEMRESERVE=0x7852ee18 May 17 00:05:00.194852 kernel: ACPI: Early table checksum verification disabled May 17 00:05:00.194869 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON) May 17 00:05:00.194885 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013) May 17 00:05:00.194901 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001) May 17 00:05:00.194917 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527) May 17 00:05:00.194937 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001) May 17 00:05:00.194953 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001) May 17 00:05:00.194968 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001) May 17 00:05:00.195003 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001) May 17 00:05:00.195028 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) May 17 00:05:00.195052 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001) May 17 00:05:00.195070 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001) May 17 00:05:00.195087 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200 May 17 00:05:00.195104 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200') May 17 00:05:00.195120 kernel: printk: bootconsole [uart0] enabled May 17 00:05:00.195137 kernel: NUMA: Failed to initialise from firmware May 17 00:05:00.195153 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff] May 17 00:05:00.195170 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff] May 17 00:05:00.195186 kernel: Zone ranges: May 17 00:05:00.195203 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] May 17 00:05:00.195219 kernel: DMA32 empty May 17 00:05:00.195239 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff] May 17 00:05:00.195256 kernel: Movable zone start for each node May 17 00:05:00.195272 kernel: Early memory node ranges May 17 00:05:00.195288 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff] May 17 00:05:00.195305 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff] May 17 00:05:00.195322 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff] May 17 00:05:00.195338 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff] May 17 00:05:00.195354 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff] May 17 00:05:00.195371 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff] May 17 00:05:00.195387 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff] May 17 00:05:00.195403 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff] May 17 00:05:00.195420 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff] May 17 00:05:00.195441 kernel: On 
node 0, zone Normal: 8192 pages in unavailable ranges May 17 00:05:00.195458 kernel: psci: probing for conduit method from ACPI. May 17 00:05:00.195481 kernel: psci: PSCIv1.0 detected in firmware. May 17 00:05:00.195499 kernel: psci: Using standard PSCI v0.2 function IDs May 17 00:05:00.195517 kernel: psci: Trusted OS migration not required May 17 00:05:00.195538 kernel: psci: SMC Calling Convention v1.1 May 17 00:05:00.195555 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976 May 17 00:05:00.195573 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096 May 17 00:05:00.195590 kernel: pcpu-alloc: [0] 0 [0] 1 May 17 00:05:00.195608 kernel: Detected PIPT I-cache on CPU0 May 17 00:05:00.195625 kernel: CPU features: detected: GIC system register CPU interface May 17 00:05:00.195642 kernel: CPU features: detected: Spectre-v2 May 17 00:05:00.195660 kernel: CPU features: detected: Spectre-v3a May 17 00:05:00.195677 kernel: CPU features: detected: Spectre-BHB May 17 00:05:00.195694 kernel: CPU features: detected: ARM erratum 1742098 May 17 00:05:00.195712 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923 May 17 00:05:00.195733 kernel: alternatives: applying boot alternatives May 17 00:05:00.195753 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=3554ca41327a0c5ba7e4ac1b3147487d73f35805806dcb20264133a9c301eb5d May 17 00:05:00.195772 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 17 00:05:00.195790 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) May 17 00:05:00.195807 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 17 00:05:00.195825 kernel: Fallback order for Node 0: 0 May 17 00:05:00.195842 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872 May 17 00:05:00.195860 kernel: Policy zone: Normal May 17 00:05:00.195877 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 17 00:05:00.195894 kernel: software IO TLB: area num 2. May 17 00:05:00.195912 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB) May 17 00:05:00.195936 kernel: Memory: 3820152K/4030464K available (10240K kernel code, 2186K rwdata, 8104K rodata, 39424K init, 897K bss, 210312K reserved, 0K cma-reserved) May 17 00:05:00.195954 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 May 17 00:05:00.195972 kernel: rcu: Preemptible hierarchical RCU implementation. May 17 00:05:00.200058 kernel: rcu: RCU event tracing is enabled. May 17 00:05:00.200102 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. May 17 00:05:00.200122 kernel: Trampoline variant of Tasks RCU enabled. May 17 00:05:00.200141 kernel: Tracing variant of Tasks RCU enabled. May 17 00:05:00.200160 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
May 17 00:05:00.200178 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 May 17 00:05:00.200196 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 May 17 00:05:00.200214 kernel: GICv3: 96 SPIs implemented May 17 00:05:00.200241 kernel: GICv3: 0 Extended SPIs implemented May 17 00:05:00.200259 kernel: Root IRQ handler: gic_handle_irq May 17 00:05:00.200277 kernel: GICv3: GICv3 features: 16 PPIs May 17 00:05:00.200294 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000 May 17 00:05:00.200312 kernel: ITS [mem 0x10080000-0x1009ffff] May 17 00:05:00.200330 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1) May 17 00:05:00.200348 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1) May 17 00:05:00.200365 kernel: GICv3: using LPI property table @0x00000004000d0000 May 17 00:05:00.200383 kernel: ITS: Using hypervisor restricted LPI range [128] May 17 00:05:00.200401 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000 May 17 00:05:00.200418 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. May 17 00:05:00.200436 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt). May 17 00:05:00.200460 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns May 17 00:05:00.200477 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns May 17 00:05:00.200495 kernel: Console: colour dummy device 80x25 May 17 00:05:00.200513 kernel: printk: console [tty1] enabled May 17 00:05:00.200531 kernel: ACPI: Core revision 20230628 May 17 00:05:00.200549 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333) May 17 00:05:00.200567 kernel: pid_max: default: 32768 minimum: 301 May 17 00:05:00.200585 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity May 17 00:05:00.200603 kernel: landlock: Up and running. May 17 00:05:00.200625 kernel: SELinux: Initializing. May 17 00:05:00.200643 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 17 00:05:00.200661 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 17 00:05:00.200679 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. May 17 00:05:00.200697 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. May 17 00:05:00.200715 kernel: rcu: Hierarchical SRCU implementation. May 17 00:05:00.200733 kernel: rcu: Max phase no-delay instances is 400. May 17 00:05:00.200751 kernel: Platform MSI: ITS@0x10080000 domain created May 17 00:05:00.200768 kernel: PCI/MSI: ITS@0x10080000 domain created May 17 00:05:00.200790 kernel: Remapping and enabling EFI services. May 17 00:05:00.200808 kernel: smp: Bringing up secondary CPUs ... May 17 00:05:00.200826 kernel: Detected PIPT I-cache on CPU1 May 17 00:05:00.200844 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000 May 17 00:05:00.200862 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000 May 17 00:05:00.200880 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083] May 17 00:05:00.200898 kernel: smp: Brought up 1 node, 2 CPUs May 17 00:05:00.200916 kernel: SMP: Total of 2 processors activated. 
May 17 00:05:00.200933 kernel: CPU features: detected: 32-bit EL0 Support May 17 00:05:00.200955 kernel: CPU features: detected: 32-bit EL1 Support May 17 00:05:00.200973 kernel: CPU features: detected: CRC32 instructions May 17 00:05:00.201026 kernel: CPU: All CPU(s) started at EL1 May 17 00:05:00.201060 kernel: alternatives: applying system-wide alternatives May 17 00:05:00.201083 kernel: devtmpfs: initialized May 17 00:05:00.201102 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 17 00:05:00.201121 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) May 17 00:05:00.201140 kernel: pinctrl core: initialized pinctrl subsystem May 17 00:05:00.201158 kernel: SMBIOS 3.0.0 present. May 17 00:05:00.201177 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018 May 17 00:05:00.201200 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 17 00:05:00.201219 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations May 17 00:05:00.201238 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations May 17 00:05:00.201256 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations May 17 00:05:00.201275 kernel: audit: initializing netlink subsys (disabled) May 17 00:05:00.201294 kernel: audit: type=2000 audit(0.284:1): state=initialized audit_enabled=0 res=1 May 17 00:05:00.201312 kernel: thermal_sys: Registered thermal governor 'step_wise' May 17 00:05:00.201335 kernel: cpuidle: using governor menu May 17 00:05:00.201354 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. May 17 00:05:00.201372 kernel: ASID allocator initialised with 65536 entries May 17 00:05:00.201391 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 17 00:05:00.201424 kernel: Serial: AMBA PL011 UART driver May 17 00:05:00.201448 kernel: Modules: 17504 pages in range for non-PLT usage May 17 00:05:00.201466 kernel: Modules: 509024 pages in range for PLT usage May 17 00:05:00.201485 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages May 17 00:05:00.201504 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page May 17 00:05:00.201528 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages May 17 00:05:00.201547 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page May 17 00:05:00.201565 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages May 17 00:05:00.201584 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page May 17 00:05:00.201602 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages May 17 00:05:00.201621 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page May 17 00:05:00.201640 kernel: ACPI: Added _OSI(Module Device) May 17 00:05:00.201658 kernel: ACPI: Added _OSI(Processor Device) May 17 00:05:00.201676 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 17 00:05:00.201699 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 17 00:05:00.201718 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 17 00:05:00.201737 kernel: ACPI: Interpreter enabled May 17 00:05:00.201755 kernel: ACPI: Using GIC for interrupt routing May 17 00:05:00.201773 kernel: ACPI: MCFG table detected, 1 entries May 17 00:05:00.201792 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f]) May 17 00:05:00.202131 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 17 00:05:00.202347 kernel: acpi 
PNP0A08:00: _OSC: platform does not support [LTR] May 17 00:05:00.202561 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] May 17 00:05:00.202766 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00 May 17 00:05:00.202977 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f] May 17 00:05:00.203033 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window] May 17 00:05:00.203053 kernel: acpiphp: Slot [1] registered May 17 00:05:00.203073 kernel: acpiphp: Slot [2] registered May 17 00:05:00.203092 kernel: acpiphp: Slot [3] registered May 17 00:05:00.203110 kernel: acpiphp: Slot [4] registered May 17 00:05:00.203135 kernel: acpiphp: Slot [5] registered May 17 00:05:00.203154 kernel: acpiphp: Slot [6] registered May 17 00:05:00.203173 kernel: acpiphp: Slot [7] registered May 17 00:05:00.203191 kernel: acpiphp: Slot [8] registered May 17 00:05:00.203209 kernel: acpiphp: Slot [9] registered May 17 00:05:00.203228 kernel: acpiphp: Slot [10] registered May 17 00:05:00.203248 kernel: acpiphp: Slot [11] registered May 17 00:05:00.203266 kernel: acpiphp: Slot [12] registered May 17 00:05:00.203284 kernel: acpiphp: Slot [13] registered May 17 00:05:00.203303 kernel: acpiphp: Slot [14] registered May 17 00:05:00.203326 kernel: acpiphp: Slot [15] registered May 17 00:05:00.203345 kernel: acpiphp: Slot [16] registered May 17 00:05:00.203364 kernel: acpiphp: Slot [17] registered May 17 00:05:00.203382 kernel: acpiphp: Slot [18] registered May 17 00:05:00.203401 kernel: acpiphp: Slot [19] registered May 17 00:05:00.203420 kernel: acpiphp: Slot [20] registered May 17 00:05:00.203438 kernel: acpiphp: Slot [21] registered May 17 00:05:00.203457 kernel: acpiphp: Slot [22] registered May 17 00:05:00.203475 kernel: acpiphp: Slot [23] registered May 17 00:05:00.203498 kernel: acpiphp: Slot [24] registered May 17 00:05:00.203517 kernel: acpiphp: Slot [25] registered May 17 00:05:00.203536 kernel: acpiphp: Slot [26] registered May 17 00:05:00.203554 kernel: acpiphp: Slot [27] registered May 17 00:05:00.203573 kernel: acpiphp: Slot [28] registered May 17 00:05:00.203591 kernel: acpiphp: Slot [29] registered May 17 00:05:00.203611 kernel: acpiphp: Slot [30] registered May 17 00:05:00.203629 kernel: acpiphp: Slot [31] registered May 17 00:05:00.203648 kernel: PCI host bridge to bus 0000:00 May 17 00:05:00.203878 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window] May 17 00:05:00.206210 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] May 17 00:05:00.206449 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window] May 17 00:05:00.206654 kernel: pci_bus 0000:00: root bus resource [bus 00-0f] May 17 00:05:00.206907 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000 May 17 00:05:00.207225 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003 May 17 00:05:00.207439 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff] May 17 00:05:00.207669 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 May 17 00:05:00.207881 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff] May 17 00:05:00.208273 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold May 17 00:05:00.208507 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 May 17 00:05:00.208716 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff] May 17 00:05:00.208921 kernel: pci 0000:00:05.0: reg 0x18: [mem 
0x80000000-0x800fffff pref] May 17 00:05:00.209166 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff] May 17 00:05:00.209378 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold May 17 00:05:00.210417 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref] May 17 00:05:00.210634 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff] May 17 00:05:00.210841 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff] May 17 00:05:00.211092 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff] May 17 00:05:00.211313 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff] May 17 00:05:00.211519 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window] May 17 00:05:00.211708 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] May 17 00:05:00.212521 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window] May 17 00:05:00.212560 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 May 17 00:05:00.212581 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 May 17 00:05:00.212600 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 May 17 00:05:00.212620 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 May 17 00:05:00.212639 kernel: iommu: Default domain type: Translated May 17 00:05:00.212658 kernel: iommu: DMA domain TLB invalidation policy: strict mode May 17 00:05:00.212686 kernel: efivars: Registered efivars operations May 17 00:05:00.212705 kernel: vgaarb: loaded May 17 00:05:00.212724 kernel: clocksource: Switched to clocksource arch_sys_counter May 17 00:05:00.212743 kernel: VFS: Disk quotas dquot_6.6.0 May 17 00:05:00.212762 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 17 00:05:00.212781 kernel: pnp: PnP ACPI init May 17 00:05:00.213042 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved May 17 00:05:00.213076 kernel: pnp: PnP ACPI: found 1 devices May 17 00:05:00.213105 kernel: NET: Registered PF_INET protocol family May 17 00:05:00.213124 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) May 17 00:05:00.213144 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) May 17 00:05:00.213164 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 17 00:05:00.213183 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) May 17 00:05:00.213202 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) May 17 00:05:00.213221 kernel: TCP: Hash tables configured (established 32768 bind 32768) May 17 00:05:00.213240 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) May 17 00:05:00.213259 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) May 17 00:05:00.213282 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 17 00:05:00.213301 kernel: PCI: CLS 0 bytes, default 64 May 17 00:05:00.213320 kernel: kvm [1]: HYP mode not available May 17 00:05:00.213339 kernel: Initialise system trusted keyrings May 17 00:05:00.213359 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 May 17 00:05:00.213378 kernel: Key type asymmetric registered May 17 00:05:00.213396 kernel: Asymmetric key parser 'x509' registered May 17 00:05:00.213437 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) May 17 00:05:00.213458 kernel: io scheduler mq-deadline registered May 17 
00:05:00.213484 kernel: io scheduler kyber registered May 17 00:05:00.213508 kernel: io scheduler bfq registered May 17 00:05:00.213745 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered May 17 00:05:00.213777 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 May 17 00:05:00.213798 kernel: ACPI: button: Power Button [PWRB] May 17 00:05:00.213821 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1 May 17 00:05:00.213841 kernel: ACPI: button: Sleep Button [SLPB] May 17 00:05:00.213860 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 17 00:05:00.213885 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 May 17 00:05:00.217936 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012) May 17 00:05:00.217998 kernel: printk: console [ttyS0] disabled May 17 00:05:00.218024 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A May 17 00:05:00.218044 kernel: printk: console [ttyS0] enabled May 17 00:05:00.218063 kernel: printk: bootconsole [uart0] disabled May 17 00:05:00.218082 kernel: thunder_xcv, ver 1.0 May 17 00:05:00.218100 kernel: thunder_bgx, ver 1.0 May 17 00:05:00.218119 kernel: nicpf, ver 1.0 May 17 00:05:00.218148 kernel: nicvf, ver 1.0 May 17 00:05:00.218387 kernel: rtc-efi rtc-efi.0: registered as rtc0 May 17 00:05:00.218581 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-17T00:04:59 UTC (1747440299) May 17 00:05:00.218608 kernel: hid: raw HID events driver (C) Jiri Kosina May 17 00:05:00.218627 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available May 17 00:05:00.218646 kernel: watchdog: Delayed init of the lockup detector failed: -19 May 17 00:05:00.218665 kernel: watchdog: Hard watchdog permanently disabled May 17 00:05:00.218684 kernel: NET: Registered PF_INET6 protocol family May 17 00:05:00.218707 kernel: Segment Routing with IPv6 May 17 00:05:00.218726 kernel: In-situ OAM (IOAM) with IPv6 May 17 00:05:00.218745 kernel: NET: Registered PF_PACKET protocol family May 17 00:05:00.218763 kernel: Key type dns_resolver registered May 17 00:05:00.218782 kernel: registered taskstats version 1 May 17 00:05:00.218800 kernel: Loading compiled-in X.509 certificates May 17 00:05:00.218819 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.90-flatcar: 02f7129968574a1ae76b1ee42e7674ea1c42071b' May 17 00:05:00.218838 kernel: Key type .fscrypt registered May 17 00:05:00.218856 kernel: Key type fscrypt-provisioning registered May 17 00:05:00.218879 kernel: ima: No TPM chip found, activating TPM-bypass! 
May 17 00:05:00.218898 kernel: ima: Allocated hash algorithm: sha1 May 17 00:05:00.218916 kernel: ima: No architecture policies found May 17 00:05:00.218935 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) May 17 00:05:00.218953 kernel: clk: Disabling unused clocks May 17 00:05:00.218972 kernel: Freeing unused kernel memory: 39424K May 17 00:05:00.219008 kernel: Run /init as init process May 17 00:05:00.219030 kernel: with arguments: May 17 00:05:00.219049 kernel: /init May 17 00:05:00.219067 kernel: with environment: May 17 00:05:00.219093 kernel: HOME=/ May 17 00:05:00.219111 kernel: TERM=linux May 17 00:05:00.219130 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 17 00:05:00.219153 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) May 17 00:05:00.219177 systemd[1]: Detected virtualization amazon. May 17 00:05:00.219198 systemd[1]: Detected architecture arm64. May 17 00:05:00.219218 systemd[1]: Running in initrd. May 17 00:05:00.219243 systemd[1]: No hostname configured, using default hostname. May 17 00:05:00.219263 systemd[1]: Hostname set to . May 17 00:05:00.219284 systemd[1]: Initializing machine ID from VM UUID. May 17 00:05:00.219304 systemd[1]: Queued start job for default target initrd.target. May 17 00:05:00.219324 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 17 00:05:00.219345 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 17 00:05:00.219367 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 17 00:05:00.219387 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 17 00:05:00.219412 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 17 00:05:00.219433 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... May 17 00:05:00.219456 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 17 00:05:00.219477 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 17 00:05:00.219498 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 17 00:05:00.219518 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 17 00:05:00.219539 systemd[1]: Reached target paths.target - Path Units. May 17 00:05:00.219564 systemd[1]: Reached target slices.target - Slice Units. May 17 00:05:00.219584 systemd[1]: Reached target swap.target - Swaps. May 17 00:05:00.219604 systemd[1]: Reached target timers.target - Timer Units. May 17 00:05:00.219625 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. May 17 00:05:00.219645 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 17 00:05:00.219665 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 17 00:05:00.219686 systemd[1]: Listening on systemd-journald.socket - Journal Socket. May 17 00:05:00.219706 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
May 17 00:05:00.219727 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 17 00:05:00.219752 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 17 00:05:00.219772 systemd[1]: Reached target sockets.target - Socket Units. May 17 00:05:00.219793 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 17 00:05:00.219814 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 17 00:05:00.219836 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 17 00:05:00.219858 systemd[1]: Starting systemd-fsck-usr.service... May 17 00:05:00.219878 systemd[1]: Starting systemd-journald.service - Journal Service... May 17 00:05:00.219899 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 17 00:05:00.219924 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 17 00:05:00.219945 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 17 00:05:00.219965 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 17 00:05:00.222034 systemd[1]: Finished systemd-fsck-usr.service. May 17 00:05:00.222127 systemd-journald[251]: Collecting audit messages is disabled. May 17 00:05:00.222182 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 17 00:05:00.222203 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 17 00:05:00.222224 kernel: Bridge firewalling registered May 17 00:05:00.222249 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 17 00:05:00.222270 systemd-journald[251]: Journal started May 17 00:05:00.222308 systemd-journald[251]: Runtime Journal (/run/log/journal/ec2b4132d2aaa7236f97cb8274973903) is 8.0M, max 75.3M, 67.3M free. May 17 00:05:00.170081 systemd-modules-load[252]: Inserted module 'overlay' May 17 00:05:00.211839 systemd-modules-load[252]: Inserted module 'br_netfilter' May 17 00:05:00.238872 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 17 00:05:00.243381 systemd[1]: Started systemd-journald.service - Journal Service. May 17 00:05:00.249974 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 17 00:05:00.252767 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 17 00:05:00.272423 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 17 00:05:00.283316 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 17 00:05:00.295243 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 17 00:05:00.296946 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 17 00:05:00.329070 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 17 00:05:00.348773 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 17 00:05:00.353436 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 17 00:05:00.370220 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
May 17 00:05:00.378275 dracut-cmdline[286]: dracut-dracut-053 May 17 00:05:00.385310 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 17 00:05:00.395563 dracut-cmdline[286]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=3554ca41327a0c5ba7e4ac1b3147487d73f35805806dcb20264133a9c301eb5d May 17 00:05:00.466204 systemd-resolved[297]: Positive Trust Anchors: May 17 00:05:00.466834 systemd-resolved[297]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 17 00:05:00.466902 systemd-resolved[297]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 17 00:05:00.562275 kernel: SCSI subsystem initialized May 17 00:05:00.570122 kernel: Loading iSCSI transport class v2.0-870. May 17 00:05:00.583125 kernel: iscsi: registered transport (tcp) May 17 00:05:00.605446 kernel: iscsi: registered transport (qla4xxx) May 17 00:05:00.605533 kernel: QLogic iSCSI HBA Driver May 17 00:05:00.687073 kernel: random: crng init done May 17 00:05:00.687424 systemd-resolved[297]: Defaulting to hostname 'linux'. May 17 00:05:00.691427 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 17 00:05:00.693803 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 17 00:05:00.720500 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 17 00:05:00.732333 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 17 00:05:00.774765 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 17 00:05:00.774855 kernel: device-mapper: uevent: version 1.0.3 May 17 00:05:00.774884 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com May 17 00:05:00.859026 kernel: raid6: neonx8 gen() 6662 MB/s May 17 00:05:00.861037 kernel: raid6: neonx4 gen() 6500 MB/s May 17 00:05:00.878019 kernel: raid6: neonx2 gen() 5440 MB/s May 17 00:05:00.895018 kernel: raid6: neonx1 gen() 3953 MB/s May 17 00:05:00.912018 kernel: raid6: int64x8 gen() 3795 MB/s May 17 00:05:00.929018 kernel: raid6: int64x4 gen() 3714 MB/s May 17 00:05:00.946018 kernel: raid6: int64x2 gen() 3596 MB/s May 17 00:05:00.963827 kernel: raid6: int64x1 gen() 2768 MB/s May 17 00:05:00.963860 kernel: raid6: using algorithm neonx8 gen() 6662 MB/s May 17 00:05:00.981823 kernel: raid6: .... 
xor() 4933 MB/s, rmw enabled May 17 00:05:00.981864 kernel: raid6: using neon recovery algorithm May 17 00:05:00.990263 kernel: xor: measuring software checksum speed May 17 00:05:00.990332 kernel: 8regs : 10957 MB/sec May 17 00:05:00.991380 kernel: 32regs : 11945 MB/sec May 17 00:05:00.992569 kernel: arm64_neon : 9580 MB/sec May 17 00:05:00.992601 kernel: xor: using function: 32regs (11945 MB/sec) May 17 00:05:01.077037 kernel: Btrfs loaded, zoned=no, fsverity=no May 17 00:05:01.095828 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 17 00:05:01.105320 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 17 00:05:01.151229 systemd-udevd[472]: Using default interface naming scheme 'v255'. May 17 00:05:01.160939 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 17 00:05:01.175507 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 17 00:05:01.207928 dracut-pre-trigger[477]: rd.md=0: removing MD RAID activation May 17 00:05:01.262730 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. May 17 00:05:01.272309 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 17 00:05:01.390522 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 17 00:05:01.404252 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 17 00:05:01.449383 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 17 00:05:01.454386 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 17 00:05:01.473636 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 17 00:05:01.489629 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 17 00:05:01.500570 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 17 00:05:01.543482 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 17 00:05:01.595841 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 May 17 00:05:01.595905 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012) May 17 00:05:01.609432 kernel: ena 0000:00:05.0: ENA device version: 0.10 May 17 00:05:01.609779 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 May 17 00:05:01.610026 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 May 17 00:05:01.611517 kernel: nvme nvme0: pci function 0000:00:04.0 May 17 00:05:01.616309 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 17 00:05:01.619600 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:bb:8c:70:6f:bb May 17 00:05:01.616678 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 17 00:05:01.629495 kernel: nvme nvme0: 2/0/0 default/read/poll queues May 17 00:05:01.626395 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 17 00:05:01.628494 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 17 00:05:01.628756 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 17 00:05:01.630977 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 17 00:05:01.644826 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. 
May 17 00:05:01.644862 kernel: GPT:9289727 != 16777215 May 17 00:05:01.644888 kernel: GPT:Alternate GPT header not at the end of the disk. May 17 00:05:01.645640 kernel: GPT:9289727 != 16777215 May 17 00:05:01.646666 kernel: GPT: Use GNU Parted to correct GPT errors. May 17 00:05:01.647574 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 May 17 00:05:01.652399 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 17 00:05:01.662851 (udev-worker)[518]: Network interface NamePolicy= disabled on kernel command line. May 17 00:05:01.694785 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 17 00:05:01.707412 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 17 00:05:01.752125 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (519) May 17 00:05:01.758443 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 17 00:05:01.795048 kernel: BTRFS: device fsid 4797bc80-d55e-4b4a-8ede-cb88964b0162 devid 1 transid 43 /dev/nvme0n1p3 scanned by (udev-worker) (527) May 17 00:05:01.817778 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. May 17 00:05:01.903574 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. May 17 00:05:01.921099 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. May 17 00:05:01.936744 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. May 17 00:05:01.939075 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. May 17 00:05:01.953296 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 17 00:05:01.977881 disk-uuid[662]: Primary Header is updated. May 17 00:05:01.977881 disk-uuid[662]: Secondary Entries is updated. May 17 00:05:01.977881 disk-uuid[662]: Secondary Header is updated. May 17 00:05:01.988059 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 May 17 00:05:01.998018 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 May 17 00:05:02.007029 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 May 17 00:05:03.006188 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 May 17 00:05:03.006608 disk-uuid[663]: The operation has completed successfully. May 17 00:05:03.186092 systemd[1]: disk-uuid.service: Deactivated successfully. May 17 00:05:03.186291 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 17 00:05:03.232374 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 17 00:05:03.244415 sh[1006]: Success May 17 00:05:03.269049 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" May 17 00:05:03.378357 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 17 00:05:03.390209 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 17 00:05:03.397814 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
May 17 00:05:03.435034 kernel: BTRFS info (device dm-0): first mount of filesystem 4797bc80-d55e-4b4a-8ede-cb88964b0162 May 17 00:05:03.435098 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm May 17 00:05:03.435126 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead May 17 00:05:03.437445 kernel: BTRFS info (device dm-0): disabling log replay at mount time May 17 00:05:03.437480 kernel: BTRFS info (device dm-0): using free space tree May 17 00:05:03.536015 kernel: BTRFS info (device dm-0): enabling ssd optimizations May 17 00:05:03.558340 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 17 00:05:03.562057 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 17 00:05:03.573254 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 17 00:05:03.578246 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... May 17 00:05:03.623170 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 28a3b64b-9ec4-4fbe-928b-f7ea14288ccf May 17 00:05:03.623255 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm May 17 00:05:03.624592 kernel: BTRFS info (device nvme0n1p6): using free space tree May 17 00:05:03.632024 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations May 17 00:05:03.651496 systemd[1]: mnt-oem.mount: Deactivated successfully. May 17 00:05:03.654098 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 28a3b64b-9ec4-4fbe-928b-f7ea14288ccf May 17 00:05:03.663055 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 17 00:05:03.674474 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 17 00:05:03.751023 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 17 00:05:03.762352 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 17 00:05:03.825729 systemd-networkd[1198]: lo: Link UP May 17 00:05:03.827302 systemd-networkd[1198]: lo: Gained carrier May 17 00:05:03.832077 systemd-networkd[1198]: Enumeration completed May 17 00:05:03.832856 systemd-networkd[1198]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 17 00:05:03.832862 systemd-networkd[1198]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 17 00:05:03.836660 systemd[1]: Started systemd-networkd.service - Network Configuration. May 17 00:05:03.844965 systemd[1]: Reached target network.target - Network. May 17 00:05:03.851074 systemd-networkd[1198]: eth0: Link UP May 17 00:05:03.851093 systemd-networkd[1198]: eth0: Gained carrier May 17 00:05:03.851110 systemd-networkd[1198]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
May 17 00:05:03.870072 systemd-networkd[1198]: eth0: DHCPv4 address 172.31.29.16/20, gateway 172.31.16.1 acquired from 172.31.16.1 May 17 00:05:04.082283 ignition[1136]: Ignition 2.19.0 May 17 00:05:04.082312 ignition[1136]: Stage: fetch-offline May 17 00:05:04.083852 ignition[1136]: no configs at "/usr/lib/ignition/base.d" May 17 00:05:04.083890 ignition[1136]: no config dir at "/usr/lib/ignition/base.platform.d/aws" May 17 00:05:04.085586 ignition[1136]: Ignition finished successfully May 17 00:05:04.091410 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 17 00:05:04.112420 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... May 17 00:05:04.136716 ignition[1209]: Ignition 2.19.0 May 17 00:05:04.136750 ignition[1209]: Stage: fetch May 17 00:05:04.137921 ignition[1209]: no configs at "/usr/lib/ignition/base.d" May 17 00:05:04.137947 ignition[1209]: no config dir at "/usr/lib/ignition/base.platform.d/aws" May 17 00:05:04.138353 ignition[1209]: PUT http://169.254.169.254/latest/api/token: attempt #1 May 17 00:05:04.175807 ignition[1209]: PUT result: OK May 17 00:05:04.178865 ignition[1209]: parsed url from cmdline: "" May 17 00:05:04.178887 ignition[1209]: no config URL provided May 17 00:05:04.178904 ignition[1209]: reading system config file "/usr/lib/ignition/user.ign" May 17 00:05:04.178958 ignition[1209]: no config at "/usr/lib/ignition/user.ign" May 17 00:05:04.179041 ignition[1209]: PUT http://169.254.169.254/latest/api/token: attempt #1 May 17 00:05:04.184522 ignition[1209]: PUT result: OK May 17 00:05:04.185161 ignition[1209]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 May 17 00:05:04.189424 ignition[1209]: GET result: OK May 17 00:05:04.189570 ignition[1209]: parsing config with SHA512: 0dfe5bf688081165e6d6ed65db535cce193c1bc7e2f87cbf44ded4a77c8d540a185357b65b67cd1dc6832ce83ad2cde2f8dd0570ea29af3ab44fef83ac5e766d May 17 00:05:04.199337 unknown[1209]: fetched base config from "system" May 17 00:05:04.199376 unknown[1209]: fetched base config from "system" May 17 00:05:04.199393 unknown[1209]: fetched user config from "aws" May 17 00:05:04.202148 ignition[1209]: fetch: fetch complete May 17 00:05:04.202162 ignition[1209]: fetch: fetch passed May 17 00:05:04.202640 ignition[1209]: Ignition finished successfully May 17 00:05:04.212569 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). May 17 00:05:04.227266 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... May 17 00:05:04.250730 ignition[1215]: Ignition 2.19.0 May 17 00:05:04.250756 ignition[1215]: Stage: kargs May 17 00:05:04.252577 ignition[1215]: no configs at "/usr/lib/ignition/base.d" May 17 00:05:04.252606 ignition[1215]: no config dir at "/usr/lib/ignition/base.platform.d/aws" May 17 00:05:04.252944 ignition[1215]: PUT http://169.254.169.254/latest/api/token: attempt #1 May 17 00:05:04.259862 ignition[1215]: PUT result: OK May 17 00:05:04.264426 ignition[1215]: kargs: kargs passed May 17 00:05:04.264710 ignition[1215]: Ignition finished successfully May 17 00:05:04.269774 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 17 00:05:04.287380 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
May 17 00:05:04.310530 ignition[1221]: Ignition 2.19.0 May 17 00:05:04.310551 ignition[1221]: Stage: disks May 17 00:05:04.311684 ignition[1221]: no configs at "/usr/lib/ignition/base.d" May 17 00:05:04.311710 ignition[1221]: no config dir at "/usr/lib/ignition/base.platform.d/aws" May 17 00:05:04.311866 ignition[1221]: PUT http://169.254.169.254/latest/api/token: attempt #1 May 17 00:05:04.314839 ignition[1221]: PUT result: OK May 17 00:05:04.324799 ignition[1221]: disks: disks passed May 17 00:05:04.324896 ignition[1221]: Ignition finished successfully May 17 00:05:04.329307 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 17 00:05:04.333110 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 17 00:05:04.339139 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 17 00:05:04.341675 systemd[1]: Reached target local-fs.target - Local File Systems. May 17 00:05:04.343574 systemd[1]: Reached target sysinit.target - System Initialization. May 17 00:05:04.350999 systemd[1]: Reached target basic.target - Basic System. May 17 00:05:04.363306 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 17 00:05:04.400918 systemd-fsck[1230]: ROOT: clean, 14/553520 files, 52654/553472 blocks May 17 00:05:04.408051 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 17 00:05:04.419244 systemd[1]: Mounting sysroot.mount - /sysroot... May 17 00:05:04.512042 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 50a777b7-c00f-4923-84ce-1c186fc0fd3b r/w with ordered data mode. Quota mode: none. May 17 00:05:04.512840 systemd[1]: Mounted sysroot.mount - /sysroot. May 17 00:05:04.516452 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 17 00:05:04.531141 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 17 00:05:04.537118 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 17 00:05:04.541357 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. May 17 00:05:04.544666 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 17 00:05:04.544716 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 17 00:05:04.562018 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1249) May 17 00:05:04.566494 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 28a3b64b-9ec4-4fbe-928b-f7ea14288ccf May 17 00:05:04.566568 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm May 17 00:05:04.566596 kernel: BTRFS info (device nvme0n1p6): using free space tree May 17 00:05:04.568612 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 17 00:05:04.577027 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations May 17 00:05:04.581233 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 17 00:05:04.587569 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
May 17 00:05:04.990119 initrd-setup-root[1273]: cut: /sysroot/etc/passwd: No such file or directory May 17 00:05:04.999290 initrd-setup-root[1280]: cut: /sysroot/etc/group: No such file or directory May 17 00:05:05.018911 initrd-setup-root[1287]: cut: /sysroot/etc/shadow: No such file or directory May 17 00:05:05.027784 initrd-setup-root[1294]: cut: /sysroot/etc/gshadow: No such file or directory May 17 00:05:05.041131 systemd-networkd[1198]: eth0: Gained IPv6LL May 17 00:05:05.332753 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 17 00:05:05.342201 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 17 00:05:05.351339 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 17 00:05:05.368000 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 17 00:05:05.370322 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 28a3b64b-9ec4-4fbe-928b-f7ea14288ccf May 17 00:05:05.403267 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 17 00:05:05.421947 ignition[1362]: INFO : Ignition 2.19.0 May 17 00:05:05.421947 ignition[1362]: INFO : Stage: mount May 17 00:05:05.425156 ignition[1362]: INFO : no configs at "/usr/lib/ignition/base.d" May 17 00:05:05.425156 ignition[1362]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" May 17 00:05:05.429314 ignition[1362]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 May 17 00:05:05.432147 ignition[1362]: INFO : PUT result: OK May 17 00:05:05.436572 ignition[1362]: INFO : mount: mount passed May 17 00:05:05.438484 ignition[1362]: INFO : Ignition finished successfully May 17 00:05:05.442272 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 17 00:05:05.460151 systemd[1]: Starting ignition-files.service - Ignition (files)... May 17 00:05:05.519368 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 17 00:05:05.552022 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1373) May 17 00:05:05.555822 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 28a3b64b-9ec4-4fbe-928b-f7ea14288ccf May 17 00:05:05.555862 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm May 17 00:05:05.557056 kernel: BTRFS info (device nvme0n1p6): using free space tree May 17 00:05:05.562004 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations May 17 00:05:05.565827 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
May 17 00:05:05.608284 ignition[1390]: INFO : Ignition 2.19.0 May 17 00:05:05.608284 ignition[1390]: INFO : Stage: files May 17 00:05:05.611608 ignition[1390]: INFO : no configs at "/usr/lib/ignition/base.d" May 17 00:05:05.611608 ignition[1390]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" May 17 00:05:05.611608 ignition[1390]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 May 17 00:05:05.618038 ignition[1390]: INFO : PUT result: OK May 17 00:05:05.622704 ignition[1390]: DEBUG : files: compiled without relabeling support, skipping May 17 00:05:05.626539 ignition[1390]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 17 00:05:05.626539 ignition[1390]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 17 00:05:05.655870 ignition[1390]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 17 00:05:05.658738 ignition[1390]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 17 00:05:05.661794 unknown[1390]: wrote ssh authorized keys file for user: core May 17 00:05:05.664089 ignition[1390]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 17 00:05:05.667436 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" May 17 00:05:05.670983 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 May 17 00:05:05.765502 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 17 00:05:05.964144 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" May 17 00:05:05.964144 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" May 17 00:05:05.970982 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 May 17 00:05:06.349920 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK May 17 00:05:06.478912 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" May 17 00:05:06.482785 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" May 17 00:05:06.482785 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" May 17 00:05:06.482785 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" May 17 00:05:06.482785 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" May 17 00:05:06.482785 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 17 00:05:06.482785 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 17 00:05:06.482785 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 
17 00:05:06.482785 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 17 00:05:06.482785 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" May 17 00:05:06.482785 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 17 00:05:06.514109 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" May 17 00:05:06.514109 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" May 17 00:05:06.514109 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" May 17 00:05:06.514109 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1 May 17 00:05:07.403908 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK May 17 00:05:07.732884 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" May 17 00:05:07.736900 ignition[1390]: INFO : files: op(c): [started] processing unit "prepare-helm.service" May 17 00:05:07.740072 ignition[1390]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 17 00:05:07.740072 ignition[1390]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 17 00:05:07.740072 ignition[1390]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" May 17 00:05:07.740072 ignition[1390]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" May 17 00:05:07.740072 ignition[1390]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" May 17 00:05:07.740072 ignition[1390]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" May 17 00:05:07.740072 ignition[1390]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" May 17 00:05:07.740072 ignition[1390]: INFO : files: files passed May 17 00:05:07.740072 ignition[1390]: INFO : Ignition finished successfully May 17 00:05:07.765600 systemd[1]: Finished ignition-files.service - Ignition (files). May 17 00:05:07.779237 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 17 00:05:07.786442 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 17 00:05:07.795546 systemd[1]: ignition-quench.service: Deactivated successfully. May 17 00:05:07.795766 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
May 17 00:05:07.822644 initrd-setup-root-after-ignition[1418]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 17 00:05:07.822644 initrd-setup-root-after-ignition[1418]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 17 00:05:07.829808 initrd-setup-root-after-ignition[1422]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 17 00:05:07.836262 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 17 00:05:07.839094 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 17 00:05:07.849422 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 17 00:05:07.901469 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 17 00:05:07.902937 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 17 00:05:07.906426 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 17 00:05:07.911979 systemd[1]: Reached target initrd.target - Initrd Default Target. May 17 00:05:07.914420 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 17 00:05:07.925267 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 17 00:05:07.954040 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 17 00:05:07.974393 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 17 00:05:07.999634 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 17 00:05:08.004881 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 17 00:05:08.007779 systemd[1]: Stopped target timers.target - Timer Units. May 17 00:05:08.013285 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 17 00:05:08.013545 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 17 00:05:08.019794 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 17 00:05:08.021822 systemd[1]: Stopped target basic.target - Basic System. May 17 00:05:08.023705 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 17 00:05:08.031250 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 17 00:05:08.034969 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 17 00:05:08.039767 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 17 00:05:08.042605 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 17 00:05:08.049554 systemd[1]: Stopped target sysinit.target - System Initialization. May 17 00:05:08.051637 systemd[1]: Stopped target local-fs.target - Local File Systems. May 17 00:05:08.053748 systemd[1]: Stopped target swap.target - Swaps. May 17 00:05:08.061242 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 17 00:05:08.061487 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 17 00:05:08.064129 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 17 00:05:08.071876 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 17 00:05:08.074825 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. 
May 17 00:05:08.078968 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 17 00:05:08.086472 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 17 00:05:08.086693 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 17 00:05:08.089305 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 17 00:05:08.090870 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 17 00:05:08.093404 systemd[1]: ignition-files.service: Deactivated successfully. May 17 00:05:08.093637 systemd[1]: Stopped ignition-files.service - Ignition (files). May 17 00:05:08.115462 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 17 00:05:08.122349 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 17 00:05:08.126874 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 17 00:05:08.127348 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 17 00:05:08.135365 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 17 00:05:08.135627 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 17 00:05:08.150699 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 17 00:05:08.151052 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 17 00:05:08.180836 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 17 00:05:08.187040 ignition[1442]: INFO : Ignition 2.19.0 May 17 00:05:08.187040 ignition[1442]: INFO : Stage: umount May 17 00:05:08.190452 ignition[1442]: INFO : no configs at "/usr/lib/ignition/base.d" May 17 00:05:08.190452 ignition[1442]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" May 17 00:05:08.190452 ignition[1442]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 May 17 00:05:08.196281 systemd[1]: sysroot-boot.service: Deactivated successfully. May 17 00:05:08.196666 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 17 00:05:08.204234 ignition[1442]: INFO : PUT result: OK May 17 00:05:08.208765 ignition[1442]: INFO : umount: umount passed May 17 00:05:08.210457 ignition[1442]: INFO : Ignition finished successfully May 17 00:05:08.214263 systemd[1]: ignition-mount.service: Deactivated successfully. May 17 00:05:08.216095 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 17 00:05:08.218456 systemd[1]: ignition-disks.service: Deactivated successfully. May 17 00:05:08.218541 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 17 00:05:08.226848 systemd[1]: ignition-kargs.service: Deactivated successfully. May 17 00:05:08.226937 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 17 00:05:08.241825 systemd[1]: ignition-fetch.service: Deactivated successfully. May 17 00:05:08.241904 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). May 17 00:05:08.245404 systemd[1]: Stopped target network.target - Network. May 17 00:05:08.248335 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 17 00:05:08.248421 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 17 00:05:08.250609 systemd[1]: Stopped target paths.target - Path Units. May 17 00:05:08.252240 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. 
May 17 00:05:08.257270 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 17 00:05:08.260344 systemd[1]: Stopped target slices.target - Slice Units. May 17 00:05:08.262072 systemd[1]: Stopped target sockets.target - Socket Units. May 17 00:05:08.264211 systemd[1]: iscsid.socket: Deactivated successfully. May 17 00:05:08.264285 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 17 00:05:08.266203 systemd[1]: iscsiuio.socket: Deactivated successfully. May 17 00:05:08.266271 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 17 00:05:08.268222 systemd[1]: ignition-setup.service: Deactivated successfully. May 17 00:05:08.268299 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 17 00:05:08.270180 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 17 00:05:08.270256 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 17 00:05:08.272246 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 17 00:05:08.272320 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 17 00:05:08.274774 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 17 00:05:08.277720 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 17 00:05:08.304415 systemd-networkd[1198]: eth0: DHCPv6 lease lost May 17 00:05:08.312507 systemd[1]: systemd-networkd.service: Deactivated successfully. May 17 00:05:08.314813 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 17 00:05:08.317889 systemd[1]: systemd-resolved.service: Deactivated successfully. May 17 00:05:08.318419 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 17 00:05:08.334896 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 17 00:05:08.335808 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 17 00:05:08.348338 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 17 00:05:08.353366 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 17 00:05:08.353501 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 17 00:05:08.355930 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 17 00:05:08.356031 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 17 00:05:08.358329 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 17 00:05:08.358405 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 17 00:05:08.360880 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 17 00:05:08.360955 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 17 00:05:08.366063 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 17 00:05:08.406478 systemd[1]: network-cleanup.service: Deactivated successfully. May 17 00:05:08.406870 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 17 00:05:08.417693 systemd[1]: systemd-udevd.service: Deactivated successfully. May 17 00:05:08.419545 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 17 00:05:08.423103 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 17 00:05:08.423186 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. 
May 17 00:05:08.431728 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 17 00:05:08.431802 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 17 00:05:08.433880 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 17 00:05:08.433967 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 17 00:05:08.436906 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 17 00:05:08.437480 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 17 00:05:08.449072 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 17 00:05:08.449166 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 17 00:05:08.466333 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 17 00:05:08.470919 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 17 00:05:08.471065 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 17 00:05:08.476359 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. May 17 00:05:08.476458 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 17 00:05:08.481778 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 17 00:05:08.481881 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 17 00:05:08.486970 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 17 00:05:08.487076 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 17 00:05:08.492487 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 17 00:05:08.492663 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 17 00:05:08.510240 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 17 00:05:08.522288 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 17 00:05:08.541812 systemd[1]: Switching root. May 17 00:05:08.595278 systemd-journald[251]: Journal stopped May 17 00:05:10.676110 systemd-journald[251]: Received SIGTERM from PID 1 (systemd). May 17 00:05:10.676237 kernel: SELinux: policy capability network_peer_controls=1 May 17 00:05:10.676280 kernel: SELinux: policy capability open_perms=1 May 17 00:05:10.676319 kernel: SELinux: policy capability extended_socket_class=1 May 17 00:05:10.676352 kernel: SELinux: policy capability always_check_network=0 May 17 00:05:10.676383 kernel: SELinux: policy capability cgroup_seclabel=1 May 17 00:05:10.676414 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 17 00:05:10.676445 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 17 00:05:10.676481 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 17 00:05:10.676520 kernel: audit: type=1403 audit(1747440308.972:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 17 00:05:10.676563 systemd[1]: Successfully loaded SELinux policy in 48.646ms. May 17 00:05:10.676613 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 23.031ms. 
May 17 00:05:10.676648 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) May 17 00:05:10.676680 systemd[1]: Detected virtualization amazon. May 17 00:05:10.676712 systemd[1]: Detected architecture arm64. May 17 00:05:10.676743 systemd[1]: Detected first boot. May 17 00:05:10.676779 systemd[1]: Initializing machine ID from VM UUID. May 17 00:05:10.676811 zram_generator::config[1484]: No configuration found. May 17 00:05:10.676843 systemd[1]: Populated /etc with preset unit settings. May 17 00:05:10.676876 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 17 00:05:10.676908 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 17 00:05:10.676940 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 17 00:05:10.676973 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 17 00:05:10.677863 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 17 00:05:10.677906 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 17 00:05:10.677946 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 17 00:05:10.677977 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 17 00:05:10.678056 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 17 00:05:10.678090 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 17 00:05:10.678122 systemd[1]: Created slice user.slice - User and Session Slice. May 17 00:05:10.678154 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 17 00:05:10.678187 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 17 00:05:10.678219 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 17 00:05:10.678255 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 17 00:05:10.678287 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 17 00:05:10.678318 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 17 00:05:10.678347 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... May 17 00:05:10.678378 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 17 00:05:10.678409 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 17 00:05:10.678440 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 17 00:05:10.678471 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 17 00:05:10.678504 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 17 00:05:10.678535 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 17 00:05:10.678568 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 17 00:05:10.678598 systemd[1]: Reached target slices.target - Slice Units. May 17 00:05:10.678629 systemd[1]: Reached target swap.target - Swaps. 
May 17 00:05:10.678659 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 17 00:05:10.678690 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 17 00:05:10.678720 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 17 00:05:10.678750 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 17 00:05:10.678784 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 17 00:05:10.678816 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 17 00:05:10.678845 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 17 00:05:10.678874 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 17 00:05:10.678903 systemd[1]: Mounting media.mount - External Media Directory... May 17 00:05:10.678935 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 17 00:05:10.678969 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 17 00:05:10.679033 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 17 00:05:10.679071 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 17 00:05:10.679108 systemd[1]: Reached target machines.target - Containers. May 17 00:05:10.679139 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 17 00:05:10.679169 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 17 00:05:10.679203 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 17 00:05:10.679233 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 17 00:05:10.679262 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 17 00:05:10.679291 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 17 00:05:10.679320 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 17 00:05:10.679354 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 17 00:05:10.679385 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 17 00:05:10.679418 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 17 00:05:10.679450 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 17 00:05:10.679479 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 17 00:05:10.679509 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 17 00:05:10.679538 systemd[1]: Stopped systemd-fsck-usr.service. May 17 00:05:10.679569 systemd[1]: Starting systemd-journald.service - Journal Service... May 17 00:05:10.679598 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 17 00:05:10.679631 kernel: fuse: init (API version 7.39) May 17 00:05:10.679663 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 17 00:05:10.679698 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 17 00:05:10.679727 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... 
May 17 00:05:10.679759 systemd[1]: verity-setup.service: Deactivated successfully. May 17 00:05:10.679788 systemd[1]: Stopped verity-setup.service. May 17 00:05:10.679817 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 17 00:05:10.679847 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 17 00:05:10.679876 kernel: loop: module loaded May 17 00:05:10.679909 systemd[1]: Mounted media.mount - External Media Directory. May 17 00:05:10.679942 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 17 00:05:10.679972 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 17 00:05:10.680045 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 17 00:05:10.680079 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 17 00:05:10.680115 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 17 00:05:10.680146 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 17 00:05:10.680177 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 17 00:05:10.682479 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 00:05:10.682523 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 17 00:05:10.682558 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 00:05:10.682589 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 17 00:05:10.682619 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 17 00:05:10.682649 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 17 00:05:10.682690 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 00:05:10.682753 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 17 00:05:10.682788 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 17 00:05:10.682818 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 17 00:05:10.682848 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 17 00:05:10.682884 systemd[1]: Reached target network-pre.target - Preparation for Network. May 17 00:05:10.682913 kernel: ACPI: bus type drm_connector registered May 17 00:05:10.682943 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 17 00:05:10.682972 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 17 00:05:10.683030 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 17 00:05:10.683063 systemd[1]: Reached target local-fs.target - Local File Systems. May 17 00:05:10.683093 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). May 17 00:05:10.683123 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 17 00:05:10.683204 systemd-journald[1573]: Collecting audit messages is disabled. May 17 00:05:10.683272 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 17 00:05:10.683310 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 17 00:05:10.683341 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... 
May 17 00:05:10.683375 systemd-journald[1573]: Journal started May 17 00:05:10.683424 systemd-journald[1573]: Runtime Journal (/run/log/journal/ec2b4132d2aaa7236f97cb8274973903) is 8.0M, max 75.3M, 67.3M free. May 17 00:05:10.001356 systemd[1]: Queued start job for default target multi-user.target. May 17 00:05:10.689427 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 17 00:05:10.035669 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. May 17 00:05:10.036476 systemd[1]: systemd-journald.service: Deactivated successfully. May 17 00:05:10.710257 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 17 00:05:10.710355 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 17 00:05:10.743163 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 17 00:05:10.743262 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 17 00:05:10.763254 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 17 00:05:10.767111 systemd[1]: Started systemd-journald.service - Journal Service. May 17 00:05:10.772607 systemd[1]: modprobe@drm.service: Deactivated successfully. May 17 00:05:10.772940 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 17 00:05:10.775404 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 17 00:05:10.777910 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 17 00:05:10.781035 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 17 00:05:10.809103 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 17 00:05:10.831901 kernel: loop0: detected capacity change from 0 to 114432 May 17 00:05:10.860072 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 17 00:05:10.873374 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 17 00:05:10.879681 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... May 17 00:05:10.897439 systemd-journald[1573]: Time spent on flushing to /var/log/journal/ec2b4132d2aaa7236f97cb8274973903 is 136.631ms for 916 entries. May 17 00:05:10.897439 systemd-journald[1573]: System Journal (/var/log/journal/ec2b4132d2aaa7236f97cb8274973903) is 8.0M, max 195.6M, 187.6M free. May 17 00:05:11.056283 systemd-journald[1573]: Received client request to flush runtime journal. May 17 00:05:11.056402 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 17 00:05:11.056456 kernel: loop1: detected capacity change from 0 to 211168 May 17 00:05:10.907321 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 17 00:05:10.935645 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 17 00:05:10.949350 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... May 17 00:05:10.978440 systemd-tmpfiles[1596]: ACLs are not supported, ignoring. May 17 00:05:10.978465 systemd-tmpfiles[1596]: ACLs are not supported, ignoring. May 17 00:05:10.991049 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
May 17 00:05:11.004117 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 17 00:05:11.039690 udevadm[1627]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. May 17 00:05:11.064843 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 17 00:05:11.066960 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 17 00:05:11.072744 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. May 17 00:05:11.119863 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 17 00:05:11.135528 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 17 00:05:11.185882 systemd-tmpfiles[1636]: ACLs are not supported, ignoring. May 17 00:05:11.185917 systemd-tmpfiles[1636]: ACLs are not supported, ignoring. May 17 00:05:11.194472 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 17 00:05:11.287078 kernel: loop2: detected capacity change from 0 to 52536 May 17 00:05:11.355554 kernel: loop3: detected capacity change from 0 to 114328 May 17 00:05:11.478039 kernel: loop4: detected capacity change from 0 to 114432 May 17 00:05:11.496035 kernel: loop5: detected capacity change from 0 to 211168 May 17 00:05:11.533136 kernel: loop6: detected capacity change from 0 to 52536 May 17 00:05:11.561031 kernel: loop7: detected capacity change from 0 to 114328 May 17 00:05:11.569892 (sd-merge)[1643]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. May 17 00:05:11.570941 (sd-merge)[1643]: Merged extensions into '/usr'. May 17 00:05:11.580933 systemd[1]: Reloading requested from client PID 1595 ('systemd-sysext') (unit systemd-sysext.service)... May 17 00:05:11.581276 systemd[1]: Reloading... May 17 00:05:11.639818 ldconfig[1591]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 17 00:05:11.724205 zram_generator::config[1665]: No configuration found. May 17 00:05:12.007098 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:05:12.116037 systemd[1]: Reloading finished in 533 ms. May 17 00:05:12.156420 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 17 00:05:12.159127 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 17 00:05:12.161933 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 17 00:05:12.177442 systemd[1]: Starting ensure-sysext.service... May 17 00:05:12.190636 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 17 00:05:12.198435 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 17 00:05:12.220197 systemd[1]: Reloading requested from client PID 1722 ('systemctl') (unit ensure-sysext.service)... May 17 00:05:12.220229 systemd[1]: Reloading... May 17 00:05:12.246604 systemd-tmpfiles[1723]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 17 00:05:12.247300 systemd-tmpfiles[1723]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. 
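
The sd-merge messages above show systemd-sysext folding the containerd-flatcar, docker-flatcar, kubernetes and oem-ami images into /usr; the kubernetes image is the one Ignition symlinked under /etc/extensions earlier in the boot. As an illustration only (not how systemd-sysext itself is implemented), here is a small Python sketch that lists extension images the way they are typically discovered; the three search directories are the documented defaults, and the precedence comment is an assumption.

    # Illustration only: enumerate sysext images from the usual search directories.
    # Directory list and precedence are assumptions based on systemd-sysext defaults.
    import os

    SEARCH_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

    def discovered_extensions():
        found = {}
        for d in SEARCH_DIRS:
            if not os.path.isdir(d):
                continue
            for name in sorted(os.listdir(d)):
                if name.endswith(".raw") or os.path.isdir(os.path.join(d, name)):
                    # keep the first hit per name; earlier directories win here
                    found.setdefault(name, os.path.join(d, name))
        return found

    if __name__ == "__main__":
        for name, path in sorted(discovered_extensions().items()):
            print(f"{name} -> {os.path.realpath(path)}")

On this image the kubernetes entry would resolve to /opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw, matching the op(a) link written by Ignition.
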
May 17 00:05:12.251867 systemd-tmpfiles[1723]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 17 00:05:12.252440 systemd-tmpfiles[1723]: ACLs are not supported, ignoring. May 17 00:05:12.252596 systemd-tmpfiles[1723]: ACLs are not supported, ignoring. May 17 00:05:12.270133 systemd-tmpfiles[1723]: Detected autofs mount point /boot during canonicalization of boot. May 17 00:05:12.270161 systemd-tmpfiles[1723]: Skipping /boot May 17 00:05:12.310050 systemd-udevd[1724]: Using default interface naming scheme 'v255'. May 17 00:05:12.317802 systemd-tmpfiles[1723]: Detected autofs mount point /boot during canonicalization of boot. May 17 00:05:12.322079 systemd-tmpfiles[1723]: Skipping /boot May 17 00:05:12.370024 zram_generator::config[1750]: No configuration found. May 17 00:05:12.506882 (udev-worker)[1756]: Network interface NamePolicy= disabled on kernel command line. May 17 00:05:12.825886 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:05:12.900537 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 43 scanned by (udev-worker) (1771) May 17 00:05:12.990665 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. May 17 00:05:12.991916 systemd[1]: Reloading finished in 771 ms. May 17 00:05:13.018869 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 17 00:05:13.046102 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 17 00:05:13.105390 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. May 17 00:05:13.158075 systemd[1]: Finished ensure-sysext.service. May 17 00:05:13.164433 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. May 17 00:05:13.175292 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... May 17 00:05:13.189343 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 17 00:05:13.192476 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 17 00:05:13.197334 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... May 17 00:05:13.210408 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 17 00:05:13.220342 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 17 00:05:13.227566 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 17 00:05:13.232845 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 17 00:05:13.235103 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 17 00:05:13.239357 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 17 00:05:13.254392 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 17 00:05:13.264382 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 17 00:05:13.272491 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 17 00:05:13.274559 systemd[1]: Reached target time-set.target - System Time Set. 
May 17 00:05:13.280366 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 17 00:05:13.287318 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 17 00:05:13.290800 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 00:05:13.291125 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 17 00:05:13.318112 lvm[1921]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 17 00:05:13.373375 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 17 00:05:13.376883 systemd[1]: modprobe@drm.service: Deactivated successfully. May 17 00:05:13.377713 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 17 00:05:13.383849 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 00:05:13.384218 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 17 00:05:13.389273 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 17 00:05:13.405337 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 00:05:13.406665 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 17 00:05:13.410191 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 17 00:05:13.418655 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 17 00:05:13.429743 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 17 00:05:13.469662 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 17 00:05:13.483350 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 17 00:05:13.486736 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. May 17 00:05:13.491057 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 17 00:05:13.500325 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... May 17 00:05:13.515055 augenrules[1959]: No rules May 17 00:05:13.517934 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. May 17 00:05:13.526709 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 17 00:05:13.527563 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 17 00:05:13.532720 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 17 00:05:13.550624 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 17 00:05:13.558547 lvm[1960]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 17 00:05:13.587128 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 17 00:05:13.617091 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. May 17 00:05:13.682730 systemd-networkd[1934]: lo: Link UP May 17 00:05:13.682757 systemd-networkd[1934]: lo: Gained carrier May 17 00:05:13.685600 systemd-networkd[1934]: Enumeration completed May 17 00:05:13.685789 systemd[1]: Started systemd-networkd.service - Network Configuration. 
May 17 00:05:13.690779 systemd-networkd[1934]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 17 00:05:13.690802 systemd-networkd[1934]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 17 00:05:13.693051 systemd-networkd[1934]: eth0: Link UP May 17 00:05:13.693374 systemd-networkd[1934]: eth0: Gained carrier May 17 00:05:13.693419 systemd-networkd[1934]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 17 00:05:13.694319 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 17 00:05:13.705141 systemd-networkd[1934]: eth0: DHCPv4 address 172.31.29.16/20, gateway 172.31.16.1 acquired from 172.31.16.1 May 17 00:05:13.711752 systemd-resolved[1935]: Positive Trust Anchors: May 17 00:05:13.711782 systemd-resolved[1935]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 17 00:05:13.711842 systemd-resolved[1935]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 17 00:05:13.720962 systemd-resolved[1935]: Defaulting to hostname 'linux'. May 17 00:05:13.724349 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 17 00:05:13.728450 systemd[1]: Reached target network.target - Network. May 17 00:05:13.730334 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 17 00:05:13.732692 systemd[1]: Reached target sysinit.target - System Initialization. May 17 00:05:13.734972 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 17 00:05:13.737454 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 17 00:05:13.740642 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 17 00:05:13.743284 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 17 00:05:13.745591 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 17 00:05:13.747920 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 17 00:05:13.748080 systemd[1]: Reached target paths.target - Path Units. May 17 00:05:13.750076 systemd[1]: Reached target timers.target - Timer Units. May 17 00:05:13.753461 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 17 00:05:13.758067 systemd[1]: Starting docker.socket - Docker Socket for the API... May 17 00:05:13.767405 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 17 00:05:13.770612 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 17 00:05:13.773062 systemd[1]: Reached target sockets.target - Socket Units. May 17 00:05:13.775038 systemd[1]: Reached target basic.target - Basic System. 
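
The lease above puts eth0 at 172.31.29.16/20 with a gateway of 172.31.16.1, both handed out by 172.31.16.1 itself. A quick check with Python's ipaddress module, using only values from the log, shows the two addresses share the same /20 and that the gateway is the first host address of that subnet.

    # Sanity check of the DHCPv4 lease logged above; all values come from the log.
    import ipaddress

    iface = ipaddress.ip_interface("172.31.29.16/20")
    gateway = ipaddress.ip_address("172.31.16.1")

    print(iface.network)                                  # 172.31.16.0/20
    print(gateway in iface.network)                       # True
    print(gateway == iface.network.network_address + 1)   # True: first host address
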
May 17 00:05:13.776847 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 17 00:05:13.776897 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 17 00:05:13.784332 systemd[1]: Starting containerd.service - containerd container runtime... May 17 00:05:13.792573 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... May 17 00:05:13.800120 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 17 00:05:13.808282 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 17 00:05:13.820152 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 17 00:05:13.825189 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 17 00:05:13.845020 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 17 00:05:13.859033 jq[1985]: false May 17 00:05:13.865609 systemd[1]: Started ntpd.service - Network Time Service. May 17 00:05:13.871247 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 17 00:05:13.886394 systemd[1]: Starting setup-oem.service - Setup OEM... May 17 00:05:13.900260 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 17 00:05:13.907316 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 17 00:05:13.915462 extend-filesystems[1986]: Found loop4 May 17 00:05:13.918200 extend-filesystems[1986]: Found loop5 May 17 00:05:13.918200 extend-filesystems[1986]: Found loop6 May 17 00:05:13.918200 extend-filesystems[1986]: Found loop7 May 17 00:05:13.918200 extend-filesystems[1986]: Found nvme0n1 May 17 00:05:13.918200 extend-filesystems[1986]: Found nvme0n1p1 May 17 00:05:13.918200 extend-filesystems[1986]: Found nvme0n1p2 May 17 00:05:13.918200 extend-filesystems[1986]: Found nvme0n1p3 May 17 00:05:13.918200 extend-filesystems[1986]: Found usr May 17 00:05:13.918200 extend-filesystems[1986]: Found nvme0n1p4 May 17 00:05:13.918200 extend-filesystems[1986]: Found nvme0n1p6 May 17 00:05:13.918200 extend-filesystems[1986]: Found nvme0n1p7 May 17 00:05:13.918200 extend-filesystems[1986]: Found nvme0n1p9 May 17 00:05:13.922963 systemd[1]: Starting systemd-logind.service - User Login Management... May 17 00:05:13.951511 extend-filesystems[1986]: Checking size of /dev/nvme0n1p9 May 17 00:05:13.928187 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 17 00:05:13.929110 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 17 00:05:13.933189 systemd[1]: Starting update-engine.service - Update Engine... May 17 00:05:13.953262 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 17 00:05:13.964611 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 17 00:05:13.964935 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 17 00:05:14.008071 ntpd[1990]: ntpd 4.2.8p17@1.4004-o Fri May 16 22:02:25 UTC 2025 (1): Starting May 17 00:05:14.016616 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
May 17 00:05:14.020139 ntpd[1990]: 17 May 00:05:14 ntpd[1990]: ntpd 4.2.8p17@1.4004-o Fri May 16 22:02:25 UTC 2025 (1): Starting May 17 00:05:14.020139 ntpd[1990]: 17 May 00:05:14 ntpd[1990]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp May 17 00:05:14.020139 ntpd[1990]: 17 May 00:05:14 ntpd[1990]: ---------------------------------------------------- May 17 00:05:14.020139 ntpd[1990]: 17 May 00:05:14 ntpd[1990]: ntp-4 is maintained by Network Time Foundation, May 17 00:05:14.020139 ntpd[1990]: 17 May 00:05:14 ntpd[1990]: Inc. (NTF), a non-profit 501(c)(3) public-benefit May 17 00:05:14.020139 ntpd[1990]: 17 May 00:05:14 ntpd[1990]: corporation. Support and training for ntp-4 are May 17 00:05:14.020139 ntpd[1990]: 17 May 00:05:14 ntpd[1990]: available at https://www.nwtime.org/support May 17 00:05:14.020139 ntpd[1990]: 17 May 00:05:14 ntpd[1990]: ---------------------------------------------------- May 17 00:05:14.020139 ntpd[1990]: 17 May 00:05:14 ntpd[1990]: proto: precision = 0.108 usec (-23) May 17 00:05:14.008132 ntpd[1990]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp May 17 00:05:14.017082 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 17 00:05:14.008152 ntpd[1990]: ---------------------------------------------------- May 17 00:05:14.046884 ntpd[1990]: 17 May 00:05:14 ntpd[1990]: basedate set to 2025-05-04 May 17 00:05:14.046884 ntpd[1990]: 17 May 00:05:14 ntpd[1990]: gps base set to 2025-05-04 (week 2365) May 17 00:05:14.046884 ntpd[1990]: 17 May 00:05:14 ntpd[1990]: Listen and drop on 0 v6wildcard [::]:123 May 17 00:05:14.046884 ntpd[1990]: 17 May 00:05:14 ntpd[1990]: Listen and drop on 1 v4wildcard 0.0.0.0:123 May 17 00:05:14.046884 ntpd[1990]: 17 May 00:05:14 ntpd[1990]: Listen normally on 2 lo 127.0.0.1:123 May 17 00:05:14.046884 ntpd[1990]: 17 May 00:05:14 ntpd[1990]: Listen normally on 3 eth0 172.31.29.16:123 May 17 00:05:14.046884 ntpd[1990]: 17 May 00:05:14 ntpd[1990]: Listen normally on 4 lo [::1]:123 May 17 00:05:14.046884 ntpd[1990]: 17 May 00:05:14 ntpd[1990]: bind(21) AF_INET6 fe80::4bb:8cff:fe70:6fbb%2#123 flags 0x11 failed: Cannot assign requested address May 17 00:05:14.046884 ntpd[1990]: 17 May 00:05:14 ntpd[1990]: unable to create socket on eth0 (5) for fe80::4bb:8cff:fe70:6fbb%2#123 May 17 00:05:14.046884 ntpd[1990]: 17 May 00:05:14 ntpd[1990]: failed to init interface for address fe80::4bb:8cff:fe70:6fbb%2 May 17 00:05:14.046884 ntpd[1990]: 17 May 00:05:14 ntpd[1990]: Listening on routing socket on fd #21 for interface updates May 17 00:05:14.046884 ntpd[1990]: 17 May 00:05:14 ntpd[1990]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized May 17 00:05:14.046884 ntpd[1990]: 17 May 00:05:14 ntpd[1990]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized May 17 00:05:14.051370 extend-filesystems[1986]: Resized partition /dev/nvme0n1p9 May 17 00:05:14.008171 ntpd[1990]: ntp-4 is maintained by Network Time Foundation, May 17 00:05:14.048781 systemd[1]: motdgen.service: Deactivated successfully. May 17 00:05:14.064547 jq[2002]: true May 17 00:05:14.008190 ntpd[1990]: Inc. (NTF), a non-profit 501(c)(3) public-benefit May 17 00:05:14.049207 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 17 00:05:14.008208 ntpd[1990]: corporation. 
Support and training for ntp-4 are May 17 00:05:14.008226 ntpd[1990]: available at https://www.nwtime.org/support May 17 00:05:14.008244 ntpd[1990]: ---------------------------------------------------- May 17 00:05:14.013403 ntpd[1990]: proto: precision = 0.108 usec (-23) May 17 00:05:14.021941 ntpd[1990]: basedate set to 2025-05-04 May 17 00:05:14.078057 extend-filesystems[2022]: resize2fs 1.47.1 (20-May-2024) May 17 00:05:14.021971 ntpd[1990]: gps base set to 2025-05-04 (week 2365) May 17 00:05:14.074372 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 17 00:05:14.024574 ntpd[1990]: Listen and drop on 0 v6wildcard [::]:123 May 17 00:05:14.024659 ntpd[1990]: Listen and drop on 1 v4wildcard 0.0.0.0:123 May 17 00:05:14.024976 ntpd[1990]: Listen normally on 2 lo 127.0.0.1:123 May 17 00:05:14.025102 ntpd[1990]: Listen normally on 3 eth0 172.31.29.16:123 May 17 00:05:14.025170 ntpd[1990]: Listen normally on 4 lo [::1]:123 May 17 00:05:14.116210 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks May 17 00:05:14.025246 ntpd[1990]: bind(21) AF_INET6 fe80::4bb:8cff:fe70:6fbb%2#123 flags 0x11 failed: Cannot assign requested address May 17 00:05:14.098910 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 17 00:05:14.025283 ntpd[1990]: unable to create socket on eth0 (5) for fe80::4bb:8cff:fe70:6fbb%2#123 May 17 00:05:14.099016 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 17 00:05:14.025315 ntpd[1990]: failed to init interface for address fe80::4bb:8cff:fe70:6fbb%2 May 17 00:05:14.101576 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 17 00:05:14.025391 ntpd[1990]: Listening on routing socket on fd #21 for interface updates May 17 00:05:14.101646 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 17 00:05:14.027664 ntpd[1990]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized May 17 00:05:14.027716 ntpd[1990]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized May 17 00:05:14.073821 dbus-daemon[1984]: [system] SELinux support is enabled May 17 00:05:14.131822 update_engine[1999]: I20250517 00:05:14.124033 1999 main.cc:92] Flatcar Update Engine starting May 17 00:05:14.126726 (ntainerd)[2020]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 17 00:05:14.134067 dbus-daemon[1984]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1934 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") May 17 00:05:14.146314 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... May 17 00:05:14.159050 tar[2008]: linux-arm64/LICENSE May 17 00:05:14.159050 tar[2008]: linux-arm64/helm May 17 00:05:14.159532 jq[2025]: true May 17 00:05:14.156899 systemd[1]: Started update-engine.service - Update Engine. May 17 00:05:14.168894 update_engine[1999]: I20250517 00:05:14.160281 1999 update_check_scheduler.cc:74] Next update check in 10m33s May 17 00:05:14.187223 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
May 17 00:05:14.263853 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 May 17 00:05:14.298949 extend-filesystems[2022]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required May 17 00:05:14.298949 extend-filesystems[2022]: old_desc_blocks = 1, new_desc_blocks = 1 May 17 00:05:14.298949 extend-filesystems[2022]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. May 17 00:05:14.317167 extend-filesystems[1986]: Resized filesystem in /dev/nvme0n1p9 May 17 00:05:14.299143 systemd[1]: extend-filesystems.service: Deactivated successfully. May 17 00:05:14.313468 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 17 00:05:14.330144 systemd[1]: Finished setup-oem.service - Setup OEM. May 17 00:05:14.367166 coreos-metadata[1983]: May 17 00:05:14.367 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 May 17 00:05:14.382091 coreos-metadata[1983]: May 17 00:05:14.382 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 May 17 00:05:14.383412 coreos-metadata[1983]: May 17 00:05:14.383 INFO Fetch successful May 17 00:05:14.383412 coreos-metadata[1983]: May 17 00:05:14.383 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 May 17 00:05:14.392047 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 43 scanned by (udev-worker) (1771) May 17 00:05:14.392840 coreos-metadata[1983]: May 17 00:05:14.392 INFO Fetch successful May 17 00:05:14.392840 coreos-metadata[1983]: May 17 00:05:14.392 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 May 17 00:05:14.394949 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 17 00:05:14.396650 coreos-metadata[1983]: May 17 00:05:14.395 INFO Fetch successful May 17 00:05:14.400033 bash[2064]: Updated "/home/core/.ssh/authorized_keys" May 17 00:05:14.400508 coreos-metadata[1983]: May 17 00:05:14.400 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 May 17 00:05:14.407121 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. 
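
The online resize above grows the root filesystem on nvme0n1p9 from 553472 to 1489915 blocks; with the 4 KiB block size implied by the "(4k) blocks long" message, that is roughly 2.1 GiB before and 5.7 GiB after. The arithmetic, for reference:

    # Block counts come from the kernel/extend-filesystems messages above;
    # the 4096-byte block size from the "(4k) blocks long" line.
    BLOCK_SIZE = 4096
    before_blocks, after_blocks = 553472, 1489915

    def gib(blocks):
        return blocks * BLOCK_SIZE / 2**30

    print(f"before: {gib(before_blocks):.2f} GiB")   # ~2.11 GiB
    print(f"after:  {gib(after_blocks):.2f} GiB")    # ~5.68 GiB
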
May 17 00:05:14.431122 coreos-metadata[1983]: May 17 00:05:14.407 INFO Fetch successful May 17 00:05:14.431122 coreos-metadata[1983]: May 17 00:05:14.430 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 May 17 00:05:14.431122 coreos-metadata[1983]: May 17 00:05:14.431 INFO Fetch failed with 404: resource not found May 17 00:05:14.431122 coreos-metadata[1983]: May 17 00:05:14.431 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 May 17 00:05:14.438453 coreos-metadata[1983]: May 17 00:05:14.438 INFO Fetch successful May 17 00:05:14.438453 coreos-metadata[1983]: May 17 00:05:14.438 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 May 17 00:05:14.439276 coreos-metadata[1983]: May 17 00:05:14.439 INFO Fetch successful May 17 00:05:14.439276 coreos-metadata[1983]: May 17 00:05:14.439 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 May 17 00:05:14.446064 coreos-metadata[1983]: May 17 00:05:14.446 INFO Fetch successful May 17 00:05:14.446064 coreos-metadata[1983]: May 17 00:05:14.446 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 May 17 00:05:14.451086 coreos-metadata[1983]: May 17 00:05:14.451 INFO Fetch successful May 17 00:05:14.451086 coreos-metadata[1983]: May 17 00:05:14.451 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 May 17 00:05:14.453682 coreos-metadata[1983]: May 17 00:05:14.453 INFO Fetch successful May 17 00:05:14.456397 systemd[1]: Starting sshkeys.service... May 17 00:05:14.477625 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. May 17 00:05:14.489663 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... May 17 00:05:14.627216 locksmithd[2035]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 17 00:05:14.648635 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. May 17 00:05:14.651668 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 17 00:05:14.656423 systemd-logind[1997]: Watching system buttons on /dev/input/event0 (Power Button) May 17 00:05:14.656485 systemd-logind[1997]: Watching system buttons on /dev/input/event1 (Sleep Button) May 17 00:05:14.656836 systemd-logind[1997]: New seat seat0. May 17 00:05:14.664353 systemd[1]: Started systemd-logind.service - User Login Management. May 17 00:05:14.928061 coreos-metadata[2086]: May 17 00:05:14.927 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 May 17 00:05:14.933926 coreos-metadata[2086]: May 17 00:05:14.933 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 May 17 00:05:14.937265 coreos-metadata[2086]: May 17 00:05:14.937 INFO Fetch successful May 17 00:05:14.937265 coreos-metadata[2086]: May 17 00:05:14.937 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 May 17 00:05:14.939070 dbus-daemon[1984]: [system] Successfully activated service 'org.freedesktop.hostname1' May 17 00:05:14.939338 systemd[1]: Started systemd-hostnamed.service - Hostname Service. 
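
coreos-metadata follows the IMDSv2 flow visible above: one PUT to /latest/api/token, then GETs against the 2021-01-03 metadata tree with the token attached (the ipv6 path legitimately returns 404 on an instance without an IPv6 address). Below is a minimal standard-library sketch of the same flow; the header names are the standard IMDSv2 ones, the example paths are ones that succeeded in the log, and error handling and retries are omitted.

    # Minimal IMDSv2 sketch mirroring the PUT/GET sequence in the log above.
    import urllib.request

    IMDS = "http://169.254.169.254"

    def imds_token(ttl=21600):
        req = urllib.request.Request(
            f"{IMDS}/latest/api/token",
            method="PUT",
            headers={"X-aws-ec2-metadata-token-ttl-seconds": str(ttl)},
        )
        with urllib.request.urlopen(req) as resp:
            return resp.read().decode()

    def imds_get(path, token):
        req = urllib.request.Request(
            f"{IMDS}/2021-01-03/meta-data/{path}",
            headers={"X-aws-ec2-metadata-token": token},
        )
        with urllib.request.urlopen(req) as resp:
            return resp.read().decode()

    token = imds_token()
    for path in ("instance-id", "instance-type", "local-ipv4", "placement/availability-zone"):
        print(path, "=", imds_get(path, token))
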
May 17 00:05:14.942875 coreos-metadata[2086]: May 17 00:05:14.942 INFO Fetch successful May 17 00:05:14.944433 dbus-daemon[1984]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=2031 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") May 17 00:05:14.945165 unknown[2086]: wrote ssh authorized keys file for user: core May 17 00:05:14.958606 systemd[1]: Starting polkit.service - Authorization Manager... May 17 00:05:15.011475 ntpd[1990]: bind(24) AF_INET6 fe80::4bb:8cff:fe70:6fbb%2#123 flags 0x11 failed: Cannot assign requested address May 17 00:05:15.012068 ntpd[1990]: 17 May 00:05:15 ntpd[1990]: bind(24) AF_INET6 fe80::4bb:8cff:fe70:6fbb%2#123 flags 0x11 failed: Cannot assign requested address May 17 00:05:15.012068 ntpd[1990]: 17 May 00:05:15 ntpd[1990]: unable to create socket on eth0 (6) for fe80::4bb:8cff:fe70:6fbb%2#123 May 17 00:05:15.012068 ntpd[1990]: 17 May 00:05:15 ntpd[1990]: failed to init interface for address fe80::4bb:8cff:fe70:6fbb%2 May 17 00:05:15.011543 ntpd[1990]: unable to create socket on eth0 (6) for fe80::4bb:8cff:fe70:6fbb%2#123 May 17 00:05:15.011572 ntpd[1990]: failed to init interface for address fe80::4bb:8cff:fe70:6fbb%2 May 17 00:05:15.041444 update-ssh-keys[2174]: Updated "/home/core/.ssh/authorized_keys" May 17 00:05:15.047124 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). May 17 00:05:15.058135 systemd[1]: Finished sshkeys.service. May 17 00:05:15.062222 polkitd[2172]: Started polkitd version 121 May 17 00:05:15.063660 containerd[2020]: time="2025-05-17T00:05:15.060478065Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 May 17 00:05:15.089641 polkitd[2172]: Loading rules from directory /etc/polkit-1/rules.d May 17 00:05:15.091148 polkitd[2172]: Loading rules from directory /usr/share/polkit-1/rules.d May 17 00:05:15.094043 polkitd[2172]: Finished loading, compiling and executing 2 rules May 17 00:05:15.099585 dbus-daemon[1984]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' May 17 00:05:15.102630 polkitd[2172]: Acquired the name org.freedesktop.PolicyKit1 on the system bus May 17 00:05:15.100040 systemd[1]: Started polkit.service - Authorization Manager. May 17 00:05:15.136744 systemd-hostnamed[2031]: Hostname set to (transient) May 17 00:05:15.137121 systemd-resolved[1935]: System hostname changed to 'ip-172-31-29-16'. May 17 00:05:15.147507 containerd[2020]: time="2025-05-17T00:05:15.147174537Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 17 00:05:15.149959 containerd[2020]: time="2025-05-17T00:05:15.149882385Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.90-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 17 00:05:15.150078 containerd[2020]: time="2025-05-17T00:05:15.149955009Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 17 00:05:15.150555 containerd[2020]: time="2025-05-17T00:05:15.150513621Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." 
type=io.containerd.internal.v1 May 17 00:05:15.150893 containerd[2020]: time="2025-05-17T00:05:15.150851169Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 May 17 00:05:15.150974 containerd[2020]: time="2025-05-17T00:05:15.150897573Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 May 17 00:05:15.151083 containerd[2020]: time="2025-05-17T00:05:15.151039929Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 May 17 00:05:15.151136 containerd[2020]: time="2025-05-17T00:05:15.151079961Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 17 00:05:15.151430 containerd[2020]: time="2025-05-17T00:05:15.151383417Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 17 00:05:15.151487 containerd[2020]: time="2025-05-17T00:05:15.151427649Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 17 00:05:15.151487 containerd[2020]: time="2025-05-17T00:05:15.151462329Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 May 17 00:05:15.151587 containerd[2020]: time="2025-05-17T00:05:15.151488981Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 17 00:05:15.151702 containerd[2020]: time="2025-05-17T00:05:15.151662153Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 17 00:05:15.154192 containerd[2020]: time="2025-05-17T00:05:15.154139853Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 17 00:05:15.154442 containerd[2020]: time="2025-05-17T00:05:15.154394121Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 17 00:05:15.154499 containerd[2020]: time="2025-05-17T00:05:15.154438281Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 17 00:05:15.154676 containerd[2020]: time="2025-05-17T00:05:15.154637553Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 17 00:05:15.154791 containerd[2020]: time="2025-05-17T00:05:15.154749225Z" level=info msg="metadata content store policy set" policy=shared May 17 00:05:15.162742 containerd[2020]: time="2025-05-17T00:05:15.162677025Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 17 00:05:15.162852 containerd[2020]: time="2025-05-17T00:05:15.162793809Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 17 00:05:15.163038 containerd[2020]: time="2025-05-17T00:05:15.162980589Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." 
type=io.containerd.lease.v1 May 17 00:05:15.163095 containerd[2020]: time="2025-05-17T00:05:15.163048881Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 May 17 00:05:15.163142 containerd[2020]: time="2025-05-17T00:05:15.163094937Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 17 00:05:15.163399 containerd[2020]: time="2025-05-17T00:05:15.163357653Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 17 00:05:15.164734 containerd[2020]: time="2025-05-17T00:05:15.164689017Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 17 00:05:15.164947 containerd[2020]: time="2025-05-17T00:05:15.164907897Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 May 17 00:05:15.165020 containerd[2020]: time="2025-05-17T00:05:15.164966253Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 May 17 00:05:15.165092 containerd[2020]: time="2025-05-17T00:05:15.165019329Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 May 17 00:05:15.165092 containerd[2020]: time="2025-05-17T00:05:15.165062193Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 17 00:05:15.165177 containerd[2020]: time="2025-05-17T00:05:15.165095601Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 17 00:05:15.165177 containerd[2020]: time="2025-05-17T00:05:15.165132309Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 17 00:05:15.165177 containerd[2020]: time="2025-05-17T00:05:15.165164541Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 17 00:05:15.165294 containerd[2020]: time="2025-05-17T00:05:15.165196869Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 17 00:05:15.165294 containerd[2020]: time="2025-05-17T00:05:15.165235557Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 17 00:05:15.165294 containerd[2020]: time="2025-05-17T00:05:15.165265581Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 17 00:05:15.165452 containerd[2020]: time="2025-05-17T00:05:15.165293169Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 17 00:05:15.165452 containerd[2020]: time="2025-05-17T00:05:15.165332553Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 17 00:05:15.165452 containerd[2020]: time="2025-05-17T00:05:15.165401289Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 17 00:05:15.165452 containerd[2020]: time="2025-05-17T00:05:15.165432441Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 17 00:05:15.165630 containerd[2020]: time="2025-05-17T00:05:15.165470289Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 May 17 00:05:15.165630 containerd[2020]: time="2025-05-17T00:05:15.165501873Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 17 00:05:15.165630 containerd[2020]: time="2025-05-17T00:05:15.165532221Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 17 00:05:15.165630 containerd[2020]: time="2025-05-17T00:05:15.165560013Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 17 00:05:15.165630 containerd[2020]: time="2025-05-17T00:05:15.165589857Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 17 00:05:15.165630 containerd[2020]: time="2025-05-17T00:05:15.165620445Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 May 17 00:05:15.165871 containerd[2020]: time="2025-05-17T00:05:15.165655725Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 May 17 00:05:15.165871 containerd[2020]: time="2025-05-17T00:05:15.165690777Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 17 00:05:15.165871 containerd[2020]: time="2025-05-17T00:05:15.165720585Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 May 17 00:05:15.165871 containerd[2020]: time="2025-05-17T00:05:15.165756465Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 17 00:05:15.165871 containerd[2020]: time="2025-05-17T00:05:15.165791469Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 May 17 00:05:15.165871 containerd[2020]: time="2025-05-17T00:05:15.165838149Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 May 17 00:05:15.165871 containerd[2020]: time="2025-05-17T00:05:15.165866937Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 17 00:05:15.166202 containerd[2020]: time="2025-05-17T00:05:15.165893613Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 17 00:05:15.167116 containerd[2020]: time="2025-05-17T00:05:15.167078445Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 17 00:05:15.168961 containerd[2020]: time="2025-05-17T00:05:15.167321937Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 May 17 00:05:15.168961 containerd[2020]: time="2025-05-17T00:05:15.167356341Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 17 00:05:15.168961 containerd[2020]: time="2025-05-17T00:05:15.167391777Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 May 17 00:05:15.168961 containerd[2020]: time="2025-05-17T00:05:15.167417625Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 17 00:05:15.168961 containerd[2020]: time="2025-05-17T00:05:15.167449725Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." 
type=io.containerd.nri.v1 May 17 00:05:15.168961 containerd[2020]: time="2025-05-17T00:05:15.167484333Z" level=info msg="NRI interface is disabled by configuration." May 17 00:05:15.168961 containerd[2020]: time="2025-05-17T00:05:15.167518689Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 May 17 00:05:15.169390 containerd[2020]: time="2025-05-17T00:05:15.168037401Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 17 00:05:15.169390 containerd[2020]: time="2025-05-17T00:05:15.168146049Z" level=info msg="Connect containerd service" May 17 00:05:15.169390 containerd[2020]: time="2025-05-17T00:05:15.168194793Z" level=info msg="using legacy CRI server" May 17 00:05:15.169390 containerd[2020]: time="2025-05-17T00:05:15.168211881Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 17 00:05:15.169390 containerd[2020]: time="2025-05-17T00:05:15.168414969Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 17 00:05:15.170415 
containerd[2020]: time="2025-05-17T00:05:15.170318457Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 17 00:05:15.170771 containerd[2020]: time="2025-05-17T00:05:15.170718861Z" level=info msg="Start subscribing containerd event" May 17 00:05:15.170923 containerd[2020]: time="2025-05-17T00:05:15.170895513Z" level=info msg="Start recovering state" May 17 00:05:15.171150 containerd[2020]: time="2025-05-17T00:05:15.171124149Z" level=info msg="Start event monitor" May 17 00:05:15.171251 containerd[2020]: time="2025-05-17T00:05:15.171225489Z" level=info msg="Start snapshots syncer" May 17 00:05:15.171347 containerd[2020]: time="2025-05-17T00:05:15.171322017Z" level=info msg="Start cni network conf syncer for default" May 17 00:05:15.171441 containerd[2020]: time="2025-05-17T00:05:15.171416025Z" level=info msg="Start streaming server" May 17 00:05:15.175266 containerd[2020]: time="2025-05-17T00:05:15.175212321Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 17 00:05:15.185562 containerd[2020]: time="2025-05-17T00:05:15.182481693Z" level=info msg=serving... address=/run/containerd/containerd.sock May 17 00:05:15.185562 containerd[2020]: time="2025-05-17T00:05:15.182627961Z" level=info msg="containerd successfully booted in 0.125319s" May 17 00:05:15.182751 systemd[1]: Started containerd.service - containerd container runtime. May 17 00:05:15.281176 systemd-networkd[1934]: eth0: Gained IPv6LL May 17 00:05:15.289283 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 17 00:05:15.293010 systemd[1]: Reached target network-online.target - Network is Online. May 17 00:05:15.305553 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. May 17 00:05:15.319169 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:05:15.327557 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 17 00:05:15.431098 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 17 00:05:15.478273 amazon-ssm-agent[2190]: Initializing new seelog logger May 17 00:05:15.482013 amazon-ssm-agent[2190]: New Seelog Logger Creation Complete May 17 00:05:15.482013 amazon-ssm-agent[2190]: 2025/05/17 00:05:15 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. May 17 00:05:15.482013 amazon-ssm-agent[2190]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. May 17 00:05:15.482013 amazon-ssm-agent[2190]: 2025/05/17 00:05:15 processing appconfig overrides May 17 00:05:15.482808 amazon-ssm-agent[2190]: 2025/05/17 00:05:15 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. May 17 00:05:15.482808 amazon-ssm-agent[2190]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. May 17 00:05:15.482937 amazon-ssm-agent[2190]: 2025/05/17 00:05:15 processing appconfig overrides May 17 00:05:15.483211 amazon-ssm-agent[2190]: 2025/05/17 00:05:15 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. May 17 00:05:15.483211 amazon-ssm-agent[2190]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
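Earlier in this block containerd reports "no network config found in /etc/cni/net.d: cni plugin not initialized", which is expected at this point: nothing has installed a CNI configuration yet, and the CRI plugin retries its CNI sync later. Purely as a hedged sketch (the network name, filename, and subnet below are made-up examples; on a real cluster the chosen network plugin normally writes this file itself), dropping a minimal bridge conflist would satisfy that sync:

```python
# Hypothetical example: write a minimal bridge CNI conflist so containerd's
# "no network config found in /etc/cni/net.d" condition above would clear on
# its next CNI sync. Name, filename, and subnet are illustrative only.
import json, os

conf = {
    "cniVersion": "0.4.0",
    "name": "example-bridge-net",            # hypothetical network name
    "plugins": [
        {
            "type": "bridge",
            "bridge": "cni0",
            "isGateway": True,
            "ipMasq": True,
            "ipam": {
                "type": "host-local",
                "subnet": "10.88.0.0/16",     # hypothetical pod subnet
                "routes": [{"dst": "0.0.0.0/0"}],
            },
        }
    ],
}

os.makedirs("/etc/cni/net.d", exist_ok=True)
with open("/etc/cni/net.d/10-example-bridge.conflist", "w") as f:
    json.dump(conf, f, indent=2)
```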
May 17 00:05:15.483335 amazon-ssm-agent[2190]: 2025/05/17 00:05:15 processing appconfig overrides May 17 00:05:15.484135 amazon-ssm-agent[2190]: 2025-05-17 00:05:15 INFO Proxy environment variables: May 17 00:05:15.490188 amazon-ssm-agent[2190]: 2025/05/17 00:05:15 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. May 17 00:05:15.494026 amazon-ssm-agent[2190]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. May 17 00:05:15.494026 amazon-ssm-agent[2190]: 2025/05/17 00:05:15 processing appconfig overrides May 17 00:05:15.585072 amazon-ssm-agent[2190]: 2025-05-17 00:05:15 INFO https_proxy: May 17 00:05:15.684838 amazon-ssm-agent[2190]: 2025-05-17 00:05:15 INFO http_proxy: May 17 00:05:15.786126 amazon-ssm-agent[2190]: 2025-05-17 00:05:15 INFO no_proxy: May 17 00:05:15.886007 amazon-ssm-agent[2190]: 2025-05-17 00:05:15 INFO Checking if agent identity type OnPrem can be assumed May 17 00:05:15.949381 tar[2008]: linux-arm64/README.md May 17 00:05:15.984082 amazon-ssm-agent[2190]: 2025-05-17 00:05:15 INFO Checking if agent identity type EC2 can be assumed May 17 00:05:15.991173 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 17 00:05:16.083830 amazon-ssm-agent[2190]: 2025-05-17 00:05:15 INFO Agent will take identity from EC2 May 17 00:05:16.182470 amazon-ssm-agent[2190]: 2025-05-17 00:05:15 INFO [amazon-ssm-agent] using named pipe channel for IPC May 17 00:05:16.281428 amazon-ssm-agent[2190]: 2025-05-17 00:05:15 INFO [amazon-ssm-agent] using named pipe channel for IPC May 17 00:05:16.336330 amazon-ssm-agent[2190]: 2025-05-17 00:05:15 INFO [amazon-ssm-agent] using named pipe channel for IPC May 17 00:05:16.336880 amazon-ssm-agent[2190]: 2025-05-17 00:05:15 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 May 17 00:05:16.336937 amazon-ssm-agent[2190]: 2025-05-17 00:05:15 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 May 17 00:05:16.336937 amazon-ssm-agent[2190]: 2025-05-17 00:05:15 INFO [amazon-ssm-agent] Starting Core Agent May 17 00:05:16.336937 amazon-ssm-agent[2190]: 2025-05-17 00:05:15 INFO [amazon-ssm-agent] registrar detected. Attempting registration May 17 00:05:16.337084 amazon-ssm-agent[2190]: 2025-05-17 00:05:15 INFO [Registrar] Starting registrar module May 17 00:05:16.337084 amazon-ssm-agent[2190]: 2025-05-17 00:05:15 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration May 17 00:05:16.337084 amazon-ssm-agent[2190]: 2025-05-17 00:05:16 INFO [EC2Identity] EC2 registration was successful. May 17 00:05:16.337084 amazon-ssm-agent[2190]: 2025-05-17 00:05:16 INFO [CredentialRefresher] credentialRefresher has started May 17 00:05:16.337084 amazon-ssm-agent[2190]: 2025-05-17 00:05:16 INFO [CredentialRefresher] Starting credentials refresher loop May 17 00:05:16.337084 amazon-ssm-agent[2190]: 2025-05-17 00:05:16 INFO EC2RoleProvider Successfully connected with instance profile role credentials May 17 00:05:16.380901 amazon-ssm-agent[2190]: 2025-05-17 00:05:16 INFO [CredentialRefresher] Next credential rotation will be in 30.141645548466666 minutes May 17 00:05:17.014182 sshd_keygen[2038]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 17 00:05:17.053623 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 17 00:05:17.065546 systemd[1]: Starting issuegen.service - Generate /run/issue... May 17 00:05:17.077408 systemd[1]: Started sshd@0-172.31.29.16:22-139.178.89.65:46294.service - OpenSSH per-connection server daemon (139.178.89.65:46294). 
May 17 00:05:17.097425 systemd[1]: issuegen.service: Deactivated successfully. May 17 00:05:17.097812 systemd[1]: Finished issuegen.service - Generate /run/issue. May 17 00:05:17.110466 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 17 00:05:17.133255 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 17 00:05:17.146713 systemd[1]: Started getty@tty1.service - Getty on tty1. May 17 00:05:17.151537 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. May 17 00:05:17.154230 systemd[1]: Reached target getty.target - Login Prompts. May 17 00:05:17.272416 sshd[2220]: Accepted publickey for core from 139.178.89.65 port 46294 ssh2: RSA SHA256:eHxhS8dryg5cUeSGvw1rEy5lVBwPN/UG03VmUnZDeRM May 17 00:05:17.276850 sshd[2220]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:05:17.297842 systemd-logind[1997]: New session 1 of user core. May 17 00:05:17.299449 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 17 00:05:17.310526 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 17 00:05:17.351068 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 17 00:05:17.366697 systemd[1]: Starting user@500.service - User Manager for UID 500... May 17 00:05:17.380751 (systemd)[2232]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 17 00:05:17.388187 amazon-ssm-agent[2190]: 2025-05-17 00:05:17 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process May 17 00:05:17.489258 amazon-ssm-agent[2190]: 2025-05-17 00:05:17 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2234) started May 17 00:05:17.590136 amazon-ssm-agent[2190]: 2025-05-17 00:05:17 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds May 17 00:05:17.641588 systemd[2232]: Queued start job for default target default.target. May 17 00:05:17.652290 systemd[2232]: Created slice app.slice - User Application Slice. May 17 00:05:17.652358 systemd[2232]: Reached target paths.target - Paths. May 17 00:05:17.652448 systemd[2232]: Reached target timers.target - Timers. May 17 00:05:17.655233 systemd[2232]: Starting dbus.socket - D-Bus User Message Bus Socket... May 17 00:05:17.685471 systemd[2232]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 17 00:05:17.686932 systemd[2232]: Reached target sockets.target - Sockets. May 17 00:05:17.687010 systemd[2232]: Reached target basic.target - Basic System. May 17 00:05:17.687116 systemd[2232]: Reached target default.target - Main User Target. May 17 00:05:17.687191 systemd[2232]: Startup finished in 290ms. May 17 00:05:17.687531 systemd[1]: Started user@500.service - User Manager for UID 500. May 17 00:05:17.701238 systemd[1]: Started session-1.scope - Session 1 of User core. May 17 00:05:17.865548 systemd[1]: Started sshd@1-172.31.29.16:22-139.178.89.65:60576.service - OpenSSH per-connection server daemon (139.178.89.65:60576). 
May 17 00:05:18.008777 ntpd[1990]: Listen normally on 7 eth0 [fe80::4bb:8cff:fe70:6fbb%2]:123 May 17 00:05:18.009289 ntpd[1990]: 17 May 00:05:18 ntpd[1990]: Listen normally on 7 eth0 [fe80::4bb:8cff:fe70:6fbb%2]:123 May 17 00:05:18.043080 sshd[2252]: Accepted publickey for core from 139.178.89.65 port 60576 ssh2: RSA SHA256:eHxhS8dryg5cUeSGvw1rEy5lVBwPN/UG03VmUnZDeRM May 17 00:05:18.045766 sshd[2252]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:05:18.053054 systemd-logind[1997]: New session 2 of user core. May 17 00:05:18.060282 systemd[1]: Started session-2.scope - Session 2 of User core. May 17 00:05:18.187318 sshd[2252]: pam_unix(sshd:session): session closed for user core May 17 00:05:18.200371 systemd-logind[1997]: Session 2 logged out. Waiting for processes to exit. May 17 00:05:18.201669 systemd[1]: sshd@1-172.31.29.16:22-139.178.89.65:60576.service: Deactivated successfully. May 17 00:05:18.210976 systemd[1]: session-2.scope: Deactivated successfully. May 17 00:05:18.233548 systemd[1]: Started sshd@2-172.31.29.16:22-139.178.89.65:60588.service - OpenSSH per-connection server daemon (139.178.89.65:60588). May 17 00:05:18.239174 systemd-logind[1997]: Removed session 2. May 17 00:05:18.402016 sshd[2259]: Accepted publickey for core from 139.178.89.65 port 60588 ssh2: RSA SHA256:eHxhS8dryg5cUeSGvw1rEy5lVBwPN/UG03VmUnZDeRM May 17 00:05:18.405936 sshd[2259]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:05:18.413784 systemd-logind[1997]: New session 3 of user core. May 17 00:05:18.418260 systemd[1]: Started session-3.scope - Session 3 of User core. May 17 00:05:18.546592 sshd[2259]: pam_unix(sshd:session): session closed for user core May 17 00:05:18.554325 systemd[1]: sshd@2-172.31.29.16:22-139.178.89.65:60588.service: Deactivated successfully. May 17 00:05:18.559920 systemd[1]: session-3.scope: Deactivated successfully. May 17 00:05:18.561693 systemd-logind[1997]: Session 3 logged out. Waiting for processes to exit. May 17 00:05:18.564314 systemd-logind[1997]: Removed session 3. May 17 00:05:19.490748 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:05:19.494342 systemd[1]: Reached target multi-user.target - Multi-User System. May 17 00:05:19.500121 systemd[1]: Startup finished in 1.141s (kernel) + 9.174s (initrd) + 10.574s (userspace) = 20.890s. May 17 00:05:19.505835 (kubelet)[2270]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 00:05:20.863973 kubelet[2270]: E0517 00:05:20.863883 2270 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:05:20.868675 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:05:20.869463 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:05:20.869961 systemd[1]: kubelet.service: Consumed 1.355s CPU time. May 17 00:05:28.584452 systemd[1]: Started sshd@3-172.31.29.16:22-139.178.89.65:49896.service - OpenSSH per-connection server daemon (139.178.89.65:49896). 
May 17 00:05:28.745299 sshd[2283]: Accepted publickey for core from 139.178.89.65 port 49896 ssh2: RSA SHA256:eHxhS8dryg5cUeSGvw1rEy5lVBwPN/UG03VmUnZDeRM May 17 00:05:28.747930 sshd[2283]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:05:28.754860 systemd-logind[1997]: New session 4 of user core. May 17 00:05:28.764239 systemd[1]: Started session-4.scope - Session 4 of User core. May 17 00:05:28.887380 sshd[2283]: pam_unix(sshd:session): session closed for user core May 17 00:05:28.894471 systemd[1]: sshd@3-172.31.29.16:22-139.178.89.65:49896.service: Deactivated successfully. May 17 00:05:28.894473 systemd-logind[1997]: Session 4 logged out. Waiting for processes to exit. May 17 00:05:28.898665 systemd[1]: session-4.scope: Deactivated successfully. May 17 00:05:28.900663 systemd-logind[1997]: Removed session 4. May 17 00:05:28.927479 systemd[1]: Started sshd@4-172.31.29.16:22-139.178.89.65:49904.service - OpenSSH per-connection server daemon (139.178.89.65:49904). May 17 00:05:29.110272 sshd[2290]: Accepted publickey for core from 139.178.89.65 port 49904 ssh2: RSA SHA256:eHxhS8dryg5cUeSGvw1rEy5lVBwPN/UG03VmUnZDeRM May 17 00:05:29.112852 sshd[2290]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:05:29.122130 systemd-logind[1997]: New session 5 of user core. May 17 00:05:29.131294 systemd[1]: Started session-5.scope - Session 5 of User core. May 17 00:05:29.252612 sshd[2290]: pam_unix(sshd:session): session closed for user core May 17 00:05:29.257854 systemd-logind[1997]: Session 5 logged out. Waiting for processes to exit. May 17 00:05:29.258535 systemd[1]: sshd@4-172.31.29.16:22-139.178.89.65:49904.service: Deactivated successfully. May 17 00:05:29.262658 systemd[1]: session-5.scope: Deactivated successfully. May 17 00:05:29.267005 systemd-logind[1997]: Removed session 5. May 17 00:05:29.293543 systemd[1]: Started sshd@5-172.31.29.16:22-139.178.89.65:49910.service - OpenSSH per-connection server daemon (139.178.89.65:49910). May 17 00:05:29.469218 sshd[2297]: Accepted publickey for core from 139.178.89.65 port 49910 ssh2: RSA SHA256:eHxhS8dryg5cUeSGvw1rEy5lVBwPN/UG03VmUnZDeRM May 17 00:05:29.471715 sshd[2297]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:05:29.480067 systemd-logind[1997]: New session 6 of user core. May 17 00:05:29.487225 systemd[1]: Started session-6.scope - Session 6 of User core. May 17 00:05:29.612596 sshd[2297]: pam_unix(sshd:session): session closed for user core May 17 00:05:29.618158 systemd[1]: sshd@5-172.31.29.16:22-139.178.89.65:49910.service: Deactivated successfully. May 17 00:05:29.622349 systemd[1]: session-6.scope: Deactivated successfully. May 17 00:05:29.625754 systemd-logind[1997]: Session 6 logged out. Waiting for processes to exit. May 17 00:05:29.628020 systemd-logind[1997]: Removed session 6. May 17 00:05:29.651532 systemd[1]: Started sshd@6-172.31.29.16:22-139.178.89.65:49912.service - OpenSSH per-connection server daemon (139.178.89.65:49912). May 17 00:05:29.821805 sshd[2304]: Accepted publickey for core from 139.178.89.65 port 49912 ssh2: RSA SHA256:eHxhS8dryg5cUeSGvw1rEy5lVBwPN/UG03VmUnZDeRM May 17 00:05:29.824370 sshd[2304]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:05:29.831699 systemd-logind[1997]: New session 7 of user core. May 17 00:05:29.844230 systemd[1]: Started session-7.scope - Session 7 of User core. 
May 17 00:05:29.962060 sudo[2307]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 17 00:05:29.962708 sudo[2307]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 17 00:05:29.981596 sudo[2307]: pam_unix(sudo:session): session closed for user root May 17 00:05:30.006044 sshd[2304]: pam_unix(sshd:session): session closed for user core May 17 00:05:30.013866 systemd[1]: sshd@6-172.31.29.16:22-139.178.89.65:49912.service: Deactivated successfully. May 17 00:05:30.017761 systemd[1]: session-7.scope: Deactivated successfully. May 17 00:05:30.019716 systemd-logind[1997]: Session 7 logged out. Waiting for processes to exit. May 17 00:05:30.021880 systemd-logind[1997]: Removed session 7. May 17 00:05:30.047515 systemd[1]: Started sshd@7-172.31.29.16:22-139.178.89.65:49918.service - OpenSSH per-connection server daemon (139.178.89.65:49918). May 17 00:05:30.223392 sshd[2312]: Accepted publickey for core from 139.178.89.65 port 49918 ssh2: RSA SHA256:eHxhS8dryg5cUeSGvw1rEy5lVBwPN/UG03VmUnZDeRM May 17 00:05:30.226353 sshd[2312]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:05:30.233748 systemd-logind[1997]: New session 8 of user core. May 17 00:05:30.242261 systemd[1]: Started session-8.scope - Session 8 of User core. May 17 00:05:30.346966 sudo[2316]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 17 00:05:30.347630 sudo[2316]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 17 00:05:30.353467 sudo[2316]: pam_unix(sudo:session): session closed for user root May 17 00:05:30.363397 sudo[2315]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules May 17 00:05:30.364024 sudo[2315]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 17 00:05:30.391834 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... May 17 00:05:30.394491 auditctl[2319]: No rules May 17 00:05:30.395211 systemd[1]: audit-rules.service: Deactivated successfully. May 17 00:05:30.395569 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. May 17 00:05:30.402854 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... May 17 00:05:30.466044 augenrules[2337]: No rules May 17 00:05:30.469178 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. May 17 00:05:30.471334 sudo[2315]: pam_unix(sudo:session): session closed for user root May 17 00:05:30.494949 sshd[2312]: pam_unix(sshd:session): session closed for user core May 17 00:05:30.500787 systemd[1]: sshd@7-172.31.29.16:22-139.178.89.65:49918.service: Deactivated successfully. May 17 00:05:30.504701 systemd[1]: session-8.scope: Deactivated successfully. May 17 00:05:30.507806 systemd-logind[1997]: Session 8 logged out. Waiting for processes to exit. May 17 00:05:30.510677 systemd-logind[1997]: Removed session 8. May 17 00:05:30.533530 systemd[1]: Started sshd@8-172.31.29.16:22-139.178.89.65:49924.service - OpenSSH per-connection server daemon (139.178.89.65:49924). May 17 00:05:30.707042 sshd[2345]: Accepted publickey for core from 139.178.89.65 port 49924 ssh2: RSA SHA256:eHxhS8dryg5cUeSGvw1rEy5lVBwPN/UG03VmUnZDeRM May 17 00:05:30.709576 sshd[2345]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:05:30.716847 systemd-logind[1997]: New session 9 of user core. 
May 17 00:05:30.726257 systemd[1]: Started session-9.scope - Session 9 of User core. May 17 00:05:30.831234 sudo[2348]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 17 00:05:30.833614 sudo[2348]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 17 00:05:30.936972 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 17 00:05:30.949464 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:05:31.309541 systemd[1]: Starting docker.service - Docker Application Container Engine... May 17 00:05:31.311529 (dockerd)[2367]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 17 00:05:31.408287 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:05:31.426574 (kubelet)[2372]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 00:05:31.504980 kubelet[2372]: E0517 00:05:31.504846 2372 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:05:31.513408 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:05:31.513748 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:05:31.694533 dockerd[2367]: time="2025-05-17T00:05:31.694433642Z" level=info msg="Starting up" May 17 00:05:31.841475 dockerd[2367]: time="2025-05-17T00:05:31.841078691Z" level=info msg="Loading containers: start." May 17 00:05:32.008024 kernel: Initializing XFRM netlink socket May 17 00:05:32.040872 (udev-worker)[2403]: Network interface NamePolicy= disabled on kernel command line. May 17 00:05:32.125296 systemd-networkd[1934]: docker0: Link UP May 17 00:05:32.152380 dockerd[2367]: time="2025-05-17T00:05:32.152329325Z" level=info msg="Loading containers: done." May 17 00:05:32.185873 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2114070885-merged.mount: Deactivated successfully. May 17 00:05:32.194219 dockerd[2367]: time="2025-05-17T00:05:32.194163749Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 17 00:05:32.194911 dockerd[2367]: time="2025-05-17T00:05:32.194476313Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 May 17 00:05:32.194911 dockerd[2367]: time="2025-05-17T00:05:32.194671095Z" level=info msg="Daemon has completed initialization" May 17 00:05:32.259542 dockerd[2367]: time="2025-05-17T00:05:32.259333712Z" level=info msg="API listen on /run/docker.sock" May 17 00:05:32.260933 systemd[1]: Started docker.service - Docker Application Container Engine. May 17 00:05:33.180653 containerd[2020]: time="2025-05-17T00:05:33.180252957Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.1\"" May 17 00:05:33.861161 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3933686852.mount: Deactivated successfully. 
May 17 00:05:35.330974 containerd[2020]: time="2025-05-17T00:05:35.330907742Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:05:35.333039 containerd[2020]: time="2025-05-17T00:05:35.332967268Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.1: active requests=0, bytes read=27349350" May 17 00:05:35.333860 containerd[2020]: time="2025-05-17T00:05:35.333377019Z" level=info msg="ImageCreate event name:\"sha256:9a2b7cf4f8540534c6ec5b758462c6d7885c6e734652172078bba899c0e3089a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:05:35.343340 containerd[2020]: time="2025-05-17T00:05:35.343218144Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:d8ae2fb01c39aa1c7add84f3d54425cf081c24c11e3946830292a8cfa4293548\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:05:35.349290 containerd[2020]: time="2025-05-17T00:05:35.349048787Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.1\" with image id \"sha256:9a2b7cf4f8540534c6ec5b758462c6d7885c6e734652172078bba899c0e3089a\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.1\", repo digest \"registry.k8s.io/kube-apiserver@sha256:d8ae2fb01c39aa1c7add84f3d54425cf081c24c11e3946830292a8cfa4293548\", size \"27346150\" in 2.168731962s" May 17 00:05:35.349290 containerd[2020]: time="2025-05-17T00:05:35.349121147Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.1\" returns image reference \"sha256:9a2b7cf4f8540534c6ec5b758462c6d7885c6e734652172078bba899c0e3089a\"" May 17 00:05:35.351517 containerd[2020]: time="2025-05-17T00:05:35.351459257Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.1\"" May 17 00:05:36.822664 containerd[2020]: time="2025-05-17T00:05:36.822385706Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:05:36.823972 containerd[2020]: time="2025-05-17T00:05:36.823862564Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.1: active requests=0, bytes read=23531735" May 17 00:05:36.824888 containerd[2020]: time="2025-05-17T00:05:36.824830227Z" level=info msg="ImageCreate event name:\"sha256:674996a72aa5900cbbbcd410437021fa4c62a7f829a56f58eb23ac430f2ae383\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:05:36.830887 containerd[2020]: time="2025-05-17T00:05:36.830790562Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7c9bea694e3a3c01ed6a5ee02d55a6124cc08e0b2eec6caa33f2c396b8cbc3f8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:05:36.837321 containerd[2020]: time="2025-05-17T00:05:36.837253085Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.1\" with image id \"sha256:674996a72aa5900cbbbcd410437021fa4c62a7f829a56f58eb23ac430f2ae383\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.1\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7c9bea694e3a3c01ed6a5ee02d55a6124cc08e0b2eec6caa33f2c396b8cbc3f8\", size \"25086427\" in 1.485505684s" May 17 00:05:36.840981 containerd[2020]: time="2025-05-17T00:05:36.840136754Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.1\" returns image reference \"sha256:674996a72aa5900cbbbcd410437021fa4c62a7f829a56f58eb23ac430f2ae383\"" May 17 00:05:36.841313 
containerd[2020]: time="2025-05-17T00:05:36.841208190Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.1\"" May 17 00:05:38.087179 containerd[2020]: time="2025-05-17T00:05:38.087112159Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:05:38.089244 containerd[2020]: time="2025-05-17T00:05:38.089176134Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.1: active requests=0, bytes read=18293731" May 17 00:05:38.090031 containerd[2020]: time="2025-05-17T00:05:38.089760949Z" level=info msg="ImageCreate event name:\"sha256:014094c90caacf743dc5fb4281363492da1df31cd8218aeceab3be3326277d2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:05:38.095552 containerd[2020]: time="2025-05-17T00:05:38.095454189Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:395b7de7cdbdcc3c3a3db270844a3f71d757e2447a1e4db76b4cce46fba7fd55\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:05:38.098129 containerd[2020]: time="2025-05-17T00:05:38.097920600Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.1\" with image id \"sha256:014094c90caacf743dc5fb4281363492da1df31cd8218aeceab3be3326277d2e\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.1\", repo digest \"registry.k8s.io/kube-scheduler@sha256:395b7de7cdbdcc3c3a3db270844a3f71d757e2447a1e4db76b4cce46fba7fd55\", size \"19848441\" in 1.256550778s" May 17 00:05:38.098129 containerd[2020]: time="2025-05-17T00:05:38.097975928Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.1\" returns image reference \"sha256:014094c90caacf743dc5fb4281363492da1df31cd8218aeceab3be3326277d2e\"" May 17 00:05:38.098819 containerd[2020]: time="2025-05-17T00:05:38.098782405Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.1\"" May 17 00:05:39.490063 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount404124105.mount: Deactivated successfully. 
May 17 00:05:40.047817 containerd[2020]: time="2025-05-17T00:05:40.047748892Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:05:40.049298 containerd[2020]: time="2025-05-17T00:05:40.049223794Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.1: active requests=0, bytes read=28196004" May 17 00:05:40.051872 containerd[2020]: time="2025-05-17T00:05:40.051798150Z" level=info msg="ImageCreate event name:\"sha256:3e58848989f556e36aa29d7852ab1712163960651e074d11cae9d31fb27192db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:05:40.056357 containerd[2020]: time="2025-05-17T00:05:40.056290383Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7ddf379897139ae8ade8b33cb9373b70c632a4d5491da6e234f5d830e0a50807\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:05:40.057954 containerd[2020]: time="2025-05-17T00:05:40.057766604Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.1\" with image id \"sha256:3e58848989f556e36aa29d7852ab1712163960651e074d11cae9d31fb27192db\", repo tag \"registry.k8s.io/kube-proxy:v1.33.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:7ddf379897139ae8ade8b33cb9373b70c632a4d5491da6e234f5d830e0a50807\", size \"28195023\" in 1.958842874s" May 17 00:05:40.057954 containerd[2020]: time="2025-05-17T00:05:40.057820397Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.1\" returns image reference \"sha256:3e58848989f556e36aa29d7852ab1712163960651e074d11cae9d31fb27192db\"" May 17 00:05:40.059295 containerd[2020]: time="2025-05-17T00:05:40.059237548Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" May 17 00:05:40.617620 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3453542160.mount: Deactivated successfully. May 17 00:05:41.686180 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 17 00:05:41.695591 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
May 17 00:05:41.747027 containerd[2020]: time="2025-05-17T00:05:41.746031147Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:05:41.748813 containerd[2020]: time="2025-05-17T00:05:41.748740220Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152117" May 17 00:05:41.752308 containerd[2020]: time="2025-05-17T00:05:41.752249364Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:05:41.760022 containerd[2020]: time="2025-05-17T00:05:41.759903647Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:05:41.763909 containerd[2020]: time="2025-05-17T00:05:41.763824042Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.704523681s" May 17 00:05:41.764622 containerd[2020]: time="2025-05-17T00:05:41.764033793Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\"" May 17 00:05:41.766204 containerd[2020]: time="2025-05-17T00:05:41.765877055Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 17 00:05:42.054026 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:05:42.069706 (kubelet)[2649]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 00:05:42.146476 kubelet[2649]: E0517 00:05:42.146289 2649 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:05:42.152947 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:05:42.153379 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:05:42.299804 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount89096816.mount: Deactivated successfully. 
May 17 00:05:42.307136 containerd[2020]: time="2025-05-17T00:05:42.306310157Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:05:42.308366 containerd[2020]: time="2025-05-17T00:05:42.308299074Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" May 17 00:05:42.309694 containerd[2020]: time="2025-05-17T00:05:42.309624183Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:05:42.313833 containerd[2020]: time="2025-05-17T00:05:42.313766803Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:05:42.316047 containerd[2020]: time="2025-05-17T00:05:42.315803588Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 549.862904ms" May 17 00:05:42.316047 containerd[2020]: time="2025-05-17T00:05:42.315855078Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" May 17 00:05:42.317145 containerd[2020]: time="2025-05-17T00:05:42.316940943Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" May 17 00:05:44.753038 containerd[2020]: time="2025-05-17T00:05:44.752108173Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:05:44.754735 containerd[2020]: time="2025-05-17T00:05:44.754650266Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=69230163" May 17 00:05:44.755687 containerd[2020]: time="2025-05-17T00:05:44.755598187Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:05:44.763590 containerd[2020]: time="2025-05-17T00:05:44.762534917Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:05:44.769100 containerd[2020]: time="2025-05-17T00:05:44.769040175Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 2.452047263s" May 17 00:05:44.769683 containerd[2020]: time="2025-05-17T00:05:44.769101464Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\"" May 17 00:05:45.161284 systemd[1]: systemd-hostnamed.service: Deactivated successfully. May 17 00:05:51.964050 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
May 17 00:05:51.981443 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:05:52.031790 systemd[1]: Reloading requested from client PID 2701 ('systemctl') (unit session-9.scope)... May 17 00:05:52.031829 systemd[1]: Reloading... May 17 00:05:52.266056 zram_generator::config[2744]: No configuration found. May 17 00:05:52.503200 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:05:52.674894 systemd[1]: Reloading finished in 642 ms. May 17 00:05:52.770253 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 17 00:05:52.770642 systemd[1]: kubelet.service: Failed with result 'signal'. May 17 00:05:52.772125 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:05:52.782490 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:05:53.098309 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:05:53.102345 (kubelet)[2804]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 17 00:05:53.171906 kubelet[2804]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 00:05:53.171906 kubelet[2804]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 17 00:05:53.171906 kubelet[2804]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 17 00:05:53.172609 kubelet[2804]: I0517 00:05:53.172019 2804 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 17 00:05:54.402767 kubelet[2804]: I0517 00:05:54.402692 2804 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" May 17 00:05:54.402767 kubelet[2804]: I0517 00:05:54.402748 2804 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 17 00:05:54.403459 kubelet[2804]: I0517 00:05:54.403170 2804 server.go:956] "Client rotation is on, will bootstrap in background" May 17 00:05:54.450708 kubelet[2804]: E0517 00:05:54.449688 2804 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.29.16:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.29.16:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" May 17 00:05:54.450708 kubelet[2804]: I0517 00:05:54.449805 2804 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 17 00:05:54.464629 kubelet[2804]: E0517 00:05:54.464565 2804 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 17 00:05:54.464629 kubelet[2804]: I0517 00:05:54.464619 2804 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 17 00:05:54.472883 kubelet[2804]: I0517 00:05:54.472664 2804 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 17 00:05:54.473254 kubelet[2804]: I0517 00:05:54.473180 2804 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 17 00:05:54.473552 kubelet[2804]: I0517 00:05:54.473245 2804 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-29-16","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 17 00:05:54.473695 kubelet[2804]: I0517 00:05:54.473569 2804 topology_manager.go:138] "Creating topology manager with none policy" May 17 00:05:54.473695 kubelet[2804]: I0517 00:05:54.473590 2804 container_manager_linux.go:303] "Creating device plugin manager" May 17 00:05:54.473819 kubelet[2804]: I0517 00:05:54.473801 2804 state_mem.go:36] "Initialized new in-memory state store" May 17 00:05:54.484098 kubelet[2804]: I0517 00:05:54.483899 2804 kubelet.go:480] "Attempting to sync node with API server" May 17 00:05:54.484098 kubelet[2804]: I0517 00:05:54.483961 2804 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" May 17 00:05:54.487025 kubelet[2804]: I0517 00:05:54.485177 2804 kubelet.go:386] "Adding apiserver pod source" May 17 00:05:54.487025 kubelet[2804]: I0517 00:05:54.485222 2804 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 17 00:05:54.487025 kubelet[2804]: E0517 00:05:54.486836 2804 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.29.16:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.29.16:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" May 17 00:05:54.487290 kubelet[2804]: E0517 00:05:54.487095 2804 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.29.16:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-29-16&limit=500&resourceVersion=0\": dial tcp 172.31.29.16:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" 
May 17 00:05:54.487647 kubelet[2804]: I0517 00:05:54.487593 2804 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 17 00:05:54.488651 kubelet[2804]: I0517 00:05:54.488587 2804 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" May 17 00:05:54.488778 kubelet[2804]: W0517 00:05:54.488710 2804 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 17 00:05:54.497819 kubelet[2804]: I0517 00:05:54.497785 2804 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 17 00:05:54.498350 kubelet[2804]: I0517 00:05:54.498055 2804 server.go:1289] "Started kubelet" May 17 00:05:54.501602 kubelet[2804]: I0517 00:05:54.501496 2804 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 May 17 00:05:54.503789 kubelet[2804]: I0517 00:05:54.503703 2804 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 17 00:05:54.506057 kubelet[2804]: I0517 00:05:54.504272 2804 server.go:317] "Adding debug handlers to kubelet server" May 17 00:05:54.506057 kubelet[2804]: I0517 00:05:54.504453 2804 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 17 00:05:54.511680 kubelet[2804]: I0517 00:05:54.511644 2804 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 17 00:05:54.512873 kubelet[2804]: E0517 00:05:54.509696 2804 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.29.16:6443/api/v1/namespaces/default/events\": dial tcp 172.31.29.16:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-29-16.184027be7c08e009 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-29-16,UID:ip-172-31-29-16,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-29-16,},FirstTimestamp:2025-05-17 00:05:54.497978377 +0000 UTC m=+1.388557139,LastTimestamp:2025-05-17 00:05:54.497978377 +0000 UTC m=+1.388557139,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-29-16,}" May 17 00:05:54.514766 kubelet[2804]: I0517 00:05:54.514705 2804 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 17 00:05:54.517400 kubelet[2804]: I0517 00:05:54.517366 2804 volume_manager.go:297] "Starting Kubelet Volume Manager" May 17 00:05:54.520041 kubelet[2804]: E0517 00:05:54.517981 2804 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-29-16\" not found" May 17 00:05:54.520041 kubelet[2804]: I0517 00:05:54.519483 2804 desired_state_of_world_populator.go:150] "Desired state populator starts to run" May 17 00:05:54.520041 kubelet[2804]: I0517 00:05:54.519566 2804 reconciler.go:26] "Reconciler: start to sync state" May 17 00:05:54.522865 kubelet[2804]: E0517 00:05:54.522818 2804 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.29.16:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.29.16:6443: connect: connection refused" 
logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" May 17 00:05:54.523353 kubelet[2804]: E0517 00:05:54.523294 2804 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.29.16:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-16?timeout=10s\": dial tcp 172.31.29.16:6443: connect: connection refused" interval="200ms" May 17 00:05:54.525847 kubelet[2804]: I0517 00:05:54.525789 2804 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 17 00:05:54.530149 kubelet[2804]: E0517 00:05:54.529553 2804 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 17 00:05:54.531098 kubelet[2804]: I0517 00:05:54.530685 2804 factory.go:223] Registration of the containerd container factory successfully May 17 00:05:54.531098 kubelet[2804]: I0517 00:05:54.530713 2804 factory.go:223] Registration of the systemd container factory successfully May 17 00:05:54.566497 kubelet[2804]: I0517 00:05:54.566166 2804 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" May 17 00:05:54.570200 kubelet[2804]: I0517 00:05:54.570153 2804 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" May 17 00:05:54.570397 kubelet[2804]: I0517 00:05:54.570376 2804 status_manager.go:230] "Starting to sync pod status with apiserver" May 17 00:05:54.570515 kubelet[2804]: I0517 00:05:54.570494 2804 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." May 17 00:05:54.570615 kubelet[2804]: I0517 00:05:54.570594 2804 kubelet.go:2436] "Starting kubelet main sync loop" May 17 00:05:54.570848 kubelet[2804]: E0517 00:05:54.570793 2804 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 17 00:05:54.582435 kubelet[2804]: E0517 00:05:54.582369 2804 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.29.16:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.29.16:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" May 17 00:05:54.584646 kubelet[2804]: I0517 00:05:54.584612 2804 cpu_manager.go:221] "Starting CPU manager" policy="none" May 17 00:05:54.584957 kubelet[2804]: I0517 00:05:54.584929 2804 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 17 00:05:54.585334 kubelet[2804]: I0517 00:05:54.585282 2804 state_mem.go:36] "Initialized new in-memory state store" May 17 00:05:54.591115 kubelet[2804]: I0517 00:05:54.591075 2804 policy_none.go:49] "None policy: Start" May 17 00:05:54.591306 kubelet[2804]: I0517 00:05:54.591285 2804 memory_manager.go:186] "Starting memorymanager" policy="None" May 17 00:05:54.591408 kubelet[2804]: I0517 00:05:54.591390 2804 state_mem.go:35] "Initializing new in-memory state store" May 17 00:05:54.604775 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
May 17 00:05:54.620105 kubelet[2804]: E0517 00:05:54.620066 2804 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-29-16\" not found" May 17 00:05:54.630157 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 17 00:05:54.637080 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 17 00:05:54.649472 kubelet[2804]: E0517 00:05:54.648123 2804 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" May 17 00:05:54.649472 kubelet[2804]: I0517 00:05:54.648444 2804 eviction_manager.go:189] "Eviction manager: starting control loop" May 17 00:05:54.649472 kubelet[2804]: I0517 00:05:54.648465 2804 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 17 00:05:54.654251 kubelet[2804]: I0517 00:05:54.652858 2804 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 17 00:05:54.659015 kubelet[2804]: E0517 00:05:54.658251 2804 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 17 00:05:54.659015 kubelet[2804]: E0517 00:05:54.658354 2804 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-29-16\" not found" May 17 00:05:54.694185 systemd[1]: Created slice kubepods-burstable-poda3cd0bc949e4f24e759a4c221b0f9cb0.slice - libcontainer container kubepods-burstable-poda3cd0bc949e4f24e759a4c221b0f9cb0.slice. May 17 00:05:54.708006 kubelet[2804]: E0517 00:05:54.707646 2804 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-29-16\" not found" node="ip-172-31-29-16" May 17 00:05:54.714096 systemd[1]: Created slice kubepods-burstable-pode1f1fdd3f1878a75934a8bd7cb0f4740.slice - libcontainer container kubepods-burstable-pode1f1fdd3f1878a75934a8bd7cb0f4740.slice. May 17 00:05:54.719087 kubelet[2804]: E0517 00:05:54.718519 2804 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-29-16\" not found" node="ip-172-31-29-16" May 17 00:05:54.722525 systemd[1]: Created slice kubepods-burstable-pod67dc0c9ffb0130de399834352e4efad0.slice - libcontainer container kubepods-burstable-pod67dc0c9ffb0130de399834352e4efad0.slice. 
May 17 00:05:54.724944 kubelet[2804]: E0517 00:05:54.724868 2804 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.29.16:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-16?timeout=10s\": dial tcp 172.31.29.16:6443: connect: connection refused" interval="400ms" May 17 00:05:54.726925 kubelet[2804]: E0517 00:05:54.726882 2804 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-29-16\" not found" node="ip-172-31-29-16" May 17 00:05:54.750919 kubelet[2804]: I0517 00:05:54.750870 2804 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-29-16" May 17 00:05:54.751682 kubelet[2804]: E0517 00:05:54.751635 2804 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.29.16:6443/api/v1/nodes\": dial tcp 172.31.29.16:6443: connect: connection refused" node="ip-172-31-29-16" May 17 00:05:54.820925 kubelet[2804]: I0517 00:05:54.820873 2804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a3cd0bc949e4f24e759a4c221b0f9cb0-ca-certs\") pod \"kube-apiserver-ip-172-31-29-16\" (UID: \"a3cd0bc949e4f24e759a4c221b0f9cb0\") " pod="kube-system/kube-apiserver-ip-172-31-29-16" May 17 00:05:54.821076 kubelet[2804]: I0517 00:05:54.820932 2804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a3cd0bc949e4f24e759a4c221b0f9cb0-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-29-16\" (UID: \"a3cd0bc949e4f24e759a4c221b0f9cb0\") " pod="kube-system/kube-apiserver-ip-172-31-29-16" May 17 00:05:54.821076 kubelet[2804]: I0517 00:05:54.820980 2804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e1f1fdd3f1878a75934a8bd7cb0f4740-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-29-16\" (UID: \"e1f1fdd3f1878a75934a8bd7cb0f4740\") " pod="kube-system/kube-controller-manager-ip-172-31-29-16" May 17 00:05:54.821076 kubelet[2804]: I0517 00:05:54.821043 2804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e1f1fdd3f1878a75934a8bd7cb0f4740-k8s-certs\") pod \"kube-controller-manager-ip-172-31-29-16\" (UID: \"e1f1fdd3f1878a75934a8bd7cb0f4740\") " pod="kube-system/kube-controller-manager-ip-172-31-29-16" May 17 00:05:54.821228 kubelet[2804]: I0517 00:05:54.821081 2804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e1f1fdd3f1878a75934a8bd7cb0f4740-kubeconfig\") pod \"kube-controller-manager-ip-172-31-29-16\" (UID: \"e1f1fdd3f1878a75934a8bd7cb0f4740\") " pod="kube-system/kube-controller-manager-ip-172-31-29-16" May 17 00:05:54.821228 kubelet[2804]: I0517 00:05:54.821120 2804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e1f1fdd3f1878a75934a8bd7cb0f4740-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-29-16\" (UID: \"e1f1fdd3f1878a75934a8bd7cb0f4740\") " pod="kube-system/kube-controller-manager-ip-172-31-29-16" May 17 00:05:54.821228 kubelet[2804]: I0517 00:05:54.821157 2804 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a3cd0bc949e4f24e759a4c221b0f9cb0-k8s-certs\") pod \"kube-apiserver-ip-172-31-29-16\" (UID: \"a3cd0bc949e4f24e759a4c221b0f9cb0\") " pod="kube-system/kube-apiserver-ip-172-31-29-16" May 17 00:05:54.821228 kubelet[2804]: I0517 00:05:54.821193 2804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e1f1fdd3f1878a75934a8bd7cb0f4740-ca-certs\") pod \"kube-controller-manager-ip-172-31-29-16\" (UID: \"e1f1fdd3f1878a75934a8bd7cb0f4740\") " pod="kube-system/kube-controller-manager-ip-172-31-29-16" May 17 00:05:54.821437 kubelet[2804]: I0517 00:05:54.821240 2804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/67dc0c9ffb0130de399834352e4efad0-kubeconfig\") pod \"kube-scheduler-ip-172-31-29-16\" (UID: \"67dc0c9ffb0130de399834352e4efad0\") " pod="kube-system/kube-scheduler-ip-172-31-29-16" May 17 00:05:54.954051 kubelet[2804]: I0517 00:05:54.953811 2804 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-29-16" May 17 00:05:54.954435 kubelet[2804]: E0517 00:05:54.954359 2804 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.29.16:6443/api/v1/nodes\": dial tcp 172.31.29.16:6443: connect: connection refused" node="ip-172-31-29-16" May 17 00:05:55.010159 containerd[2020]: time="2025-05-17T00:05:55.009972458Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-29-16,Uid:a3cd0bc949e4f24e759a4c221b0f9cb0,Namespace:kube-system,Attempt:0,}" May 17 00:05:55.020664 containerd[2020]: time="2025-05-17T00:05:55.020583082Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-29-16,Uid:e1f1fdd3f1878a75934a8bd7cb0f4740,Namespace:kube-system,Attempt:0,}" May 17 00:05:55.028300 containerd[2020]: time="2025-05-17T00:05:55.028229629Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-29-16,Uid:67dc0c9ffb0130de399834352e4efad0,Namespace:kube-system,Attempt:0,}" May 17 00:05:55.126302 kubelet[2804]: E0517 00:05:55.126226 2804 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.29.16:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-16?timeout=10s\": dial tcp 172.31.29.16:6443: connect: connection refused" interval="800ms" May 17 00:05:55.357590 kubelet[2804]: I0517 00:05:55.357421 2804 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-29-16" May 17 00:05:55.358448 kubelet[2804]: E0517 00:05:55.358391 2804 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.29.16:6443/api/v1/nodes\": dial tcp 172.31.29.16:6443: connect: connection refused" node="ip-172-31-29-16" May 17 00:05:55.541879 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1312580393.mount: Deactivated successfully. 
May 17 00:05:55.561053 containerd[2020]: time="2025-05-17T00:05:55.560225671Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 17 00:05:55.562725 containerd[2020]: time="2025-05-17T00:05:55.562662552Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" May 17 00:05:55.566519 containerd[2020]: time="2025-05-17T00:05:55.566452188Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 17 00:05:55.570008 containerd[2020]: time="2025-05-17T00:05:55.569181723Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 17 00:05:55.573481 containerd[2020]: time="2025-05-17T00:05:55.573405362Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 17 00:05:55.575480 containerd[2020]: time="2025-05-17T00:05:55.575363119Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 17 00:05:55.578322 containerd[2020]: time="2025-05-17T00:05:55.578148114Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 17 00:05:55.580649 containerd[2020]: time="2025-05-17T00:05:55.579834914Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 17 00:05:55.580649 containerd[2020]: time="2025-05-17T00:05:55.579954206Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 569.822276ms" May 17 00:05:55.590578 containerd[2020]: time="2025-05-17T00:05:55.590519133Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 562.180083ms" May 17 00:05:55.593671 kubelet[2804]: E0517 00:05:55.593597 2804 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.29.16:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.29.16:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" May 17 00:05:55.596042 containerd[2020]: time="2025-05-17T00:05:55.595894520Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 575.187876ms" May 17 00:05:55.663312 kubelet[2804]: E0517 00:05:55.661895 2804 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.29.16:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.29.16:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" May 17 00:05:55.788296 containerd[2020]: time="2025-05-17T00:05:55.786054509Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:05:55.788296 containerd[2020]: time="2025-05-17T00:05:55.786167600Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:05:55.788296 containerd[2020]: time="2025-05-17T00:05:55.786204926Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:05:55.792040 containerd[2020]: time="2025-05-17T00:05:55.791857434Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:05:55.806792 containerd[2020]: time="2025-05-17T00:05:55.804873527Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:05:55.806792 containerd[2020]: time="2025-05-17T00:05:55.805339950Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:05:55.806792 containerd[2020]: time="2025-05-17T00:05:55.805442342Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:05:55.806792 containerd[2020]: time="2025-05-17T00:05:55.805471152Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:05:55.806792 containerd[2020]: time="2025-05-17T00:05:55.805619746Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:05:55.807740 containerd[2020]: time="2025-05-17T00:05:55.805534312Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:05:55.807904 containerd[2020]: time="2025-05-17T00:05:55.807768207Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:05:55.808196 containerd[2020]: time="2025-05-17T00:05:55.808116980Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:05:55.849362 systemd[1]: Started cri-containerd-9b7526efec2c41c55e47af3738034a09b8a85002fdd4712a6f365ac425b62427.scope - libcontainer container 9b7526efec2c41c55e47af3738034a09b8a85002fdd4712a6f365ac425b62427. May 17 00:05:55.868320 systemd[1]: Started cri-containerd-6e3bc8990dea4ef35ad0083707959f8804e5eb96aadddc16ed63c660e1d12615.scope - libcontainer container 6e3bc8990dea4ef35ad0083707959f8804e5eb96aadddc16ed63c660e1d12615. 
May 17 00:05:55.883347 systemd[1]: Started cri-containerd-7893ceccd9f825d20cfb0410e4e01a59648fbbbba9be16f2f3f72083b0695033.scope - libcontainer container 7893ceccd9f825d20cfb0410e4e01a59648fbbbba9be16f2f3f72083b0695033. May 17 00:05:55.929237 kubelet[2804]: E0517 00:05:55.928106 2804 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.29.16:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-16?timeout=10s\": dial tcp 172.31.29.16:6443: connect: connection refused" interval="1.6s" May 17 00:05:55.945554 containerd[2020]: time="2025-05-17T00:05:55.945355063Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-29-16,Uid:e1f1fdd3f1878a75934a8bd7cb0f4740,Namespace:kube-system,Attempt:0,} returns sandbox id \"9b7526efec2c41c55e47af3738034a09b8a85002fdd4712a6f365ac425b62427\"" May 17 00:05:55.965838 containerd[2020]: time="2025-05-17T00:05:55.965620809Z" level=info msg="CreateContainer within sandbox \"9b7526efec2c41c55e47af3738034a09b8a85002fdd4712a6f365ac425b62427\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 17 00:05:55.975615 kubelet[2804]: E0517 00:05:55.975553 2804 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.29.16:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.29.16:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" May 17 00:05:56.003585 containerd[2020]: time="2025-05-17T00:05:56.003455007Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-29-16,Uid:a3cd0bc949e4f24e759a4c221b0f9cb0,Namespace:kube-system,Attempt:0,} returns sandbox id \"6e3bc8990dea4ef35ad0083707959f8804e5eb96aadddc16ed63c660e1d12615\"" May 17 00:05:56.008732 containerd[2020]: time="2025-05-17T00:05:56.008664889Z" level=info msg="CreateContainer within sandbox \"9b7526efec2c41c55e47af3738034a09b8a85002fdd4712a6f365ac425b62427\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"bc292043e49078969b8af35b0ae74fc5e32bdf341ef842853b54125aa59f49d2\"" May 17 00:05:56.011026 containerd[2020]: time="2025-05-17T00:05:56.010863461Z" level=info msg="StartContainer for \"bc292043e49078969b8af35b0ae74fc5e32bdf341ef842853b54125aa59f49d2\"" May 17 00:05:56.012895 kubelet[2804]: E0517 00:05:56.012783 2804 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.29.16:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-29-16&limit=500&resourceVersion=0\": dial tcp 172.31.29.16:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" May 17 00:05:56.014360 containerd[2020]: time="2025-05-17T00:05:56.014287676Z" level=info msg="CreateContainer within sandbox \"6e3bc8990dea4ef35ad0083707959f8804e5eb96aadddc16ed63c660e1d12615\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 17 00:05:56.029707 containerd[2020]: time="2025-05-17T00:05:56.029455804Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-29-16,Uid:67dc0c9ffb0130de399834352e4efad0,Namespace:kube-system,Attempt:0,} returns sandbox id \"7893ceccd9f825d20cfb0410e4e01a59648fbbbba9be16f2f3f72083b0695033\"" May 17 00:05:56.039193 containerd[2020]: time="2025-05-17T00:05:56.038899304Z" level=info msg="CreateContainer within sandbox 
\"7893ceccd9f825d20cfb0410e4e01a59648fbbbba9be16f2f3f72083b0695033\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 17 00:05:56.062681 containerd[2020]: time="2025-05-17T00:05:56.062612102Z" level=info msg="CreateContainer within sandbox \"6e3bc8990dea4ef35ad0083707959f8804e5eb96aadddc16ed63c660e1d12615\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"ff4c36cf4272752e608a7b2c3a580edb87d6e17a44cf57811e42f5f8db1b24a5\"" May 17 00:05:56.065318 containerd[2020]: time="2025-05-17T00:05:56.065246020Z" level=info msg="StartContainer for \"ff4c36cf4272752e608a7b2c3a580edb87d6e17a44cf57811e42f5f8db1b24a5\"" May 17 00:05:56.071656 systemd[1]: Started cri-containerd-bc292043e49078969b8af35b0ae74fc5e32bdf341ef842853b54125aa59f49d2.scope - libcontainer container bc292043e49078969b8af35b0ae74fc5e32bdf341ef842853b54125aa59f49d2. May 17 00:05:56.104091 containerd[2020]: time="2025-05-17T00:05:56.104024481Z" level=info msg="CreateContainer within sandbox \"7893ceccd9f825d20cfb0410e4e01a59648fbbbba9be16f2f3f72083b0695033\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"3ef73c7306abdc790a4b6560424754ec1c3fe331864e36d4cf1c4452817d0d9b\"" May 17 00:05:56.107033 containerd[2020]: time="2025-05-17T00:05:56.105270310Z" level=info msg="StartContainer for \"3ef73c7306abdc790a4b6560424754ec1c3fe331864e36d4cf1c4452817d0d9b\"" May 17 00:05:56.148038 systemd[1]: Started cri-containerd-ff4c36cf4272752e608a7b2c3a580edb87d6e17a44cf57811e42f5f8db1b24a5.scope - libcontainer container ff4c36cf4272752e608a7b2c3a580edb87d6e17a44cf57811e42f5f8db1b24a5. May 17 00:05:56.170277 kubelet[2804]: I0517 00:05:56.170239 2804 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-29-16" May 17 00:05:56.172700 kubelet[2804]: E0517 00:05:56.172456 2804 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.29.16:6443/api/v1/nodes\": dial tcp 172.31.29.16:6443: connect: connection refused" node="ip-172-31-29-16" May 17 00:05:56.200425 containerd[2020]: time="2025-05-17T00:05:56.200351414Z" level=info msg="StartContainer for \"bc292043e49078969b8af35b0ae74fc5e32bdf341ef842853b54125aa59f49d2\" returns successfully" May 17 00:05:56.217682 systemd[1]: Started cri-containerd-3ef73c7306abdc790a4b6560424754ec1c3fe331864e36d4cf1c4452817d0d9b.scope - libcontainer container 3ef73c7306abdc790a4b6560424754ec1c3fe331864e36d4cf1c4452817d0d9b. 
May 17 00:05:56.283282 containerd[2020]: time="2025-05-17T00:05:56.283206578Z" level=info msg="StartContainer for \"ff4c36cf4272752e608a7b2c3a580edb87d6e17a44cf57811e42f5f8db1b24a5\" returns successfully" May 17 00:05:56.395436 containerd[2020]: time="2025-05-17T00:05:56.395259348Z" level=info msg="StartContainer for \"3ef73c7306abdc790a4b6560424754ec1c3fe331864e36d4cf1c4452817d0d9b\" returns successfully" May 17 00:05:56.597396 kubelet[2804]: E0517 00:05:56.597043 2804 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-29-16\" not found" node="ip-172-31-29-16" May 17 00:05:56.604037 kubelet[2804]: E0517 00:05:56.601258 2804 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-29-16\" not found" node="ip-172-31-29-16" May 17 00:05:56.609246 kubelet[2804]: E0517 00:05:56.608653 2804 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-29-16\" not found" node="ip-172-31-29-16" May 17 00:05:57.614023 kubelet[2804]: E0517 00:05:57.613508 2804 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-29-16\" not found" node="ip-172-31-29-16" May 17 00:05:57.620022 kubelet[2804]: E0517 00:05:57.615569 2804 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-29-16\" not found" node="ip-172-31-29-16" May 17 00:05:57.775774 kubelet[2804]: I0517 00:05:57.775738 2804 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-29-16" May 17 00:05:58.617469 kubelet[2804]: E0517 00:05:58.617421 2804 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-29-16\" not found" node="ip-172-31-29-16" May 17 00:05:59.616012 update_engine[1999]: I20250517 00:05:59.614039 1999 update_attempter.cc:509] Updating boot flags... 
May 17 00:05:59.753216 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 43 scanned by (udev-worker) (3102) May 17 00:06:00.202164 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 43 scanned by (udev-worker) (3102) May 17 00:06:00.667052 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 43 scanned by (udev-worker) (3102) May 17 00:06:02.047639 kubelet[2804]: E0517 00:06:02.047203 2804 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-29-16\" not found" node="ip-172-31-29-16" May 17 00:06:04.659054 kubelet[2804]: E0517 00:06:04.658642 2804 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-29-16\" not found" May 17 00:06:04.878089 kubelet[2804]: E0517 00:06:04.876311 2804 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-29-16\" not found" node="ip-172-31-29-16" May 17 00:06:04.939031 kubelet[2804]: I0517 00:06:04.937685 2804 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-29-16" May 17 00:06:04.977617 kubelet[2804]: E0517 00:06:04.977431 2804 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-29-16.184027be7c08e009 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-29-16,UID:ip-172-31-29-16,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-29-16,},FirstTimestamp:2025-05-17 00:05:54.497978377 +0000 UTC m=+1.388557139,LastTimestamp:2025-05-17 00:05:54.497978377 +0000 UTC m=+1.388557139,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-29-16,}" May 17 00:06:05.020449 kubelet[2804]: I0517 00:06:05.020388 2804 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-29-16" May 17 00:06:05.038313 kubelet[2804]: E0517 00:06:05.037271 2804 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-29-16\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-29-16" May 17 00:06:05.038313 kubelet[2804]: I0517 00:06:05.037325 2804 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-29-16" May 17 00:06:05.042600 kubelet[2804]: E0517 00:06:05.042549 2804 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-29-16\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-29-16" May 17 00:06:05.043054 kubelet[2804]: I0517 00:06:05.043014 2804 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-29-16" May 17 00:06:05.046666 kubelet[2804]: E0517 00:06:05.046611 2804 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-29-16\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-29-16" May 17 00:06:05.507417 kubelet[2804]: I0517 00:06:05.507365 2804 apiserver.go:52] "Watching apiserver" May 17 00:06:05.520573 kubelet[2804]: I0517 00:06:05.520478 2804 
desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" May 17 00:06:05.649055 kubelet[2804]: I0517 00:06:05.648641 2804 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-29-16" May 17 00:06:07.104210 systemd[1]: Reloading requested from client PID 3362 ('systemctl') (unit session-9.scope)... May 17 00:06:07.104244 systemd[1]: Reloading... May 17 00:06:07.276053 zram_generator::config[3406]: No configuration found. May 17 00:06:07.515831 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:06:07.680021 kubelet[2804]: I0517 00:06:07.679862 2804 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-29-16" May 17 00:06:07.732565 systemd[1]: Reloading finished in 627 ms. May 17 00:06:07.775123 kubelet[2804]: I0517 00:06:07.772733 2804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-29-16" podStartSLOduration=2.772712346 podStartE2EDuration="2.772712346s" podCreationTimestamp="2025-05-17 00:06:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:06:07.740955184 +0000 UTC m=+14.631533958" watchObservedRunningTime="2025-05-17 00:06:07.772712346 +0000 UTC m=+14.663291132" May 17 00:06:07.838160 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:06:07.860896 systemd[1]: kubelet.service: Deactivated successfully. May 17 00:06:07.861547 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:06:07.861723 systemd[1]: kubelet.service: Consumed 2.246s CPU time, 128.1M memory peak, 0B memory swap peak. May 17 00:06:07.877569 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:06:08.241411 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:06:08.247175 (kubelet)[3462]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 17 00:06:08.362977 kubelet[3462]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 00:06:08.362977 kubelet[3462]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 17 00:06:08.362977 kubelet[3462]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 17 00:06:08.362977 kubelet[3462]: I0517 00:06:08.362411 3462 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 17 00:06:08.380172 kubelet[3462]: I0517 00:06:08.379189 3462 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" May 17 00:06:08.380172 kubelet[3462]: I0517 00:06:08.379244 3462 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 17 00:06:08.380172 kubelet[3462]: I0517 00:06:08.379714 3462 server.go:956] "Client rotation is on, will bootstrap in background" May 17 00:06:08.383016 kubelet[3462]: I0517 00:06:08.382929 3462 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" May 17 00:06:08.388815 kubelet[3462]: I0517 00:06:08.388741 3462 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 17 00:06:08.404414 kubelet[3462]: E0517 00:06:08.403794 3462 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 17 00:06:08.404414 kubelet[3462]: I0517 00:06:08.403862 3462 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 17 00:06:08.410177 kubelet[3462]: I0517 00:06:08.409954 3462 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 17 00:06:08.410806 kubelet[3462]: I0517 00:06:08.410728 3462 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 17 00:06:08.411132 kubelet[3462]: I0517 00:06:08.410806 3462 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-29-16","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 17 00:06:08.411340 kubelet[3462]: I0517 00:06:08.411156 3462 topology_manager.go:138] "Creating 
topology manager with none policy" May 17 00:06:08.411340 kubelet[3462]: I0517 00:06:08.411179 3462 container_manager_linux.go:303] "Creating device plugin manager" May 17 00:06:08.411340 kubelet[3462]: I0517 00:06:08.411264 3462 state_mem.go:36] "Initialized new in-memory state store" May 17 00:06:08.412619 kubelet[3462]: I0517 00:06:08.411608 3462 kubelet.go:480] "Attempting to sync node with API server" May 17 00:06:08.412619 kubelet[3462]: I0517 00:06:08.411645 3462 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" May 17 00:06:08.412619 kubelet[3462]: I0517 00:06:08.411696 3462 kubelet.go:386] "Adding apiserver pod source" May 17 00:06:08.412619 kubelet[3462]: I0517 00:06:08.411725 3462 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 17 00:06:08.414121 sudo[3476]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 17 00:06:08.414905 sudo[3476]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) May 17 00:06:08.420923 kubelet[3462]: I0517 00:06:08.420644 3462 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 17 00:06:08.421912 kubelet[3462]: I0517 00:06:08.421759 3462 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" May 17 00:06:08.448501 kubelet[3462]: I0517 00:06:08.448449 3462 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 17 00:06:08.449240 kubelet[3462]: I0517 00:06:08.448527 3462 server.go:1289] "Started kubelet" May 17 00:06:08.456147 kubelet[3462]: I0517 00:06:08.456056 3462 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 May 17 00:06:08.460238 kubelet[3462]: I0517 00:06:08.460113 3462 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 17 00:06:08.460783 kubelet[3462]: I0517 00:06:08.460714 3462 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 17 00:06:08.480704 kubelet[3462]: I0517 00:06:08.480588 3462 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 17 00:06:08.496200 kubelet[3462]: I0517 00:06:08.496039 3462 server.go:317] "Adding debug handlers to kubelet server" May 17 00:06:08.512212 kubelet[3462]: I0517 00:06:08.512141 3462 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 17 00:06:08.532586 kubelet[3462]: I0517 00:06:08.522391 3462 volume_manager.go:297] "Starting Kubelet Volume Manager" May 17 00:06:08.536489 kubelet[3462]: I0517 00:06:08.522836 3462 desired_state_of_world_populator.go:150] "Desired state populator starts to run" May 17 00:06:08.539235 kubelet[3462]: E0517 00:06:08.524314 3462 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-29-16\" not found" May 17 00:06:08.539235 kubelet[3462]: I0517 00:06:08.537422 3462 reconciler.go:26] "Reconciler: start to sync state" May 17 00:06:08.557106 kubelet[3462]: E0517 00:06:08.555860 3462 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 17 00:06:08.557646 kubelet[3462]: I0517 00:06:08.557584 3462 factory.go:223] Registration of the containerd container factory successfully May 17 00:06:08.557646 kubelet[3462]: I0517 00:06:08.557630 3462 factory.go:223] Registration of the systemd container factory successfully May 17 00:06:08.557905 kubelet[3462]: I0517 00:06:08.557789 3462 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 17 00:06:08.634960 kubelet[3462]: I0517 00:06:08.634804 3462 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" May 17 00:06:08.653690 kubelet[3462]: I0517 00:06:08.652780 3462 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" May 17 00:06:08.653690 kubelet[3462]: I0517 00:06:08.652844 3462 status_manager.go:230] "Starting to sync pod status with apiserver" May 17 00:06:08.653690 kubelet[3462]: I0517 00:06:08.652880 3462 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." May 17 00:06:08.653690 kubelet[3462]: I0517 00:06:08.652897 3462 kubelet.go:2436] "Starting kubelet main sync loop" May 17 00:06:08.662673 kubelet[3462]: E0517 00:06:08.652983 3462 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 17 00:06:08.764689 kubelet[3462]: E0517 00:06:08.762979 3462 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 17 00:06:08.822325 kubelet[3462]: I0517 00:06:08.821406 3462 cpu_manager.go:221] "Starting CPU manager" policy="none" May 17 00:06:08.822325 kubelet[3462]: I0517 00:06:08.821474 3462 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 17 00:06:08.822325 kubelet[3462]: I0517 00:06:08.821513 3462 state_mem.go:36] "Initialized new in-memory state store" May 17 00:06:08.825171 kubelet[3462]: I0517 00:06:08.823222 3462 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 17 00:06:08.825171 kubelet[3462]: I0517 00:06:08.823279 3462 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 17 00:06:08.825171 kubelet[3462]: I0517 00:06:08.823348 3462 policy_none.go:49] "None policy: Start" May 17 00:06:08.825171 kubelet[3462]: I0517 00:06:08.823370 3462 memory_manager.go:186] "Starting memorymanager" policy="None" May 17 00:06:08.825171 kubelet[3462]: I0517 00:06:08.823404 3462 state_mem.go:35] "Initializing new in-memory state store" May 17 00:06:08.825171 kubelet[3462]: I0517 00:06:08.823684 3462 state_mem.go:75] "Updated machine memory state" May 17 00:06:08.839030 kubelet[3462]: E0517 00:06:08.838153 3462 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" May 17 00:06:08.841015 kubelet[3462]: I0517 00:06:08.840929 3462 eviction_manager.go:189] "Eviction manager: starting control loop" May 17 00:06:08.841188 kubelet[3462]: I0517 00:06:08.841035 3462 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 17 00:06:08.842190 kubelet[3462]: I0517 00:06:08.841602 3462 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 17 00:06:08.852540 kubelet[3462]: E0517 00:06:08.852146 3462 eviction_manager.go:267] "eviction manager: 
failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 17 00:06:08.964921 kubelet[3462]: I0517 00:06:08.964725 3462 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-29-16" May 17 00:06:08.964921 kubelet[3462]: I0517 00:06:08.964810 3462 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-29-16" May 17 00:06:08.966944 kubelet[3462]: I0517 00:06:08.966529 3462 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-29-16" May 17 00:06:08.972296 kubelet[3462]: I0517 00:06:08.971705 3462 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-29-16" May 17 00:06:09.006130 kubelet[3462]: I0517 00:06:09.004346 3462 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-29-16" May 17 00:06:09.006130 kubelet[3462]: I0517 00:06:09.004478 3462 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-29-16" May 17 00:06:09.010852 kubelet[3462]: E0517 00:06:09.010678 3462 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-29-16\" already exists" pod="kube-system/kube-scheduler-ip-172-31-29-16" May 17 00:06:09.013051 kubelet[3462]: E0517 00:06:09.011849 3462 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-29-16\" already exists" pod="kube-system/kube-apiserver-ip-172-31-29-16" May 17 00:06:09.058457 kubelet[3462]: I0517 00:06:09.058264 3462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e1f1fdd3f1878a75934a8bd7cb0f4740-k8s-certs\") pod \"kube-controller-manager-ip-172-31-29-16\" (UID: \"e1f1fdd3f1878a75934a8bd7cb0f4740\") " pod="kube-system/kube-controller-manager-ip-172-31-29-16" May 17 00:06:09.062321 kubelet[3462]: I0517 00:06:09.062148 3462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e1f1fdd3f1878a75934a8bd7cb0f4740-kubeconfig\") pod \"kube-controller-manager-ip-172-31-29-16\" (UID: \"e1f1fdd3f1878a75934a8bd7cb0f4740\") " pod="kube-system/kube-controller-manager-ip-172-31-29-16" May 17 00:06:09.062617 kubelet[3462]: I0517 00:06:09.062268 3462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e1f1fdd3f1878a75934a8bd7cb0f4740-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-29-16\" (UID: \"e1f1fdd3f1878a75934a8bd7cb0f4740\") " pod="kube-system/kube-controller-manager-ip-172-31-29-16" May 17 00:06:09.064034 kubelet[3462]: I0517 00:06:09.062914 3462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a3cd0bc949e4f24e759a4c221b0f9cb0-ca-certs\") pod \"kube-apiserver-ip-172-31-29-16\" (UID: \"a3cd0bc949e4f24e759a4c221b0f9cb0\") " pod="kube-system/kube-apiserver-ip-172-31-29-16" May 17 00:06:09.064818 kubelet[3462]: I0517 00:06:09.063254 3462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a3cd0bc949e4f24e759a4c221b0f9cb0-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-29-16\" (UID: \"a3cd0bc949e4f24e759a4c221b0f9cb0\") " 
pod="kube-system/kube-apiserver-ip-172-31-29-16" May 17 00:06:09.065543 kubelet[3462]: I0517 00:06:09.065375 3462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e1f1fdd3f1878a75934a8bd7cb0f4740-ca-certs\") pod \"kube-controller-manager-ip-172-31-29-16\" (UID: \"e1f1fdd3f1878a75934a8bd7cb0f4740\") " pod="kube-system/kube-controller-manager-ip-172-31-29-16" May 17 00:06:09.065543 kubelet[3462]: I0517 00:06:09.065486 3462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e1f1fdd3f1878a75934a8bd7cb0f4740-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-29-16\" (UID: \"e1f1fdd3f1878a75934a8bd7cb0f4740\") " pod="kube-system/kube-controller-manager-ip-172-31-29-16" May 17 00:06:09.066388 kubelet[3462]: I0517 00:06:09.066229 3462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/67dc0c9ffb0130de399834352e4efad0-kubeconfig\") pod \"kube-scheduler-ip-172-31-29-16\" (UID: \"67dc0c9ffb0130de399834352e4efad0\") " pod="kube-system/kube-scheduler-ip-172-31-29-16" May 17 00:06:09.066388 kubelet[3462]: I0517 00:06:09.066301 3462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a3cd0bc949e4f24e759a4c221b0f9cb0-k8s-certs\") pod \"kube-apiserver-ip-172-31-29-16\" (UID: \"a3cd0bc949e4f24e759a4c221b0f9cb0\") " pod="kube-system/kube-apiserver-ip-172-31-29-16" May 17 00:06:09.418872 kubelet[3462]: I0517 00:06:09.416532 3462 apiserver.go:52] "Watching apiserver" May 17 00:06:09.437330 kubelet[3462]: I0517 00:06:09.437275 3462 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" May 17 00:06:09.591247 sudo[3476]: pam_unix(sudo:session): session closed for user root May 17 00:06:09.711688 kubelet[3462]: I0517 00:06:09.711523 3462 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-29-16" podStartSLOduration=1.711496394 podStartE2EDuration="1.711496394s" podCreationTimestamp="2025-05-17 00:06:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:06:09.693514642 +0000 UTC m=+1.436775275" watchObservedRunningTime="2025-05-17 00:06:09.711496394 +0000 UTC m=+1.454757003" May 17 00:06:09.744650 kubelet[3462]: I0517 00:06:09.743742 3462 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-29-16" May 17 00:06:09.744650 kubelet[3462]: I0517 00:06:09.744162 3462 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-29-16" May 17 00:06:09.767288 kubelet[3462]: E0517 00:06:09.767236 3462 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-29-16\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-29-16" May 17 00:06:09.770122 kubelet[3462]: E0517 00:06:09.770006 3462 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-29-16\" already exists" pod="kube-system/kube-scheduler-ip-172-31-29-16" May 17 00:06:13.285839 kubelet[3462]: I0517 00:06:13.285790 3462 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" 
CIDR="192.168.0.0/24" May 17 00:06:13.287691 kubelet[3462]: I0517 00:06:13.286572 3462 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 17 00:06:13.287882 containerd[2020]: time="2025-05-17T00:06:13.286270804Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 17 00:06:13.716563 sudo[2348]: pam_unix(sudo:session): session closed for user root May 17 00:06:13.741375 sshd[2345]: pam_unix(sshd:session): session closed for user core May 17 00:06:13.749081 systemd[1]: sshd@8-172.31.29.16:22-139.178.89.65:49924.service: Deactivated successfully. May 17 00:06:13.753635 systemd[1]: session-9.scope: Deactivated successfully. May 17 00:06:13.753969 systemd[1]: session-9.scope: Consumed 12.383s CPU time, 154.2M memory peak, 0B memory swap peak. May 17 00:06:13.760426 systemd-logind[1997]: Session 9 logged out. Waiting for processes to exit. May 17 00:06:13.763092 systemd-logind[1997]: Removed session 9. May 17 00:06:13.858037 systemd[1]: Created slice kubepods-besteffort-pod414f2c12_6e3b_4b90_8367_ac69fb4666c3.slice - libcontainer container kubepods-besteffort-pod414f2c12_6e3b_4b90_8367_ac69fb4666c3.slice. May 17 00:06:13.883541 systemd[1]: Created slice kubepods-burstable-pod5803be12_9bed_4ab3_86c3_79f403cd26a3.slice - libcontainer container kubepods-burstable-pod5803be12_9bed_4ab3_86c3_79f403cd26a3.slice. May 17 00:06:13.889217 kubelet[3462]: E0517 00:06:13.888291 3462 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"cilium-config\" is forbidden: User \"system:node:ip-172-31-29-16\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-29-16' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"cilium-config\"" type="*v1.ConfigMap" May 17 00:06:13.889217 kubelet[3462]: E0517 00:06:13.888442 3462 reflector.go:200] "Failed to watch" err="failed to list *v1.Secret: secrets \"hubble-server-certs\" is forbidden: User \"system:node:ip-172-31-29-16\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-29-16' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"hubble-server-certs\"" type="*v1.Secret" May 17 00:06:13.889217 kubelet[3462]: E0517 00:06:13.888615 3462 reflector.go:200] "Failed to watch" err="failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:ip-172-31-29-16\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-29-16' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"cilium-clustermesh\"" type="*v1.Secret" May 17 00:06:13.898139 kubelet[3462]: I0517 00:06:13.898063 3462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4v7tj\" (UniqueName: \"kubernetes.io/projected/414f2c12-6e3b-4b90-8367-ac69fb4666c3-kube-api-access-4v7tj\") pod \"kube-proxy-v9frj\" (UID: \"414f2c12-6e3b-4b90-8367-ac69fb4666c3\") " pod="kube-system/kube-proxy-v9frj" May 17 00:06:13.898139 kubelet[3462]: I0517 00:06:13.898142 3462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/414f2c12-6e3b-4b90-8367-ac69fb4666c3-kube-proxy\") pod \"kube-proxy-v9frj\" (UID: 
\"414f2c12-6e3b-4b90-8367-ac69fb4666c3\") " pod="kube-system/kube-proxy-v9frj" May 17 00:06:13.898583 kubelet[3462]: I0517 00:06:13.898186 3462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/414f2c12-6e3b-4b90-8367-ac69fb4666c3-xtables-lock\") pod \"kube-proxy-v9frj\" (UID: \"414f2c12-6e3b-4b90-8367-ac69fb4666c3\") " pod="kube-system/kube-proxy-v9frj" May 17 00:06:13.898583 kubelet[3462]: I0517 00:06:13.898223 3462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/414f2c12-6e3b-4b90-8367-ac69fb4666c3-lib-modules\") pod \"kube-proxy-v9frj\" (UID: \"414f2c12-6e3b-4b90-8367-ac69fb4666c3\") " pod="kube-system/kube-proxy-v9frj" May 17 00:06:14.001812 kubelet[3462]: I0517 00:06:13.999349 3462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5803be12-9bed-4ab3-86c3-79f403cd26a3-bpf-maps\") pod \"cilium-zqrb7\" (UID: \"5803be12-9bed-4ab3-86c3-79f403cd26a3\") " pod="kube-system/cilium-zqrb7" May 17 00:06:14.001812 kubelet[3462]: I0517 00:06:13.999427 3462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5803be12-9bed-4ab3-86c3-79f403cd26a3-etc-cni-netd\") pod \"cilium-zqrb7\" (UID: \"5803be12-9bed-4ab3-86c3-79f403cd26a3\") " pod="kube-system/cilium-zqrb7" May 17 00:06:14.001812 kubelet[3462]: I0517 00:06:13.999464 3462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5803be12-9bed-4ab3-86c3-79f403cd26a3-lib-modules\") pod \"cilium-zqrb7\" (UID: \"5803be12-9bed-4ab3-86c3-79f403cd26a3\") " pod="kube-system/cilium-zqrb7" May 17 00:06:14.001812 kubelet[3462]: I0517 00:06:13.999503 3462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5803be12-9bed-4ab3-86c3-79f403cd26a3-cilium-cgroup\") pod \"cilium-zqrb7\" (UID: \"5803be12-9bed-4ab3-86c3-79f403cd26a3\") " pod="kube-system/cilium-zqrb7" May 17 00:06:14.001812 kubelet[3462]: I0517 00:06:13.999548 3462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5803be12-9bed-4ab3-86c3-79f403cd26a3-cilium-config-path\") pod \"cilium-zqrb7\" (UID: \"5803be12-9bed-4ab3-86c3-79f403cd26a3\") " pod="kube-system/cilium-zqrb7" May 17 00:06:14.001812 kubelet[3462]: I0517 00:06:13.999614 3462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5803be12-9bed-4ab3-86c3-79f403cd26a3-host-proc-sys-net\") pod \"cilium-zqrb7\" (UID: \"5803be12-9bed-4ab3-86c3-79f403cd26a3\") " pod="kube-system/cilium-zqrb7" May 17 00:06:14.002428 kubelet[3462]: I0517 00:06:13.999661 3462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5803be12-9bed-4ab3-86c3-79f403cd26a3-host-proc-sys-kernel\") pod \"cilium-zqrb7\" (UID: \"5803be12-9bed-4ab3-86c3-79f403cd26a3\") " pod="kube-system/cilium-zqrb7" May 17 00:06:14.002428 kubelet[3462]: I0517 00:06:13.999697 3462 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5803be12-9bed-4ab3-86c3-79f403cd26a3-hubble-tls\") pod \"cilium-zqrb7\" (UID: \"5803be12-9bed-4ab3-86c3-79f403cd26a3\") " pod="kube-system/cilium-zqrb7" May 17 00:06:14.002428 kubelet[3462]: I0517 00:06:13.999743 3462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5803be12-9bed-4ab3-86c3-79f403cd26a3-clustermesh-secrets\") pod \"cilium-zqrb7\" (UID: \"5803be12-9bed-4ab3-86c3-79f403cd26a3\") " pod="kube-system/cilium-zqrb7" May 17 00:06:14.002428 kubelet[3462]: I0517 00:06:13.999842 3462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5803be12-9bed-4ab3-86c3-79f403cd26a3-cilium-run\") pod \"cilium-zqrb7\" (UID: \"5803be12-9bed-4ab3-86c3-79f403cd26a3\") " pod="kube-system/cilium-zqrb7" May 17 00:06:14.002428 kubelet[3462]: I0517 00:06:13.999898 3462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5803be12-9bed-4ab3-86c3-79f403cd26a3-hostproc\") pod \"cilium-zqrb7\" (UID: \"5803be12-9bed-4ab3-86c3-79f403cd26a3\") " pod="kube-system/cilium-zqrb7" May 17 00:06:14.002428 kubelet[3462]: I0517 00:06:13.999938 3462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5803be12-9bed-4ab3-86c3-79f403cd26a3-cni-path\") pod \"cilium-zqrb7\" (UID: \"5803be12-9bed-4ab3-86c3-79f403cd26a3\") " pod="kube-system/cilium-zqrb7" May 17 00:06:14.002733 kubelet[3462]: I0517 00:06:13.999973 3462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5803be12-9bed-4ab3-86c3-79f403cd26a3-xtables-lock\") pod \"cilium-zqrb7\" (UID: \"5803be12-9bed-4ab3-86c3-79f403cd26a3\") " pod="kube-system/cilium-zqrb7" May 17 00:06:14.002733 kubelet[3462]: I0517 00:06:14.000037 3462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7sqf\" (UniqueName: \"kubernetes.io/projected/5803be12-9bed-4ab3-86c3-79f403cd26a3-kube-api-access-n7sqf\") pod \"cilium-zqrb7\" (UID: \"5803be12-9bed-4ab3-86c3-79f403cd26a3\") " pod="kube-system/cilium-zqrb7" May 17 00:06:14.022036 kubelet[3462]: E0517 00:06:14.020669 3462 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found May 17 00:06:14.022337 kubelet[3462]: E0517 00:06:14.022276 3462 projected.go:194] Error preparing data for projected volume kube-api-access-4v7tj for pod kube-system/kube-proxy-v9frj: configmap "kube-root-ca.crt" not found May 17 00:06:14.022865 kubelet[3462]: E0517 00:06:14.022829 3462 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/414f2c12-6e3b-4b90-8367-ac69fb4666c3-kube-api-access-4v7tj podName:414f2c12-6e3b-4b90-8367-ac69fb4666c3 nodeName:}" failed. No retries permitted until 2025-05-17 00:06:14.522576385 +0000 UTC m=+6.265836994 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-4v7tj" (UniqueName: "kubernetes.io/projected/414f2c12-6e3b-4b90-8367-ac69fb4666c3-kube-api-access-4v7tj") pod "kube-proxy-v9frj" (UID: "414f2c12-6e3b-4b90-8367-ac69fb4666c3") : configmap "kube-root-ca.crt" not found May 17 00:06:14.494099 systemd[1]: Created slice kubepods-besteffort-pod24265796_28fc_4f94_b264_dc5634109fe8.slice - libcontainer container kubepods-besteffort-pod24265796_28fc_4f94_b264_dc5634109fe8.slice. May 17 00:06:14.606345 kubelet[3462]: I0517 00:06:14.606267 3462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/24265796-28fc-4f94-b264-dc5634109fe8-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-zxdvs\" (UID: \"24265796-28fc-4f94-b264-dc5634109fe8\") " pod="kube-system/cilium-operator-6c4d7847fc-zxdvs" May 17 00:06:14.606922 kubelet[3462]: I0517 00:06:14.606390 3462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkt6z\" (UniqueName: \"kubernetes.io/projected/24265796-28fc-4f94-b264-dc5634109fe8-kube-api-access-hkt6z\") pod \"cilium-operator-6c4d7847fc-zxdvs\" (UID: \"24265796-28fc-4f94-b264-dc5634109fe8\") " pod="kube-system/cilium-operator-6c4d7847fc-zxdvs" May 17 00:06:14.773133 containerd[2020]: time="2025-05-17T00:06:14.772764158Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-v9frj,Uid:414f2c12-6e3b-4b90-8367-ac69fb4666c3,Namespace:kube-system,Attempt:0,}" May 17 00:06:14.825707 containerd[2020]: time="2025-05-17T00:06:14.823493189Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:06:14.825707 containerd[2020]: time="2025-05-17T00:06:14.824061140Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:06:14.825707 containerd[2020]: time="2025-05-17T00:06:14.824096775Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:06:14.825707 containerd[2020]: time="2025-05-17T00:06:14.824251965Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:06:14.858390 systemd[1]: Started cri-containerd-cb0368132912045661d00cf2049e8420c72f57896c8e520095c79f4ab073ccde.scope - libcontainer container cb0368132912045661d00cf2049e8420c72f57896c8e520095c79f4ab073ccde. 
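The MountVolume.SetUp retry above is benign: every kube-api-access-* projected volume bundles the cluster root CA from the kube-root-ca.crt ConfigMap, and on a cluster that is still coming up that ConfigMap may not exist (or not yet be in the kubelet's cache), hence the 500ms backoff before kube-proxy's sandbox is eventually created. A minimal sketch for checking whether the ConfigMap has been published yet (illustration only, not part of the logged boot sequence; assumes the official Python "kubernetes" client and a reachable kubeconfig):

    # Illustration only: check whether the root-CA ConfigMap backing
    # kube-api-access-* projected volumes exists yet.
    from kubernetes import client, config
    from kubernetes.client.rest import ApiException

    config.load_kube_config()
    v1 = client.CoreV1Api()
    try:
        cm = v1.read_namespaced_config_map("kube-root-ca.crt", "kube-system")
        ca = (cm.data or {}).get("ca.crt", "")
        print(f"kube-root-ca.crt present ({len(ca)} bytes of CA data)")
    except ApiException as exc:
        if exc.status == 404:
            print("kube-root-ca.crt not published yet; kubelet keeps retrying the mount")
        else:
            raise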
May 17 00:06:14.902259 containerd[2020]: time="2025-05-17T00:06:14.902165457Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-v9frj,Uid:414f2c12-6e3b-4b90-8367-ac69fb4666c3,Namespace:kube-system,Attempt:0,} returns sandbox id \"cb0368132912045661d00cf2049e8420c72f57896c8e520095c79f4ab073ccde\"" May 17 00:06:14.913596 containerd[2020]: time="2025-05-17T00:06:14.913523703Z" level=info msg="CreateContainer within sandbox \"cb0368132912045661d00cf2049e8420c72f57896c8e520095c79f4ab073ccde\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 17 00:06:14.942718 containerd[2020]: time="2025-05-17T00:06:14.942640865Z" level=info msg="CreateContainer within sandbox \"cb0368132912045661d00cf2049e8420c72f57896c8e520095c79f4ab073ccde\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b9ba6d58ac5630ab24c1526d954258be5cacc76e9131f99a0f475f574f624ee0\"" May 17 00:06:14.944661 containerd[2020]: time="2025-05-17T00:06:14.943823905Z" level=info msg="StartContainer for \"b9ba6d58ac5630ab24c1526d954258be5cacc76e9131f99a0f475f574f624ee0\"" May 17 00:06:14.994393 systemd[1]: Started cri-containerd-b9ba6d58ac5630ab24c1526d954258be5cacc76e9131f99a0f475f574f624ee0.scope - libcontainer container b9ba6d58ac5630ab24c1526d954258be5cacc76e9131f99a0f475f574f624ee0. May 17 00:06:15.063745 containerd[2020]: time="2025-05-17T00:06:15.063408847Z" level=info msg="StartContainer for \"b9ba6d58ac5630ab24c1526d954258be5cacc76e9131f99a0f475f574f624ee0\" returns successfully" May 17 00:06:15.102365 kubelet[3462]: E0517 00:06:15.102295 3462 secret.go:189] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition May 17 00:06:15.102528 kubelet[3462]: E0517 00:06:15.102450 3462 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5803be12-9bed-4ab3-86c3-79f403cd26a3-clustermesh-secrets podName:5803be12-9bed-4ab3-86c3-79f403cd26a3 nodeName:}" failed. No retries permitted until 2025-05-17 00:06:15.60242046 +0000 UTC m=+7.345681069 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/5803be12-9bed-4ab3-86c3-79f403cd26a3-clustermesh-secrets") pod "cilium-zqrb7" (UID: "5803be12-9bed-4ab3-86c3-79f403cd26a3") : failed to sync secret cache: timed out waiting for the condition May 17 00:06:15.102528 kubelet[3462]: E0517 00:06:15.102297 3462 configmap.go:193] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition May 17 00:06:15.103151 kubelet[3462]: E0517 00:06:15.103070 3462 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5803be12-9bed-4ab3-86c3-79f403cd26a3-cilium-config-path podName:5803be12-9bed-4ab3-86c3-79f403cd26a3 nodeName:}" failed. No retries permitted until 2025-05-17 00:06:15.603035272 +0000 UTC m=+7.346295917 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/5803be12-9bed-4ab3-86c3-79f403cd26a3-cilium-config-path") pod "cilium-zqrb7" (UID: "5803be12-9bed-4ab3-86c3-79f403cd26a3") : failed to sync configmap cache: timed out waiting for the condition May 17 00:06:15.696956 containerd[2020]: time="2025-05-17T00:06:15.696865399Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zqrb7,Uid:5803be12-9bed-4ab3-86c3-79f403cd26a3,Namespace:kube-system,Attempt:0,}" May 17 00:06:15.702942 containerd[2020]: time="2025-05-17T00:06:15.702344702Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-zxdvs,Uid:24265796-28fc-4f94-b264-dc5634109fe8,Namespace:kube-system,Attempt:0,}" May 17 00:06:15.782363 containerd[2020]: time="2025-05-17T00:06:15.777975388Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:06:15.782363 containerd[2020]: time="2025-05-17T00:06:15.780341396Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:06:15.782363 containerd[2020]: time="2025-05-17T00:06:15.780386745Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:06:15.782363 containerd[2020]: time="2025-05-17T00:06:15.780582775Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:06:15.803094 containerd[2020]: time="2025-05-17T00:06:15.800806410Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:06:15.803094 containerd[2020]: time="2025-05-17T00:06:15.800942626Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:06:15.803094 containerd[2020]: time="2025-05-17T00:06:15.800982206Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:06:15.803094 containerd[2020]: time="2025-05-17T00:06:15.801207730Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:06:15.848185 systemd[1]: Started cri-containerd-a1e08f1c00f90e62ea5240fd6fc068f8d84a0c7669107486faafe716659e0cc3.scope - libcontainer container a1e08f1c00f90e62ea5240fd6fc068f8d84a0c7669107486faafe716659e0cc3. May 17 00:06:15.866310 systemd[1]: Started cri-containerd-316babfe37ce073d9afe1eec2a981419dc8af30bb0119f628f886227991dbf08.scope - libcontainer container 316babfe37ce073d9afe1eec2a981419dc8af30bb0119f628f886227991dbf08. 
May 17 00:06:15.908195 containerd[2020]: time="2025-05-17T00:06:15.907927631Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zqrb7,Uid:5803be12-9bed-4ab3-86c3-79f403cd26a3,Namespace:kube-system,Attempt:0,} returns sandbox id \"a1e08f1c00f90e62ea5240fd6fc068f8d84a0c7669107486faafe716659e0cc3\"" May 17 00:06:15.913449 containerd[2020]: time="2025-05-17T00:06:15.912454347Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 17 00:06:15.961688 containerd[2020]: time="2025-05-17T00:06:15.961540500Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-zxdvs,Uid:24265796-28fc-4f94-b264-dc5634109fe8,Namespace:kube-system,Attempt:0,} returns sandbox id \"316babfe37ce073d9afe1eec2a981419dc8af30bb0119f628f886227991dbf08\"" May 17 00:06:18.618281 kubelet[3462]: I0517 00:06:18.618171 3462 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-v9frj" podStartSLOduration=5.618147999 podStartE2EDuration="5.618147999s" podCreationTimestamp="2025-05-17 00:06:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:06:15.80956311 +0000 UTC m=+7.552823743" watchObservedRunningTime="2025-05-17 00:06:18.618147999 +0000 UTC m=+10.361408608" May 17 00:06:20.999266 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount622091358.mount: Deactivated successfully. May 17 00:06:23.517607 containerd[2020]: time="2025-05-17T00:06:23.516872978Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:06:23.520810 containerd[2020]: time="2025-05-17T00:06:23.520713049Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" May 17 00:06:23.523307 containerd[2020]: time="2025-05-17T00:06:23.523224485Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:06:23.527031 containerd[2020]: time="2025-05-17T00:06:23.526294901Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 7.613767103s" May 17 00:06:23.527031 containerd[2020]: time="2025-05-17T00:06:23.526357797Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" May 17 00:06:23.528724 containerd[2020]: time="2025-05-17T00:06:23.528653425Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 17 00:06:23.535581 containerd[2020]: time="2025-05-17T00:06:23.535511798Z" level=info msg="CreateContainer within sandbox \"a1e08f1c00f90e62ea5240fd6fc068f8d84a0c7669107486faafe716659e0cc3\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 17 00:06:23.561937 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1246557478.mount: Deactivated successfully. May 17 00:06:23.567369 containerd[2020]: time="2025-05-17T00:06:23.567247862Z" level=info msg="CreateContainer within sandbox \"a1e08f1c00f90e62ea5240fd6fc068f8d84a0c7669107486faafe716659e0cc3\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"44e3d14fc6908f623f2058fb8f23e8f07ef69966273e8249c1bc7db823698a04\"" May 17 00:06:23.568345 containerd[2020]: time="2025-05-17T00:06:23.568222135Z" level=info msg="StartContainer for \"44e3d14fc6908f623f2058fb8f23e8f07ef69966273e8249c1bc7db823698a04\"" May 17 00:06:23.633289 systemd[1]: Started cri-containerd-44e3d14fc6908f623f2058fb8f23e8f07ef69966273e8249c1bc7db823698a04.scope - libcontainer container 44e3d14fc6908f623f2058fb8f23e8f07ef69966273e8249c1bc7db823698a04. May 17 00:06:23.684750 containerd[2020]: time="2025-05-17T00:06:23.684391773Z" level=info msg="StartContainer for \"44e3d14fc6908f623f2058fb8f23e8f07ef69966273e8249c1bc7db823698a04\" returns successfully" May 17 00:06:23.701501 systemd[1]: cri-containerd-44e3d14fc6908f623f2058fb8f23e8f07ef69966273e8249c1bc7db823698a04.scope: Deactivated successfully. May 17 00:06:24.554256 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-44e3d14fc6908f623f2058fb8f23e8f07ef69966273e8249c1bc7db823698a04-rootfs.mount: Deactivated successfully. May 17 00:06:25.026423 containerd[2020]: time="2025-05-17T00:06:25.026311874Z" level=info msg="shim disconnected" id=44e3d14fc6908f623f2058fb8f23e8f07ef69966273e8249c1bc7db823698a04 namespace=k8s.io May 17 00:06:25.027166 containerd[2020]: time="2025-05-17T00:06:25.026414554Z" level=warning msg="cleaning up after shim disconnected" id=44e3d14fc6908f623f2058fb8f23e8f07ef69966273e8249c1bc7db823698a04 namespace=k8s.io May 17 00:06:25.027166 containerd[2020]: time="2025-05-17T00:06:25.026459640Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 17 00:06:25.589943 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3240624403.mount: Deactivated successfully. May 17 00:06:25.846399 containerd[2020]: time="2025-05-17T00:06:25.846030868Z" level=info msg="CreateContainer within sandbox \"a1e08f1c00f90e62ea5240fd6fc068f8d84a0c7669107486faafe716659e0cc3\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 17 00:06:25.890439 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2221586563.mount: Deactivated successfully. May 17 00:06:25.904865 containerd[2020]: time="2025-05-17T00:06:25.904806973Z" level=info msg="CreateContainer within sandbox \"a1e08f1c00f90e62ea5240fd6fc068f8d84a0c7669107486faafe716659e0cc3\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"876fffd523f614e5c00600b4bf4eff1af8dca621d745e87ff68176cda4d06102\"" May 17 00:06:25.906588 containerd[2020]: time="2025-05-17T00:06:25.906534853Z" level=info msg="StartContainer for \"876fffd523f614e5c00600b4bf4eff1af8dca621d745e87ff68176cda4d06102\"" May 17 00:06:25.983547 systemd[1]: Started cri-containerd-876fffd523f614e5c00600b4bf4eff1af8dca621d745e87ff68176cda4d06102.scope - libcontainer container 876fffd523f614e5c00600b4bf4eff1af8dca621d745e87ff68176cda4d06102. 
May 17 00:06:26.056340 containerd[2020]: time="2025-05-17T00:06:26.056273982Z" level=info msg="StartContainer for \"876fffd523f614e5c00600b4bf4eff1af8dca621d745e87ff68176cda4d06102\" returns successfully" May 17 00:06:26.078696 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 17 00:06:26.080569 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 17 00:06:26.080709 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... May 17 00:06:26.091449 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 17 00:06:26.092672 systemd[1]: cri-containerd-876fffd523f614e5c00600b4bf4eff1af8dca621d745e87ff68176cda4d06102.scope: Deactivated successfully. May 17 00:06:26.146112 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 17 00:06:26.230701 containerd[2020]: time="2025-05-17T00:06:26.230360836Z" level=info msg="shim disconnected" id=876fffd523f614e5c00600b4bf4eff1af8dca621d745e87ff68176cda4d06102 namespace=k8s.io May 17 00:06:26.230701 containerd[2020]: time="2025-05-17T00:06:26.230435822Z" level=warning msg="cleaning up after shim disconnected" id=876fffd523f614e5c00600b4bf4eff1af8dca621d745e87ff68176cda4d06102 namespace=k8s.io May 17 00:06:26.230701 containerd[2020]: time="2025-05-17T00:06:26.230455768Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 17 00:06:26.496051 containerd[2020]: time="2025-05-17T00:06:26.495958527Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:06:26.498269 containerd[2020]: time="2025-05-17T00:06:26.498210221Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" May 17 00:06:26.500792 containerd[2020]: time="2025-05-17T00:06:26.500686586Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:06:26.504162 containerd[2020]: time="2025-05-17T00:06:26.504073428Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 2.975347379s" May 17 00:06:26.504162 containerd[2020]: time="2025-05-17T00:06:26.504148174Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" May 17 00:06:26.513732 containerd[2020]: time="2025-05-17T00:06:26.513513965Z" level=info msg="CreateContainer within sandbox \"316babfe37ce073d9afe1eec2a981419dc8af30bb0119f628f886227991dbf08\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 17 00:06:26.544409 containerd[2020]: time="2025-05-17T00:06:26.544205472Z" level=info msg="CreateContainer within sandbox \"316babfe37ce073d9afe1eec2a981419dc8af30bb0119f628f886227991dbf08\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id 
\"cb11c888de97080f1c8851c5b1fa905cb6f2def74eaf772bf8946243479593f4\"" May 17 00:06:26.547451 containerd[2020]: time="2025-05-17T00:06:26.546348044Z" level=info msg="StartContainer for \"cb11c888de97080f1c8851c5b1fa905cb6f2def74eaf772bf8946243479593f4\"" May 17 00:06:26.577328 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-876fffd523f614e5c00600b4bf4eff1af8dca621d745e87ff68176cda4d06102-rootfs.mount: Deactivated successfully. May 17 00:06:26.606311 systemd[1]: Started cri-containerd-cb11c888de97080f1c8851c5b1fa905cb6f2def74eaf772bf8946243479593f4.scope - libcontainer container cb11c888de97080f1c8851c5b1fa905cb6f2def74eaf772bf8946243479593f4. May 17 00:06:26.652505 containerd[2020]: time="2025-05-17T00:06:26.652240935Z" level=info msg="StartContainer for \"cb11c888de97080f1c8851c5b1fa905cb6f2def74eaf772bf8946243479593f4\" returns successfully" May 17 00:06:26.891353 containerd[2020]: time="2025-05-17T00:06:26.891057056Z" level=info msg="CreateContainer within sandbox \"a1e08f1c00f90e62ea5240fd6fc068f8d84a0c7669107486faafe716659e0cc3\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 17 00:06:26.953839 containerd[2020]: time="2025-05-17T00:06:26.952909767Z" level=info msg="CreateContainer within sandbox \"a1e08f1c00f90e62ea5240fd6fc068f8d84a0c7669107486faafe716659e0cc3\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"e8e65e9114848acb9cb93e3806adb4ad51a864f74b9b5969778d316b527ad0f3\"" May 17 00:06:26.955852 containerd[2020]: time="2025-05-17T00:06:26.954300399Z" level=info msg="StartContainer for \"e8e65e9114848acb9cb93e3806adb4ad51a864f74b9b5969778d316b527ad0f3\"" May 17 00:06:27.038381 systemd[1]: Started cri-containerd-e8e65e9114848acb9cb93e3806adb4ad51a864f74b9b5969778d316b527ad0f3.scope - libcontainer container e8e65e9114848acb9cb93e3806adb4ad51a864f74b9b5969778d316b527ad0f3. May 17 00:06:27.122164 containerd[2020]: time="2025-05-17T00:06:27.121950252Z" level=info msg="StartContainer for \"e8e65e9114848acb9cb93e3806adb4ad51a864f74b9b5969778d316b527ad0f3\" returns successfully" May 17 00:06:27.143933 systemd[1]: cri-containerd-e8e65e9114848acb9cb93e3806adb4ad51a864f74b9b5969778d316b527ad0f3.scope: Deactivated successfully. 
May 17 00:06:27.216242 kubelet[3462]: I0517 00:06:27.214100 3462 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-zxdvs" podStartSLOduration=2.67298451 podStartE2EDuration="13.214079201s" podCreationTimestamp="2025-05-17 00:06:14 +0000 UTC" firstStartedPulling="2025-05-17 00:06:15.964792409 +0000 UTC m=+7.708053018" lastFinishedPulling="2025-05-17 00:06:26.505887112 +0000 UTC m=+18.249147709" observedRunningTime="2025-05-17 00:06:26.909754587 +0000 UTC m=+18.653015220" watchObservedRunningTime="2025-05-17 00:06:27.214079201 +0000 UTC m=+18.957339810" May 17 00:06:27.258401 containerd[2020]: time="2025-05-17T00:06:27.258046807Z" level=info msg="shim disconnected" id=e8e65e9114848acb9cb93e3806adb4ad51a864f74b9b5969778d316b527ad0f3 namespace=k8s.io May 17 00:06:27.258401 containerd[2020]: time="2025-05-17T00:06:27.258121841Z" level=warning msg="cleaning up after shim disconnected" id=e8e65e9114848acb9cb93e3806adb4ad51a864f74b9b5969778d316b527ad0f3 namespace=k8s.io May 17 00:06:27.258401 containerd[2020]: time="2025-05-17T00:06:27.258142087Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 17 00:06:27.577009 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e8e65e9114848acb9cb93e3806adb4ad51a864f74b9b5969778d316b527ad0f3-rootfs.mount: Deactivated successfully. May 17 00:06:27.890327 containerd[2020]: time="2025-05-17T00:06:27.890154069Z" level=info msg="CreateContainer within sandbox \"a1e08f1c00f90e62ea5240fd6fc068f8d84a0c7669107486faafe716659e0cc3\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 17 00:06:27.917228 containerd[2020]: time="2025-05-17T00:06:27.916751256Z" level=info msg="CreateContainer within sandbox \"a1e08f1c00f90e62ea5240fd6fc068f8d84a0c7669107486faafe716659e0cc3\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"94893197ab12f64acc98c6c817f6b981b9ae5bf254123d9b8f62b8ea992e6ee5\"" May 17 00:06:27.919490 containerd[2020]: time="2025-05-17T00:06:27.918273246Z" level=info msg="StartContainer for \"94893197ab12f64acc98c6c817f6b981b9ae5bf254123d9b8f62b8ea992e6ee5\"" May 17 00:06:27.998334 systemd[1]: Started cri-containerd-94893197ab12f64acc98c6c817f6b981b9ae5bf254123d9b8f62b8ea992e6ee5.scope - libcontainer container 94893197ab12f64acc98c6c817f6b981b9ae5bf254123d9b8f62b8ea992e6ee5. May 17 00:06:28.080269 containerd[2020]: time="2025-05-17T00:06:28.080202190Z" level=info msg="StartContainer for \"94893197ab12f64acc98c6c817f6b981b9ae5bf254123d9b8f62b8ea992e6ee5\" returns successfully" May 17 00:06:28.085595 systemd[1]: cri-containerd-94893197ab12f64acc98c6c817f6b981b9ae5bf254123d9b8f62b8ea992e6ee5.scope: Deactivated successfully. May 17 00:06:28.149714 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-94893197ab12f64acc98c6c817f6b981b9ae5bf254123d9b8f62b8ea992e6ee5-rootfs.mount: Deactivated successfully. 
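The cilium-operator startup entry above decomposes cleanly: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration additionally excludes the image-pull window (lastFinishedPulling minus firstStartedPulling), which is why the operator shows about 13.2s end to end but only about 2.67s against the startup SLO. Re-doing the arithmetic with the timestamps from that entry (expressed as seconds after 2025-05-17 00:06:00 UTC; agreement with the logged figures is to within float rounding):

    # Values copied from the pod_startup_latency_tracker entry above.
    created            = 14.000000000   # podCreationTimestamp
    first_started_pull = 15.964792409   # firstStartedPulling
    last_finished_pull = 26.505887112   # lastFinishedPulling
    watch_observed_run = 27.214079201   # watchObservedRunningTime

    e2e = watch_observed_run - created                     # 13.214079201 -> podStartE2EDuration
    slo = e2e - (last_finished_pull - first_started_pull)  # ~2.6729845   -> podStartSLOduration
    print(f"E2E={e2e:.9f}s  SLO={slo:.9f}s")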
May 17 00:06:28.159131 containerd[2020]: time="2025-05-17T00:06:28.159045385Z" level=info msg="shim disconnected" id=94893197ab12f64acc98c6c817f6b981b9ae5bf254123d9b8f62b8ea992e6ee5 namespace=k8s.io May 17 00:06:28.159131 containerd[2020]: time="2025-05-17T00:06:28.159118752Z" level=warning msg="cleaning up after shim disconnected" id=94893197ab12f64acc98c6c817f6b981b9ae5bf254123d9b8f62b8ea992e6ee5 namespace=k8s.io May 17 00:06:28.159806 containerd[2020]: time="2025-05-17T00:06:28.159140785Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 17 00:06:28.894918 containerd[2020]: time="2025-05-17T00:06:28.894828171Z" level=info msg="CreateContainer within sandbox \"a1e08f1c00f90e62ea5240fd6fc068f8d84a0c7669107486faafe716659e0cc3\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 17 00:06:28.934866 containerd[2020]: time="2025-05-17T00:06:28.934785775Z" level=info msg="CreateContainer within sandbox \"a1e08f1c00f90e62ea5240fd6fc068f8d84a0c7669107486faafe716659e0cc3\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"bd4515875c5f4d4a4113e139d1d9bf33770e927ae87876b64a8e3905b4352f57\"" May 17 00:06:28.936402 containerd[2020]: time="2025-05-17T00:06:28.936315070Z" level=info msg="StartContainer for \"bd4515875c5f4d4a4113e139d1d9bf33770e927ae87876b64a8e3905b4352f57\"" May 17 00:06:29.003808 systemd[1]: Started cri-containerd-bd4515875c5f4d4a4113e139d1d9bf33770e927ae87876b64a8e3905b4352f57.scope - libcontainer container bd4515875c5f4d4a4113e139d1d9bf33770e927ae87876b64a8e3905b4352f57. May 17 00:06:29.086532 containerd[2020]: time="2025-05-17T00:06:29.086470067Z" level=info msg="StartContainer for \"bd4515875c5f4d4a4113e139d1d9bf33770e927ae87876b64a8e3905b4352f57\" returns successfully" May 17 00:06:29.247608 kubelet[3462]: I0517 00:06:29.247553 3462 kubelet_node_status.go:501] "Fast updating node status as it just became ready" May 17 00:06:29.424286 systemd[1]: Created slice kubepods-burstable-pod6dd2bd78_9cc8_4f0d_aaa8_33aa05509f57.slice - libcontainer container kubepods-burstable-pod6dd2bd78_9cc8_4f0d_aaa8_33aa05509f57.slice. May 17 00:06:29.466403 systemd[1]: Created slice kubepods-burstable-pode2c21b6b_5916_4910_9594_91aa64aec124.slice - libcontainer container kubepods-burstable-pode2c21b6b_5916_4910_9594_91aa64aec124.slice. 
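With cilium-agent running, the kubelet reports "Fast updating node status as it just became ready" in the next entries, and the two CoreDNS pods, which typically stay Pending until a node goes Ready (they do not tolerate the node.kubernetes.io/not-ready taint), immediately get their kubepods slices created. A minimal way to inspect the resulting node conditions (illustration only; assumes the Python "kubernetes" client, a reachable kubeconfig, and the node name taken from this log):

    # Illustration only: dump the conditions of the node named in this log.
    from kubernetes import client, config

    config.load_kube_config()
    v1 = client.CoreV1Api()
    node = v1.read_node("ip-172-31-29-16")
    for cond in node.status.conditions:
        print(f"{cond.type:>20}: {cond.status}  ({cond.reason})")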
May 17 00:06:29.544784 kubelet[3462]: I0517 00:06:29.544435 3462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6dd2bd78-9cc8-4f0d-aaa8-33aa05509f57-config-volume\") pod \"coredns-674b8bbfcf-4blpj\" (UID: \"6dd2bd78-9cc8-4f0d-aaa8-33aa05509f57\") " pod="kube-system/coredns-674b8bbfcf-4blpj" May 17 00:06:29.544784 kubelet[3462]: I0517 00:06:29.544508 3462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrz7l\" (UniqueName: \"kubernetes.io/projected/e2c21b6b-5916-4910-9594-91aa64aec124-kube-api-access-vrz7l\") pod \"coredns-674b8bbfcf-kxs2t\" (UID: \"e2c21b6b-5916-4910-9594-91aa64aec124\") " pod="kube-system/coredns-674b8bbfcf-kxs2t" May 17 00:06:29.544784 kubelet[3462]: I0517 00:06:29.544555 3462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tlnr\" (UniqueName: \"kubernetes.io/projected/6dd2bd78-9cc8-4f0d-aaa8-33aa05509f57-kube-api-access-9tlnr\") pod \"coredns-674b8bbfcf-4blpj\" (UID: \"6dd2bd78-9cc8-4f0d-aaa8-33aa05509f57\") " pod="kube-system/coredns-674b8bbfcf-4blpj" May 17 00:06:29.544784 kubelet[3462]: I0517 00:06:29.544594 3462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e2c21b6b-5916-4910-9594-91aa64aec124-config-volume\") pod \"coredns-674b8bbfcf-kxs2t\" (UID: \"e2c21b6b-5916-4910-9594-91aa64aec124\") " pod="kube-system/coredns-674b8bbfcf-kxs2t" May 17 00:06:29.735213 containerd[2020]: time="2025-05-17T00:06:29.734573007Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-4blpj,Uid:6dd2bd78-9cc8-4f0d-aaa8-33aa05509f57,Namespace:kube-system,Attempt:0,}" May 17 00:06:29.778966 containerd[2020]: time="2025-05-17T00:06:29.777686028Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-kxs2t,Uid:e2c21b6b-5916-4910-9594-91aa64aec124,Namespace:kube-system,Attempt:0,}" May 17 00:06:29.956288 kubelet[3462]: I0517 00:06:29.955960 3462 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-zqrb7" podStartSLOduration=9.339683596 podStartE2EDuration="16.955936911s" podCreationTimestamp="2025-05-17 00:06:13 +0000 UTC" firstStartedPulling="2025-05-17 00:06:15.911920302 +0000 UTC m=+7.655180911" lastFinishedPulling="2025-05-17 00:06:23.528173521 +0000 UTC m=+15.271434226" observedRunningTime="2025-05-17 00:06:29.953831208 +0000 UTC m=+21.697091853" watchObservedRunningTime="2025-05-17 00:06:29.955936911 +0000 UTC m=+21.699197520" May 17 00:06:32.079822 (udev-worker)[4272]: Network interface NamePolicy= disabled on kernel command line. May 17 00:06:32.085015 systemd-networkd[1934]: cilium_host: Link UP May 17 00:06:32.086128 systemd-networkd[1934]: cilium_net: Link UP May 17 00:06:32.086691 systemd-networkd[1934]: cilium_net: Gained carrier May 17 00:06:32.087417 systemd-networkd[1934]: cilium_host: Gained carrier May 17 00:06:32.089768 (udev-worker)[4308]: Network interface NamePolicy= disabled on kernel command line. May 17 00:06:32.266713 (udev-worker)[4323]: Network interface NamePolicy= disabled on kernel command line. 
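Here and over the following entries systemd-networkd brings up Cilium's datapath devices: the cilium_host/cilium_net veth pair, the cilium_vxlan overlay device, lxc_health for the agent's health endpoint, and one lxc* veth per pod endpoint; ntpd then starts listening on each of them. A minimal sketch, run on the node itself, to list the interfaces and flag the Cilium devices seen here (illustration only, standard library only):

    # Illustration only: list interfaces on the node and flag the Cilium
    # datapath devices that appear in this part of the log.
    import os

    cilium_prefixes = ("cilium_host", "cilium_net", "cilium_vxlan", "lxc")
    for name in sorted(os.listdir("/sys/class/net")):
        tag = "  <- cilium datapath" if name.startswith(cilium_prefixes) else ""
        print(name + tag)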
May 17 00:06:32.282912 systemd-networkd[1934]: cilium_vxlan: Link UP May 17 00:06:32.282928 systemd-networkd[1934]: cilium_vxlan: Gained carrier May 17 00:06:32.329665 systemd-networkd[1934]: cilium_net: Gained IPv6LL May 17 00:06:32.826042 kernel: NET: Registered PF_ALG protocol family May 17 00:06:32.850851 systemd-networkd[1934]: cilium_host: Gained IPv6LL May 17 00:06:34.001366 systemd-networkd[1934]: cilium_vxlan: Gained IPv6LL May 17 00:06:34.258194 (udev-worker)[4321]: Network interface NamePolicy= disabled on kernel command line. May 17 00:06:34.264274 systemd-networkd[1934]: lxc_health: Link UP May 17 00:06:34.273608 systemd-networkd[1934]: lxc_health: Gained carrier May 17 00:06:34.813421 systemd-networkd[1934]: lxc28172d38cf88: Link UP May 17 00:06:34.824419 kernel: eth0: renamed from tmp956e4 May 17 00:06:34.828815 systemd-networkd[1934]: lxc28172d38cf88: Gained carrier May 17 00:06:34.857700 (udev-worker)[4319]: Network interface NamePolicy= disabled on kernel command line. May 17 00:06:34.876165 kernel: eth0: renamed from tmp95df0 May 17 00:06:34.880408 systemd-networkd[1934]: lxc706db29fd005: Link UP May 17 00:06:34.895661 systemd-networkd[1934]: lxc706db29fd005: Gained carrier May 17 00:06:35.346124 systemd-networkd[1934]: lxc_health: Gained IPv6LL May 17 00:06:36.753833 systemd-networkd[1934]: lxc706db29fd005: Gained IPv6LL May 17 00:06:36.817287 systemd-networkd[1934]: lxc28172d38cf88: Gained IPv6LL May 17 00:06:39.008929 ntpd[1990]: Listen normally on 8 cilium_host 192.168.0.143:123 May 17 00:06:39.009615 ntpd[1990]: 17 May 00:06:39 ntpd[1990]: Listen normally on 8 cilium_host 192.168.0.143:123 May 17 00:06:39.009615 ntpd[1990]: 17 May 00:06:39 ntpd[1990]: Listen normally on 9 cilium_net [fe80::7810:1fff:fe32:af20%4]:123 May 17 00:06:39.009615 ntpd[1990]: 17 May 00:06:39 ntpd[1990]: Listen normally on 10 cilium_host [fe80::3cec:adff:feb9:b4f6%5]:123 May 17 00:06:39.009615 ntpd[1990]: 17 May 00:06:39 ntpd[1990]: Listen normally on 11 cilium_vxlan [fe80::b455:6cff:fecb:cf35%6]:123 May 17 00:06:39.009615 ntpd[1990]: 17 May 00:06:39 ntpd[1990]: Listen normally on 12 lxc_health [fe80::40ad:66ff:fefa:6dbf%8]:123 May 17 00:06:39.009615 ntpd[1990]: 17 May 00:06:39 ntpd[1990]: Listen normally on 13 lxc28172d38cf88 [fe80::304b:ffff:feca:f7c0%10]:123 May 17 00:06:39.009615 ntpd[1990]: 17 May 00:06:39 ntpd[1990]: Listen normally on 14 lxc706db29fd005 [fe80::f8cf:79ff:fedc:a561%12]:123 May 17 00:06:39.009148 ntpd[1990]: Listen normally on 9 cilium_net [fe80::7810:1fff:fe32:af20%4]:123 May 17 00:06:39.009251 ntpd[1990]: Listen normally on 10 cilium_host [fe80::3cec:adff:feb9:b4f6%5]:123 May 17 00:06:39.009342 ntpd[1990]: Listen normally on 11 cilium_vxlan [fe80::b455:6cff:fecb:cf35%6]:123 May 17 00:06:39.009418 ntpd[1990]: Listen normally on 12 lxc_health [fe80::40ad:66ff:fefa:6dbf%8]:123 May 17 00:06:39.009492 ntpd[1990]: Listen normally on 13 lxc28172d38cf88 [fe80::304b:ffff:feca:f7c0%10]:123 May 17 00:06:39.009565 ntpd[1990]: Listen normally on 14 lxc706db29fd005 [fe80::f8cf:79ff:fedc:a561%12]:123 May 17 00:06:43.816618 kubelet[3462]: I0517 00:06:43.816459 3462 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 00:06:44.321213 containerd[2020]: time="2025-05-17T00:06:44.320697212Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:06:44.321213 containerd[2020]: time="2025-05-17T00:06:44.320811959Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:06:44.321213 containerd[2020]: time="2025-05-17T00:06:44.320850711Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:06:44.326271 containerd[2020]: time="2025-05-17T00:06:44.325168168Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:06:44.331774 containerd[2020]: time="2025-05-17T00:06:44.331481449Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:06:44.332609 containerd[2020]: time="2025-05-17T00:06:44.331788088Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:06:44.333415 containerd[2020]: time="2025-05-17T00:06:44.331898049Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:06:44.338143 containerd[2020]: time="2025-05-17T00:06:44.336278486Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:06:44.409611 systemd[1]: Started cri-containerd-95df0ab90ffaf2dcd5775e1da513dbc502129bd07a527ec0116776600176e96a.scope - libcontainer container 95df0ab90ffaf2dcd5775e1da513dbc502129bd07a527ec0116776600176e96a. May 17 00:06:44.422409 systemd[1]: Started cri-containerd-956e4eb98d3c03eda46afbe66c812afbef1cf149f0a5112efd5ab6e2c54ce353.scope - libcontainer container 956e4eb98d3c03eda46afbe66c812afbef1cf149f0a5112efd5ab6e2c54ce353. 
May 17 00:06:44.550014 containerd[2020]: time="2025-05-17T00:06:44.548289852Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-4blpj,Uid:6dd2bd78-9cc8-4f0d-aaa8-33aa05509f57,Namespace:kube-system,Attempt:0,} returns sandbox id \"956e4eb98d3c03eda46afbe66c812afbef1cf149f0a5112efd5ab6e2c54ce353\"" May 17 00:06:44.571807 containerd[2020]: time="2025-05-17T00:06:44.571647711Z" level=info msg="CreateContainer within sandbox \"956e4eb98d3c03eda46afbe66c812afbef1cf149f0a5112efd5ab6e2c54ce353\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 17 00:06:44.576779 containerd[2020]: time="2025-05-17T00:06:44.576718762Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-kxs2t,Uid:e2c21b6b-5916-4910-9594-91aa64aec124,Namespace:kube-system,Attempt:0,} returns sandbox id \"95df0ab90ffaf2dcd5775e1da513dbc502129bd07a527ec0116776600176e96a\"" May 17 00:06:44.593332 containerd[2020]: time="2025-05-17T00:06:44.593265865Z" level=info msg="CreateContainer within sandbox \"95df0ab90ffaf2dcd5775e1da513dbc502129bd07a527ec0116776600176e96a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 17 00:06:44.622696 containerd[2020]: time="2025-05-17T00:06:44.619391495Z" level=info msg="CreateContainer within sandbox \"956e4eb98d3c03eda46afbe66c812afbef1cf149f0a5112efd5ab6e2c54ce353\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6334ce4dab80b5c25e90b8ccd2e4bbc949a8db4b09071579d1d5e8ac19911663\"" May 17 00:06:44.623018 containerd[2020]: time="2025-05-17T00:06:44.622912238Z" level=info msg="StartContainer for \"6334ce4dab80b5c25e90b8ccd2e4bbc949a8db4b09071579d1d5e8ac19911663\"" May 17 00:06:44.644702 containerd[2020]: time="2025-05-17T00:06:44.644615693Z" level=info msg="CreateContainer within sandbox \"95df0ab90ffaf2dcd5775e1da513dbc502129bd07a527ec0116776600176e96a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"913611bcefc2347ef9f5b98f8b02bb5c41cf7969a45fccc978a7ec25e9b31e43\"" May 17 00:06:44.647440 containerd[2020]: time="2025-05-17T00:06:44.647364262Z" level=info msg="StartContainer for \"913611bcefc2347ef9f5b98f8b02bb5c41cf7969a45fccc978a7ec25e9b31e43\"" May 17 00:06:44.722457 systemd[1]: Started cri-containerd-6334ce4dab80b5c25e90b8ccd2e4bbc949a8db4b09071579d1d5e8ac19911663.scope - libcontainer container 6334ce4dab80b5c25e90b8ccd2e4bbc949a8db4b09071579d1d5e8ac19911663. May 17 00:06:44.750333 systemd[1]: Started cri-containerd-913611bcefc2347ef9f5b98f8b02bb5c41cf7969a45fccc978a7ec25e9b31e43.scope - libcontainer container 913611bcefc2347ef9f5b98f8b02bb5c41cf7969a45fccc978a7ec25e9b31e43. 
May 17 00:06:44.835576 containerd[2020]: time="2025-05-17T00:06:44.835410944Z" level=info msg="StartContainer for \"6334ce4dab80b5c25e90b8ccd2e4bbc949a8db4b09071579d1d5e8ac19911663\" returns successfully" May 17 00:06:44.855798 containerd[2020]: time="2025-05-17T00:06:44.855730735Z" level=info msg="StartContainer for \"913611bcefc2347ef9f5b98f8b02bb5c41cf7969a45fccc978a7ec25e9b31e43\" returns successfully" May 17 00:06:44.992698 kubelet[3462]: I0517 00:06:44.991854 3462 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-kxs2t" podStartSLOduration=30.991832447 podStartE2EDuration="30.991832447s" podCreationTimestamp="2025-05-17 00:06:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:06:44.989817959 +0000 UTC m=+36.733078603" watchObservedRunningTime="2025-05-17 00:06:44.991832447 +0000 UTC m=+36.735093056" May 17 00:06:45.613544 systemd[1]: Started sshd@9-172.31.29.16:22-139.178.89.65:49962.service - OpenSSH per-connection server daemon (139.178.89.65:49962). May 17 00:06:45.790622 sshd[4853]: Accepted publickey for core from 139.178.89.65 port 49962 ssh2: RSA SHA256:eHxhS8dryg5cUeSGvw1rEy5lVBwPN/UG03VmUnZDeRM May 17 00:06:45.793351 sshd[4853]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:06:45.801894 systemd-logind[1997]: New session 10 of user core. May 17 00:06:45.808296 systemd[1]: Started session-10.scope - Session 10 of User core. May 17 00:06:45.992023 kubelet[3462]: I0517 00:06:45.991338 3462 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-4blpj" podStartSLOduration=31.99131551 podStartE2EDuration="31.99131551s" podCreationTimestamp="2025-05-17 00:06:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:06:45.021161147 +0000 UTC m=+36.764421768" watchObservedRunningTime="2025-05-17 00:06:45.99131551 +0000 UTC m=+37.734576119" May 17 00:06:46.118348 sshd[4853]: pam_unix(sshd:session): session closed for user core May 17 00:06:46.124954 systemd[1]: sshd@9-172.31.29.16:22-139.178.89.65:49962.service: Deactivated successfully. May 17 00:06:46.130600 systemd[1]: session-10.scope: Deactivated successfully. May 17 00:06:46.133407 systemd-logind[1997]: Session 10 logged out. Waiting for processes to exit. May 17 00:06:46.135763 systemd-logind[1997]: Removed session 10. May 17 00:06:51.158503 systemd[1]: Started sshd@10-172.31.29.16:22-139.178.89.65:57104.service - OpenSSH per-connection server daemon (139.178.89.65:57104). May 17 00:06:51.333939 sshd[4875]: Accepted publickey for core from 139.178.89.65 port 57104 ssh2: RSA SHA256:eHxhS8dryg5cUeSGvw1rEy5lVBwPN/UG03VmUnZDeRM May 17 00:06:51.337215 sshd[4875]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:06:51.346439 systemd-logind[1997]: New session 11 of user core. May 17 00:06:51.355252 systemd[1]: Started session-11.scope - Session 11 of User core. May 17 00:06:51.596047 sshd[4875]: pam_unix(sshd:session): session closed for user core May 17 00:06:51.602374 systemd[1]: sshd@10-172.31.29.16:22-139.178.89.65:57104.service: Deactivated successfully. May 17 00:06:51.605612 systemd[1]: session-11.scope: Deactivated successfully. May 17 00:06:51.608052 systemd-logind[1997]: Session 11 logged out. Waiting for processes to exit. 
May 17 00:06:51.610491 systemd-logind[1997]: Removed session 11. May 17 00:06:56.639477 systemd[1]: Started sshd@11-172.31.29.16:22-139.178.89.65:36994.service - OpenSSH per-connection server daemon (139.178.89.65:36994). May 17 00:06:56.828820 sshd[4889]: Accepted publickey for core from 139.178.89.65 port 36994 ssh2: RSA SHA256:eHxhS8dryg5cUeSGvw1rEy5lVBwPN/UG03VmUnZDeRM May 17 00:06:56.831503 sshd[4889]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:06:56.839605 systemd-logind[1997]: New session 12 of user core. May 17 00:06:56.846248 systemd[1]: Started session-12.scope - Session 12 of User core. May 17 00:06:57.086489 sshd[4889]: pam_unix(sshd:session): session closed for user core May 17 00:06:57.093740 systemd[1]: sshd@11-172.31.29.16:22-139.178.89.65:36994.service: Deactivated successfully. May 17 00:06:57.098474 systemd[1]: session-12.scope: Deactivated successfully. May 17 00:06:57.102045 systemd-logind[1997]: Session 12 logged out. Waiting for processes to exit. May 17 00:06:57.103930 systemd-logind[1997]: Removed session 12. May 17 00:07:02.127551 systemd[1]: Started sshd@12-172.31.29.16:22-139.178.89.65:37008.service - OpenSSH per-connection server daemon (139.178.89.65:37008). May 17 00:07:02.304684 sshd[4903]: Accepted publickey for core from 139.178.89.65 port 37008 ssh2: RSA SHA256:eHxhS8dryg5cUeSGvw1rEy5lVBwPN/UG03VmUnZDeRM May 17 00:07:02.307364 sshd[4903]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:07:02.316029 systemd-logind[1997]: New session 13 of user core. May 17 00:07:02.321265 systemd[1]: Started session-13.scope - Session 13 of User core. May 17 00:07:02.576170 sshd[4903]: pam_unix(sshd:session): session closed for user core May 17 00:07:02.581959 systemd[1]: sshd@12-172.31.29.16:22-139.178.89.65:37008.service: Deactivated successfully. May 17 00:07:02.587124 systemd[1]: session-13.scope: Deactivated successfully. May 17 00:07:02.591752 systemd-logind[1997]: Session 13 logged out. Waiting for processes to exit. May 17 00:07:02.594017 systemd-logind[1997]: Removed session 13. May 17 00:07:07.615519 systemd[1]: Started sshd@13-172.31.29.16:22-139.178.89.65:55260.service - OpenSSH per-connection server daemon (139.178.89.65:55260). May 17 00:07:07.792916 sshd[4916]: Accepted publickey for core from 139.178.89.65 port 55260 ssh2: RSA SHA256:eHxhS8dryg5cUeSGvw1rEy5lVBwPN/UG03VmUnZDeRM May 17 00:07:07.796201 sshd[4916]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:07:07.805598 systemd-logind[1997]: New session 14 of user core. May 17 00:07:07.815271 systemd[1]: Started session-14.scope - Session 14 of User core. May 17 00:07:08.052638 sshd[4916]: pam_unix(sshd:session): session closed for user core May 17 00:07:08.059536 systemd[1]: sshd@13-172.31.29.16:22-139.178.89.65:55260.service: Deactivated successfully. May 17 00:07:08.064474 systemd[1]: session-14.scope: Deactivated successfully. May 17 00:07:08.066431 systemd-logind[1997]: Session 14 logged out. Waiting for processes to exit. May 17 00:07:08.068888 systemd-logind[1997]: Removed session 14. May 17 00:07:08.089554 systemd[1]: Started sshd@14-172.31.29.16:22-139.178.89.65:55262.service - OpenSSH per-connection server daemon (139.178.89.65:55262). 
May 17 00:07:08.271166 sshd[4930]: Accepted publickey for core from 139.178.89.65 port 55262 ssh2: RSA SHA256:eHxhS8dryg5cUeSGvw1rEy5lVBwPN/UG03VmUnZDeRM May 17 00:07:08.272792 sshd[4930]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:07:08.280429 systemd-logind[1997]: New session 15 of user core. May 17 00:07:08.290276 systemd[1]: Started session-15.scope - Session 15 of User core. May 17 00:07:08.596748 sshd[4930]: pam_unix(sshd:session): session closed for user core May 17 00:07:08.613468 systemd[1]: sshd@14-172.31.29.16:22-139.178.89.65:55262.service: Deactivated successfully. May 17 00:07:08.620333 systemd[1]: session-15.scope: Deactivated successfully. May 17 00:07:08.622772 systemd-logind[1997]: Session 15 logged out. Waiting for processes to exit. May 17 00:07:08.646562 systemd[1]: Started sshd@15-172.31.29.16:22-139.178.89.65:55276.service - OpenSSH per-connection server daemon (139.178.89.65:55276). May 17 00:07:08.649248 systemd-logind[1997]: Removed session 15. May 17 00:07:08.828260 sshd[4941]: Accepted publickey for core from 139.178.89.65 port 55276 ssh2: RSA SHA256:eHxhS8dryg5cUeSGvw1rEy5lVBwPN/UG03VmUnZDeRM May 17 00:07:08.830844 sshd[4941]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:07:08.839467 systemd-logind[1997]: New session 16 of user core. May 17 00:07:08.851263 systemd[1]: Started session-16.scope - Session 16 of User core. May 17 00:07:09.098572 sshd[4941]: pam_unix(sshd:session): session closed for user core May 17 00:07:09.104640 systemd-logind[1997]: Session 16 logged out. Waiting for processes to exit. May 17 00:07:09.105613 systemd[1]: sshd@15-172.31.29.16:22-139.178.89.65:55276.service: Deactivated successfully. May 17 00:07:09.110608 systemd[1]: session-16.scope: Deactivated successfully. May 17 00:07:09.115783 systemd-logind[1997]: Removed session 16. May 17 00:07:14.136642 systemd[1]: Started sshd@16-172.31.29.16:22-139.178.89.65:55284.service - OpenSSH per-connection server daemon (139.178.89.65:55284). May 17 00:07:14.308435 sshd[4956]: Accepted publickey for core from 139.178.89.65 port 55284 ssh2: RSA SHA256:eHxhS8dryg5cUeSGvw1rEy5lVBwPN/UG03VmUnZDeRM May 17 00:07:14.311231 sshd[4956]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:07:14.319394 systemd-logind[1997]: New session 17 of user core. May 17 00:07:14.324709 systemd[1]: Started session-17.scope - Session 17 of User core. May 17 00:07:14.564898 sshd[4956]: pam_unix(sshd:session): session closed for user core May 17 00:07:14.571717 systemd[1]: sshd@16-172.31.29.16:22-139.178.89.65:55284.service: Deactivated successfully. May 17 00:07:14.575812 systemd[1]: session-17.scope: Deactivated successfully. May 17 00:07:14.578249 systemd-logind[1997]: Session 17 logged out. Waiting for processes to exit. May 17 00:07:14.580275 systemd-logind[1997]: Removed session 17. May 17 00:07:19.615439 systemd[1]: Started sshd@17-172.31.29.16:22-139.178.89.65:42810.service - OpenSSH per-connection server daemon (139.178.89.65:42810). May 17 00:07:19.784282 sshd[4972]: Accepted publickey for core from 139.178.89.65 port 42810 ssh2: RSA SHA256:eHxhS8dryg5cUeSGvw1rEy5lVBwPN/UG03VmUnZDeRM May 17 00:07:19.786911 sshd[4972]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:07:19.794413 systemd-logind[1997]: New session 18 of user core. May 17 00:07:19.800257 systemd[1]: Started session-18.scope - Session 18 of User core. 
May 17 00:07:20.046445 sshd[4972]: pam_unix(sshd:session): session closed for user core May 17 00:07:20.052781 systemd-logind[1997]: Session 18 logged out. Waiting for processes to exit. May 17 00:07:20.054390 systemd[1]: sshd@17-172.31.29.16:22-139.178.89.65:42810.service: Deactivated successfully. May 17 00:07:20.060406 systemd[1]: session-18.scope: Deactivated successfully. May 17 00:07:20.062739 systemd-logind[1997]: Removed session 18. May 17 00:07:25.084540 systemd[1]: Started sshd@18-172.31.29.16:22-139.178.89.65:42826.service - OpenSSH per-connection server daemon (139.178.89.65:42826). May 17 00:07:25.257933 sshd[4985]: Accepted publickey for core from 139.178.89.65 port 42826 ssh2: RSA SHA256:eHxhS8dryg5cUeSGvw1rEy5lVBwPN/UG03VmUnZDeRM May 17 00:07:25.260592 sshd[4985]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:07:25.269342 systemd-logind[1997]: New session 19 of user core. May 17 00:07:25.276265 systemd[1]: Started session-19.scope - Session 19 of User core. May 17 00:07:25.509257 sshd[4985]: pam_unix(sshd:session): session closed for user core May 17 00:07:25.515627 systemd-logind[1997]: Session 19 logged out. Waiting for processes to exit. May 17 00:07:25.516964 systemd[1]: sshd@18-172.31.29.16:22-139.178.89.65:42826.service: Deactivated successfully. May 17 00:07:25.521239 systemd[1]: session-19.scope: Deactivated successfully. May 17 00:07:25.524198 systemd-logind[1997]: Removed session 19. May 17 00:07:25.552476 systemd[1]: Started sshd@19-172.31.29.16:22-139.178.89.65:42830.service - OpenSSH per-connection server daemon (139.178.89.65:42830). May 17 00:07:25.717870 sshd[4998]: Accepted publickey for core from 139.178.89.65 port 42830 ssh2: RSA SHA256:eHxhS8dryg5cUeSGvw1rEy5lVBwPN/UG03VmUnZDeRM May 17 00:07:25.720527 sshd[4998]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:07:25.730062 systemd-logind[1997]: New session 20 of user core. May 17 00:07:25.741269 systemd[1]: Started session-20.scope - Session 20 of User core. May 17 00:07:26.033123 sshd[4998]: pam_unix(sshd:session): session closed for user core May 17 00:07:26.039782 systemd[1]: sshd@19-172.31.29.16:22-139.178.89.65:42830.service: Deactivated successfully. May 17 00:07:26.044469 systemd[1]: session-20.scope: Deactivated successfully. May 17 00:07:26.047296 systemd-logind[1997]: Session 20 logged out. Waiting for processes to exit. May 17 00:07:26.049411 systemd-logind[1997]: Removed session 20. May 17 00:07:26.070573 systemd[1]: Started sshd@20-172.31.29.16:22-139.178.89.65:42834.service - OpenSSH per-connection server daemon (139.178.89.65:42834). May 17 00:07:26.242148 sshd[5008]: Accepted publickey for core from 139.178.89.65 port 42834 ssh2: RSA SHA256:eHxhS8dryg5cUeSGvw1rEy5lVBwPN/UG03VmUnZDeRM May 17 00:07:26.244705 sshd[5008]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:07:26.253429 systemd-logind[1997]: New session 21 of user core. May 17 00:07:26.260240 systemd[1]: Started session-21.scope - Session 21 of User core. May 17 00:07:27.615337 sshd[5008]: pam_unix(sshd:session): session closed for user core May 17 00:07:27.627306 systemd[1]: sshd@20-172.31.29.16:22-139.178.89.65:42834.service: Deactivated successfully. May 17 00:07:27.635079 systemd[1]: session-21.scope: Deactivated successfully. May 17 00:07:27.637177 systemd-logind[1997]: Session 21 logged out. Waiting for processes to exit. 
May 17 00:07:27.661193 systemd[1]: Started sshd@21-172.31.29.16:22-139.178.89.65:45260.service - OpenSSH per-connection server daemon (139.178.89.65:45260). May 17 00:07:27.663719 systemd-logind[1997]: Removed session 21. May 17 00:07:27.840554 sshd[5025]: Accepted publickey for core from 139.178.89.65 port 45260 ssh2: RSA SHA256:eHxhS8dryg5cUeSGvw1rEy5lVBwPN/UG03VmUnZDeRM May 17 00:07:27.843243 sshd[5025]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:07:27.851327 systemd-logind[1997]: New session 22 of user core. May 17 00:07:27.860230 systemd[1]: Started session-22.scope - Session 22 of User core. May 17 00:07:28.358460 sshd[5025]: pam_unix(sshd:session): session closed for user core May 17 00:07:28.365478 systemd[1]: sshd@21-172.31.29.16:22-139.178.89.65:45260.service: Deactivated successfully. May 17 00:07:28.365786 systemd-logind[1997]: Session 22 logged out. Waiting for processes to exit. May 17 00:07:28.374578 systemd[1]: session-22.scope: Deactivated successfully. May 17 00:07:28.381632 systemd-logind[1997]: Removed session 22. May 17 00:07:28.398527 systemd[1]: Started sshd@22-172.31.29.16:22-139.178.89.65:45266.service - OpenSSH per-connection server daemon (139.178.89.65:45266). May 17 00:07:28.583190 sshd[5036]: Accepted publickey for core from 139.178.89.65 port 45266 ssh2: RSA SHA256:eHxhS8dryg5cUeSGvw1rEy5lVBwPN/UG03VmUnZDeRM May 17 00:07:28.585965 sshd[5036]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:07:28.593618 systemd-logind[1997]: New session 23 of user core. May 17 00:07:28.602323 systemd[1]: Started session-23.scope - Session 23 of User core. May 17 00:07:28.841609 sshd[5036]: pam_unix(sshd:session): session closed for user core May 17 00:07:28.846419 systemd-logind[1997]: Session 23 logged out. Waiting for processes to exit. May 17 00:07:28.847685 systemd[1]: sshd@22-172.31.29.16:22-139.178.89.65:45266.service: Deactivated successfully. May 17 00:07:28.851743 systemd[1]: session-23.scope: Deactivated successfully. May 17 00:07:28.857708 systemd-logind[1997]: Removed session 23. May 17 00:07:33.877496 systemd[1]: Started sshd@23-172.31.29.16:22-139.178.89.65:45268.service - OpenSSH per-connection server daemon (139.178.89.65:45268). May 17 00:07:34.054369 sshd[5049]: Accepted publickey for core from 139.178.89.65 port 45268 ssh2: RSA SHA256:eHxhS8dryg5cUeSGvw1rEy5lVBwPN/UG03VmUnZDeRM May 17 00:07:34.057481 sshd[5049]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:07:34.068280 systemd-logind[1997]: New session 24 of user core. May 17 00:07:34.075291 systemd[1]: Started session-24.scope - Session 24 of User core. May 17 00:07:34.305350 sshd[5049]: pam_unix(sshd:session): session closed for user core May 17 00:07:34.311422 systemd[1]: sshd@23-172.31.29.16:22-139.178.89.65:45268.service: Deactivated successfully. May 17 00:07:34.315595 systemd[1]: session-24.scope: Deactivated successfully. May 17 00:07:34.317233 systemd-logind[1997]: Session 24 logged out. Waiting for processes to exit. May 17 00:07:34.318934 systemd-logind[1997]: Removed session 24. May 17 00:07:39.348530 systemd[1]: Started sshd@24-172.31.29.16:22-139.178.89.65:59244.service - OpenSSH per-connection server daemon (139.178.89.65:59244). 
May 17 00:07:39.552125 sshd[5064]: Accepted publickey for core from 139.178.89.65 port 59244 ssh2: RSA SHA256:eHxhS8dryg5cUeSGvw1rEy5lVBwPN/UG03VmUnZDeRM May 17 00:07:39.555117 sshd[5064]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:07:39.566309 systemd-logind[1997]: New session 25 of user core. May 17 00:07:39.571282 systemd[1]: Started session-25.scope - Session 25 of User core. May 17 00:07:39.814595 sshd[5064]: pam_unix(sshd:session): session closed for user core May 17 00:07:39.820606 systemd[1]: sshd@24-172.31.29.16:22-139.178.89.65:59244.service: Deactivated successfully. May 17 00:07:39.825198 systemd[1]: session-25.scope: Deactivated successfully. May 17 00:07:39.828765 systemd-logind[1997]: Session 25 logged out. Waiting for processes to exit. May 17 00:07:39.831427 systemd-logind[1997]: Removed session 25. May 17 00:07:44.861060 systemd[1]: Started sshd@25-172.31.29.16:22-139.178.89.65:59254.service - OpenSSH per-connection server daemon (139.178.89.65:59254). May 17 00:07:45.029701 sshd[5077]: Accepted publickey for core from 139.178.89.65 port 59254 ssh2: RSA SHA256:eHxhS8dryg5cUeSGvw1rEy5lVBwPN/UG03VmUnZDeRM May 17 00:07:45.032451 sshd[5077]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:07:45.041082 systemd-logind[1997]: New session 26 of user core. May 17 00:07:45.049274 systemd[1]: Started session-26.scope - Session 26 of User core. May 17 00:07:45.286351 sshd[5077]: pam_unix(sshd:session): session closed for user core May 17 00:07:45.291401 systemd[1]: sshd@25-172.31.29.16:22-139.178.89.65:59254.service: Deactivated successfully. May 17 00:07:45.295586 systemd[1]: session-26.scope: Deactivated successfully. May 17 00:07:45.299570 systemd-logind[1997]: Session 26 logged out. Waiting for processes to exit. May 17 00:07:45.302617 systemd-logind[1997]: Removed session 26. May 17 00:07:45.322482 systemd[1]: Started sshd@26-172.31.29.16:22-139.178.89.65:59262.service - OpenSSH per-connection server daemon (139.178.89.65:59262). May 17 00:07:45.513686 sshd[5091]: Accepted publickey for core from 139.178.89.65 port 59262 ssh2: RSA SHA256:eHxhS8dryg5cUeSGvw1rEy5lVBwPN/UG03VmUnZDeRM May 17 00:07:45.516335 sshd[5091]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:07:45.524833 systemd-logind[1997]: New session 27 of user core. May 17 00:07:45.537278 systemd[1]: Started session-27.scope - Session 27 of User core. May 17 00:07:47.951015 containerd[2020]: time="2025-05-17T00:07:47.950905326Z" level=info msg="StopContainer for \"cb11c888de97080f1c8851c5b1fa905cb6f2def74eaf772bf8946243479593f4\" with timeout 30 (s)" May 17 00:07:47.952693 containerd[2020]: time="2025-05-17T00:07:47.952543430Z" level=info msg="Stop container \"cb11c888de97080f1c8851c5b1fa905cb6f2def74eaf772bf8946243479593f4\" with signal terminated" May 17 00:07:47.975219 systemd[1]: run-containerd-runc-k8s.io-bd4515875c5f4d4a4113e139d1d9bf33770e927ae87876b64a8e3905b4352f57-runc.SwusFW.mount: Deactivated successfully. May 17 00:07:47.987925 systemd[1]: cri-containerd-cb11c888de97080f1c8851c5b1fa905cb6f2def74eaf772bf8946243479593f4.scope: Deactivated successfully. 
May 17 00:07:48.003354 containerd[2020]: time="2025-05-17T00:07:48.003139364Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 17 00:07:48.018482 containerd[2020]: time="2025-05-17T00:07:48.018180032Z" level=info msg="StopContainer for \"bd4515875c5f4d4a4113e139d1d9bf33770e927ae87876b64a8e3905b4352f57\" with timeout 2 (s)" May 17 00:07:48.020163 containerd[2020]: time="2025-05-17T00:07:48.020041584Z" level=info msg="Stop container \"bd4515875c5f4d4a4113e139d1d9bf33770e927ae87876b64a8e3905b4352f57\" with signal terminated" May 17 00:07:48.031771 systemd-networkd[1934]: lxc_health: Link DOWN May 17 00:07:48.031784 systemd-networkd[1934]: lxc_health: Lost carrier May 17 00:07:48.071172 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cb11c888de97080f1c8851c5b1fa905cb6f2def74eaf772bf8946243479593f4-rootfs.mount: Deactivated successfully. May 17 00:07:48.074147 systemd[1]: cri-containerd-bd4515875c5f4d4a4113e139d1d9bf33770e927ae87876b64a8e3905b4352f57.scope: Deactivated successfully. May 17 00:07:48.075649 systemd[1]: cri-containerd-bd4515875c5f4d4a4113e139d1d9bf33770e927ae87876b64a8e3905b4352f57.scope: Consumed 15.743s CPU time. May 17 00:07:48.091412 containerd[2020]: time="2025-05-17T00:07:48.091320499Z" level=info msg="shim disconnected" id=cb11c888de97080f1c8851c5b1fa905cb6f2def74eaf772bf8946243479593f4 namespace=k8s.io May 17 00:07:48.092531 containerd[2020]: time="2025-05-17T00:07:48.092216007Z" level=warning msg="cleaning up after shim disconnected" id=cb11c888de97080f1c8851c5b1fa905cb6f2def74eaf772bf8946243479593f4 namespace=k8s.io May 17 00:07:48.092531 containerd[2020]: time="2025-05-17T00:07:48.092396817Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 17 00:07:48.120863 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bd4515875c5f4d4a4113e139d1d9bf33770e927ae87876b64a8e3905b4352f57-rootfs.mount: Deactivated successfully. May 17 00:07:48.130812 containerd[2020]: time="2025-05-17T00:07:48.130653059Z" level=info msg="shim disconnected" id=bd4515875c5f4d4a4113e139d1d9bf33770e927ae87876b64a8e3905b4352f57 namespace=k8s.io May 17 00:07:48.130812 containerd[2020]: time="2025-05-17T00:07:48.130749011Z" level=warning msg="cleaning up after shim disconnected" id=bd4515875c5f4d4a4113e139d1d9bf33770e927ae87876b64a8e3905b4352f57 namespace=k8s.io May 17 00:07:48.130812 containerd[2020]: time="2025-05-17T00:07:48.130770277Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 17 00:07:48.134238 containerd[2020]: time="2025-05-17T00:07:48.133766702Z" level=info msg="StopContainer for \"cb11c888de97080f1c8851c5b1fa905cb6f2def74eaf772bf8946243479593f4\" returns successfully" May 17 00:07:48.134701 containerd[2020]: time="2025-05-17T00:07:48.134649976Z" level=info msg="StopPodSandbox for \"316babfe37ce073d9afe1eec2a981419dc8af30bb0119f628f886227991dbf08\"" May 17 00:07:48.134815 containerd[2020]: time="2025-05-17T00:07:48.134725022Z" level=info msg="Container to stop \"cb11c888de97080f1c8851c5b1fa905cb6f2def74eaf772bf8946243479593f4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 17 00:07:48.138491 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-316babfe37ce073d9afe1eec2a981419dc8af30bb0119f628f886227991dbf08-shm.mount: Deactivated successfully. 
May 17 00:07:48.154281 systemd[1]: cri-containerd-316babfe37ce073d9afe1eec2a981419dc8af30bb0119f628f886227991dbf08.scope: Deactivated successfully. May 17 00:07:48.172268 containerd[2020]: time="2025-05-17T00:07:48.172154974Z" level=info msg="StopContainer for \"bd4515875c5f4d4a4113e139d1d9bf33770e927ae87876b64a8e3905b4352f57\" returns successfully" May 17 00:07:48.173735 containerd[2020]: time="2025-05-17T00:07:48.173374824Z" level=info msg="StopPodSandbox for \"a1e08f1c00f90e62ea5240fd6fc068f8d84a0c7669107486faafe716659e0cc3\"" May 17 00:07:48.173735 containerd[2020]: time="2025-05-17T00:07:48.173438248Z" level=info msg="Container to stop \"44e3d14fc6908f623f2058fb8f23e8f07ef69966273e8249c1bc7db823698a04\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 17 00:07:48.173735 containerd[2020]: time="2025-05-17T00:07:48.173465319Z" level=info msg="Container to stop \"876fffd523f614e5c00600b4bf4eff1af8dca621d745e87ff68176cda4d06102\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 17 00:07:48.173735 containerd[2020]: time="2025-05-17T00:07:48.173490686Z" level=info msg="Container to stop \"e8e65e9114848acb9cb93e3806adb4ad51a864f74b9b5969778d316b527ad0f3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 17 00:07:48.173735 containerd[2020]: time="2025-05-17T00:07:48.173513966Z" level=info msg="Container to stop \"94893197ab12f64acc98c6c817f6b981b9ae5bf254123d9b8f62b8ea992e6ee5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 17 00:07:48.173735 containerd[2020]: time="2025-05-17T00:07:48.173538806Z" level=info msg="Container to stop \"bd4515875c5f4d4a4113e139d1d9bf33770e927ae87876b64a8e3905b4352f57\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 17 00:07:48.187472 systemd[1]: cri-containerd-a1e08f1c00f90e62ea5240fd6fc068f8d84a0c7669107486faafe716659e0cc3.scope: Deactivated successfully. 
May 17 00:07:48.218252 containerd[2020]: time="2025-05-17T00:07:48.217820150Z" level=info msg="shim disconnected" id=316babfe37ce073d9afe1eec2a981419dc8af30bb0119f628f886227991dbf08 namespace=k8s.io May 17 00:07:48.218252 containerd[2020]: time="2025-05-17T00:07:48.218155035Z" level=warning msg="cleaning up after shim disconnected" id=316babfe37ce073d9afe1eec2a981419dc8af30bb0119f628f886227991dbf08 namespace=k8s.io May 17 00:07:48.219561 containerd[2020]: time="2025-05-17T00:07:48.218181457Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 17 00:07:48.247708 containerd[2020]: time="2025-05-17T00:07:48.247456857Z" level=info msg="shim disconnected" id=a1e08f1c00f90e62ea5240fd6fc068f8d84a0c7669107486faafe716659e0cc3 namespace=k8s.io May 17 00:07:48.247708 containerd[2020]: time="2025-05-17T00:07:48.247553096Z" level=warning msg="cleaning up after shim disconnected" id=a1e08f1c00f90e62ea5240fd6fc068f8d84a0c7669107486faafe716659e0cc3 namespace=k8s.io May 17 00:07:48.247708 containerd[2020]: time="2025-05-17T00:07:48.247574853Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 17 00:07:48.252721 containerd[2020]: time="2025-05-17T00:07:48.252469197Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:07:48Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io May 17 00:07:48.254839 containerd[2020]: time="2025-05-17T00:07:48.254579122Z" level=info msg="TearDown network for sandbox \"316babfe37ce073d9afe1eec2a981419dc8af30bb0119f628f886227991dbf08\" successfully" May 17 00:07:48.254839 containerd[2020]: time="2025-05-17T00:07:48.254636657Z" level=info msg="StopPodSandbox for \"316babfe37ce073d9afe1eec2a981419dc8af30bb0119f628f886227991dbf08\" returns successfully" May 17 00:07:48.290107 containerd[2020]: time="2025-05-17T00:07:48.289858477Z" level=info msg="TearDown network for sandbox \"a1e08f1c00f90e62ea5240fd6fc068f8d84a0c7669107486faafe716659e0cc3\" successfully" May 17 00:07:48.290107 containerd[2020]: time="2025-05-17T00:07:48.290033901Z" level=info msg="StopPodSandbox for \"a1e08f1c00f90e62ea5240fd6fc068f8d84a0c7669107486faafe716659e0cc3\" returns successfully" May 17 00:07:48.359729 kubelet[3462]: I0517 00:07:48.359151 3462 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/24265796-28fc-4f94-b264-dc5634109fe8-cilium-config-path\") pod \"24265796-28fc-4f94-b264-dc5634109fe8\" (UID: \"24265796-28fc-4f94-b264-dc5634109fe8\") " May 17 00:07:48.359729 kubelet[3462]: I0517 00:07:48.359228 3462 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hkt6z\" (UniqueName: \"kubernetes.io/projected/24265796-28fc-4f94-b264-dc5634109fe8-kube-api-access-hkt6z\") pod \"24265796-28fc-4f94-b264-dc5634109fe8\" (UID: \"24265796-28fc-4f94-b264-dc5634109fe8\") " May 17 00:07:48.364436 kubelet[3462]: I0517 00:07:48.364339 3462 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24265796-28fc-4f94-b264-dc5634109fe8-kube-api-access-hkt6z" (OuterVolumeSpecName: "kube-api-access-hkt6z") pod "24265796-28fc-4f94-b264-dc5634109fe8" (UID: "24265796-28fc-4f94-b264-dc5634109fe8"). InnerVolumeSpecName "kube-api-access-hkt6z". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" May 17 00:07:48.366253 kubelet[3462]: I0517 00:07:48.366194 3462 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24265796-28fc-4f94-b264-dc5634109fe8-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "24265796-28fc-4f94-b264-dc5634109fe8" (UID: "24265796-28fc-4f94-b264-dc5634109fe8"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 17 00:07:48.461046 kubelet[3462]: I0517 00:07:48.460072 3462 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5803be12-9bed-4ab3-86c3-79f403cd26a3-xtables-lock\") pod \"5803be12-9bed-4ab3-86c3-79f403cd26a3\" (UID: \"5803be12-9bed-4ab3-86c3-79f403cd26a3\") " May 17 00:07:48.461046 kubelet[3462]: I0517 00:07:48.460140 3462 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5803be12-9bed-4ab3-86c3-79f403cd26a3-lib-modules\") pod \"5803be12-9bed-4ab3-86c3-79f403cd26a3\" (UID: \"5803be12-9bed-4ab3-86c3-79f403cd26a3\") " May 17 00:07:48.461046 kubelet[3462]: I0517 00:07:48.460173 3462 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5803be12-9bed-4ab3-86c3-79f403cd26a3-cilium-cgroup\") pod \"5803be12-9bed-4ab3-86c3-79f403cd26a3\" (UID: \"5803be12-9bed-4ab3-86c3-79f403cd26a3\") " May 17 00:07:48.461046 kubelet[3462]: I0517 00:07:48.460215 3462 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5803be12-9bed-4ab3-86c3-79f403cd26a3-clustermesh-secrets\") pod \"5803be12-9bed-4ab3-86c3-79f403cd26a3\" (UID: \"5803be12-9bed-4ab3-86c3-79f403cd26a3\") " May 17 00:07:48.461046 kubelet[3462]: I0517 00:07:48.460250 3462 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5803be12-9bed-4ab3-86c3-79f403cd26a3-cni-path\") pod \"5803be12-9bed-4ab3-86c3-79f403cd26a3\" (UID: \"5803be12-9bed-4ab3-86c3-79f403cd26a3\") " May 17 00:07:48.461046 kubelet[3462]: I0517 00:07:48.460295 3462 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5803be12-9bed-4ab3-86c3-79f403cd26a3-cilium-config-path\") pod \"5803be12-9bed-4ab3-86c3-79f403cd26a3\" (UID: \"5803be12-9bed-4ab3-86c3-79f403cd26a3\") " May 17 00:07:48.461498 kubelet[3462]: I0517 00:07:48.460327 3462 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5803be12-9bed-4ab3-86c3-79f403cd26a3-host-proc-sys-net\") pod \"5803be12-9bed-4ab3-86c3-79f403cd26a3\" (UID: \"5803be12-9bed-4ab3-86c3-79f403cd26a3\") " May 17 00:07:48.461498 kubelet[3462]: I0517 00:07:48.460364 3462 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5803be12-9bed-4ab3-86c3-79f403cd26a3-host-proc-sys-kernel\") pod \"5803be12-9bed-4ab3-86c3-79f403cd26a3\" (UID: \"5803be12-9bed-4ab3-86c3-79f403cd26a3\") " May 17 00:07:48.461498 kubelet[3462]: I0517 00:07:48.460400 3462 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/5803be12-9bed-4ab3-86c3-79f403cd26a3-cilium-run\") pod \"5803be12-9bed-4ab3-86c3-79f403cd26a3\" (UID: \"5803be12-9bed-4ab3-86c3-79f403cd26a3\") " May 17 00:07:48.461498 kubelet[3462]: I0517 00:07:48.460463 3462 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5803be12-9bed-4ab3-86c3-79f403cd26a3-hostproc\") pod \"5803be12-9bed-4ab3-86c3-79f403cd26a3\" (UID: \"5803be12-9bed-4ab3-86c3-79f403cd26a3\") " May 17 00:07:48.461498 kubelet[3462]: I0517 00:07:48.460500 3462 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n7sqf\" (UniqueName: \"kubernetes.io/projected/5803be12-9bed-4ab3-86c3-79f403cd26a3-kube-api-access-n7sqf\") pod \"5803be12-9bed-4ab3-86c3-79f403cd26a3\" (UID: \"5803be12-9bed-4ab3-86c3-79f403cd26a3\") " May 17 00:07:48.461498 kubelet[3462]: I0517 00:07:48.460539 3462 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5803be12-9bed-4ab3-86c3-79f403cd26a3-bpf-maps\") pod \"5803be12-9bed-4ab3-86c3-79f403cd26a3\" (UID: \"5803be12-9bed-4ab3-86c3-79f403cd26a3\") " May 17 00:07:48.461812 kubelet[3462]: I0517 00:07:48.460571 3462 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5803be12-9bed-4ab3-86c3-79f403cd26a3-etc-cni-netd\") pod \"5803be12-9bed-4ab3-86c3-79f403cd26a3\" (UID: \"5803be12-9bed-4ab3-86c3-79f403cd26a3\") " May 17 00:07:48.461812 kubelet[3462]: I0517 00:07:48.460614 3462 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5803be12-9bed-4ab3-86c3-79f403cd26a3-hubble-tls\") pod \"5803be12-9bed-4ab3-86c3-79f403cd26a3\" (UID: \"5803be12-9bed-4ab3-86c3-79f403cd26a3\") " May 17 00:07:48.461812 kubelet[3462]: I0517 00:07:48.460679 3462 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hkt6z\" (UniqueName: \"kubernetes.io/projected/24265796-28fc-4f94-b264-dc5634109fe8-kube-api-access-hkt6z\") on node \"ip-172-31-29-16\" DevicePath \"\"" May 17 00:07:48.461812 kubelet[3462]: I0517 00:07:48.460703 3462 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/24265796-28fc-4f94-b264-dc5634109fe8-cilium-config-path\") on node \"ip-172-31-29-16\" DevicePath \"\"" May 17 00:07:48.461812 kubelet[3462]: I0517 00:07:48.461069 3462 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5803be12-9bed-4ab3-86c3-79f403cd26a3-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "5803be12-9bed-4ab3-86c3-79f403cd26a3" (UID: "5803be12-9bed-4ab3-86c3-79f403cd26a3"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:07:48.461812 kubelet[3462]: I0517 00:07:48.461134 3462 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5803be12-9bed-4ab3-86c3-79f403cd26a3-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "5803be12-9bed-4ab3-86c3-79f403cd26a3" (UID: "5803be12-9bed-4ab3-86c3-79f403cd26a3"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:07:48.462154 kubelet[3462]: I0517 00:07:48.461173 3462 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5803be12-9bed-4ab3-86c3-79f403cd26a3-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "5803be12-9bed-4ab3-86c3-79f403cd26a3" (UID: "5803be12-9bed-4ab3-86c3-79f403cd26a3"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:07:48.462154 kubelet[3462]: I0517 00:07:48.461208 3462 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5803be12-9bed-4ab3-86c3-79f403cd26a3-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "5803be12-9bed-4ab3-86c3-79f403cd26a3" (UID: "5803be12-9bed-4ab3-86c3-79f403cd26a3"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:07:48.466572 kubelet[3462]: I0517 00:07:48.466504 3462 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5803be12-9bed-4ab3-86c3-79f403cd26a3-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "5803be12-9bed-4ab3-86c3-79f403cd26a3" (UID: "5803be12-9bed-4ab3-86c3-79f403cd26a3"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" May 17 00:07:48.466931 kubelet[3462]: I0517 00:07:48.466504 3462 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5803be12-9bed-4ab3-86c3-79f403cd26a3-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "5803be12-9bed-4ab3-86c3-79f403cd26a3" (UID: "5803be12-9bed-4ab3-86c3-79f403cd26a3"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 17 00:07:48.466931 kubelet[3462]: I0517 00:07:48.466552 3462 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5803be12-9bed-4ab3-86c3-79f403cd26a3-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "5803be12-9bed-4ab3-86c3-79f403cd26a3" (UID: "5803be12-9bed-4ab3-86c3-79f403cd26a3"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:07:48.466931 kubelet[3462]: I0517 00:07:48.466578 3462 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5803be12-9bed-4ab3-86c3-79f403cd26a3-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "5803be12-9bed-4ab3-86c3-79f403cd26a3" (UID: "5803be12-9bed-4ab3-86c3-79f403cd26a3"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:07:48.466931 kubelet[3462]: I0517 00:07:48.466605 3462 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5803be12-9bed-4ab3-86c3-79f403cd26a3-hostproc" (OuterVolumeSpecName: "hostproc") pod "5803be12-9bed-4ab3-86c3-79f403cd26a3" (UID: "5803be12-9bed-4ab3-86c3-79f403cd26a3"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:07:48.468383 kubelet[3462]: I0517 00:07:48.466901 3462 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5803be12-9bed-4ab3-86c3-79f403cd26a3-cni-path" (OuterVolumeSpecName: "cni-path") pod "5803be12-9bed-4ab3-86c3-79f403cd26a3" (UID: "5803be12-9bed-4ab3-86c3-79f403cd26a3"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:07:48.472409 kubelet[3462]: I0517 00:07:48.471742 3462 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5803be12-9bed-4ab3-86c3-79f403cd26a3-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "5803be12-9bed-4ab3-86c3-79f403cd26a3" (UID: "5803be12-9bed-4ab3-86c3-79f403cd26a3"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:07:48.472409 kubelet[3462]: I0517 00:07:48.471821 3462 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5803be12-9bed-4ab3-86c3-79f403cd26a3-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "5803be12-9bed-4ab3-86c3-79f403cd26a3" (UID: "5803be12-9bed-4ab3-86c3-79f403cd26a3"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:07:48.472409 kubelet[3462]: I0517 00:07:48.471953 3462 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5803be12-9bed-4ab3-86c3-79f403cd26a3-kube-api-access-n7sqf" (OuterVolumeSpecName: "kube-api-access-n7sqf") pod "5803be12-9bed-4ab3-86c3-79f403cd26a3" (UID: "5803be12-9bed-4ab3-86c3-79f403cd26a3"). InnerVolumeSpecName "kube-api-access-n7sqf". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 17 00:07:48.475331 kubelet[3462]: I0517 00:07:48.475268 3462 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5803be12-9bed-4ab3-86c3-79f403cd26a3-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "5803be12-9bed-4ab3-86c3-79f403cd26a3" (UID: "5803be12-9bed-4ab3-86c3-79f403cd26a3"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 17 00:07:48.561399 kubelet[3462]: I0517 00:07:48.561334 3462 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5803be12-9bed-4ab3-86c3-79f403cd26a3-cilium-cgroup\") on node \"ip-172-31-29-16\" DevicePath \"\"" May 17 00:07:48.561399 kubelet[3462]: I0517 00:07:48.561390 3462 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5803be12-9bed-4ab3-86c3-79f403cd26a3-clustermesh-secrets\") on node \"ip-172-31-29-16\" DevicePath \"\"" May 17 00:07:48.561624 kubelet[3462]: I0517 00:07:48.561422 3462 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5803be12-9bed-4ab3-86c3-79f403cd26a3-cni-path\") on node \"ip-172-31-29-16\" DevicePath \"\"" May 17 00:07:48.561624 kubelet[3462]: I0517 00:07:48.561444 3462 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5803be12-9bed-4ab3-86c3-79f403cd26a3-cilium-config-path\") on node \"ip-172-31-29-16\" DevicePath \"\"" May 17 00:07:48.561624 kubelet[3462]: I0517 00:07:48.561465 3462 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5803be12-9bed-4ab3-86c3-79f403cd26a3-host-proc-sys-net\") on node \"ip-172-31-29-16\" DevicePath \"\"" May 17 00:07:48.561624 kubelet[3462]: I0517 00:07:48.561487 3462 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5803be12-9bed-4ab3-86c3-79f403cd26a3-host-proc-sys-kernel\") on node \"ip-172-31-29-16\" DevicePath \"\"" May 17 00:07:48.561624 kubelet[3462]: I0517 
00:07:48.561507 3462 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5803be12-9bed-4ab3-86c3-79f403cd26a3-cilium-run\") on node \"ip-172-31-29-16\" DevicePath \"\"" May 17 00:07:48.561624 kubelet[3462]: I0517 00:07:48.561530 3462 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5803be12-9bed-4ab3-86c3-79f403cd26a3-hostproc\") on node \"ip-172-31-29-16\" DevicePath \"\"" May 17 00:07:48.561624 kubelet[3462]: I0517 00:07:48.561551 3462 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-n7sqf\" (UniqueName: \"kubernetes.io/projected/5803be12-9bed-4ab3-86c3-79f403cd26a3-kube-api-access-n7sqf\") on node \"ip-172-31-29-16\" DevicePath \"\"" May 17 00:07:48.561624 kubelet[3462]: I0517 00:07:48.561571 3462 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5803be12-9bed-4ab3-86c3-79f403cd26a3-bpf-maps\") on node \"ip-172-31-29-16\" DevicePath \"\"" May 17 00:07:48.562059 kubelet[3462]: I0517 00:07:48.561592 3462 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5803be12-9bed-4ab3-86c3-79f403cd26a3-etc-cni-netd\") on node \"ip-172-31-29-16\" DevicePath \"\"" May 17 00:07:48.562059 kubelet[3462]: I0517 00:07:48.561613 3462 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5803be12-9bed-4ab3-86c3-79f403cd26a3-hubble-tls\") on node \"ip-172-31-29-16\" DevicePath \"\"" May 17 00:07:48.562059 kubelet[3462]: I0517 00:07:48.561633 3462 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5803be12-9bed-4ab3-86c3-79f403cd26a3-xtables-lock\") on node \"ip-172-31-29-16\" DevicePath \"\"" May 17 00:07:48.562059 kubelet[3462]: I0517 00:07:48.561653 3462 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5803be12-9bed-4ab3-86c3-79f403cd26a3-lib-modules\") on node \"ip-172-31-29-16\" DevicePath \"\"" May 17 00:07:48.668151 systemd[1]: Removed slice kubepods-burstable-pod5803be12_9bed_4ab3_86c3_79f403cd26a3.slice - libcontainer container kubepods-burstable-pod5803be12_9bed_4ab3_86c3_79f403cd26a3.slice. May 17 00:07:48.669520 systemd[1]: kubepods-burstable-pod5803be12_9bed_4ab3_86c3_79f403cd26a3.slice: Consumed 15.892s CPU time. May 17 00:07:48.673759 systemd[1]: Removed slice kubepods-besteffort-pod24265796_28fc_4f94_b264_dc5634109fe8.slice - libcontainer container kubepods-besteffort-pod24265796_28fc_4f94_b264_dc5634109fe8.slice. May 17 00:07:48.885942 kubelet[3462]: E0517 00:07:48.885224 3462 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 17 00:07:48.963772 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-316babfe37ce073d9afe1eec2a981419dc8af30bb0119f628f886227991dbf08-rootfs.mount: Deactivated successfully. May 17 00:07:48.963942 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a1e08f1c00f90e62ea5240fd6fc068f8d84a0c7669107486faafe716659e0cc3-rootfs.mount: Deactivated successfully. May 17 00:07:48.964098 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a1e08f1c00f90e62ea5240fd6fc068f8d84a0c7669107486faafe716659e0cc3-shm.mount: Deactivated successfully. 
May 17 00:07:48.964250 systemd[1]: var-lib-kubelet-pods-5803be12\x2d9bed\x2d4ab3\x2d86c3\x2d79f403cd26a3-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 17 00:07:48.964388 systemd[1]: var-lib-kubelet-pods-5803be12\x2d9bed\x2d4ab3\x2d86c3\x2d79f403cd26a3-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 17 00:07:48.964544 systemd[1]: var-lib-kubelet-pods-24265796\x2d28fc\x2d4f94\x2db264\x2ddc5634109fe8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhkt6z.mount: Deactivated successfully. May 17 00:07:48.964690 systemd[1]: var-lib-kubelet-pods-5803be12\x2d9bed\x2d4ab3\x2d86c3\x2d79f403cd26a3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dn7sqf.mount: Deactivated successfully. May 17 00:07:49.133106 kubelet[3462]: I0517 00:07:49.133024 3462 scope.go:117] "RemoveContainer" containerID="bd4515875c5f4d4a4113e139d1d9bf33770e927ae87876b64a8e3905b4352f57" May 17 00:07:49.136654 containerd[2020]: time="2025-05-17T00:07:49.136249704Z" level=info msg="RemoveContainer for \"bd4515875c5f4d4a4113e139d1d9bf33770e927ae87876b64a8e3905b4352f57\"" May 17 00:07:49.153142 containerd[2020]: time="2025-05-17T00:07:49.152957838Z" level=info msg="RemoveContainer for \"bd4515875c5f4d4a4113e139d1d9bf33770e927ae87876b64a8e3905b4352f57\" returns successfully" May 17 00:07:49.153748 kubelet[3462]: I0517 00:07:49.153571 3462 scope.go:117] "RemoveContainer" containerID="94893197ab12f64acc98c6c817f6b981b9ae5bf254123d9b8f62b8ea992e6ee5" May 17 00:07:49.157431 containerd[2020]: time="2025-05-17T00:07:49.157345663Z" level=info msg="RemoveContainer for \"94893197ab12f64acc98c6c817f6b981b9ae5bf254123d9b8f62b8ea992e6ee5\"" May 17 00:07:49.164236 containerd[2020]: time="2025-05-17T00:07:49.164175647Z" level=info msg="RemoveContainer for \"94893197ab12f64acc98c6c817f6b981b9ae5bf254123d9b8f62b8ea992e6ee5\" returns successfully" May 17 00:07:49.164620 kubelet[3462]: I0517 00:07:49.164590 3462 scope.go:117] "RemoveContainer" containerID="e8e65e9114848acb9cb93e3806adb4ad51a864f74b9b5969778d316b527ad0f3" May 17 00:07:49.169331 containerd[2020]: time="2025-05-17T00:07:49.168948743Z" level=info msg="RemoveContainer for \"e8e65e9114848acb9cb93e3806adb4ad51a864f74b9b5969778d316b527ad0f3\"" May 17 00:07:49.180726 containerd[2020]: time="2025-05-17T00:07:49.180277136Z" level=info msg="RemoveContainer for \"e8e65e9114848acb9cb93e3806adb4ad51a864f74b9b5969778d316b527ad0f3\" returns successfully" May 17 00:07:49.180855 kubelet[3462]: I0517 00:07:49.180568 3462 scope.go:117] "RemoveContainer" containerID="876fffd523f614e5c00600b4bf4eff1af8dca621d745e87ff68176cda4d06102" May 17 00:07:49.183673 containerd[2020]: time="2025-05-17T00:07:49.183023234Z" level=info msg="RemoveContainer for \"876fffd523f614e5c00600b4bf4eff1af8dca621d745e87ff68176cda4d06102\"" May 17 00:07:49.190907 containerd[2020]: time="2025-05-17T00:07:49.190698039Z" level=info msg="RemoveContainer for \"876fffd523f614e5c00600b4bf4eff1af8dca621d745e87ff68176cda4d06102\" returns successfully" May 17 00:07:49.191150 kubelet[3462]: I0517 00:07:49.191099 3462 scope.go:117] "RemoveContainer" containerID="44e3d14fc6908f623f2058fb8f23e8f07ef69966273e8249c1bc7db823698a04" May 17 00:07:49.195824 containerd[2020]: time="2025-05-17T00:07:49.195775951Z" level=info msg="RemoveContainer for \"44e3d14fc6908f623f2058fb8f23e8f07ef69966273e8249c1bc7db823698a04\"" May 17 00:07:49.201799 containerd[2020]: time="2025-05-17T00:07:49.201664009Z" level=info msg="RemoveContainer for 
\"44e3d14fc6908f623f2058fb8f23e8f07ef69966273e8249c1bc7db823698a04\" returns successfully" May 17 00:07:49.202432 kubelet[3462]: I0517 00:07:49.202258 3462 scope.go:117] "RemoveContainer" containerID="bd4515875c5f4d4a4113e139d1d9bf33770e927ae87876b64a8e3905b4352f57" May 17 00:07:49.202915 containerd[2020]: time="2025-05-17T00:07:49.202759853Z" level=error msg="ContainerStatus for \"bd4515875c5f4d4a4113e139d1d9bf33770e927ae87876b64a8e3905b4352f57\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bd4515875c5f4d4a4113e139d1d9bf33770e927ae87876b64a8e3905b4352f57\": not found" May 17 00:07:49.203143 kubelet[3462]: E0517 00:07:49.203089 3462 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bd4515875c5f4d4a4113e139d1d9bf33770e927ae87876b64a8e3905b4352f57\": not found" containerID="bd4515875c5f4d4a4113e139d1d9bf33770e927ae87876b64a8e3905b4352f57" May 17 00:07:49.203267 kubelet[3462]: I0517 00:07:49.203169 3462 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bd4515875c5f4d4a4113e139d1d9bf33770e927ae87876b64a8e3905b4352f57"} err="failed to get container status \"bd4515875c5f4d4a4113e139d1d9bf33770e927ae87876b64a8e3905b4352f57\": rpc error: code = NotFound desc = an error occurred when try to find container \"bd4515875c5f4d4a4113e139d1d9bf33770e927ae87876b64a8e3905b4352f57\": not found" May 17 00:07:49.203331 kubelet[3462]: I0517 00:07:49.203263 3462 scope.go:117] "RemoveContainer" containerID="94893197ab12f64acc98c6c817f6b981b9ae5bf254123d9b8f62b8ea992e6ee5" May 17 00:07:49.203733 containerd[2020]: time="2025-05-17T00:07:49.203665340Z" level=error msg="ContainerStatus for \"94893197ab12f64acc98c6c817f6b981b9ae5bf254123d9b8f62b8ea992e6ee5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"94893197ab12f64acc98c6c817f6b981b9ae5bf254123d9b8f62b8ea992e6ee5\": not found" May 17 00:07:49.203912 kubelet[3462]: E0517 00:07:49.203870 3462 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"94893197ab12f64acc98c6c817f6b981b9ae5bf254123d9b8f62b8ea992e6ee5\": not found" containerID="94893197ab12f64acc98c6c817f6b981b9ae5bf254123d9b8f62b8ea992e6ee5" May 17 00:07:49.204021 kubelet[3462]: I0517 00:07:49.203920 3462 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"94893197ab12f64acc98c6c817f6b981b9ae5bf254123d9b8f62b8ea992e6ee5"} err="failed to get container status \"94893197ab12f64acc98c6c817f6b981b9ae5bf254123d9b8f62b8ea992e6ee5\": rpc error: code = NotFound desc = an error occurred when try to find container \"94893197ab12f64acc98c6c817f6b981b9ae5bf254123d9b8f62b8ea992e6ee5\": not found" May 17 00:07:49.204021 kubelet[3462]: I0517 00:07:49.203956 3462 scope.go:117] "RemoveContainer" containerID="e8e65e9114848acb9cb93e3806adb4ad51a864f74b9b5969778d316b527ad0f3" May 17 00:07:49.204439 containerd[2020]: time="2025-05-17T00:07:49.204321244Z" level=error msg="ContainerStatus for \"e8e65e9114848acb9cb93e3806adb4ad51a864f74b9b5969778d316b527ad0f3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e8e65e9114848acb9cb93e3806adb4ad51a864f74b9b5969778d316b527ad0f3\": not found" May 17 00:07:49.204598 kubelet[3462]: E0517 00:07:49.204548 3462 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = an error occurred when try to find container \"e8e65e9114848acb9cb93e3806adb4ad51a864f74b9b5969778d316b527ad0f3\": not found" containerID="e8e65e9114848acb9cb93e3806adb4ad51a864f74b9b5969778d316b527ad0f3" May 17 00:07:49.204665 kubelet[3462]: I0517 00:07:49.204596 3462 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e8e65e9114848acb9cb93e3806adb4ad51a864f74b9b5969778d316b527ad0f3"} err="failed to get container status \"e8e65e9114848acb9cb93e3806adb4ad51a864f74b9b5969778d316b527ad0f3\": rpc error: code = NotFound desc = an error occurred when try to find container \"e8e65e9114848acb9cb93e3806adb4ad51a864f74b9b5969778d316b527ad0f3\": not found" May 17 00:07:49.204665 kubelet[3462]: I0517 00:07:49.204635 3462 scope.go:117] "RemoveContainer" containerID="876fffd523f614e5c00600b4bf4eff1af8dca621d745e87ff68176cda4d06102" May 17 00:07:49.205245 containerd[2020]: time="2025-05-17T00:07:49.205130083Z" level=error msg="ContainerStatus for \"876fffd523f614e5c00600b4bf4eff1af8dca621d745e87ff68176cda4d06102\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"876fffd523f614e5c00600b4bf4eff1af8dca621d745e87ff68176cda4d06102\": not found" May 17 00:07:49.205387 kubelet[3462]: E0517 00:07:49.205334 3462 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"876fffd523f614e5c00600b4bf4eff1af8dca621d745e87ff68176cda4d06102\": not found" containerID="876fffd523f614e5c00600b4bf4eff1af8dca621d745e87ff68176cda4d06102" May 17 00:07:49.205460 kubelet[3462]: I0517 00:07:49.205387 3462 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"876fffd523f614e5c00600b4bf4eff1af8dca621d745e87ff68176cda4d06102"} err="failed to get container status \"876fffd523f614e5c00600b4bf4eff1af8dca621d745e87ff68176cda4d06102\": rpc error: code = NotFound desc = an error occurred when try to find container \"876fffd523f614e5c00600b4bf4eff1af8dca621d745e87ff68176cda4d06102\": not found" May 17 00:07:49.205460 kubelet[3462]: I0517 00:07:49.205418 3462 scope.go:117] "RemoveContainer" containerID="44e3d14fc6908f623f2058fb8f23e8f07ef69966273e8249c1bc7db823698a04" May 17 00:07:49.205898 containerd[2020]: time="2025-05-17T00:07:49.205688728Z" level=error msg="ContainerStatus for \"44e3d14fc6908f623f2058fb8f23e8f07ef69966273e8249c1bc7db823698a04\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"44e3d14fc6908f623f2058fb8f23e8f07ef69966273e8249c1bc7db823698a04\": not found" May 17 00:07:49.205961 kubelet[3462]: E0517 00:07:49.205931 3462 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"44e3d14fc6908f623f2058fb8f23e8f07ef69966273e8249c1bc7db823698a04\": not found" containerID="44e3d14fc6908f623f2058fb8f23e8f07ef69966273e8249c1bc7db823698a04" May 17 00:07:49.206052 kubelet[3462]: I0517 00:07:49.205966 3462 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"44e3d14fc6908f623f2058fb8f23e8f07ef69966273e8249c1bc7db823698a04"} err="failed to get container status \"44e3d14fc6908f623f2058fb8f23e8f07ef69966273e8249c1bc7db823698a04\": rpc error: code = NotFound desc = an error occurred when try to find container \"44e3d14fc6908f623f2058fb8f23e8f07ef69966273e8249c1bc7db823698a04\": not found" May 17 
00:07:49.206132 kubelet[3462]: I0517 00:07:49.206057 3462 scope.go:117] "RemoveContainer" containerID="cb11c888de97080f1c8851c5b1fa905cb6f2def74eaf772bf8946243479593f4" May 17 00:07:49.209180 containerd[2020]: time="2025-05-17T00:07:49.209008991Z" level=info msg="RemoveContainer for \"cb11c888de97080f1c8851c5b1fa905cb6f2def74eaf772bf8946243479593f4\"" May 17 00:07:49.214673 containerd[2020]: time="2025-05-17T00:07:49.214609109Z" level=info msg="RemoveContainer for \"cb11c888de97080f1c8851c5b1fa905cb6f2def74eaf772bf8946243479593f4\" returns successfully" May 17 00:07:49.215207 kubelet[3462]: I0517 00:07:49.215095 3462 scope.go:117] "RemoveContainer" containerID="cb11c888de97080f1c8851c5b1fa905cb6f2def74eaf772bf8946243479593f4" May 17 00:07:49.215898 containerd[2020]: time="2025-05-17T00:07:49.215572551Z" level=error msg="ContainerStatus for \"cb11c888de97080f1c8851c5b1fa905cb6f2def74eaf772bf8946243479593f4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cb11c888de97080f1c8851c5b1fa905cb6f2def74eaf772bf8946243479593f4\": not found" May 17 00:07:49.216029 kubelet[3462]: E0517 00:07:49.215893 3462 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cb11c888de97080f1c8851c5b1fa905cb6f2def74eaf772bf8946243479593f4\": not found" containerID="cb11c888de97080f1c8851c5b1fa905cb6f2def74eaf772bf8946243479593f4" May 17 00:07:49.216029 kubelet[3462]: I0517 00:07:49.215934 3462 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cb11c888de97080f1c8851c5b1fa905cb6f2def74eaf772bf8946243479593f4"} err="failed to get container status \"cb11c888de97080f1c8851c5b1fa905cb6f2def74eaf772bf8946243479593f4\": rpc error: code = NotFound desc = an error occurred when try to find container \"cb11c888de97080f1c8851c5b1fa905cb6f2def74eaf772bf8946243479593f4\": not found" May 17 00:07:49.873838 sshd[5091]: pam_unix(sshd:session): session closed for user core May 17 00:07:49.880963 systemd[1]: sshd@26-172.31.29.16:22-139.178.89.65:59262.service: Deactivated successfully. May 17 00:07:49.885702 systemd[1]: session-27.scope: Deactivated successfully. May 17 00:07:49.886600 systemd[1]: session-27.scope: Consumed 1.641s CPU time. May 17 00:07:49.887729 systemd-logind[1997]: Session 27 logged out. Waiting for processes to exit. May 17 00:07:49.890809 systemd-logind[1997]: Removed session 27. May 17 00:07:49.912595 systemd[1]: Started sshd@27-172.31.29.16:22-139.178.89.65:50212.service - OpenSSH per-connection server daemon (139.178.89.65:50212). May 17 00:07:50.079385 sshd[5253]: Accepted publickey for core from 139.178.89.65 port 50212 ssh2: RSA SHA256:eHxhS8dryg5cUeSGvw1rEy5lVBwPN/UG03VmUnZDeRM May 17 00:07:50.082105 sshd[5253]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:07:50.091254 systemd-logind[1997]: New session 28 of user core. May 17 00:07:50.097297 systemd[1]: Started session-28.scope - Session 28 of User core. 
May 17 00:07:50.659330 kubelet[3462]: I0517 00:07:50.659186 3462 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24265796-28fc-4f94-b264-dc5634109fe8" path="/var/lib/kubelet/pods/24265796-28fc-4f94-b264-dc5634109fe8/volumes" May 17 00:07:50.662101 kubelet[3462]: I0517 00:07:50.661428 3462 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5803be12-9bed-4ab3-86c3-79f403cd26a3" path="/var/lib/kubelet/pods/5803be12-9bed-4ab3-86c3-79f403cd26a3/volumes" May 17 00:07:50.782072 kubelet[3462]: I0517 00:07:50.778391 3462 setters.go:618] "Node became not ready" node="ip-172-31-29-16" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-17T00:07:50Z","lastTransitionTime":"2025-05-17T00:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} May 17 00:07:51.008865 ntpd[1990]: Deleting interface #12 lxc_health, fe80::40ad:66ff:fefa:6dbf%8#123, interface stats: received=0, sent=0, dropped=0, active_time=72 secs May 17 00:07:51.009524 ntpd[1990]: 17 May 00:07:51 ntpd[1990]: Deleting interface #12 lxc_health, fe80::40ad:66ff:fefa:6dbf%8#123, interface stats: received=0, sent=0, dropped=0, active_time=72 secs May 17 00:07:51.656205 kubelet[3462]: E0517 00:07:51.654241 3462 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-kxs2t" podUID="e2c21b6b-5916-4910-9594-91aa64aec124" May 17 00:07:51.864303 sshd[5253]: pam_unix(sshd:session): session closed for user core May 17 00:07:51.871920 systemd[1]: sshd@27-172.31.29.16:22-139.178.89.65:50212.service: Deactivated successfully. May 17 00:07:51.878860 systemd[1]: session-28.scope: Deactivated successfully. May 17 00:07:51.880909 systemd[1]: session-28.scope: Consumed 1.541s CPU time. May 17 00:07:51.887260 systemd-logind[1997]: Session 28 logged out. Waiting for processes to exit. May 17 00:07:51.914903 systemd[1]: Started sshd@28-172.31.29.16:22-139.178.89.65:50224.service - OpenSSH per-connection server daemon (139.178.89.65:50224). May 17 00:07:51.917213 systemd-logind[1997]: Removed session 28. May 17 00:07:51.961525 systemd[1]: Created slice kubepods-burstable-pod985d7c97_7000_45ef_8c67_2b251690fac7.slice - libcontainer container kubepods-burstable-pod985d7c97_7000_45ef_8c67_2b251690fac7.slice. 
May 17 00:07:52.088066 kubelet[3462]: I0517 00:07:52.087950 3462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/985d7c97-7000-45ef-8c67-2b251690fac7-cilium-run\") pod \"cilium-9msqw\" (UID: \"985d7c97-7000-45ef-8c67-2b251690fac7\") " pod="kube-system/cilium-9msqw" May 17 00:07:52.088066 kubelet[3462]: I0517 00:07:52.088063 3462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/985d7c97-7000-45ef-8c67-2b251690fac7-hostproc\") pod \"cilium-9msqw\" (UID: \"985d7c97-7000-45ef-8c67-2b251690fac7\") " pod="kube-system/cilium-9msqw" May 17 00:07:52.088691 kubelet[3462]: I0517 00:07:52.088122 3462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/985d7c97-7000-45ef-8c67-2b251690fac7-cilium-cgroup\") pod \"cilium-9msqw\" (UID: \"985d7c97-7000-45ef-8c67-2b251690fac7\") " pod="kube-system/cilium-9msqw" May 17 00:07:52.088691 kubelet[3462]: I0517 00:07:52.088174 3462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/985d7c97-7000-45ef-8c67-2b251690fac7-host-proc-sys-net\") pod \"cilium-9msqw\" (UID: \"985d7c97-7000-45ef-8c67-2b251690fac7\") " pod="kube-system/cilium-9msqw" May 17 00:07:52.088691 kubelet[3462]: I0517 00:07:52.088222 3462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/985d7c97-7000-45ef-8c67-2b251690fac7-clustermesh-secrets\") pod \"cilium-9msqw\" (UID: \"985d7c97-7000-45ef-8c67-2b251690fac7\") " pod="kube-system/cilium-9msqw" May 17 00:07:52.088691 kubelet[3462]: I0517 00:07:52.088275 3462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/985d7c97-7000-45ef-8c67-2b251690fac7-cilium-ipsec-secrets\") pod \"cilium-9msqw\" (UID: \"985d7c97-7000-45ef-8c67-2b251690fac7\") " pod="kube-system/cilium-9msqw" May 17 00:07:52.088691 kubelet[3462]: I0517 00:07:52.088314 3462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/985d7c97-7000-45ef-8c67-2b251690fac7-cilium-config-path\") pod \"cilium-9msqw\" (UID: \"985d7c97-7000-45ef-8c67-2b251690fac7\") " pod="kube-system/cilium-9msqw" May 17 00:07:52.088969 kubelet[3462]: I0517 00:07:52.088366 3462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/985d7c97-7000-45ef-8c67-2b251690fac7-host-proc-sys-kernel\") pod \"cilium-9msqw\" (UID: \"985d7c97-7000-45ef-8c67-2b251690fac7\") " pod="kube-system/cilium-9msqw" May 17 00:07:52.088969 kubelet[3462]: I0517 00:07:52.088415 3462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/985d7c97-7000-45ef-8c67-2b251690fac7-etc-cni-netd\") pod \"cilium-9msqw\" (UID: \"985d7c97-7000-45ef-8c67-2b251690fac7\") " pod="kube-system/cilium-9msqw" May 17 00:07:52.088969 kubelet[3462]: I0517 00:07:52.088460 3462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-4jvnt\" (UniqueName: \"kubernetes.io/projected/985d7c97-7000-45ef-8c67-2b251690fac7-kube-api-access-4jvnt\") pod \"cilium-9msqw\" (UID: \"985d7c97-7000-45ef-8c67-2b251690fac7\") " pod="kube-system/cilium-9msqw" May 17 00:07:52.088969 kubelet[3462]: I0517 00:07:52.088507 3462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/985d7c97-7000-45ef-8c67-2b251690fac7-cni-path\") pod \"cilium-9msqw\" (UID: \"985d7c97-7000-45ef-8c67-2b251690fac7\") " pod="kube-system/cilium-9msqw" May 17 00:07:52.088969 kubelet[3462]: I0517 00:07:52.088558 3462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/985d7c97-7000-45ef-8c67-2b251690fac7-lib-modules\") pod \"cilium-9msqw\" (UID: \"985d7c97-7000-45ef-8c67-2b251690fac7\") " pod="kube-system/cilium-9msqw" May 17 00:07:52.088969 kubelet[3462]: I0517 00:07:52.088604 3462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/985d7c97-7000-45ef-8c67-2b251690fac7-xtables-lock\") pod \"cilium-9msqw\" (UID: \"985d7c97-7000-45ef-8c67-2b251690fac7\") " pod="kube-system/cilium-9msqw" May 17 00:07:52.089810 kubelet[3462]: I0517 00:07:52.088650 3462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/985d7c97-7000-45ef-8c67-2b251690fac7-hubble-tls\") pod \"cilium-9msqw\" (UID: \"985d7c97-7000-45ef-8c67-2b251690fac7\") " pod="kube-system/cilium-9msqw" May 17 00:07:52.089810 kubelet[3462]: I0517 00:07:52.088698 3462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/985d7c97-7000-45ef-8c67-2b251690fac7-bpf-maps\") pod \"cilium-9msqw\" (UID: \"985d7c97-7000-45ef-8c67-2b251690fac7\") " pod="kube-system/cilium-9msqw" May 17 00:07:52.116585 sshd[5265]: Accepted publickey for core from 139.178.89.65 port 50224 ssh2: RSA SHA256:eHxhS8dryg5cUeSGvw1rEy5lVBwPN/UG03VmUnZDeRM May 17 00:07:52.120495 sshd[5265]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:07:52.133064 systemd-logind[1997]: New session 29 of user core. May 17 00:07:52.138269 systemd[1]: Started session-29.scope - Session 29 of User core. May 17 00:07:52.267417 sshd[5265]: pam_unix(sshd:session): session closed for user core May 17 00:07:52.273179 containerd[2020]: time="2025-05-17T00:07:52.272637152Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9msqw,Uid:985d7c97-7000-45ef-8c67-2b251690fac7,Namespace:kube-system,Attempt:0,}" May 17 00:07:52.274214 systemd[1]: sshd@28-172.31.29.16:22-139.178.89.65:50224.service: Deactivated successfully. May 17 00:07:52.280285 systemd[1]: session-29.scope: Deactivated successfully. May 17 00:07:52.283429 systemd-logind[1997]: Session 29 logged out. Waiting for processes to exit. May 17 00:07:52.286457 systemd-logind[1997]: Removed session 29. May 17 00:07:52.318211 systemd[1]: Started sshd@29-172.31.29.16:22-139.178.89.65:50226.service - OpenSSH per-connection server daemon (139.178.89.65:50226). May 17 00:07:52.324905 containerd[2020]: time="2025-05-17T00:07:52.324462218Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:07:52.324905 containerd[2020]: time="2025-05-17T00:07:52.324582446Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:07:52.325233 containerd[2020]: time="2025-05-17T00:07:52.324636911Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:07:52.325233 containerd[2020]: time="2025-05-17T00:07:52.324820407Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:07:52.384312 systemd[1]: Started cri-containerd-83a14e1bc99af6cf424ccef311856a427cc077941d240d6b6bfdf119bc9a31c2.scope - libcontainer container 83a14e1bc99af6cf424ccef311856a427cc077941d240d6b6bfdf119bc9a31c2. May 17 00:07:52.429880 containerd[2020]: time="2025-05-17T00:07:52.429825946Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9msqw,Uid:985d7c97-7000-45ef-8c67-2b251690fac7,Namespace:kube-system,Attempt:0,} returns sandbox id \"83a14e1bc99af6cf424ccef311856a427cc077941d240d6b6bfdf119bc9a31c2\"" May 17 00:07:52.440631 containerd[2020]: time="2025-05-17T00:07:52.440461314Z" level=info msg="CreateContainer within sandbox \"83a14e1bc99af6cf424ccef311856a427cc077941d240d6b6bfdf119bc9a31c2\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 17 00:07:52.461841 containerd[2020]: time="2025-05-17T00:07:52.461764181Z" level=info msg="CreateContainer within sandbox \"83a14e1bc99af6cf424ccef311856a427cc077941d240d6b6bfdf119bc9a31c2\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"cde94d740a0ae8e8cdc08bafd5609a1812f55caff6a6e9590945b217a4f7832c\"" May 17 00:07:52.463079 containerd[2020]: time="2025-05-17T00:07:52.463035521Z" level=info msg="StartContainer for \"cde94d740a0ae8e8cdc08bafd5609a1812f55caff6a6e9590945b217a4f7832c\"" May 17 00:07:52.508359 systemd[1]: Started cri-containerd-cde94d740a0ae8e8cdc08bafd5609a1812f55caff6a6e9590945b217a4f7832c.scope - libcontainer container cde94d740a0ae8e8cdc08bafd5609a1812f55caff6a6e9590945b217a4f7832c. May 17 00:07:52.522429 sshd[5288]: Accepted publickey for core from 139.178.89.65 port 50226 ssh2: RSA SHA256:eHxhS8dryg5cUeSGvw1rEy5lVBwPN/UG03VmUnZDeRM May 17 00:07:52.526367 sshd[5288]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:07:52.537900 systemd-logind[1997]: New session 30 of user core. May 17 00:07:52.543839 systemd[1]: Started session-30.scope - Session 30 of User core. May 17 00:07:52.565021 containerd[2020]: time="2025-05-17T00:07:52.564711490Z" level=info msg="StartContainer for \"cde94d740a0ae8e8cdc08bafd5609a1812f55caff6a6e9590945b217a4f7832c\" returns successfully" May 17 00:07:52.579237 systemd[1]: cri-containerd-cde94d740a0ae8e8cdc08bafd5609a1812f55caff6a6e9590945b217a4f7832c.scope: Deactivated successfully. 
May 17 00:07:52.633440 containerd[2020]: time="2025-05-17T00:07:52.633345776Z" level=info msg="shim disconnected" id=cde94d740a0ae8e8cdc08bafd5609a1812f55caff6a6e9590945b217a4f7832c namespace=k8s.io May 17 00:07:52.633440 containerd[2020]: time="2025-05-17T00:07:52.633424673Z" level=warning msg="cleaning up after shim disconnected" id=cde94d740a0ae8e8cdc08bafd5609a1812f55caff6a6e9590945b217a4f7832c namespace=k8s.io May 17 00:07:52.633908 containerd[2020]: time="2025-05-17T00:07:52.633447293Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 17 00:07:53.170214 containerd[2020]: time="2025-05-17T00:07:53.170152096Z" level=info msg="CreateContainer within sandbox \"83a14e1bc99af6cf424ccef311856a427cc077941d240d6b6bfdf119bc9a31c2\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 17 00:07:53.196647 containerd[2020]: time="2025-05-17T00:07:53.195472666Z" level=info msg="CreateContainer within sandbox \"83a14e1bc99af6cf424ccef311856a427cc077941d240d6b6bfdf119bc9a31c2\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"b76a14de04896df203af4e3edeb2e41d47e26fac8b1b88b6ef84b302d0cd52f8\"" May 17 00:07:53.196946 containerd[2020]: time="2025-05-17T00:07:53.196888581Z" level=info msg="StartContainer for \"b76a14de04896df203af4e3edeb2e41d47e26fac8b1b88b6ef84b302d0cd52f8\"" May 17 00:07:53.268745 systemd[1]: run-containerd-runc-k8s.io-b76a14de04896df203af4e3edeb2e41d47e26fac8b1b88b6ef84b302d0cd52f8-runc.8pa2Zg.mount: Deactivated successfully. May 17 00:07:53.280352 systemd[1]: Started cri-containerd-b76a14de04896df203af4e3edeb2e41d47e26fac8b1b88b6ef84b302d0cd52f8.scope - libcontainer container b76a14de04896df203af4e3edeb2e41d47e26fac8b1b88b6ef84b302d0cd52f8. May 17 00:07:53.337725 containerd[2020]: time="2025-05-17T00:07:53.337655215Z" level=info msg="StartContainer for \"b76a14de04896df203af4e3edeb2e41d47e26fac8b1b88b6ef84b302d0cd52f8\" returns successfully" May 17 00:07:53.354640 systemd[1]: cri-containerd-b76a14de04896df203af4e3edeb2e41d47e26fac8b1b88b6ef84b302d0cd52f8.scope: Deactivated successfully. May 17 00:07:53.388291 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b76a14de04896df203af4e3edeb2e41d47e26fac8b1b88b6ef84b302d0cd52f8-rootfs.mount: Deactivated successfully. 
May 17 00:07:53.397400 containerd[2020]: time="2025-05-17T00:07:53.397223896Z" level=info msg="shim disconnected" id=b76a14de04896df203af4e3edeb2e41d47e26fac8b1b88b6ef84b302d0cd52f8 namespace=k8s.io May 17 00:07:53.397400 containerd[2020]: time="2025-05-17T00:07:53.397353168Z" level=warning msg="cleaning up after shim disconnected" id=b76a14de04896df203af4e3edeb2e41d47e26fac8b1b88b6ef84b302d0cd52f8 namespace=k8s.io May 17 00:07:53.397400 containerd[2020]: time="2025-05-17T00:07:53.397374001Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 17 00:07:53.654177 kubelet[3462]: E0517 00:07:53.654068 3462 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-kxs2t" podUID="e2c21b6b-5916-4910-9594-91aa64aec124" May 17 00:07:53.886458 kubelet[3462]: E0517 00:07:53.886389 3462 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 17 00:07:54.178682 containerd[2020]: time="2025-05-17T00:07:54.178605340Z" level=info msg="CreateContainer within sandbox \"83a14e1bc99af6cf424ccef311856a427cc077941d240d6b6bfdf119bc9a31c2\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 17 00:07:54.209391 containerd[2020]: time="2025-05-17T00:07:54.209312571Z" level=info msg="CreateContainer within sandbox \"83a14e1bc99af6cf424ccef311856a427cc077941d240d6b6bfdf119bc9a31c2\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ca00e5e013d2f095f304dcd2791a76c23c884c78c407941154f183a64233e767\"" May 17 00:07:54.212272 containerd[2020]: time="2025-05-17T00:07:54.211169926Z" level=info msg="StartContainer for \"ca00e5e013d2f095f304dcd2791a76c23c884c78c407941154f183a64233e767\"" May 17 00:07:54.269329 systemd[1]: Started cri-containerd-ca00e5e013d2f095f304dcd2791a76c23c884c78c407941154f183a64233e767.scope - libcontainer container ca00e5e013d2f095f304dcd2791a76c23c884c78c407941154f183a64233e767. May 17 00:07:54.319901 containerd[2020]: time="2025-05-17T00:07:54.319836286Z" level=info msg="StartContainer for \"ca00e5e013d2f095f304dcd2791a76c23c884c78c407941154f183a64233e767\" returns successfully" May 17 00:07:54.326184 systemd[1]: cri-containerd-ca00e5e013d2f095f304dcd2791a76c23c884c78c407941154f183a64233e767.scope: Deactivated successfully. May 17 00:07:54.365460 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ca00e5e013d2f095f304dcd2791a76c23c884c78c407941154f183a64233e767-rootfs.mount: Deactivated successfully. 
May 17 00:07:54.375794 containerd[2020]: time="2025-05-17T00:07:54.375549867Z" level=info msg="shim disconnected" id=ca00e5e013d2f095f304dcd2791a76c23c884c78c407941154f183a64233e767 namespace=k8s.io May 17 00:07:54.375794 containerd[2020]: time="2025-05-17T00:07:54.375657741Z" level=warning msg="cleaning up after shim disconnected" id=ca00e5e013d2f095f304dcd2791a76c23c884c78c407941154f183a64233e767 namespace=k8s.io May 17 00:07:54.375794 containerd[2020]: time="2025-05-17T00:07:54.375678983Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 17 00:07:55.185436 containerd[2020]: time="2025-05-17T00:07:55.185083580Z" level=info msg="CreateContainer within sandbox \"83a14e1bc99af6cf424ccef311856a427cc077941d240d6b6bfdf119bc9a31c2\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 17 00:07:55.215304 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4151425470.mount: Deactivated successfully. May 17 00:07:55.221090 containerd[2020]: time="2025-05-17T00:07:55.221021934Z" level=info msg="CreateContainer within sandbox \"83a14e1bc99af6cf424ccef311856a427cc077941d240d6b6bfdf119bc9a31c2\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"e3e97df504d0ea08962b5601474005718848a800ded5c67a4fa5e903b1a40797\"" May 17 00:07:55.222855 containerd[2020]: time="2025-05-17T00:07:55.222794132Z" level=info msg="StartContainer for \"e3e97df504d0ea08962b5601474005718848a800ded5c67a4fa5e903b1a40797\"" May 17 00:07:55.279311 systemd[1]: Started cri-containerd-e3e97df504d0ea08962b5601474005718848a800ded5c67a4fa5e903b1a40797.scope - libcontainer container e3e97df504d0ea08962b5601474005718848a800ded5c67a4fa5e903b1a40797. May 17 00:07:55.323878 systemd[1]: cri-containerd-e3e97df504d0ea08962b5601474005718848a800ded5c67a4fa5e903b1a40797.scope: Deactivated successfully. May 17 00:07:55.331874 containerd[2020]: time="2025-05-17T00:07:55.329893896Z" level=info msg="StartContainer for \"e3e97df504d0ea08962b5601474005718848a800ded5c67a4fa5e903b1a40797\" returns successfully" May 17 00:07:55.367416 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e3e97df504d0ea08962b5601474005718848a800ded5c67a4fa5e903b1a40797-rootfs.mount: Deactivated successfully. 
May 17 00:07:55.376465 containerd[2020]: time="2025-05-17T00:07:55.376391575Z" level=info msg="shim disconnected" id=e3e97df504d0ea08962b5601474005718848a800ded5c67a4fa5e903b1a40797 namespace=k8s.io May 17 00:07:55.377131 containerd[2020]: time="2025-05-17T00:07:55.377016283Z" level=warning msg="cleaning up after shim disconnected" id=e3e97df504d0ea08962b5601474005718848a800ded5c67a4fa5e903b1a40797 namespace=k8s.io May 17 00:07:55.377131 containerd[2020]: time="2025-05-17T00:07:55.377047779Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 17 00:07:55.656476 kubelet[3462]: E0517 00:07:55.654138 3462 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-kxs2t" podUID="e2c21b6b-5916-4910-9594-91aa64aec124" May 17 00:07:56.188393 containerd[2020]: time="2025-05-17T00:07:56.188292268Z" level=info msg="CreateContainer within sandbox \"83a14e1bc99af6cf424ccef311856a427cc077941d240d6b6bfdf119bc9a31c2\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 17 00:07:56.220594 containerd[2020]: time="2025-05-17T00:07:56.220443349Z" level=info msg="CreateContainer within sandbox \"83a14e1bc99af6cf424ccef311856a427cc077941d240d6b6bfdf119bc9a31c2\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"f28c1d673f4cb6b9d3b20dccfa9785df1c21f181d1c59b84eba1fc77a8143f76\"" May 17 00:07:56.222038 containerd[2020]: time="2025-05-17T00:07:56.221829531Z" level=info msg="StartContainer for \"f28c1d673f4cb6b9d3b20dccfa9785df1c21f181d1c59b84eba1fc77a8143f76\"" May 17 00:07:56.285774 systemd[1]: Started cri-containerd-f28c1d673f4cb6b9d3b20dccfa9785df1c21f181d1c59b84eba1fc77a8143f76.scope - libcontainer container f28c1d673f4cb6b9d3b20dccfa9785df1c21f181d1c59b84eba1fc77a8143f76. May 17 00:07:56.342463 containerd[2020]: time="2025-05-17T00:07:56.341914815Z" level=info msg="StartContainer for \"f28c1d673f4cb6b9d3b20dccfa9785df1c21f181d1c59b84eba1fc77a8143f76\" returns successfully" May 17 00:07:56.385484 systemd[1]: run-containerd-runc-k8s.io-f28c1d673f4cb6b9d3b20dccfa9785df1c21f181d1c59b84eba1fc77a8143f76-runc.7Pafy5.mount: Deactivated successfully. May 17 00:07:57.132234 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) May 17 00:07:57.223412 kubelet[3462]: I0517 00:07:57.222312 3462 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-9msqw" podStartSLOduration=6.22229284 podStartE2EDuration="6.22229284s" podCreationTimestamp="2025-05-17 00:07:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:07:57.221943491 +0000 UTC m=+108.965204100" watchObservedRunningTime="2025-05-17 00:07:57.22229284 +0000 UTC m=+108.965553485" May 17 00:07:57.654846 kubelet[3462]: E0517 00:07:57.654301 3462 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-kxs2t" podUID="e2c21b6b-5916-4910-9594-91aa64aec124" May 17 00:07:59.062508 systemd[1]: run-containerd-runc-k8s.io-f28c1d673f4cb6b9d3b20dccfa9785df1c21f181d1c59b84eba1fc77a8143f76-runc.MPrqnw.mount: Deactivated successfully. 
May 17 00:08:01.288514 systemd[1]: run-containerd-runc-k8s.io-f28c1d673f4cb6b9d3b20dccfa9785df1c21f181d1c59b84eba1fc77a8143f76-runc.3Fw7mE.mount: Deactivated successfully. May 17 00:08:01.409876 systemd-networkd[1934]: lxc_health: Link UP May 17 00:08:01.419915 (udev-worker)[6110]: Network interface NamePolicy= disabled on kernel command line. May 17 00:08:01.423017 systemd-networkd[1934]: lxc_health: Gained carrier May 17 00:08:02.106557 systemd[1]: Started sshd@30-172.31.29.16:22-45.79.181.223:4252.service - OpenSSH per-connection server daemon (45.79.181.223:4252). May 17 00:08:03.025406 systemd-networkd[1934]: lxc_health: Gained IPv6LL May 17 00:08:03.220219 sshd[6128]: Connection closed by 45.79.181.223 port 4252 [preauth] May 17 00:08:03.222731 systemd[1]: sshd@30-172.31.29.16:22-45.79.181.223:4252.service: Deactivated successfully. May 17 00:08:03.327516 systemd[1]: Started sshd@31-172.31.29.16:22-45.79.181.223:4256.service - OpenSSH per-connection server daemon (45.79.181.223:4256). May 17 00:08:04.340971 sshd[6139]: Connection closed by 45.79.181.223 port 4256 [preauth] May 17 00:08:04.342255 systemd[1]: sshd@31-172.31.29.16:22-45.79.181.223:4256.service: Deactivated successfully. May 17 00:08:04.472849 systemd[1]: Started sshd@32-172.31.29.16:22-45.79.181.223:4262.service - OpenSSH per-connection server daemon (45.79.181.223:4262). May 17 00:08:05.537661 sshd[6166]: Connection closed by 45.79.181.223 port 4262 [preauth] May 17 00:08:05.540845 systemd[1]: sshd@32-172.31.29.16:22-45.79.181.223:4262.service: Deactivated successfully. May 17 00:08:06.010250 ntpd[1990]: Listen normally on 15 lxc_health [fe80::9839:c1ff:fe7b:d90c%14]:123 May 17 00:08:06.010814 ntpd[1990]: 17 May 00:08:06 ntpd[1990]: Listen normally on 15 lxc_health [fe80::9839:c1ff:fe7b:d90c%14]:123 May 17 00:08:06.054974 systemd[1]: run-containerd-runc-k8s.io-f28c1d673f4cb6b9d3b20dccfa9785df1c21f181d1c59b84eba1fc77a8143f76-runc.pdyz0b.mount: Deactivated successfully. May 17 00:08:06.143905 kubelet[3462]: E0517 00:08:06.143834 3462 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 127.0.0.1:56806->127.0.0.1:40381: read tcp 127.0.0.1:56806->127.0.0.1:40381: read: connection reset by peer May 17 00:08:08.319899 systemd[1]: run-containerd-runc-k8s.io-f28c1d673f4cb6b9d3b20dccfa9785df1c21f181d1c59b84eba1fc77a8143f76-runc.iislmQ.mount: Deactivated successfully. May 17 00:08:08.437805 kubelet[3462]: E0517 00:08:08.436556 3462 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:56822->127.0.0.1:40381: write tcp 127.0.0.1:56822->127.0.0.1:40381: write: broken pipe May 17 00:08:08.463447 sshd[5288]: pam_unix(sshd:session): session closed for user core May 17 00:08:08.471707 systemd[1]: sshd@29-172.31.29.16:22-139.178.89.65:50226.service: Deactivated successfully. May 17 00:08:08.479320 systemd[1]: session-30.scope: Deactivated successfully. May 17 00:08:08.482643 systemd-logind[1997]: Session 30 logged out. Waiting for processes to exit. May 17 00:08:08.486425 systemd-logind[1997]: Removed session 30. 
May 17 00:08:08.546670 containerd[2020]: time="2025-05-17T00:08:08.546376460Z" level=info msg="StopPodSandbox for \"316babfe37ce073d9afe1eec2a981419dc8af30bb0119f628f886227991dbf08\"" May 17 00:08:08.546670 containerd[2020]: time="2025-05-17T00:08:08.546526709Z" level=info msg="TearDown network for sandbox \"316babfe37ce073d9afe1eec2a981419dc8af30bb0119f628f886227991dbf08\" successfully" May 17 00:08:08.546670 containerd[2020]: time="2025-05-17T00:08:08.546551920Z" level=info msg="StopPodSandbox for \"316babfe37ce073d9afe1eec2a981419dc8af30bb0119f628f886227991dbf08\" returns successfully" May 17 00:08:08.551040 containerd[2020]: time="2025-05-17T00:08:08.549128195Z" level=info msg="RemovePodSandbox for \"316babfe37ce073d9afe1eec2a981419dc8af30bb0119f628f886227991dbf08\"" May 17 00:08:08.551040 containerd[2020]: time="2025-05-17T00:08:08.549188381Z" level=info msg="Forcibly stopping sandbox \"316babfe37ce073d9afe1eec2a981419dc8af30bb0119f628f886227991dbf08\"" May 17 00:08:08.551040 containerd[2020]: time="2025-05-17T00:08:08.549313754Z" level=info msg="TearDown network for sandbox \"316babfe37ce073d9afe1eec2a981419dc8af30bb0119f628f886227991dbf08\" successfully" May 17 00:08:08.557276 containerd[2020]: time="2025-05-17T00:08:08.557148259Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"316babfe37ce073d9afe1eec2a981419dc8af30bb0119f628f886227991dbf08\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:08:08.557572 containerd[2020]: time="2025-05-17T00:08:08.557533219Z" level=info msg="RemovePodSandbox \"316babfe37ce073d9afe1eec2a981419dc8af30bb0119f628f886227991dbf08\" returns successfully" May 17 00:08:08.558660 containerd[2020]: time="2025-05-17T00:08:08.558617152Z" level=info msg="StopPodSandbox for \"a1e08f1c00f90e62ea5240fd6fc068f8d84a0c7669107486faafe716659e0cc3\"" May 17 00:08:08.559225 containerd[2020]: time="2025-05-17T00:08:08.559189038Z" level=info msg="TearDown network for sandbox \"a1e08f1c00f90e62ea5240fd6fc068f8d84a0c7669107486faafe716659e0cc3\" successfully" May 17 00:08:08.559371 containerd[2020]: time="2025-05-17T00:08:08.559339167Z" level=info msg="StopPodSandbox for \"a1e08f1c00f90e62ea5240fd6fc068f8d84a0c7669107486faafe716659e0cc3\" returns successfully" May 17 00:08:08.560640 containerd[2020]: time="2025-05-17T00:08:08.560309950Z" level=info msg="RemovePodSandbox for \"a1e08f1c00f90e62ea5240fd6fc068f8d84a0c7669107486faafe716659e0cc3\"" May 17 00:08:08.560863 containerd[2020]: time="2025-05-17T00:08:08.560828762Z" level=info msg="Forcibly stopping sandbox \"a1e08f1c00f90e62ea5240fd6fc068f8d84a0c7669107486faafe716659e0cc3\"" May 17 00:08:08.561080 containerd[2020]: time="2025-05-17T00:08:08.561049116Z" level=info msg="TearDown network for sandbox \"a1e08f1c00f90e62ea5240fd6fc068f8d84a0c7669107486faafe716659e0cc3\" successfully" May 17 00:08:08.568311 containerd[2020]: time="2025-05-17T00:08:08.568196508Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a1e08f1c00f90e62ea5240fd6fc068f8d84a0c7669107486faafe716659e0cc3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 17 00:08:08.568567 containerd[2020]: time="2025-05-17T00:08:08.568533060Z" level=info msg="RemovePodSandbox \"a1e08f1c00f90e62ea5240fd6fc068f8d84a0c7669107486faafe716659e0cc3\" returns successfully" May 17 00:08:22.918791 systemd[1]: cri-containerd-bc292043e49078969b8af35b0ae74fc5e32bdf341ef842853b54125aa59f49d2.scope: Deactivated successfully. May 17 00:08:22.919923 systemd[1]: cri-containerd-bc292043e49078969b8af35b0ae74fc5e32bdf341ef842853b54125aa59f49d2.scope: Consumed 5.215s CPU time, 18.1M memory peak, 0B memory swap peak. May 17 00:08:22.959497 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bc292043e49078969b8af35b0ae74fc5e32bdf341ef842853b54125aa59f49d2-rootfs.mount: Deactivated successfully. May 17 00:08:22.977900 containerd[2020]: time="2025-05-17T00:08:22.977802808Z" level=info msg="shim disconnected" id=bc292043e49078969b8af35b0ae74fc5e32bdf341ef842853b54125aa59f49d2 namespace=k8s.io May 17 00:08:22.977900 containerd[2020]: time="2025-05-17T00:08:22.977882472Z" level=warning msg="cleaning up after shim disconnected" id=bc292043e49078969b8af35b0ae74fc5e32bdf341ef842853b54125aa59f49d2 namespace=k8s.io May 17 00:08:22.978843 containerd[2020]: time="2025-05-17T00:08:22.977906760Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 17 00:08:23.268056 kubelet[3462]: I0517 00:08:23.267904 3462 scope.go:117] "RemoveContainer" containerID="bc292043e49078969b8af35b0ae74fc5e32bdf341ef842853b54125aa59f49d2" May 17 00:08:23.272617 containerd[2020]: time="2025-05-17T00:08:23.272522891Z" level=info msg="CreateContainer within sandbox \"9b7526efec2c41c55e47af3738034a09b8a85002fdd4712a6f365ac425b62427\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" May 17 00:08:23.303154 containerd[2020]: time="2025-05-17T00:08:23.303079537Z" level=info msg="CreateContainer within sandbox \"9b7526efec2c41c55e47af3738034a09b8a85002fdd4712a6f365ac425b62427\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"83f5ea8112289a784a2b5394605b9aba7ad9332cdad4918d6f569aabb083bbe3\"" May 17 00:08:23.304027 containerd[2020]: time="2025-05-17T00:08:23.303922811Z" level=info msg="StartContainer for \"83f5ea8112289a784a2b5394605b9aba7ad9332cdad4918d6f569aabb083bbe3\"" May 17 00:08:23.365310 systemd[1]: Started cri-containerd-83f5ea8112289a784a2b5394605b9aba7ad9332cdad4918d6f569aabb083bbe3.scope - libcontainer container 83f5ea8112289a784a2b5394605b9aba7ad9332cdad4918d6f569aabb083bbe3. May 17 00:08:23.436235 containerd[2020]: time="2025-05-17T00:08:23.435883030Z" level=info msg="StartContainer for \"83f5ea8112289a784a2b5394605b9aba7ad9332cdad4918d6f569aabb083bbe3\" returns successfully" May 17 00:08:28.130312 systemd[1]: cri-containerd-3ef73c7306abdc790a4b6560424754ec1c3fe331864e36d4cf1c4452817d0d9b.scope: Deactivated successfully. May 17 00:08:28.131582 systemd[1]: cri-containerd-3ef73c7306abdc790a4b6560424754ec1c3fe331864e36d4cf1c4452817d0d9b.scope: Consumed 5.770s CPU time, 16.0M memory peak, 0B memory swap peak. May 17 00:08:28.173176 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3ef73c7306abdc790a4b6560424754ec1c3fe331864e36d4cf1c4452817d0d9b-rootfs.mount: Deactivated successfully. 
May 17 00:08:28.182435 containerd[2020]: time="2025-05-17T00:08:28.182360779Z" level=info msg="shim disconnected" id=3ef73c7306abdc790a4b6560424754ec1c3fe331864e36d4cf1c4452817d0d9b namespace=k8s.io May 17 00:08:28.183756 containerd[2020]: time="2025-05-17T00:08:28.183147981Z" level=warning msg="cleaning up after shim disconnected" id=3ef73c7306abdc790a4b6560424754ec1c3fe331864e36d4cf1c4452817d0d9b namespace=k8s.io May 17 00:08:28.183756 containerd[2020]: time="2025-05-17T00:08:28.183184490Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 17 00:08:28.286686 kubelet[3462]: I0517 00:08:28.286633 3462 scope.go:117] "RemoveContainer" containerID="3ef73c7306abdc790a4b6560424754ec1c3fe331864e36d4cf1c4452817d0d9b" May 17 00:08:28.290039 containerd[2020]: time="2025-05-17T00:08:28.289904560Z" level=info msg="CreateContainer within sandbox \"7893ceccd9f825d20cfb0410e4e01a59648fbbbba9be16f2f3f72083b0695033\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" May 17 00:08:28.317175 containerd[2020]: time="2025-05-17T00:08:28.317100307Z" level=info msg="CreateContainer within sandbox \"7893ceccd9f825d20cfb0410e4e01a59648fbbbba9be16f2f3f72083b0695033\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"ddc31687b7781413742820d0af7d810250a24bdff8a66e74c973edf9ee24c176\"" May 17 00:08:28.318060 containerd[2020]: time="2025-05-17T00:08:28.317960445Z" level=info msg="StartContainer for \"ddc31687b7781413742820d0af7d810250a24bdff8a66e74c973edf9ee24c176\"" May 17 00:08:28.370295 systemd[1]: Started cri-containerd-ddc31687b7781413742820d0af7d810250a24bdff8a66e74c973edf9ee24c176.scope - libcontainer container ddc31687b7781413742820d0af7d810250a24bdff8a66e74c973edf9ee24c176. May 17 00:08:28.435661 containerd[2020]: time="2025-05-17T00:08:28.434885238Z" level=info msg="StartContainer for \"ddc31687b7781413742820d0af7d810250a24bdff8a66e74c973edf9ee24c176\" returns successfully" May 17 00:08:30.779013 kubelet[3462]: E0517 00:08:30.777340 3462 controller.go:195] "Failed to update lease" err="Put \"https://172.31.29.16:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-16?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"