Sep 12 16:50:52.276041 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083] Sep 12 16:50:52.276099 kernel: Linux version 6.6.106-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT Fri Sep 12 15:34:33 -00 2025 Sep 12 16:50:52.276127 kernel: KASLR disabled due to lack of seed Sep 12 16:50:52.276144 kernel: efi: EFI v2.7 by EDK II Sep 12 16:50:52.276160 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7a731a98 MEMRESERVE=0x78557598 Sep 12 16:50:52.276177 kernel: secureboot: Secure boot disabled Sep 12 16:50:52.276196 kernel: ACPI: Early table checksum verification disabled Sep 12 16:50:52.276213 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON) Sep 12 16:50:52.276229 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013) Sep 12 16:50:52.276246 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001) Sep 12 16:50:52.276268 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527) Sep 12 16:50:52.276285 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001) Sep 12 16:50:52.276301 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001) Sep 12 16:50:52.276317 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001) Sep 12 16:50:52.276337 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001) Sep 12 16:50:52.276359 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Sep 12 16:50:52.276377 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001) Sep 12 16:50:52.276394 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001) Sep 12 16:50:52.276411 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200 Sep 12 16:50:52.276428 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200') Sep 12 16:50:52.276445 kernel: printk: bootconsole [uart0] enabled Sep 12 16:50:52.276462 kernel: NUMA: Failed to initialise from firmware Sep 12 16:50:52.276479 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff] Sep 12 16:50:52.276496 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff] Sep 12 16:50:52.276513 kernel: Zone ranges: Sep 12 16:50:52.276530 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] Sep 12 16:50:52.276552 kernel: DMA32 empty Sep 12 16:50:52.276569 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff] Sep 12 16:50:52.276585 kernel: Movable zone start for each node Sep 12 16:50:52.276602 kernel: Early memory node ranges Sep 12 16:50:52.276619 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff] Sep 12 16:50:52.276635 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff] Sep 12 16:50:52.276652 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff] Sep 12 16:50:52.276669 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff] Sep 12 16:50:52.276685 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff] Sep 12 16:50:52.276702 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff] Sep 12 16:50:52.276718 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff] Sep 12 16:50:52.276735 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff] Sep 12 16:50:52.276758 kernel: Initmem setup node 0 [mem 
0x0000000040000000-0x00000004b5ffffff] Sep 12 16:50:52.276775 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges Sep 12 16:50:52.278867 kernel: psci: probing for conduit method from ACPI. Sep 12 16:50:52.278982 kernel: psci: PSCIv1.0 detected in firmware. Sep 12 16:50:52.279007 kernel: psci: Using standard PSCI v0.2 function IDs Sep 12 16:50:52.279039 kernel: psci: Trusted OS migration not required Sep 12 16:50:52.279058 kernel: psci: SMC Calling Convention v1.1 Sep 12 16:50:52.279078 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000001) Sep 12 16:50:52.279097 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976 Sep 12 16:50:52.279115 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096 Sep 12 16:50:52.279133 kernel: pcpu-alloc: [0] 0 [0] 1 Sep 12 16:50:52.279152 kernel: Detected PIPT I-cache on CPU0 Sep 12 16:50:52.279169 kernel: CPU features: detected: GIC system register CPU interface Sep 12 16:50:52.279188 kernel: CPU features: detected: Spectre-v2 Sep 12 16:50:52.279205 kernel: CPU features: detected: Spectre-v3a Sep 12 16:50:52.279223 kernel: CPU features: detected: Spectre-BHB Sep 12 16:50:52.279247 kernel: CPU features: detected: ARM erratum 1742098 Sep 12 16:50:52.279266 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923 Sep 12 16:50:52.279284 kernel: alternatives: applying boot alternatives Sep 12 16:50:52.279304 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=82b413d7549dba6b35b1edf421a17f61aa80704059d10fedd611b1eff5298abd Sep 12 16:50:52.279323 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Sep 12 16:50:52.279341 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Sep 12 16:50:52.279359 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 12 16:50:52.279377 kernel: Fallback order for Node 0: 0 Sep 12 16:50:52.279395 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872 Sep 12 16:50:52.279412 kernel: Policy zone: Normal Sep 12 16:50:52.279430 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 12 16:50:52.279452 kernel: software IO TLB: area num 2. Sep 12 16:50:52.279470 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB) Sep 12 16:50:52.279489 kernel: Memory: 3821112K/4030464K available (10368K kernel code, 2186K rwdata, 8104K rodata, 38400K init, 897K bss, 209352K reserved, 0K cma-reserved) Sep 12 16:50:52.279508 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Sep 12 16:50:52.279526 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 12 16:50:52.279546 kernel: rcu: RCU event tracing is enabled. Sep 12 16:50:52.279565 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Sep 12 16:50:52.279583 kernel: Trampoline variant of Tasks RCU enabled. Sep 12 16:50:52.279601 kernel: Tracing variant of Tasks RCU enabled. Sep 12 16:50:52.279620 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Sep 12 16:50:52.279638 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Sep 12 16:50:52.279662 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Sep 12 16:50:52.279679 kernel: GICv3: 96 SPIs implemented Sep 12 16:50:52.279697 kernel: GICv3: 0 Extended SPIs implemented Sep 12 16:50:52.279714 kernel: Root IRQ handler: gic_handle_irq Sep 12 16:50:52.279731 kernel: GICv3: GICv3 features: 16 PPIs Sep 12 16:50:52.279749 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000 Sep 12 16:50:52.279766 kernel: ITS [mem 0x10080000-0x1009ffff] Sep 12 16:50:52.279784 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1) Sep 12 16:50:52.279839 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1) Sep 12 16:50:52.279863 kernel: GICv3: using LPI property table @0x00000004000d0000 Sep 12 16:50:52.279881 kernel: ITS: Using hypervisor restricted LPI range [128] Sep 12 16:50:52.279899 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000 Sep 12 16:50:52.279927 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Sep 12 16:50:52.279945 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt). Sep 12 16:50:52.279963 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns Sep 12 16:50:52.279981 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns Sep 12 16:50:52.280000 kernel: Console: colour dummy device 80x25 Sep 12 16:50:52.280018 kernel: printk: console [tty1] enabled Sep 12 16:50:52.280035 kernel: ACPI: Core revision 20230628 Sep 12 16:50:52.280054 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333) Sep 12 16:50:52.280071 kernel: pid_max: default: 32768 minimum: 301 Sep 12 16:50:52.280089 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Sep 12 16:50:52.280112 kernel: landlock: Up and running. Sep 12 16:50:52.280129 kernel: SELinux: Initializing. Sep 12 16:50:52.280147 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 12 16:50:52.280165 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 12 16:50:52.280183 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Sep 12 16:50:52.280202 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Sep 12 16:50:52.280220 kernel: rcu: Hierarchical SRCU implementation. Sep 12 16:50:52.280238 kernel: rcu: Max phase no-delay instances is 400. Sep 12 16:50:52.280261 kernel: Platform MSI: ITS@0x10080000 domain created Sep 12 16:50:52.280280 kernel: PCI/MSI: ITS@0x10080000 domain created Sep 12 16:50:52.280298 kernel: Remapping and enabling EFI services. Sep 12 16:50:52.280316 kernel: smp: Bringing up secondary CPUs ... Sep 12 16:50:52.280335 kernel: Detected PIPT I-cache on CPU1 Sep 12 16:50:52.280354 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000 Sep 12 16:50:52.280373 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000 Sep 12 16:50:52.280392 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083] Sep 12 16:50:52.280411 kernel: smp: Brought up 1 node, 2 CPUs Sep 12 16:50:52.280430 kernel: SMP: Total of 2 processors activated. 
Sep 12 16:50:52.280454 kernel: CPU features: detected: 32-bit EL0 Support Sep 12 16:50:52.280473 kernel: CPU features: detected: 32-bit EL1 Support Sep 12 16:50:52.280503 kernel: CPU features: detected: CRC32 instructions Sep 12 16:50:52.280526 kernel: CPU: All CPU(s) started at EL1 Sep 12 16:50:52.280544 kernel: alternatives: applying system-wide alternatives Sep 12 16:50:52.280563 kernel: devtmpfs: initialized Sep 12 16:50:52.280582 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 12 16:50:52.280601 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Sep 12 16:50:52.280620 kernel: pinctrl core: initialized pinctrl subsystem Sep 12 16:50:52.280643 kernel: SMBIOS 3.0.0 present. Sep 12 16:50:52.280662 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018 Sep 12 16:50:52.280681 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 12 16:50:52.280700 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Sep 12 16:50:52.280719 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Sep 12 16:50:52.280738 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Sep 12 16:50:52.280757 kernel: audit: initializing netlink subsys (disabled) Sep 12 16:50:52.280780 kernel: audit: type=2000 audit(0.229:1): state=initialized audit_enabled=0 res=1 Sep 12 16:50:52.282866 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 12 16:50:52.282931 kernel: cpuidle: using governor menu Sep 12 16:50:52.282956 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Sep 12 16:50:52.282976 kernel: ASID allocator initialised with 65536 entries Sep 12 16:50:52.282995 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 12 16:50:52.283015 kernel: Serial: AMBA PL011 UART driver Sep 12 16:50:52.283034 kernel: Modules: 17728 pages in range for non-PLT usage Sep 12 16:50:52.283053 kernel: Modules: 509248 pages in range for PLT usage Sep 12 16:50:52.283083 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Sep 12 16:50:52.283103 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Sep 12 16:50:52.283121 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Sep 12 16:50:52.283140 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Sep 12 16:50:52.283159 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Sep 12 16:50:52.283179 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Sep 12 16:50:52.283198 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Sep 12 16:50:52.283218 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Sep 12 16:50:52.283237 kernel: ACPI: Added _OSI(Module Device) Sep 12 16:50:52.283264 kernel: ACPI: Added _OSI(Processor Device) Sep 12 16:50:52.283285 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 12 16:50:52.283304 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Sep 12 16:50:52.283324 kernel: ACPI: Interpreter enabled Sep 12 16:50:52.283344 kernel: ACPI: Using GIC for interrupt routing Sep 12 16:50:52.283363 kernel: ACPI: MCFG table detected, 1 entries Sep 12 16:50:52.283382 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f]) Sep 12 16:50:52.283704 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Sep 12 16:50:52.284007 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Sep 12 16:50:52.284233 
kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Sep 12 16:50:52.284451 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00 Sep 12 16:50:52.284686 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f] Sep 12 16:50:52.284716 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window] Sep 12 16:50:52.284735 kernel: acpiphp: Slot [1] registered Sep 12 16:50:52.284755 kernel: acpiphp: Slot [2] registered Sep 12 16:50:52.284774 kernel: acpiphp: Slot [3] registered Sep 12 16:50:52.286933 kernel: acpiphp: Slot [4] registered Sep 12 16:50:52.286968 kernel: acpiphp: Slot [5] registered Sep 12 16:50:52.286988 kernel: acpiphp: Slot [6] registered Sep 12 16:50:52.287008 kernel: acpiphp: Slot [7] registered Sep 12 16:50:52.287026 kernel: acpiphp: Slot [8] registered Sep 12 16:50:52.287046 kernel: acpiphp: Slot [9] registered Sep 12 16:50:52.287065 kernel: acpiphp: Slot [10] registered Sep 12 16:50:52.287084 kernel: acpiphp: Slot [11] registered Sep 12 16:50:52.287102 kernel: acpiphp: Slot [12] registered Sep 12 16:50:52.287123 kernel: acpiphp: Slot [13] registered Sep 12 16:50:52.287155 kernel: acpiphp: Slot [14] registered Sep 12 16:50:52.287175 kernel: acpiphp: Slot [15] registered Sep 12 16:50:52.287195 kernel: acpiphp: Slot [16] registered Sep 12 16:50:52.287213 kernel: acpiphp: Slot [17] registered Sep 12 16:50:52.287232 kernel: acpiphp: Slot [18] registered Sep 12 16:50:52.287251 kernel: acpiphp: Slot [19] registered Sep 12 16:50:52.287269 kernel: acpiphp: Slot [20] registered Sep 12 16:50:52.287288 kernel: acpiphp: Slot [21] registered Sep 12 16:50:52.287307 kernel: acpiphp: Slot [22] registered Sep 12 16:50:52.287331 kernel: acpiphp: Slot [23] registered Sep 12 16:50:52.287350 kernel: acpiphp: Slot [24] registered Sep 12 16:50:52.287369 kernel: acpiphp: Slot [25] registered Sep 12 16:50:52.287388 kernel: acpiphp: Slot [26] registered Sep 12 16:50:52.287406 kernel: acpiphp: Slot [27] registered Sep 12 16:50:52.287424 kernel: acpiphp: Slot [28] registered Sep 12 16:50:52.287442 kernel: acpiphp: Slot [29] registered Sep 12 16:50:52.287461 kernel: acpiphp: Slot [30] registered Sep 12 16:50:52.287479 kernel: acpiphp: Slot [31] registered Sep 12 16:50:52.287497 kernel: PCI host bridge to bus 0000:00 Sep 12 16:50:52.287781 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window] Sep 12 16:50:52.288019 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Sep 12 16:50:52.288205 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window] Sep 12 16:50:52.288402 kernel: pci_bus 0000:00: root bus resource [bus 00-0f] Sep 12 16:50:52.288648 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000 Sep 12 16:50:52.289337 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003 Sep 12 16:50:52.289597 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff] Sep 12 16:50:52.289921 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 Sep 12 16:50:52.290152 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff] Sep 12 16:50:52.290372 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold Sep 12 16:50:52.290612 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 Sep 12 16:50:52.292042 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff] Sep 12 16:50:52.292321 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref] Sep 12 16:50:52.292568 kernel: pci 0000:00:05.0: reg 0x20: 
[mem 0x80100000-0x8010ffff] Sep 12 16:50:52.292783 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold Sep 12 16:50:52.293051 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref] Sep 12 16:50:52.293270 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff] Sep 12 16:50:52.293495 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff] Sep 12 16:50:52.293728 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff] Sep 12 16:50:52.296112 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff] Sep 12 16:50:52.296383 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window] Sep 12 16:50:52.296588 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Sep 12 16:50:52.296783 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window] Sep 12 16:50:52.299028 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Sep 12 16:50:52.299405 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Sep 12 16:50:52.299431 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Sep 12 16:50:52.299548 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Sep 12 16:50:52.299570 kernel: iommu: Default domain type: Translated Sep 12 16:50:52.299602 kernel: iommu: DMA domain TLB invalidation policy: strict mode Sep 12 16:50:52.299623 kernel: efivars: Registered efivars operations Sep 12 16:50:52.299642 kernel: vgaarb: loaded Sep 12 16:50:52.299662 kernel: clocksource: Switched to clocksource arch_sys_counter Sep 12 16:50:52.299681 kernel: VFS: Disk quotas dquot_6.6.0 Sep 12 16:50:52.299700 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 12 16:50:52.299719 kernel: pnp: PnP ACPI init Sep 12 16:50:52.301190 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved Sep 12 16:50:52.301252 kernel: pnp: PnP ACPI: found 1 devices Sep 12 16:50:52.301273 kernel: NET: Registered PF_INET protocol family Sep 12 16:50:52.301293 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 12 16:50:52.301313 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Sep 12 16:50:52.301332 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 12 16:50:52.301351 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Sep 12 16:50:52.301370 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Sep 12 16:50:52.301389 kernel: TCP: Hash tables configured (established 32768 bind 32768) Sep 12 16:50:52.301408 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 12 16:50:52.301432 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 12 16:50:52.301451 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 12 16:50:52.301470 kernel: PCI: CLS 0 bytes, default 64 Sep 12 16:50:52.301488 kernel: kvm [1]: HYP mode not available Sep 12 16:50:52.301507 kernel: Initialise system trusted keyrings Sep 12 16:50:52.301528 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Sep 12 16:50:52.301548 kernel: Key type asymmetric registered Sep 12 16:50:52.301566 kernel: Asymmetric key parser 'x509' registered Sep 12 16:50:52.301585 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Sep 12 16:50:52.301610 kernel: io scheduler mq-deadline registered Sep 12 16:50:52.301631 kernel: io scheduler kyber registered Sep 12 16:50:52.301650 kernel: io 
scheduler bfq registered Sep 12 16:50:52.301951 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered Sep 12 16:50:52.301987 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Sep 12 16:50:52.302007 kernel: ACPI: button: Power Button [PWRB] Sep 12 16:50:52.302027 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1 Sep 12 16:50:52.302045 kernel: ACPI: button: Sleep Button [SLPB] Sep 12 16:50:52.302073 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 12 16:50:52.302093 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Sep 12 16:50:52.302332 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012) Sep 12 16:50:52.302364 kernel: printk: console [ttyS0] disabled Sep 12 16:50:52.302383 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A Sep 12 16:50:52.302402 kernel: printk: console [ttyS0] enabled Sep 12 16:50:52.302421 kernel: printk: bootconsole [uart0] disabled Sep 12 16:50:52.302439 kernel: thunder_xcv, ver 1.0 Sep 12 16:50:52.302458 kernel: thunder_bgx, ver 1.0 Sep 12 16:50:52.302484 kernel: nicpf, ver 1.0 Sep 12 16:50:52.302503 kernel: nicvf, ver 1.0 Sep 12 16:50:52.302732 kernel: rtc-efi rtc-efi.0: registered as rtc0 Sep 12 16:50:52.303085 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-12T16:50:51 UTC (1757695851) Sep 12 16:50:52.303119 kernel: hid: raw HID events driver (C) Jiri Kosina Sep 12 16:50:52.303139 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available Sep 12 16:50:52.303158 kernel: watchdog: Delayed init of the lockup detector failed: -19 Sep 12 16:50:52.303178 kernel: watchdog: Hard watchdog permanently disabled Sep 12 16:50:52.303207 kernel: NET: Registered PF_INET6 protocol family Sep 12 16:50:52.303227 kernel: Segment Routing with IPv6 Sep 12 16:50:52.303245 kernel: In-situ OAM (IOAM) with IPv6 Sep 12 16:50:52.303264 kernel: NET: Registered PF_PACKET protocol family Sep 12 16:50:52.303284 kernel: Key type dns_resolver registered Sep 12 16:50:52.303303 kernel: registered taskstats version 1 Sep 12 16:50:52.303322 kernel: Loading compiled-in X.509 certificates Sep 12 16:50:52.303343 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.106-flatcar: d6f11852774cea54e4c26b4ad4f8effa8d89e628' Sep 12 16:50:52.303363 kernel: Key type .fscrypt registered Sep 12 16:50:52.303382 kernel: Key type fscrypt-provisioning registered Sep 12 16:50:52.303406 kernel: ima: No TPM chip found, activating TPM-bypass! Sep 12 16:50:52.303426 kernel: ima: Allocated hash algorithm: sha1 Sep 12 16:50:52.303447 kernel: ima: No architecture policies found Sep 12 16:50:52.303466 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Sep 12 16:50:52.303486 kernel: clk: Disabling unused clocks Sep 12 16:50:52.303507 kernel: Freeing unused kernel memory: 38400K Sep 12 16:50:52.303527 kernel: Run /init as init process Sep 12 16:50:52.303547 kernel: with arguments: Sep 12 16:50:52.303565 kernel: /init Sep 12 16:50:52.303588 kernel: with environment: Sep 12 16:50:52.303607 kernel: HOME=/ Sep 12 16:50:52.303625 kernel: TERM=linux Sep 12 16:50:52.303644 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 12 16:50:52.303664 systemd[1]: Successfully made /usr/ read-only. 
Sep 12 16:50:52.303690 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 12 16:50:52.303713 systemd[1]: Detected virtualization amazon. Sep 12 16:50:52.303741 systemd[1]: Detected architecture arm64. Sep 12 16:50:52.303763 systemd[1]: Running in initrd. Sep 12 16:50:52.303784 systemd[1]: No hostname configured, using default hostname. Sep 12 16:50:52.303842 systemd[1]: Hostname set to . Sep 12 16:50:52.303866 systemd[1]: Initializing machine ID from VM UUID. Sep 12 16:50:52.303887 systemd[1]: Queued start job for default target initrd.target. Sep 12 16:50:52.303907 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 12 16:50:52.303928 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 12 16:50:52.303959 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 12 16:50:52.303982 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 12 16:50:52.304004 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 12 16:50:52.304027 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 12 16:50:52.304051 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 12 16:50:52.304073 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 12 16:50:52.304094 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 12 16:50:52.304121 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 12 16:50:52.304142 systemd[1]: Reached target paths.target - Path Units. Sep 12 16:50:52.304162 systemd[1]: Reached target slices.target - Slice Units. Sep 12 16:50:52.304184 systemd[1]: Reached target swap.target - Swaps. Sep 12 16:50:52.304204 systemd[1]: Reached target timers.target - Timer Units. Sep 12 16:50:52.304225 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 12 16:50:52.304246 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 12 16:50:52.304267 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 12 16:50:52.304287 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Sep 12 16:50:52.304315 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 12 16:50:52.304336 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 12 16:50:52.304357 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 16:50:52.304377 systemd[1]: Reached target sockets.target - Socket Units. Sep 12 16:50:52.304397 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 12 16:50:52.304418 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 12 16:50:52.304438 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 12 16:50:52.304458 systemd[1]: Starting systemd-fsck-usr.service... 
Sep 12 16:50:52.304485 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 12 16:50:52.304505 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 12 16:50:52.304525 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 16:50:52.304546 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 12 16:50:52.304566 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 16:50:52.304588 systemd[1]: Finished systemd-fsck-usr.service. Sep 12 16:50:52.304613 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 12 16:50:52.304699 systemd-journald[251]: Collecting audit messages is disabled. Sep 12 16:50:52.304747 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 12 16:50:52.304778 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 16:50:52.304852 kernel: Bridge firewalling registered Sep 12 16:50:52.304880 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 12 16:50:52.304901 systemd-journald[251]: Journal started Sep 12 16:50:52.304939 systemd-journald[251]: Runtime Journal (/run/log/journal/ec2e0c4dbe79f08a2a8c8f976230fb8d) is 8M, max 75.3M, 67.3M free. Sep 12 16:50:52.243677 systemd-modules-load[252]: Inserted module 'overlay' Sep 12 16:50:52.296124 systemd-modules-load[252]: Inserted module 'br_netfilter' Sep 12 16:50:52.316902 systemd[1]: Started systemd-journald.service - Journal Service. Sep 12 16:50:52.319583 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 12 16:50:52.336368 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 12 16:50:52.345147 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 12 16:50:52.349144 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 12 16:50:52.361399 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 12 16:50:52.389252 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 12 16:50:52.402898 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 16:50:52.418227 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 12 16:50:52.430215 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 16:50:52.441931 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 16:50:52.446121 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Sep 12 16:50:52.491730 dracut-cmdline[292]: dracut-dracut-053 Sep 12 16:50:52.502517 dracut-cmdline[292]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=82b413d7549dba6b35b1edf421a17f61aa80704059d10fedd611b1eff5298abd Sep 12 16:50:52.544214 systemd-resolved[286]: Positive Trust Anchors: Sep 12 16:50:52.544244 systemd-resolved[286]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 12 16:50:52.544305 systemd-resolved[286]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 12 16:50:52.669841 kernel: SCSI subsystem initialized Sep 12 16:50:52.677858 kernel: Loading iSCSI transport class v2.0-870. Sep 12 16:50:52.690945 kernel: iscsi: registered transport (tcp) Sep 12 16:50:52.714442 kernel: iscsi: registered transport (qla4xxx) Sep 12 16:50:52.714538 kernel: QLogic iSCSI HBA Driver Sep 12 16:50:52.800860 kernel: random: crng init done Sep 12 16:50:52.801780 systemd-resolved[286]: Defaulting to hostname 'linux'. Sep 12 16:50:52.806280 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 12 16:50:52.812446 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 12 16:50:52.839904 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 12 16:50:52.850342 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 12 16:50:52.903140 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 12 16:50:52.903233 kernel: device-mapper: uevent: version 1.0.3 Sep 12 16:50:52.903262 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Sep 12 16:50:52.974920 kernel: raid6: neonx8 gen() 6439 MB/s Sep 12 16:50:52.992858 kernel: raid6: neonx4 gen() 6437 MB/s Sep 12 16:50:53.009873 kernel: raid6: neonx2 gen() 5356 MB/s Sep 12 16:50:53.026873 kernel: raid6: neonx1 gen() 3886 MB/s Sep 12 16:50:53.043876 kernel: raid6: int64x8 gen() 3591 MB/s Sep 12 16:50:53.060877 kernel: raid6: int64x4 gen() 3625 MB/s Sep 12 16:50:53.078874 kernel: raid6: int64x2 gen() 3512 MB/s Sep 12 16:50:53.097044 kernel: raid6: int64x1 gen() 2702 MB/s Sep 12 16:50:53.097132 kernel: raid6: using algorithm neonx8 gen() 6439 MB/s Sep 12 16:50:53.115910 kernel: raid6: .... 
xor() 4672 MB/s, rmw enabled Sep 12 16:50:53.115993 kernel: raid6: using neon recovery algorithm Sep 12 16:50:53.125460 kernel: xor: measuring software checksum speed Sep 12 16:50:53.125539 kernel: 8regs : 12944 MB/sec Sep 12 16:50:53.126675 kernel: 32regs : 12987 MB/sec Sep 12 16:50:53.129114 kernel: arm64_neon : 8276 MB/sec Sep 12 16:50:53.129196 kernel: xor: using function: 32regs (12987 MB/sec) Sep 12 16:50:53.218886 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 12 16:50:53.243906 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 12 16:50:53.261109 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 16:50:53.312197 systemd-udevd[473]: Using default interface naming scheme 'v255'. Sep 12 16:50:53.323473 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 16:50:53.340353 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 12 16:50:53.386453 dracut-pre-trigger[474]: rd.md=0: removing MD RAID activation Sep 12 16:50:53.454101 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 12 16:50:53.480131 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 12 16:50:53.605578 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 16:50:53.629415 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 12 16:50:53.680919 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 12 16:50:53.694690 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 12 16:50:53.705367 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 16:50:53.716859 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 12 16:50:53.735502 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 12 16:50:53.777142 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 12 16:50:53.851988 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Sep 12 16:50:53.858492 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012) Sep 12 16:50:53.871393 kernel: ena 0000:00:05.0: ENA device version: 0.10 Sep 12 16:50:53.871785 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Sep 12 16:50:53.885876 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:2d:8e:b7:cb:db Sep 12 16:50:53.888772 (udev-worker)[515]: Network interface NamePolicy= disabled on kernel command line. Sep 12 16:50:53.899630 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 12 16:50:53.899974 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 16:50:53.909590 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 12 16:50:53.928770 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 12 16:50:53.929201 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 16:50:53.941423 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 16:50:53.956263 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Sep 12 16:50:53.956347 kernel: nvme nvme0: pci function 0000:00:04.0 Sep 12 16:50:53.957722 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Sep 12 16:50:53.963451 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 12 16:50:53.974854 kernel: nvme nvme0: 2/0/0 default/read/poll queues Sep 12 16:50:53.995301 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 16:50:54.006932 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 12 16:50:54.007013 kernel: GPT:9289727 != 16777215 Sep 12 16:50:54.007041 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 12 16:50:54.007067 kernel: GPT:9289727 != 16777215 Sep 12 16:50:54.007092 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 12 16:50:54.008060 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Sep 12 16:50:54.012198 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 12 16:50:54.047020 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 16:50:54.110909 kernel: BTRFS: device fsid 402ea12e-53e0-48e3-8f03-9fb2de6b0089 devid 1 transid 36 /dev/nvme0n1p3 scanned by (udev-worker) (528) Sep 12 16:50:54.163697 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 scanned by (udev-worker) (517) Sep 12 16:50:54.201515 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Sep 12 16:50:54.260529 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Sep 12 16:50:54.284185 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Sep 12 16:50:54.287560 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Sep 12 16:50:54.352161 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Sep 12 16:50:54.374111 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 12 16:50:54.390561 disk-uuid[661]: Primary Header is updated. Sep 12 16:50:54.390561 disk-uuid[661]: Secondary Entries is updated. Sep 12 16:50:54.390561 disk-uuid[661]: Secondary Header is updated. Sep 12 16:50:54.400850 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Sep 12 16:50:55.421896 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Sep 12 16:50:55.424841 disk-uuid[662]: The operation has completed successfully. Sep 12 16:50:55.651941 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 12 16:50:55.655929 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 12 16:50:55.733139 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 12 16:50:55.756305 sh[924]: Success Sep 12 16:50:55.783867 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Sep 12 16:50:55.922134 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 12 16:50:55.942279 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 12 16:50:55.955506 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Sep 12 16:50:55.983408 kernel: BTRFS info (device dm-0): first mount of filesystem 402ea12e-53e0-48e3-8f03-9fb2de6b0089 Sep 12 16:50:55.983493 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Sep 12 16:50:55.985864 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Sep 12 16:50:55.985957 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 12 16:50:55.988036 kernel: BTRFS info (device dm-0): using free space tree Sep 12 16:50:56.013866 kernel: BTRFS info (device dm-0): enabling ssd optimizations Sep 12 16:50:56.030392 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 12 16:50:56.035476 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 12 16:50:56.047141 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 12 16:50:56.052239 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 12 16:50:56.103476 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 903d50e4-a739-43b7-a8ad-24da5524f9bc Sep 12 16:50:56.103554 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Sep 12 16:50:56.105137 kernel: BTRFS info (device nvme0n1p6): using free space tree Sep 12 16:50:56.127862 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Sep 12 16:50:56.137925 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 903d50e4-a739-43b7-a8ad-24da5524f9bc Sep 12 16:50:56.142448 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 12 16:50:56.150205 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 12 16:50:56.265900 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 12 16:50:56.288138 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 12 16:50:56.374754 systemd-networkd[1127]: lo: Link UP Sep 12 16:50:56.375264 systemd-networkd[1127]: lo: Gained carrier Sep 12 16:50:56.378787 systemd-networkd[1127]: Enumeration completed Sep 12 16:50:56.381271 systemd-networkd[1127]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 16:50:56.381278 systemd-networkd[1127]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 12 16:50:56.399398 ignition[1050]: Ignition 2.20.0 Sep 12 16:50:56.382235 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 12 16:50:56.399412 ignition[1050]: Stage: fetch-offline Sep 12 16:50:56.387869 systemd[1]: Reached target network.target - Network. Sep 12 16:50:56.400393 ignition[1050]: no configs at "/usr/lib/ignition/base.d" Sep 12 16:50:56.388615 systemd-networkd[1127]: eth0: Link UP Sep 12 16:50:56.400417 ignition[1050]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 12 16:50:56.388623 systemd-networkd[1127]: eth0: Gained carrier Sep 12 16:50:56.401246 ignition[1050]: Ignition finished successfully Sep 12 16:50:56.388641 systemd-networkd[1127]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 16:50:56.405317 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Sep 12 16:50:56.418040 systemd-networkd[1127]: eth0: DHCPv4 address 172.31.21.42/20, gateway 172.31.16.1 acquired from 172.31.16.1 Sep 12 16:50:56.420534 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Sep 12 16:50:56.479508 ignition[1135]: Ignition 2.20.0 Sep 12 16:50:56.479863 ignition[1135]: Stage: fetch Sep 12 16:50:56.480414 ignition[1135]: no configs at "/usr/lib/ignition/base.d" Sep 12 16:50:56.480438 ignition[1135]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 12 16:50:56.480710 ignition[1135]: PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 12 16:50:56.505147 ignition[1135]: PUT result: OK Sep 12 16:50:56.508379 ignition[1135]: parsed url from cmdline: "" Sep 12 16:50:56.508396 ignition[1135]: no config URL provided Sep 12 16:50:56.508413 ignition[1135]: reading system config file "/usr/lib/ignition/user.ign" Sep 12 16:50:56.508713 ignition[1135]: no config at "/usr/lib/ignition/user.ign" Sep 12 16:50:56.508750 ignition[1135]: PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 12 16:50:56.519031 ignition[1135]: PUT result: OK Sep 12 16:50:56.519304 ignition[1135]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Sep 12 16:50:56.523658 ignition[1135]: GET result: OK Sep 12 16:50:56.525258 ignition[1135]: parsing config with SHA512: f0dd8aa003b5a3c0f28f89233699e02cb0739025c9598f3a20bf6cd645011b19fa6daf84fe80a68afb9a0a7178939a55e0f76d7f37c6a05890b1e17044580599 Sep 12 16:50:56.535684 unknown[1135]: fetched base config from "system" Sep 12 16:50:56.536099 unknown[1135]: fetched base config from "system" Sep 12 16:50:56.536833 ignition[1135]: fetch: fetch complete Sep 12 16:50:56.536113 unknown[1135]: fetched user config from "aws" Sep 12 16:50:56.536845 ignition[1135]: fetch: fetch passed Sep 12 16:50:56.536938 ignition[1135]: Ignition finished successfully Sep 12 16:50:56.548569 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Sep 12 16:50:56.557142 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 12 16:50:56.588500 ignition[1143]: Ignition 2.20.0 Sep 12 16:50:56.588529 ignition[1143]: Stage: kargs Sep 12 16:50:56.589200 ignition[1143]: no configs at "/usr/lib/ignition/base.d" Sep 12 16:50:56.589227 ignition[1143]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 12 16:50:56.589392 ignition[1143]: PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 12 16:50:56.593998 ignition[1143]: PUT result: OK Sep 12 16:50:56.603203 ignition[1143]: kargs: kargs passed Sep 12 16:50:56.603515 ignition[1143]: Ignition finished successfully Sep 12 16:50:56.608847 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 12 16:50:56.619217 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Sep 12 16:50:56.643249 ignition[1149]: Ignition 2.20.0 Sep 12 16:50:56.643279 ignition[1149]: Stage: disks Sep 12 16:50:56.644108 ignition[1149]: no configs at "/usr/lib/ignition/base.d" Sep 12 16:50:56.644134 ignition[1149]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 12 16:50:56.644282 ignition[1149]: PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 12 16:50:56.648775 ignition[1149]: PUT result: OK Sep 12 16:50:56.659939 ignition[1149]: disks: disks passed Sep 12 16:50:56.660050 ignition[1149]: Ignition finished successfully Sep 12 16:50:56.663839 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 12 16:50:56.668695 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. 
Sep 12 16:50:56.674174 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 12 16:50:56.676996 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 12 16:50:56.679164 systemd[1]: Reached target sysinit.target - System Initialization. Sep 12 16:50:56.682920 systemd[1]: Reached target basic.target - Basic System. Sep 12 16:50:56.699122 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 12 16:50:56.753233 systemd-fsck[1158]: ROOT: clean, 14/553520 files, 52654/553472 blocks Sep 12 16:50:56.757384 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 12 16:50:56.771864 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 12 16:50:56.866839 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 397cbf4d-cf5b-4786-906a-df7c3e18edd9 r/w with ordered data mode. Quota mode: none. Sep 12 16:50:56.868706 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 12 16:50:56.872936 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 12 16:50:56.891021 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 12 16:50:56.902060 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 12 16:50:56.907596 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Sep 12 16:50:56.907688 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 12 16:50:56.907740 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 12 16:50:56.930272 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by mount (1177) Sep 12 16:50:56.930736 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 12 16:50:56.939296 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 903d50e4-a739-43b7-a8ad-24da5524f9bc Sep 12 16:50:56.939333 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Sep 12 16:50:56.939359 kernel: BTRFS info (device nvme0n1p6): using free space tree Sep 12 16:50:56.949087 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 12 16:50:56.969627 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Sep 12 16:50:56.974487 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 12 16:50:57.074677 initrd-setup-root[1202]: cut: /sysroot/etc/passwd: No such file or directory Sep 12 16:50:57.085736 initrd-setup-root[1209]: cut: /sysroot/etc/group: No such file or directory Sep 12 16:50:57.097100 initrd-setup-root[1216]: cut: /sysroot/etc/shadow: No such file or directory Sep 12 16:50:57.106358 initrd-setup-root[1223]: cut: /sysroot/etc/gshadow: No such file or directory Sep 12 16:50:57.282310 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 12 16:50:57.295495 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 12 16:50:57.301250 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 12 16:50:57.317051 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Sep 12 16:50:57.323900 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 903d50e4-a739-43b7-a8ad-24da5524f9bc Sep 12 16:50:57.372890 ignition[1296]: INFO : Ignition 2.20.0 Sep 12 16:50:57.372890 ignition[1296]: INFO : Stage: mount Sep 12 16:50:57.377415 ignition[1296]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 16:50:57.377415 ignition[1296]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 12 16:50:57.377415 ignition[1296]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 12 16:50:57.374272 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 12 16:50:57.400480 ignition[1296]: INFO : PUT result: OK Sep 12 16:50:57.400480 ignition[1296]: INFO : mount: mount passed Sep 12 16:50:57.400480 ignition[1296]: INFO : Ignition finished successfully Sep 12 16:50:57.388032 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 12 16:50:57.409426 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 12 16:50:57.442243 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 12 16:50:57.478850 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1308) Sep 12 16:50:57.483168 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 903d50e4-a739-43b7-a8ad-24da5524f9bc Sep 12 16:50:57.483239 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Sep 12 16:50:57.483267 kernel: BTRFS info (device nvme0n1p6): using free space tree Sep 12 16:50:57.489837 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Sep 12 16:50:57.494106 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 12 16:50:57.526097 ignition[1325]: INFO : Ignition 2.20.0 Sep 12 16:50:57.526097 ignition[1325]: INFO : Stage: files Sep 12 16:50:57.531911 ignition[1325]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 16:50:57.531911 ignition[1325]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 12 16:50:57.531911 ignition[1325]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 12 16:50:57.531911 ignition[1325]: INFO : PUT result: OK Sep 12 16:50:57.542611 ignition[1325]: DEBUG : files: compiled without relabeling support, skipping Sep 12 16:50:57.545636 ignition[1325]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 12 16:50:57.545636 ignition[1325]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 12 16:50:57.554021 ignition[1325]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 12 16:50:57.557329 ignition[1325]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 12 16:50:57.560741 unknown[1325]: wrote ssh authorized keys file for user: core Sep 12 16:50:57.563230 ignition[1325]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 12 16:50:57.569029 ignition[1325]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Sep 12 16:50:57.573427 ignition[1325]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 Sep 12 16:50:57.691457 ignition[1325]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 12 16:50:58.458006 systemd-networkd[1127]: eth0: Gained IPv6LL Sep 12 16:50:58.690394 ignition[1325]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing 
file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Sep 12 16:50:58.695320 ignition[1325]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 12 16:50:58.695320 ignition[1325]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Sep 12 16:50:58.878789 ignition[1325]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Sep 12 16:50:59.013613 ignition[1325]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 12 16:50:59.019048 ignition[1325]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Sep 12 16:50:59.019048 ignition[1325]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Sep 12 16:50:59.019048 ignition[1325]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 12 16:50:59.019048 ignition[1325]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 12 16:50:59.019048 ignition[1325]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 12 16:50:59.019048 ignition[1325]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 12 16:50:59.019048 ignition[1325]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 12 16:50:59.047746 ignition[1325]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 12 16:50:59.047746 ignition[1325]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 12 16:50:59.047746 ignition[1325]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 12 16:50:59.047746 ignition[1325]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Sep 12 16:50:59.047746 ignition[1325]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Sep 12 16:50:59.047746 ignition[1325]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Sep 12 16:50:59.047746 ignition[1325]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1 Sep 12 16:50:59.454233 ignition[1325]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Sep 12 16:51:01.108520 ignition[1325]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Sep 12 16:51:01.108520 ignition[1325]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Sep 12 16:51:01.127119 ignition[1325]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at 
"/sysroot/etc/systemd/system/prepare-helm.service" Sep 12 16:51:01.127119 ignition[1325]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 12 16:51:01.127119 ignition[1325]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Sep 12 16:51:01.127119 ignition[1325]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Sep 12 16:51:01.127119 ignition[1325]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Sep 12 16:51:01.127119 ignition[1325]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 12 16:51:01.127119 ignition[1325]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 12 16:51:01.127119 ignition[1325]: INFO : files: files passed Sep 12 16:51:01.127119 ignition[1325]: INFO : Ignition finished successfully Sep 12 16:51:01.116012 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 12 16:51:01.137150 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 12 16:51:01.164063 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 12 16:51:01.178973 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 12 16:51:01.179189 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 12 16:51:01.195574 initrd-setup-root-after-ignition[1353]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 12 16:51:01.195574 initrd-setup-root-after-ignition[1353]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 12 16:51:01.203950 initrd-setup-root-after-ignition[1357]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 12 16:51:01.205683 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 12 16:51:01.214997 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 12 16:51:01.227544 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 12 16:51:01.285461 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 12 16:51:01.285657 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 12 16:51:01.288814 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 12 16:51:01.291192 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 12 16:51:01.295375 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 12 16:51:01.305595 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 12 16:51:01.340610 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 12 16:51:01.352086 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 12 16:51:01.377304 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 12 16:51:01.382588 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 16:51:01.385551 systemd[1]: Stopped target timers.target - Timer Units. Sep 12 16:51:01.390215 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. 
Sep 12 16:51:01.390449 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 12 16:51:01.399115 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 12 16:51:01.401893 systemd[1]: Stopped target basic.target - Basic System. Sep 12 16:51:01.405984 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 12 16:51:01.409659 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 12 16:51:01.414048 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 12 16:51:01.423175 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 12 16:51:01.425740 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 12 16:51:01.433162 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 12 16:51:01.435746 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 12 16:51:01.441997 systemd[1]: Stopped target swap.target - Swaps. Sep 12 16:51:01.444150 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 12 16:51:01.444379 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 12 16:51:01.452604 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 12 16:51:01.455050 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 12 16:51:01.463528 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 12 16:51:01.463782 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 12 16:51:01.466905 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 12 16:51:01.467475 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 12 16:51:01.471398 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 12 16:51:01.471779 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 12 16:51:01.484575 systemd[1]: ignition-files.service: Deactivated successfully. Sep 12 16:51:01.485333 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 12 16:51:01.503245 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 12 16:51:01.511394 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 12 16:51:01.513788 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 12 16:51:01.516091 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 16:51:01.520377 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 12 16:51:01.520610 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 12 16:51:01.546924 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 12 16:51:01.547143 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Sep 12 16:51:01.557068 ignition[1377]: INFO : Ignition 2.20.0 Sep 12 16:51:01.560535 ignition[1377]: INFO : Stage: umount Sep 12 16:51:01.560535 ignition[1377]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 16:51:01.560535 ignition[1377]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 12 16:51:01.560535 ignition[1377]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 12 16:51:01.569957 ignition[1377]: INFO : PUT result: OK Sep 12 16:51:01.575465 ignition[1377]: INFO : umount: umount passed Sep 12 16:51:01.577492 ignition[1377]: INFO : Ignition finished successfully Sep 12 16:51:01.583493 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 12 16:51:01.585888 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 12 16:51:01.589088 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 12 16:51:01.589187 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 12 16:51:01.595258 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 12 16:51:01.595357 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 12 16:51:01.603581 systemd[1]: ignition-fetch.service: Deactivated successfully. Sep 12 16:51:01.603695 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Sep 12 16:51:01.611106 systemd[1]: Stopped target network.target - Network. Sep 12 16:51:01.617274 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 12 16:51:01.617393 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 12 16:51:01.623042 systemd[1]: Stopped target paths.target - Path Units. Sep 12 16:51:01.634919 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 12 16:51:01.637296 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 12 16:51:01.640978 systemd[1]: Stopped target slices.target - Slice Units. Sep 12 16:51:01.646048 systemd[1]: Stopped target sockets.target - Socket Units. Sep 12 16:51:01.648231 systemd[1]: iscsid.socket: Deactivated successfully. Sep 12 16:51:01.648311 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 12 16:51:01.650519 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 12 16:51:01.650588 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 12 16:51:01.652832 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 12 16:51:01.652925 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 12 16:51:01.655412 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 12 16:51:01.655498 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 12 16:51:01.671135 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 12 16:51:01.675234 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 12 16:51:01.679614 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 12 16:51:01.680610 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 12 16:51:01.680784 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 12 16:51:01.685069 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 12 16:51:01.685243 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 12 16:51:01.705307 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 12 16:51:01.705686 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. 
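Every Ignition stage in this log (mount, files, umount) begins with "PUT http://169.254.169.254/latest/api/token", the IMDSv2 session-token handshake on EC2. A minimal standard-library sketch of that flow follows; the TTL and token headers are the documented IMDSv2 parameters, and the metadata path queried afterwards is only an example.

```python
import urllib.request

IMDS = "http://169.254.169.254"

# Step 1: PUT /latest/api/token with a TTL header to obtain a session token
# (this is the "PUT result: OK" seen in each Ignition stage above).
req = urllib.request.Request(
    f"{IMDS}/latest/api/token",
    method="PUT",
    headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
)
token = urllib.request.urlopen(req, timeout=5).read().decode()

# Step 2: present the token on subsequent metadata GETs; the API date matches
# the one coreos-metadata uses later in this log.
req = urllib.request.Request(
    f"{IMDS}/2021-01-03/meta-data/instance-id",
    headers={"X-aws-ec2-metadata-token": token},
)
print(urllib.request.urlopen(req, timeout=5).read().decode())
```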
Sep 12 16:51:01.718750 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Sep 12 16:51:01.722041 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 12 16:51:01.722250 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 12 16:51:01.731234 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Sep 12 16:51:01.732677 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 12 16:51:01.732790 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 12 16:51:01.754019 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 12 16:51:01.758559 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 12 16:51:01.758677 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 12 16:51:01.758890 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 12 16:51:01.758970 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 12 16:51:01.766047 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 12 16:51:01.766139 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 12 16:51:01.782619 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 12 16:51:01.782716 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 16:51:01.785666 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 16:51:01.789955 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 12 16:51:01.790089 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Sep 12 16:51:01.807445 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 12 16:51:01.808358 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 16:51:01.817997 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 12 16:51:01.818368 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 12 16:51:01.829004 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 12 16:51:01.829132 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 12 16:51:01.833636 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 12 16:51:01.833994 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 16:51:01.838256 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 12 16:51:01.838365 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 12 16:51:01.845006 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 12 16:51:01.846998 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 12 16:51:01.855652 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 12 16:51:01.855759 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 16:51:01.870050 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 12 16:51:01.872662 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 12 16:51:01.872782 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 16:51:01.885200 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. 
Sep 12 16:51:01.885310 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 12 16:51:01.888129 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 12 16:51:01.888240 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 16:51:01.892652 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 12 16:51:01.892755 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 16:51:01.905017 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Sep 12 16:51:01.905148 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 12 16:51:01.905899 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 12 16:51:01.906074 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 12 16:51:01.929983 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 12 16:51:01.946173 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 12 16:51:01.962934 systemd[1]: Switching root. Sep 12 16:51:01.996085 systemd-journald[251]: Journal stopped Sep 12 16:51:04.241989 systemd-journald[251]: Received SIGTERM from PID 1 (systemd). Sep 12 16:51:04.242128 kernel: SELinux: policy capability network_peer_controls=1 Sep 12 16:51:04.242170 kernel: SELinux: policy capability open_perms=1 Sep 12 16:51:04.242201 kernel: SELinux: policy capability extended_socket_class=1 Sep 12 16:51:04.242231 kernel: SELinux: policy capability always_check_network=0 Sep 12 16:51:04.242259 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 12 16:51:04.242289 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 12 16:51:04.242318 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 12 16:51:04.242347 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 12 16:51:04.242379 kernel: audit: type=1403 audit(1757695862.444:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 12 16:51:04.242426 systemd[1]: Successfully loaded SELinux policy in 54.353ms. Sep 12 16:51:04.242471 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 23.826ms. Sep 12 16:51:04.242503 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 12 16:51:04.242535 systemd[1]: Detected virtualization amazon. Sep 12 16:51:04.242564 systemd[1]: Detected architecture arm64. Sep 12 16:51:04.242594 systemd[1]: Detected first boot. Sep 12 16:51:04.242625 systemd[1]: Initializing machine ID from VM UUID. Sep 12 16:51:04.242653 zram_generator::config[1422]: No configuration found. Sep 12 16:51:04.242688 kernel: NET: Registered PF_VSOCK protocol family Sep 12 16:51:04.242717 systemd[1]: Populated /etc with preset unit settings. Sep 12 16:51:04.242750 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Sep 12 16:51:04.242781 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 12 16:51:04.245340 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 12 16:51:04.245396 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. 
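On this first boot, systemd reports initializing the machine ID from the VM UUID. On EC2 instances that UUID is commonly surfaced through SMBIOS/DMI; whether that is the exact source consulted here is an assumption, but the sketch below shows where such a value can usually be read.

```python
from pathlib import Path

# The hypervisor-provided VM UUID is commonly exposed via SMBIOS/DMI.
# Whether this exact node is what systemd consulted here is an assumption.
uuid_path = Path("/sys/class/dmi/id/product_uuid")   # usually requires root to read

if uuid_path.exists():
    print("VM UUID:", uuid_path.read_text().strip())
else:
    print("No DMI product_uuid exposed on this platform")

# The resulting machine ID is 32 hex characters in /etc/machine-id.
print("machine-id:", Path("/etc/machine-id").read_text().strip())
```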
Sep 12 16:51:04.245431 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 12 16:51:04.245460 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 12 16:51:04.245497 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 12 16:51:04.245526 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 12 16:51:04.245558 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 12 16:51:04.250924 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 12 16:51:04.250972 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 12 16:51:04.251002 systemd[1]: Created slice user.slice - User and Session Slice. Sep 12 16:51:04.251043 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 12 16:51:04.251074 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 12 16:51:04.251103 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 12 16:51:04.251138 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 12 16:51:04.251168 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 12 16:51:04.251200 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 12 16:51:04.251231 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Sep 12 16:51:04.251261 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 12 16:51:04.251289 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 12 16:51:04.251317 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 12 16:51:04.251353 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 12 16:51:04.251382 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 12 16:51:04.251413 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 16:51:04.251445 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 12 16:51:04.251474 systemd[1]: Reached target slices.target - Slice Units. Sep 12 16:51:04.251505 systemd[1]: Reached target swap.target - Swaps. Sep 12 16:51:04.251533 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 12 16:51:04.251562 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 12 16:51:04.251593 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Sep 12 16:51:04.251622 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 12 16:51:04.251656 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 12 16:51:04.251686 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 16:51:04.251715 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 12 16:51:04.251743 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 12 16:51:04.251773 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 12 16:51:04.253893 systemd[1]: Mounting media.mount - External Media Directory... 
Sep 12 16:51:04.253944 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 12 16:51:04.253977 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 12 16:51:04.254005 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 12 16:51:04.254043 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 12 16:51:04.254076 systemd[1]: Reached target machines.target - Containers. Sep 12 16:51:04.254108 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 12 16:51:04.254137 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 16:51:04.254165 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 12 16:51:04.254194 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 12 16:51:04.254224 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 16:51:04.254253 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 12 16:51:04.254285 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 16:51:04.254316 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 12 16:51:04.254344 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 16:51:04.254373 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 12 16:51:04.254402 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 12 16:51:04.254431 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 12 16:51:04.254460 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 12 16:51:04.254491 systemd[1]: Stopped systemd-fsck-usr.service. Sep 12 16:51:04.254525 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 12 16:51:04.254556 kernel: fuse: init (API version 7.39) Sep 12 16:51:04.254584 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 12 16:51:04.254612 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 12 16:51:04.254641 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 12 16:51:04.254671 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 12 16:51:04.254699 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Sep 12 16:51:04.254727 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 12 16:51:04.254757 systemd[1]: verity-setup.service: Deactivated successfully. Sep 12 16:51:04.254789 kernel: loop: module loaded Sep 12 16:51:04.263986 systemd[1]: Stopped verity-setup.service. Sep 12 16:51:04.264026 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 12 16:51:04.264056 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 12 16:51:04.264087 systemd[1]: Mounted media.mount - External Media Directory. 
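The modprobe@*.service instances above load configfs, dm_mod, drm, efi_pstore, fuse and loop, and the kernel confirms fuse and loop initializing. A small sketch of how the result can be checked from userspace via /proc/modules; the module names are the ones from the log.

```python
def loaded_modules(path="/proc/modules"):
    """Return the set of currently loaded kernel module names."""
    with open(path) as f:
        return {line.split(maxsplit=1)[0] for line in f}

mods = loaded_modules()
for name in ("configfs", "dm_mod", "drm", "efi_pstore", "fuse", "loop"):
    # Built-in modules do not appear in /proc/modules, so absence here does
    # not necessarily mean the functionality is missing.
    status = "loaded as module" if name in mods else "not listed (may be built in)"
    print(f"{name}: {status}")
```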
Sep 12 16:51:04.264125 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 12 16:51:04.264154 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 12 16:51:04.264184 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 12 16:51:04.264217 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 16:51:04.264246 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 12 16:51:04.264279 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 12 16:51:04.264308 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 16:51:04.264337 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 16:51:04.264366 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 16:51:04.264395 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 16:51:04.264426 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 12 16:51:04.264455 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 12 16:51:04.264483 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 16:51:04.264511 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 16:51:04.264544 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 12 16:51:04.264575 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 12 16:51:04.264604 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 12 16:51:04.264684 systemd-journald[1516]: Collecting audit messages is disabled. Sep 12 16:51:04.264738 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 12 16:51:04.264769 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 12 16:51:04.264826 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 12 16:51:04.264870 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 12 16:51:04.264903 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Sep 12 16:51:04.264931 kernel: ACPI: bus type drm_connector registered Sep 12 16:51:04.264959 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 12 16:51:04.264990 systemd-journald[1516]: Journal started Sep 12 16:51:04.265038 systemd-journald[1516]: Runtime Journal (/run/log/journal/ec2e0c4dbe79f08a2a8c8f976230fb8d) is 8M, max 75.3M, 67.3M free. Sep 12 16:51:04.270915 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 12 16:51:03.579046 systemd[1]: Queued start job for default target multi-user.target. Sep 12 16:51:03.592618 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Sep 12 16:51:03.593479 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 12 16:51:04.283027 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 16:51:04.298132 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 12 16:51:04.298210 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Sep 12 16:51:04.315213 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 12 16:51:04.315318 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 12 16:51:04.331833 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 12 16:51:04.355086 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 12 16:51:04.370882 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 12 16:51:04.370994 systemd[1]: Started systemd-journald.service - Journal Service. Sep 12 16:51:04.383321 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 12 16:51:04.383766 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 12 16:51:04.386932 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 12 16:51:04.390506 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Sep 12 16:51:04.393600 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 12 16:51:04.396748 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 12 16:51:04.412554 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 12 16:51:04.433077 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 12 16:51:04.477913 kernel: loop0: detected capacity change from 0 to 53784 Sep 12 16:51:04.481173 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 12 16:51:04.485358 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 12 16:51:04.496183 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 12 16:51:04.507190 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Sep 12 16:51:04.525828 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 12 16:51:04.563241 systemd-journald[1516]: Time spent on flushing to /var/log/journal/ec2e0c4dbe79f08a2a8c8f976230fb8d is 86.971ms for 928 entries. Sep 12 16:51:04.563241 systemd-journald[1516]: System Journal (/var/log/journal/ec2e0c4dbe79f08a2a8c8f976230fb8d) is 8M, max 195.6M, 187.6M free. Sep 12 16:51:04.668231 systemd-journald[1516]: Received client request to flush runtime journal. Sep 12 16:51:04.668335 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 12 16:51:04.567300 systemd-tmpfiles[1539]: ACLs are not supported, ignoring. Sep 12 16:51:04.567325 systemd-tmpfiles[1539]: ACLs are not supported, ignoring. Sep 12 16:51:04.571836 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 16:51:04.575573 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Sep 12 16:51:04.593162 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Sep 12 16:51:04.600601 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 12 16:51:04.604579 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 12 16:51:04.616237 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 12 16:51:04.674150 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. 
Sep 12 16:51:04.685581 kernel: loop1: detected capacity change from 0 to 113512 Sep 12 16:51:04.690106 udevadm[1570]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Sep 12 16:51:04.723935 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 12 16:51:04.740126 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 12 16:51:04.776916 kernel: loop2: detected capacity change from 0 to 207008 Sep 12 16:51:04.783464 systemd-tmpfiles[1579]: ACLs are not supported, ignoring. Sep 12 16:51:04.784008 systemd-tmpfiles[1579]: ACLs are not supported, ignoring. Sep 12 16:51:04.801714 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 16:51:04.876857 kernel: loop3: detected capacity change from 0 to 123192 Sep 12 16:51:04.953133 kernel: loop4: detected capacity change from 0 to 53784 Sep 12 16:51:04.983872 kernel: loop5: detected capacity change from 0 to 113512 Sep 12 16:51:05.024768 kernel: loop6: detected capacity change from 0 to 207008 Sep 12 16:51:05.078860 kernel: loop7: detected capacity change from 0 to 123192 Sep 12 16:51:05.115472 (sd-merge)[1586]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Sep 12 16:51:05.117594 (sd-merge)[1586]: Merged extensions into '/usr'. Sep 12 16:51:05.129038 systemd[1]: Reload requested from client PID 1538 ('systemd-sysext') (unit systemd-sysext.service)... Sep 12 16:51:05.129072 systemd[1]: Reloading... Sep 12 16:51:05.326882 zram_generator::config[1617]: No configuration found. Sep 12 16:51:05.429595 ldconfig[1534]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 12 16:51:05.631219 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 16:51:05.783612 systemd[1]: Reloading finished in 652 ms. Sep 12 16:51:05.805914 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 12 16:51:05.808998 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 12 16:51:05.812585 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 12 16:51:05.832065 systemd[1]: Starting ensure-sysext.service... Sep 12 16:51:05.841241 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 12 16:51:05.851166 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 16:51:05.885073 systemd[1]: Reload requested from client PID 1667 ('systemctl') (unit ensure-sysext.service)... Sep 12 16:51:05.885112 systemd[1]: Reloading... Sep 12 16:51:05.908530 systemd-tmpfiles[1668]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 12 16:51:05.910779 systemd-tmpfiles[1668]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 12 16:51:05.916893 systemd-tmpfiles[1668]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 12 16:51:05.917488 systemd-tmpfiles[1668]: ACLs are not supported, ignoring. Sep 12 16:51:05.917641 systemd-tmpfiles[1668]: ACLs are not supported, ignoring. 
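The (sd-merge) lines show systemd-sysext activating the containerd-flatcar, docker-flatcar, kubernetes and oem-ami extension images and merging them into /usr; the kubernetes image was linked into /etc/extensions by the Ignition files stage earlier. A short sketch listing what sysext would find in its main search directories, for illustration only.

```python
from pathlib import Path

# Main systemd-sysext search paths for extension images
# (there are additional vendor paths under /usr as well).
SEARCH_PATHS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

for base in SEARCH_PATHS:
    d = Path(base)
    if not d.is_dir():
        continue
    for entry in sorted(d.iterdir()):
        # Raw disk images (e.g. kubernetes.raw -> kubernetes-v1.32.4-arm64.raw)
        # and plain directories are both valid extension formats.
        kind = "raw image" if entry.suffix == ".raw" else "directory" if entry.is_dir() else "other"
        print(f"{base}: {entry.name} ({kind})")
```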
Sep 12 16:51:05.933690 systemd-tmpfiles[1668]: Detected autofs mount point /boot during canonicalization of boot. Sep 12 16:51:05.933716 systemd-tmpfiles[1668]: Skipping /boot Sep 12 16:51:05.967586 systemd-tmpfiles[1668]: Detected autofs mount point /boot during canonicalization of boot. Sep 12 16:51:05.967623 systemd-tmpfiles[1668]: Skipping /boot Sep 12 16:51:05.984158 systemd-udevd[1669]: Using default interface naming scheme 'v255'. Sep 12 16:51:06.095849 zram_generator::config[1698]: No configuration found. Sep 12 16:51:06.212311 (udev-worker)[1717]: Network interface NamePolicy= disabled on kernel command line. Sep 12 16:51:06.417883 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (1719) Sep 12 16:51:06.482910 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 16:51:06.684666 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Sep 12 16:51:06.685109 systemd[1]: Reloading finished in 799 ms. Sep 12 16:51:06.700451 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 16:51:06.739693 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 16:51:06.843589 systemd[1]: Finished ensure-sysext.service. Sep 12 16:51:06.869103 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Sep 12 16:51:06.887539 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Sep 12 16:51:06.897194 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 12 16:51:06.903120 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 12 16:51:06.911244 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 16:51:06.920274 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Sep 12 16:51:06.926561 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 16:51:06.934689 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 12 16:51:06.945567 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 16:51:06.952138 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 16:51:06.955244 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 16:51:06.962297 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 12 16:51:06.964969 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 12 16:51:06.971276 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 12 16:51:06.983562 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 12 16:51:06.996996 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 12 16:51:07.001019 systemd[1]: Reached target time-set.target - System Time Set. 
Sep 12 16:51:07.008181 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 12 16:51:07.023771 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 16:51:07.032537 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 16:51:07.033241 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 16:51:07.036460 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 12 16:51:07.036892 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 12 16:51:07.073691 lvm[1869]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 12 16:51:07.075651 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 16:51:07.076947 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 16:51:07.080696 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 16:51:07.081167 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 16:51:07.087674 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 12 16:51:07.087825 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 12 16:51:07.101105 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 12 16:51:07.121354 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 12 16:51:07.135051 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 12 16:51:07.182958 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 12 16:51:07.186685 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Sep 12 16:51:07.189372 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 12 16:51:07.200186 augenrules[1908]: No rules Sep 12 16:51:07.201739 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Sep 12 16:51:07.217136 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 12 16:51:07.220034 systemd[1]: audit-rules.service: Deactivated successfully. Sep 12 16:51:07.220551 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 12 16:51:07.236140 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 12 16:51:07.239464 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 12 16:51:07.247452 lvm[1913]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 12 16:51:07.263945 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 12 16:51:07.303472 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Sep 12 16:51:07.307507 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 12 16:51:07.315134 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Sep 12 16:51:07.438260 systemd-networkd[1882]: lo: Link UP Sep 12 16:51:07.438287 systemd-networkd[1882]: lo: Gained carrier Sep 12 16:51:07.441545 systemd-networkd[1882]: Enumeration completed Sep 12 16:51:07.441730 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 12 16:51:07.443861 systemd-networkd[1882]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 16:51:07.443869 systemd-networkd[1882]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 12 16:51:07.448183 systemd-networkd[1882]: eth0: Link UP Sep 12 16:51:07.448490 systemd-networkd[1882]: eth0: Gained carrier Sep 12 16:51:07.448539 systemd-networkd[1882]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 16:51:07.450106 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Sep 12 16:51:07.455074 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 12 16:51:07.467604 systemd-resolved[1884]: Positive Trust Anchors: Sep 12 16:51:07.467642 systemd-resolved[1884]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 12 16:51:07.467704 systemd-resolved[1884]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 12 16:51:07.468915 systemd-networkd[1882]: eth0: DHCPv4 address 172.31.21.42/20, gateway 172.31.16.1 acquired from 172.31.16.1 Sep 12 16:51:07.483751 systemd-resolved[1884]: Defaulting to hostname 'linux'. Sep 12 16:51:07.487231 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 12 16:51:07.489888 systemd[1]: Reached target network.target - Network. Sep 12 16:51:07.491928 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 12 16:51:07.494567 systemd[1]: Reached target sysinit.target - System Initialization. Sep 12 16:51:07.497065 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 12 16:51:07.499836 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 12 16:51:07.502906 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 12 16:51:07.505423 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 12 16:51:07.508168 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 12 16:51:07.510879 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 12 16:51:07.510920 systemd[1]: Reached target paths.target - Path Units. Sep 12 16:51:07.512901 systemd[1]: Reached target timers.target - Timer Units. Sep 12 16:51:07.516357 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. 
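systemd-networkd matches eth0 against /usr/lib/systemd/network/zz-default.network and configures it via DHCPv4 (172.31.21.42/20, gateway 172.31.16.1). The contents of that shipped unit are not in the log; the sketch below writes an assumed catch-all DHCP .network file of the same general shape, not a copy of the real one.

```python
from pathlib import Path

# Illustrative only: an assumed shape for a catch-all DHCP .network unit,
# not the literal zz-default.network shipped by the OS.
network_unit = """\
[Match]
Name=eth*

[Network]
DHCP=yes
"""

# A real override would live in /etc/systemd/network/, which takes
# precedence over /usr/lib/systemd/network/; this just writes to /tmp.
target = Path("/tmp/50-dhcp-example.network")
target.write_text(network_unit)
print(f"wrote {target}:\n{network_unit}")
```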
Sep 12 16:51:07.523790 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 12 16:51:07.530984 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Sep 12 16:51:07.534182 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Sep 12 16:51:07.537034 systemd[1]: Reached target ssh-access.target - SSH Access Available. Sep 12 16:51:07.552043 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 12 16:51:07.555475 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Sep 12 16:51:07.559612 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Sep 12 16:51:07.563229 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 12 16:51:07.566537 systemd[1]: Reached target sockets.target - Socket Units. Sep 12 16:51:07.569152 systemd[1]: Reached target basic.target - Basic System. Sep 12 16:51:07.571475 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 12 16:51:07.571677 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 12 16:51:07.578043 systemd[1]: Starting containerd.service - containerd container runtime... Sep 12 16:51:07.586129 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Sep 12 16:51:07.595136 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 12 16:51:07.609061 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 12 16:51:07.617166 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 12 16:51:07.619586 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 12 16:51:07.644217 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 12 16:51:07.650285 systemd[1]: Started ntpd.service - Network Time Service. Sep 12 16:51:07.657594 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 12 16:51:07.664302 systemd[1]: Starting setup-oem.service - Setup OEM... Sep 12 16:51:07.676329 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 12 16:51:07.688280 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 12 16:51:07.710550 jq[1940]: false Sep 12 16:51:07.687069 dbus-daemon[1939]: [system] SELinux support is enabled Sep 12 16:51:07.703088 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 12 16:51:07.701262 dbus-daemon[1939]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1882 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Sep 12 16:51:07.708697 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 12 16:51:07.709583 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 12 16:51:07.715101 systemd[1]: Starting update-engine.service - Update Engine... 
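prepare-helm.service, described as "Unpack helm to /opt/bin", operates on the tarball the files stage downloaded, and the tar output later in the log lists linux-arm64/LICENSE and linux-arm64/helm. The unit's actual ExecStart is not shown; this sketch illustrates one way such an unpack step could be done, with the extraction details assumed.

```python
import tarfile
from pathlib import Path

archive = Path("/opt/helm-v3.17.0-linux-arm64.tar.gz")   # written by the Ignition files stage
dest = Path("/opt/bin")                                  # target named by prepare-helm.service

# The upstream helm tarball contains a single top-level directory
# (linux-arm64/); pull just the binary out of it. Details assumed.
with tarfile.open(archive) as tar:
    member = tar.getmember("linux-arm64/helm")
    member.name = "helm"              # strip the leading directory component
    tar.extract(member, path=dest)

(dest / "helm").chmod(0o755)
print("helm unpacked to", dest / "helm")
```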
Sep 12 16:51:07.723401 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 12 16:51:07.728577 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 12 16:51:07.739914 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 12 16:51:07.741911 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 12 16:51:07.745706 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 12 16:51:07.746143 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 12 16:51:07.761759 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 12 16:51:07.762077 dbus-daemon[1939]: [system] Successfully activated service 'org.freedesktop.systemd1' Sep 12 16:51:07.763966 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 12 16:51:07.767501 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 12 16:51:07.767541 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 12 16:51:07.788300 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Sep 12 16:51:07.857903 jq[1952]: true Sep 12 16:51:07.857218 (ntainerd)[1964]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 12 16:51:07.860917 ntpd[1945]: ntpd 4.2.8p17@1.4004-o Fri Sep 12 15:00:19 UTC 2025 (1): Starting Sep 12 16:51:07.872553 ntpd[1945]: 12 Sep 16:51:07 ntpd[1945]: ntpd 4.2.8p17@1.4004-o Fri Sep 12 15:00:19 UTC 2025 (1): Starting Sep 12 16:51:07.872553 ntpd[1945]: 12 Sep 16:51:07 ntpd[1945]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Sep 12 16:51:07.872553 ntpd[1945]: 12 Sep 16:51:07 ntpd[1945]: ---------------------------------------------------- Sep 12 16:51:07.872553 ntpd[1945]: 12 Sep 16:51:07 ntpd[1945]: ntp-4 is maintained by Network Time Foundation, Sep 12 16:51:07.872553 ntpd[1945]: 12 Sep 16:51:07 ntpd[1945]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Sep 12 16:51:07.872553 ntpd[1945]: 12 Sep 16:51:07 ntpd[1945]: corporation. Support and training for ntp-4 are Sep 12 16:51:07.872553 ntpd[1945]: 12 Sep 16:51:07 ntpd[1945]: available at https://www.nwtime.org/support Sep 12 16:51:07.872553 ntpd[1945]: 12 Sep 16:51:07 ntpd[1945]: ---------------------------------------------------- Sep 12 16:51:07.872553 ntpd[1945]: 12 Sep 16:51:07 ntpd[1945]: proto: precision = 0.096 usec (-23) Sep 12 16:51:07.872553 ntpd[1945]: 12 Sep 16:51:07 ntpd[1945]: basedate set to 2025-08-31 Sep 12 16:51:07.872553 ntpd[1945]: 12 Sep 16:51:07 ntpd[1945]: gps base set to 2025-08-31 (week 2382) Sep 12 16:51:07.860984 ntpd[1945]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Sep 12 16:51:07.861004 ntpd[1945]: ---------------------------------------------------- Sep 12 16:51:07.861023 ntpd[1945]: ntp-4 is maintained by Network Time Foundation, Sep 12 16:51:07.861040 ntpd[1945]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Sep 12 16:51:07.861058 ntpd[1945]: corporation. 
Support and training for ntp-4 are Sep 12 16:51:07.861079 ntpd[1945]: available at https://www.nwtime.org/support Sep 12 16:51:07.861098 ntpd[1945]: ---------------------------------------------------- Sep 12 16:51:07.869281 ntpd[1945]: proto: precision = 0.096 usec (-23) Sep 12 16:51:07.870241 ntpd[1945]: basedate set to 2025-08-31 Sep 12 16:51:07.870272 ntpd[1945]: gps base set to 2025-08-31 (week 2382) Sep 12 16:51:07.877895 ntpd[1945]: Listen and drop on 0 v6wildcard [::]:123 Sep 12 16:51:07.879539 ntpd[1945]: 12 Sep 16:51:07 ntpd[1945]: Listen and drop on 0 v6wildcard [::]:123 Sep 12 16:51:07.879539 ntpd[1945]: 12 Sep 16:51:07 ntpd[1945]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Sep 12 16:51:07.879539 ntpd[1945]: 12 Sep 16:51:07 ntpd[1945]: Listen normally on 2 lo 127.0.0.1:123 Sep 12 16:51:07.879539 ntpd[1945]: 12 Sep 16:51:07 ntpd[1945]: Listen normally on 3 eth0 172.31.21.42:123 Sep 12 16:51:07.879539 ntpd[1945]: 12 Sep 16:51:07 ntpd[1945]: Listen normally on 4 lo [::1]:123 Sep 12 16:51:07.879539 ntpd[1945]: 12 Sep 16:51:07 ntpd[1945]: bind(21) AF_INET6 fe80::42d:8eff:feb7:cbdb%2#123 flags 0x11 failed: Cannot assign requested address Sep 12 16:51:07.879539 ntpd[1945]: 12 Sep 16:51:07 ntpd[1945]: unable to create socket on eth0 (5) for fe80::42d:8eff:feb7:cbdb%2#123 Sep 12 16:51:07.879539 ntpd[1945]: 12 Sep 16:51:07 ntpd[1945]: failed to init interface for address fe80::42d:8eff:feb7:cbdb%2 Sep 12 16:51:07.879539 ntpd[1945]: 12 Sep 16:51:07 ntpd[1945]: Listening on routing socket on fd #21 for interface updates Sep 12 16:51:07.877983 ntpd[1945]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Sep 12 16:51:07.878236 ntpd[1945]: Listen normally on 2 lo 127.0.0.1:123 Sep 12 16:51:07.878303 ntpd[1945]: Listen normally on 3 eth0 172.31.21.42:123 Sep 12 16:51:07.878366 ntpd[1945]: Listen normally on 4 lo [::1]:123 Sep 12 16:51:07.878441 ntpd[1945]: bind(21) AF_INET6 fe80::42d:8eff:feb7:cbdb%2#123 flags 0x11 failed: Cannot assign requested address Sep 12 16:51:07.878478 ntpd[1945]: unable to create socket on eth0 (5) for fe80::42d:8eff:feb7:cbdb%2#123 Sep 12 16:51:07.878506 ntpd[1945]: failed to init interface for address fe80::42d:8eff:feb7:cbdb%2 Sep 12 16:51:07.878557 ntpd[1945]: Listening on routing socket on fd #21 for interface updates Sep 12 16:51:07.889079 ntpd[1945]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 12 16:51:07.893820 ntpd[1945]: 12 Sep 16:51:07 ntpd[1945]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 12 16:51:07.891172 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 12 16:51:07.896976 ntpd[1945]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 12 16:51:07.897595 ntpd[1945]: 12 Sep 16:51:07 ntpd[1945]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 12 16:51:07.920555 systemd[1]: motdgen.service: Deactivated successfully. Sep 12 16:51:07.922947 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 12 16:51:07.964051 tar[1973]: linux-arm64/LICENSE Sep 12 16:51:07.964051 tar[1973]: linux-arm64/helm Sep 12 16:51:07.971487 systemd[1]: Finished setup-oem.service - Setup OEM. 
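ntpd starts, binds its sockets on lo and eth0, and initially reports the clock unsynchronized; the IPv6 bind fails because the link-local address is not yet usable. Purely to illustrate the wire protocol the daemon speaks, here is a minimal SNTP query; the server name is an example and nothing here reflects this host's ntpd configuration.

```python
import socket
import struct
import time

NTP_EPOCH_OFFSET = 2208988800            # seconds between 1900-01-01 and 1970-01-01
SERVER = "pool.ntp.org"                  # example server, not from this host's config

# 48-byte request: LI=0, VN=3, Mode=3 (client) in the first byte, rest zero.
packet = b"\x1b" + 47 * b"\0"

with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
    s.settimeout(5)
    s.sendto(packet, (SERVER, 123))
    data, _ = s.recvfrom(512)

# Transmit timestamp: the seconds field sits at bytes 40..43 of the reply.
ntp_seconds = struct.unpack("!I", data[40:44])[0]
unix_seconds = ntp_seconds - NTP_EPOCH_OFFSET
print("server time:", time.strftime("%Y-%m-%d %H:%M:%S", time.gmtime(unix_seconds)))
```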
Sep 12 16:51:07.982823 extend-filesystems[1941]: Found loop4 Sep 12 16:51:07.982823 extend-filesystems[1941]: Found loop5 Sep 12 16:51:07.982823 extend-filesystems[1941]: Found loop6 Sep 12 16:51:07.982823 extend-filesystems[1941]: Found loop7 Sep 12 16:51:07.993957 extend-filesystems[1941]: Found nvme0n1 Sep 12 16:51:07.993957 extend-filesystems[1941]: Found nvme0n1p1 Sep 12 16:51:07.993957 extend-filesystems[1941]: Found nvme0n1p2 Sep 12 16:51:07.993957 extend-filesystems[1941]: Found nvme0n1p3 Sep 12 16:51:07.993957 extend-filesystems[1941]: Found usr Sep 12 16:51:07.993957 extend-filesystems[1941]: Found nvme0n1p4 Sep 12 16:51:07.993957 extend-filesystems[1941]: Found nvme0n1p6 Sep 12 16:51:07.993957 extend-filesystems[1941]: Found nvme0n1p7 Sep 12 16:51:07.993957 extend-filesystems[1941]: Found nvme0n1p9 Sep 12 16:51:07.993957 extend-filesystems[1941]: Checking size of /dev/nvme0n1p9 Sep 12 16:51:08.018499 jq[1980]: true Sep 12 16:51:08.103085 extend-filesystems[1941]: Resized partition /dev/nvme0n1p9 Sep 12 16:51:08.115846 update_engine[1951]: I20250912 16:51:08.114967 1951 main.cc:92] Flatcar Update Engine starting Sep 12 16:51:08.121845 extend-filesystems[1998]: resize2fs 1.47.1 (20-May-2024) Sep 12 16:51:08.136918 systemd[1]: Started update-engine.service - Update Engine. Sep 12 16:51:08.152686 update_engine[1951]: I20250912 16:51:08.152578 1951 update_check_scheduler.cc:74] Next update check in 4m53s Sep 12 16:51:08.156159 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 12 16:51:08.159981 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Sep 12 16:51:08.166892 coreos-metadata[1938]: Sep 12 16:51:08.166 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Sep 12 16:51:08.171344 coreos-metadata[1938]: Sep 12 16:51:08.170 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Sep 12 16:51:08.173856 coreos-metadata[1938]: Sep 12 16:51:08.172 INFO Fetch successful Sep 12 16:51:08.173856 coreos-metadata[1938]: Sep 12 16:51:08.173 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Sep 12 16:51:08.179833 coreos-metadata[1938]: Sep 12 16:51:08.178 INFO Fetch successful Sep 12 16:51:08.179833 coreos-metadata[1938]: Sep 12 16:51:08.178 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Sep 12 16:51:08.182170 coreos-metadata[1938]: Sep 12 16:51:08.182 INFO Fetch successful Sep 12 16:51:08.182170 coreos-metadata[1938]: Sep 12 16:51:08.182 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Sep 12 16:51:08.183639 coreos-metadata[1938]: Sep 12 16:51:08.183 INFO Fetch successful Sep 12 16:51:08.183639 coreos-metadata[1938]: Sep 12 16:51:08.183 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Sep 12 16:51:08.184604 coreos-metadata[1938]: Sep 12 16:51:08.184 INFO Fetch failed with 404: resource not found Sep 12 16:51:08.184604 coreos-metadata[1938]: Sep 12 16:51:08.184 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Sep 12 16:51:08.185882 coreos-metadata[1938]: Sep 12 16:51:08.185 INFO Fetch successful Sep 12 16:51:08.186027 coreos-metadata[1938]: Sep 12 16:51:08.185 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Sep 12 16:51:08.190254 coreos-metadata[1938]: Sep 12 16:51:08.189 INFO Fetch successful Sep 12 16:51:08.190254 coreos-metadata[1938]: Sep 12 16:51:08.189 INFO Fetching 
http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Sep 12 16:51:08.192705 coreos-metadata[1938]: Sep 12 16:51:08.192 INFO Fetch successful Sep 12 16:51:08.192705 coreos-metadata[1938]: Sep 12 16:51:08.192 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Sep 12 16:51:08.194359 coreos-metadata[1938]: Sep 12 16:51:08.194 INFO Fetch successful Sep 12 16:51:08.194359 coreos-metadata[1938]: Sep 12 16:51:08.194 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Sep 12 16:51:08.195307 coreos-metadata[1938]: Sep 12 16:51:08.195 INFO Fetch successful Sep 12 16:51:08.321622 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Sep 12 16:51:08.327306 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 12 16:51:08.332835 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Sep 12 16:51:08.336822 bash[2023]: Updated "/home/core/.ssh/authorized_keys" Sep 12 16:51:08.338312 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 12 16:51:08.362740 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (1710) Sep 12 16:51:08.364481 systemd[1]: Starting sshkeys.service... Sep 12 16:51:08.373998 extend-filesystems[1998]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Sep 12 16:51:08.373998 extend-filesystems[1998]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 12 16:51:08.373998 extend-filesystems[1998]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Sep 12 16:51:08.373570 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 12 16:51:08.403061 extend-filesystems[1941]: Resized filesystem in /dev/nvme0n1p9 Sep 12 16:51:08.376953 systemd-logind[1950]: Watching system buttons on /dev/input/event0 (Power Button) Sep 12 16:51:08.376988 systemd-logind[1950]: Watching system buttons on /dev/input/event1 (Sleep Button) Sep 12 16:51:08.379951 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 12 16:51:08.380589 systemd-logind[1950]: New seat seat0. Sep 12 16:51:08.389227 systemd[1]: Started systemd-logind.service - User Login Management. Sep 12 16:51:08.458160 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Sep 12 16:51:08.467502 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Sep 12 16:51:08.541663 locksmithd[2003]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 12 16:51:08.778233 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Sep 12 16:51:08.786404 dbus-daemon[1939]: [system] Successfully activated service 'org.freedesktop.hostname1' Sep 12 16:51:08.792350 dbus-daemon[1939]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=1961 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Sep 12 16:51:08.815584 systemd[1]: Starting polkit.service - Authorization Manager... 
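
coreos-metadata above walks the EC2 instance metadata service: a PUT to /latest/api/token followed by GETs of the individual /2021-01-03/meta-data/ paths using that token (the ipv6 path legitimately returns 404 on this instance). A rough standard-library equivalent is sketched below; the token TTL and the single path fetched are illustrative choices, not what the agent itself uses.

    import urllib.request

    IMDS = "http://169.254.169.254"

    def imds_token(ttl_seconds: int = 21600) -> str:
        # IMDSv2: obtain a session token via PUT before reading any metadata.
        req = urllib.request.Request(
            f"{IMDS}/latest/api/token",
            method="PUT",
            headers={"X-aws-ec2-metadata-token-ttl-seconds": str(ttl_seconds)},
        )
        with urllib.request.urlopen(req, timeout=2) as resp:
            return resp.read().decode()

    def imds_get(path: str, token: str) -> str:
        req = urllib.request.Request(
            f"{IMDS}{path}",
            headers={"X-aws-ec2-metadata-token": token},
        )
        with urllib.request.urlopen(req, timeout=2) as resp:
            return resp.read().decode()

    if __name__ == "__main__":
        tok = imds_token()
        # Same path family the agent fetches above; a missing resource raises HTTPError (404).
        print(imds_get("/2021-01-03/meta-data/instance-id", tok))
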
Sep 12 16:51:08.830850 containerd[1964]: time="2025-09-12T16:51:08.825492854Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Sep 12 16:51:08.862167 ntpd[1945]: bind(24) AF_INET6 fe80::42d:8eff:feb7:cbdb%2#123 flags 0x11 failed: Cannot assign requested address Sep 12 16:51:08.862895 ntpd[1945]: 12 Sep 16:51:08 ntpd[1945]: bind(24) AF_INET6 fe80::42d:8eff:feb7:cbdb%2#123 flags 0x11 failed: Cannot assign requested address Sep 12 16:51:08.862895 ntpd[1945]: 12 Sep 16:51:08 ntpd[1945]: unable to create socket on eth0 (6) for fe80::42d:8eff:feb7:cbdb%2#123 Sep 12 16:51:08.862895 ntpd[1945]: 12 Sep 16:51:08 ntpd[1945]: failed to init interface for address fe80::42d:8eff:feb7:cbdb%2 Sep 12 16:51:08.862228 ntpd[1945]: unable to create socket on eth0 (6) for fe80::42d:8eff:feb7:cbdb%2#123 Sep 12 16:51:08.862255 ntpd[1945]: failed to init interface for address fe80::42d:8eff:feb7:cbdb%2 Sep 12 16:51:08.886700 polkitd[2112]: Started polkitd version 121 Sep 12 16:51:08.958499 polkitd[2112]: Loading rules from directory /etc/polkit-1/rules.d Sep 12 16:51:08.960399 coreos-metadata[2041]: Sep 12 16:51:08.959 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Sep 12 16:51:08.958626 polkitd[2112]: Loading rules from directory /usr/share/polkit-1/rules.d Sep 12 16:51:08.964611 coreos-metadata[2041]: Sep 12 16:51:08.963 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Sep 12 16:51:08.970816 coreos-metadata[2041]: Sep 12 16:51:08.966 INFO Fetch successful Sep 12 16:51:08.970816 coreos-metadata[2041]: Sep 12 16:51:08.966 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Sep 12 16:51:08.970816 coreos-metadata[2041]: Sep 12 16:51:08.970 INFO Fetch successful Sep 12 16:51:08.977325 unknown[2041]: wrote ssh authorized keys file for user: core Sep 12 16:51:08.997171 polkitd[2112]: Finished loading, compiling and executing 2 rules Sep 12 16:51:09.004668 dbus-daemon[1939]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Sep 12 16:51:09.004984 systemd[1]: Started polkit.service - Authorization Manager. Sep 12 16:51:09.009114 polkitd[2112]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Sep 12 16:51:09.043119 containerd[1964]: time="2025-09-12T16:51:09.043059215Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 12 16:51:09.052151 update-ssh-keys[2127]: Updated "/home/core/.ssh/authorized_keys" Sep 12 16:51:09.057648 containerd[1964]: time="2025-09-12T16:51:09.055409555Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.106-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 12 16:51:09.057648 containerd[1964]: time="2025-09-12T16:51:09.055475039Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 12 16:51:09.057648 containerd[1964]: time="2025-09-12T16:51:09.055510895Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 12 16:51:09.057648 containerd[1964]: time="2025-09-12T16:51:09.055835123Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." 
type=io.containerd.warning.v1 Sep 12 16:51:09.057648 containerd[1964]: time="2025-09-12T16:51:09.055879259Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Sep 12 16:51:09.057648 containerd[1964]: time="2025-09-12T16:51:09.056009915Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 16:51:09.057648 containerd[1964]: time="2025-09-12T16:51:09.056037911Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 12 16:51:09.057648 containerd[1964]: time="2025-09-12T16:51:09.056381903Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 16:51:09.057648 containerd[1964]: time="2025-09-12T16:51:09.056412071Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 12 16:51:09.057648 containerd[1964]: time="2025-09-12T16:51:09.056442815Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 16:51:09.057648 containerd[1964]: time="2025-09-12T16:51:09.056468543Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 12 16:51:09.061130 containerd[1964]: time="2025-09-12T16:51:09.056632091Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 12 16:51:09.057662 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Sep 12 16:51:09.065655 systemd[1]: Finished sshkeys.service. Sep 12 16:51:09.069321 containerd[1964]: time="2025-09-12T16:51:09.068726915Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 12 16:51:09.071189 containerd[1964]: time="2025-09-12T16:51:09.070980623Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 16:51:09.071189 containerd[1964]: time="2025-09-12T16:51:09.071059019Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 12 16:51:09.072239 containerd[1964]: time="2025-09-12T16:51:09.071906711Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 12 16:51:09.073152 containerd[1964]: time="2025-09-12T16:51:09.073084691Z" level=info msg="metadata content store policy set" policy=shared Sep 12 16:51:09.083401 containerd[1964]: time="2025-09-12T16:51:09.081939503Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 12 16:51:09.083401 containerd[1964]: time="2025-09-12T16:51:09.082059887Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 12 16:51:09.083401 containerd[1964]: time="2025-09-12T16:51:09.082108043Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." 
type=io.containerd.lease.v1 Sep 12 16:51:09.083401 containerd[1964]: time="2025-09-12T16:51:09.082145567Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Sep 12 16:51:09.083401 containerd[1964]: time="2025-09-12T16:51:09.082178075Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 12 16:51:09.083401 containerd[1964]: time="2025-09-12T16:51:09.082430939Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 12 16:51:09.086614 containerd[1964]: time="2025-09-12T16:51:09.086536835Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 12 16:51:09.087139 systemd-hostnamed[1961]: Hostname set to (transient) Sep 12 16:51:09.087313 systemd-resolved[1884]: System hostname changed to 'ip-172-31-21-42'. Sep 12 16:51:09.092592 containerd[1964]: time="2025-09-12T16:51:09.092207783Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 12 16:51:09.092592 containerd[1964]: time="2025-09-12T16:51:09.092274335Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 12 16:51:09.092592 containerd[1964]: time="2025-09-12T16:51:09.092313623Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 12 16:51:09.092592 containerd[1964]: time="2025-09-12T16:51:09.092348207Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 12 16:51:09.092592 containerd[1964]: time="2025-09-12T16:51:09.092378903Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 12 16:51:09.092592 containerd[1964]: time="2025-09-12T16:51:09.092409227Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 12 16:51:09.092592 containerd[1964]: time="2025-09-12T16:51:09.092441963Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 12 16:51:09.092592 containerd[1964]: time="2025-09-12T16:51:09.092481215Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 12 16:51:09.092592 containerd[1964]: time="2025-09-12T16:51:09.092512223Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 12 16:51:09.092592 containerd[1964]: time="2025-09-12T16:51:09.092541923Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 12 16:51:09.092592 containerd[1964]: time="2025-09-12T16:51:09.092569823Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 12 16:51:09.093164 containerd[1964]: time="2025-09-12T16:51:09.092612843Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 12 16:51:09.093164 containerd[1964]: time="2025-09-12T16:51:09.092644115Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 12 16:51:09.093164 containerd[1964]: time="2025-09-12T16:51:09.092673035Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." 
type=io.containerd.grpc.v1 Sep 12 16:51:09.093164 containerd[1964]: time="2025-09-12T16:51:09.092702891Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 12 16:51:09.093164 containerd[1964]: time="2025-09-12T16:51:09.092731499Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 12 16:51:09.093164 containerd[1964]: time="2025-09-12T16:51:09.092762051Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 12 16:51:09.093164 containerd[1964]: time="2025-09-12T16:51:09.092789111Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 12 16:51:09.093164 containerd[1964]: time="2025-09-12T16:51:09.092842631Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 12 16:51:09.093164 containerd[1964]: time="2025-09-12T16:51:09.092875535Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 12 16:51:09.093164 containerd[1964]: time="2025-09-12T16:51:09.092909999Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 12 16:51:09.093164 containerd[1964]: time="2025-09-12T16:51:09.092939303Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 12 16:51:09.093164 containerd[1964]: time="2025-09-12T16:51:09.092967299Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 12 16:51:09.093164 containerd[1964]: time="2025-09-12T16:51:09.092996159Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 12 16:51:09.093164 containerd[1964]: time="2025-09-12T16:51:09.093042839Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 12 16:51:09.093164 containerd[1964]: time="2025-09-12T16:51:09.093098615Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 12 16:51:09.094049 containerd[1964]: time="2025-09-12T16:51:09.093133067Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 12 16:51:09.094049 containerd[1964]: time="2025-09-12T16:51:09.093166739Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 12 16:51:09.094049 containerd[1964]: time="2025-09-12T16:51:09.093538415Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 12 16:51:09.094049 containerd[1964]: time="2025-09-12T16:51:09.093578087Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 12 16:51:09.094049 containerd[1964]: time="2025-09-12T16:51:09.093603083Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 12 16:51:09.094049 containerd[1964]: time="2025-09-12T16:51:09.093633275Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 12 16:51:09.094049 containerd[1964]: time="2025-09-12T16:51:09.093659171Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." 
type=io.containerd.grpc.v1 Sep 12 16:51:09.094049 containerd[1964]: time="2025-09-12T16:51:09.093696419Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 12 16:51:09.094049 containerd[1964]: time="2025-09-12T16:51:09.093720587Z" level=info msg="NRI interface is disabled by configuration." Sep 12 16:51:09.094049 containerd[1964]: time="2025-09-12T16:51:09.093746819Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Sep 12 16:51:09.099872 containerd[1964]: time="2025-09-12T16:51:09.098473175Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 12 16:51:09.099872 containerd[1964]: time="2025-09-12T16:51:09.098625071Z" level=info msg="Connect containerd service" Sep 12 16:51:09.099872 containerd[1964]: time="2025-09-12T16:51:09.098689775Z" level=info msg="using legacy CRI server" Sep 12 16:51:09.099872 containerd[1964]: time="2025-09-12T16:51:09.098708087Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 12 16:51:09.099872 containerd[1964]: 
time="2025-09-12T16:51:09.099233423Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 12 16:51:09.101405 containerd[1964]: time="2025-09-12T16:51:09.101347919Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 12 16:51:09.102845 containerd[1964]: time="2025-09-12T16:51:09.102576611Z" level=info msg="Start subscribing containerd event" Sep 12 16:51:09.102845 containerd[1964]: time="2025-09-12T16:51:09.102659255Z" level=info msg="Start recovering state" Sep 12 16:51:09.103862 containerd[1964]: time="2025-09-12T16:51:09.103044959Z" level=info msg="Start event monitor" Sep 12 16:51:09.103862 containerd[1964]: time="2025-09-12T16:51:09.103082459Z" level=info msg="Start snapshots syncer" Sep 12 16:51:09.103862 containerd[1964]: time="2025-09-12T16:51:09.103105439Z" level=info msg="Start cni network conf syncer for default" Sep 12 16:51:09.103862 containerd[1964]: time="2025-09-12T16:51:09.103124807Z" level=info msg="Start streaming server" Sep 12 16:51:09.104857 containerd[1964]: time="2025-09-12T16:51:09.104816303Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 12 16:51:09.105058 containerd[1964]: time="2025-09-12T16:51:09.105031307Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 12 16:51:09.105277 containerd[1964]: time="2025-09-12T16:51:09.105250043Z" level=info msg="containerd successfully booted in 0.291766s" Sep 12 16:51:09.105369 systemd[1]: Started containerd.service - containerd container runtime. Sep 12 16:51:09.401981 systemd-networkd[1882]: eth0: Gained IPv6LL Sep 12 16:51:09.408958 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 12 16:51:09.412516 systemd[1]: Reached target network-online.target - Network is Online. Sep 12 16:51:09.426324 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Sep 12 16:51:09.434238 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 16:51:09.446291 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 12 16:51:09.562536 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 12 16:51:09.577653 amazon-ssm-agent[2145]: Initializing new seelog logger Sep 12 16:51:09.579483 amazon-ssm-agent[2145]: New Seelog Logger Creation Complete Sep 12 16:51:09.579483 amazon-ssm-agent[2145]: 2025/09/12 16:51:09 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 12 16:51:09.579483 amazon-ssm-agent[2145]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 12 16:51:09.579483 amazon-ssm-agent[2145]: 2025/09/12 16:51:09 processing appconfig overrides Sep 12 16:51:09.580110 amazon-ssm-agent[2145]: 2025/09/12 16:51:09 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 12 16:51:09.580206 amazon-ssm-agent[2145]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 12 16:51:09.580401 amazon-ssm-agent[2145]: 2025/09/12 16:51:09 processing appconfig overrides Sep 12 16:51:09.580906 amazon-ssm-agent[2145]: 2025/09/12 16:51:09 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 12 16:51:09.580986 amazon-ssm-agent[2145]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Sep 12 16:51:09.581166 amazon-ssm-agent[2145]: 2025/09/12 16:51:09 processing appconfig overrides Sep 12 16:51:09.582238 amazon-ssm-agent[2145]: 2025-09-12 16:51:09 INFO Proxy environment variables: Sep 12 16:51:09.584817 amazon-ssm-agent[2145]: 2025/09/12 16:51:09 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 12 16:51:09.586831 amazon-ssm-agent[2145]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 12 16:51:09.586831 amazon-ssm-agent[2145]: 2025/09/12 16:51:09 processing appconfig overrides Sep 12 16:51:09.682170 amazon-ssm-agent[2145]: 2025-09-12 16:51:09 INFO https_proxy: Sep 12 16:51:09.780822 amazon-ssm-agent[2145]: 2025-09-12 16:51:09 INFO http_proxy: Sep 12 16:51:09.879368 amazon-ssm-agent[2145]: 2025-09-12 16:51:09 INFO no_proxy: Sep 12 16:51:09.954550 tar[1973]: linux-arm64/README.md Sep 12 16:51:09.977540 amazon-ssm-agent[2145]: 2025-09-12 16:51:09 INFO Checking if agent identity type OnPrem can be assumed Sep 12 16:51:09.990819 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 12 16:51:10.080819 amazon-ssm-agent[2145]: 2025-09-12 16:51:09 INFO Checking if agent identity type EC2 can be assumed Sep 12 16:51:10.131473 sshd_keygen[1982]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 12 16:51:10.177867 amazon-ssm-agent[2145]: 2025-09-12 16:51:09 INFO Agent will take identity from EC2 Sep 12 16:51:10.177918 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 12 16:51:10.189260 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 12 16:51:10.194956 systemd[1]: Started sshd@0-172.31.21.42:22-139.178.89.65:38306.service - OpenSSH per-connection server daemon (139.178.89.65:38306). Sep 12 16:51:10.200391 amazon-ssm-agent[2145]: 2025-09-12 16:51:09 INFO [amazon-ssm-agent] using named pipe channel for IPC Sep 12 16:51:10.200391 amazon-ssm-agent[2145]: 2025-09-12 16:51:09 INFO [amazon-ssm-agent] using named pipe channel for IPC Sep 12 16:51:10.200391 amazon-ssm-agent[2145]: 2025-09-12 16:51:09 INFO [amazon-ssm-agent] using named pipe channel for IPC Sep 12 16:51:10.200391 amazon-ssm-agent[2145]: 2025-09-12 16:51:09 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Sep 12 16:51:10.200391 amazon-ssm-agent[2145]: 2025-09-12 16:51:09 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Sep 12 16:51:10.200391 amazon-ssm-agent[2145]: 2025-09-12 16:51:09 INFO [amazon-ssm-agent] Starting Core Agent Sep 12 16:51:10.200391 amazon-ssm-agent[2145]: 2025-09-12 16:51:09 INFO [amazon-ssm-agent] registrar detected. Attempting registration Sep 12 16:51:10.200391 amazon-ssm-agent[2145]: 2025-09-12 16:51:09 INFO [Registrar] Starting registrar module Sep 12 16:51:10.200391 amazon-ssm-agent[2145]: 2025-09-12 16:51:09 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Sep 12 16:51:10.203417 amazon-ssm-agent[2145]: 2025-09-12 16:51:10 INFO [EC2Identity] EC2 registration was successful. Sep 12 16:51:10.203417 amazon-ssm-agent[2145]: 2025-09-12 16:51:10 INFO [CredentialRefresher] credentialRefresher has started Sep 12 16:51:10.203417 amazon-ssm-agent[2145]: 2025-09-12 16:51:10 INFO [CredentialRefresher] Starting credentials refresher loop Sep 12 16:51:10.203417 amazon-ssm-agent[2145]: 2025-09-12 16:51:10 INFO EC2RoleProvider Successfully connected with instance profile role credentials Sep 12 16:51:10.221358 systemd[1]: issuegen.service: Deactivated successfully. Sep 12 16:51:10.221785 systemd[1]: Finished issuegen.service - Generate /run/issue. 
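
sshd-keygen above regenerates the missing host keys ("generating new host keys: RSA ECDSA ED25519") before the first SSH connection is accepted. The same step can be reproduced by driving ssh-keygen directly, as in the sketch below; the scratch output directory stands in for /etc/ssh and the key types simply mirror the log line.

    import os
    import subprocess
    import tempfile

    # Scratch directory stands in for /etc/ssh; key types mirror the log entry above.
    outdir = tempfile.mkdtemp(prefix="hostkeys-")
    for ktype in ("rsa", "ecdsa", "ed25519"):
        keyfile = os.path.join(outdir, f"ssh_host_{ktype}_key")
        subprocess.run(
            ["ssh-keygen", "-q", "-t", ktype, "-N", "", "-f", keyfile],
            check=True,
        )
        print("generated", keyfile)
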
Sep 12 16:51:10.241309 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 12 16:51:10.276847 amazon-ssm-agent[2145]: 2025-09-12 16:51:10 INFO [CredentialRefresher] Next credential rotation will be in 30.091535368866666 minutes Sep 12 16:51:10.286358 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 12 16:51:10.297410 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 12 16:51:10.308559 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 12 16:51:10.315495 systemd[1]: Reached target getty.target - Login Prompts. Sep 12 16:51:10.457244 sshd[2177]: Accepted publickey for core from 139.178.89.65 port 38306 ssh2: RSA SHA256:UtlJgM7ARb7wxMu1nBhWJ04sNPurn7zs7fZADhw2VQM Sep 12 16:51:10.462204 sshd-session[2177]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 16:51:10.478075 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 12 16:51:10.489440 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 12 16:51:10.518232 systemd-logind[1950]: New session 1 of user core. Sep 12 16:51:10.530514 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 12 16:51:10.551735 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 12 16:51:10.560036 (systemd)[2188]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 12 16:51:10.565359 systemd-logind[1950]: New session c1 of user core. Sep 12 16:51:10.886099 systemd[2188]: Queued start job for default target default.target. Sep 12 16:51:10.895984 systemd[2188]: Created slice app.slice - User Application Slice. Sep 12 16:51:10.896036 systemd[2188]: Reached target paths.target - Paths. Sep 12 16:51:10.896122 systemd[2188]: Reached target timers.target - Timers. Sep 12 16:51:10.899171 systemd[2188]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 12 16:51:10.943559 systemd[2188]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 12 16:51:10.944918 systemd[2188]: Reached target sockets.target - Sockets. Sep 12 16:51:10.945029 systemd[2188]: Reached target basic.target - Basic System. Sep 12 16:51:10.945112 systemd[2188]: Reached target default.target - Main User Target. Sep 12 16:51:10.945170 systemd[2188]: Startup finished in 358ms. Sep 12 16:51:10.945367 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 12 16:51:10.955065 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 12 16:51:11.122320 systemd[1]: Started sshd@1-172.31.21.42:22-139.178.89.65:60666.service - OpenSSH per-connection server daemon (139.178.89.65:60666). Sep 12 16:51:11.237267 amazon-ssm-agent[2145]: 2025-09-12 16:51:11 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Sep 12 16:51:11.319770 sshd[2199]: Accepted publickey for core from 139.178.89.65 port 60666 ssh2: RSA SHA256:UtlJgM7ARb7wxMu1nBhWJ04sNPurn7zs7fZADhw2VQM Sep 12 16:51:11.322153 sshd-session[2199]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 16:51:11.338460 amazon-ssm-agent[2145]: 2025-09-12 16:51:11 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2202) started Sep 12 16:51:11.341151 systemd-logind[1950]: New session 2 of user core. Sep 12 16:51:11.349086 systemd[1]: Started session-2.scope - Session 2 of User core. 
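
The "Accepted publickey ... RSA SHA256:UtlJgM7..." entries identify the client key by its OpenSSH fingerprint: the unpadded base64 of the SHA-256 digest of the raw public-key blob. The helper below computes the same form from an authorized_keys-style file; the file path is whatever the caller supplies, for example the /home/core/.ssh/authorized_keys updated earlier in this log.

    import base64
    import hashlib
    import sys

    def openssh_fingerprint(pubkey_line: str) -> str:
        # authorized_keys format: "<type> <base64-blob> [comment]"
        blob_b64 = pubkey_line.split()[1]
        digest = hashlib.sha256(base64.b64decode(blob_b64)).digest()
        # OpenSSH prints unpadded base64 after the "SHA256:" prefix.
        return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

    if __name__ == "__main__":
        # Pass a public key file, e.g. an authorized_keys file or an id_*.pub.
        with open(sys.argv[1]) as fh:
            for line in fh:
                if line.strip() and not line.startswith("#"):
                    print(openssh_fingerprint(line))
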
Sep 12 16:51:11.424108 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 16:51:11.432053 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 12 16:51:11.440971 amazon-ssm-agent[2145]: 2025-09-12 16:51:11 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Sep 12 16:51:11.443924 systemd[1]: Startup finished in 1.197s (kernel) + 10.637s (initrd) + 9.053s (userspace) = 20.889s. Sep 12 16:51:11.445602 (kubelet)[2214]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 16:51:11.516115 sshd[2208]: Connection closed by 139.178.89.65 port 60666 Sep 12 16:51:11.517345 sshd-session[2199]: pam_unix(sshd:session): session closed for user core Sep 12 16:51:11.523906 systemd-logind[1950]: Session 2 logged out. Waiting for processes to exit. Sep 12 16:51:11.524309 systemd[1]: sshd@1-172.31.21.42:22-139.178.89.65:60666.service: Deactivated successfully. Sep 12 16:51:11.532187 systemd[1]: session-2.scope: Deactivated successfully. Sep 12 16:51:11.537057 systemd-logind[1950]: Removed session 2. Sep 12 16:51:11.568986 systemd[1]: Started sshd@2-172.31.21.42:22-139.178.89.65:60670.service - OpenSSH per-connection server daemon (139.178.89.65:60670). Sep 12 16:51:11.754634 sshd[2228]: Accepted publickey for core from 139.178.89.65 port 60670 ssh2: RSA SHA256:UtlJgM7ARb7wxMu1nBhWJ04sNPurn7zs7fZADhw2VQM Sep 12 16:51:11.756282 sshd-session[2228]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 16:51:11.766481 systemd-logind[1950]: New session 3 of user core. Sep 12 16:51:11.775094 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 12 16:51:11.862163 ntpd[1945]: Listen normally on 7 eth0 [fe80::42d:8eff:feb7:cbdb%2]:123 Sep 12 16:51:11.862786 ntpd[1945]: 12 Sep 16:51:11 ntpd[1945]: Listen normally on 7 eth0 [fe80::42d:8eff:feb7:cbdb%2]:123 Sep 12 16:51:11.894839 sshd[2234]: Connection closed by 139.178.89.65 port 60670 Sep 12 16:51:11.895598 sshd-session[2228]: pam_unix(sshd:session): session closed for user core Sep 12 16:51:11.902543 systemd[1]: sshd@2-172.31.21.42:22-139.178.89.65:60670.service: Deactivated successfully. Sep 12 16:51:11.903271 systemd-logind[1950]: Session 3 logged out. Waiting for processes to exit. Sep 12 16:51:11.906099 systemd[1]: session-3.scope: Deactivated successfully. Sep 12 16:51:11.911304 systemd-logind[1950]: Removed session 3. Sep 12 16:51:11.937339 systemd[1]: Started sshd@3-172.31.21.42:22-139.178.89.65:60682.service - OpenSSH per-connection server daemon (139.178.89.65:60682). Sep 12 16:51:12.121486 sshd[2240]: Accepted publickey for core from 139.178.89.65 port 60682 ssh2: RSA SHA256:UtlJgM7ARb7wxMu1nBhWJ04sNPurn7zs7fZADhw2VQM Sep 12 16:51:12.124372 sshd-session[2240]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 16:51:12.133544 systemd-logind[1950]: New session 4 of user core. Sep 12 16:51:12.142096 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 12 16:51:12.273703 sshd[2242]: Connection closed by 139.178.89.65 port 60682 Sep 12 16:51:12.274092 sshd-session[2240]: pam_unix(sshd:session): session closed for user core Sep 12 16:51:12.281787 systemd[1]: sshd@3-172.31.21.42:22-139.178.89.65:60682.service: Deactivated successfully. Sep 12 16:51:12.284750 systemd[1]: session-4.scope: Deactivated successfully. Sep 12 16:51:12.349835 systemd-logind[1950]: Session 4 logged out. 
Waiting for processes to exit. Sep 12 16:51:12.371285 systemd[1]: Started sshd@4-172.31.21.42:22-139.178.89.65:60692.service - OpenSSH per-connection server daemon (139.178.89.65:60692). Sep 12 16:51:12.373726 systemd-logind[1950]: Removed session 4. Sep 12 16:51:12.537124 kubelet[2214]: E0912 16:51:12.536993 2214 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 16:51:12.541614 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 16:51:12.542212 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 16:51:12.544224 systemd[1]: kubelet.service: Consumed 1.454s CPU time, 259.1M memory peak. Sep 12 16:51:12.576734 sshd[2247]: Accepted publickey for core from 139.178.89.65 port 60692 ssh2: RSA SHA256:UtlJgM7ARb7wxMu1nBhWJ04sNPurn7zs7fZADhw2VQM Sep 12 16:51:12.578322 sshd-session[2247]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 16:51:12.587163 systemd-logind[1950]: New session 5 of user core. Sep 12 16:51:12.599069 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 12 16:51:12.719239 sudo[2253]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 12 16:51:12.719882 sudo[2253]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 16:51:12.740036 sudo[2253]: pam_unix(sudo:session): session closed for user root Sep 12 16:51:12.764358 sshd[2252]: Connection closed by 139.178.89.65 port 60692 Sep 12 16:51:12.765390 sshd-session[2247]: pam_unix(sshd:session): session closed for user core Sep 12 16:51:12.772491 systemd[1]: sshd@4-172.31.21.42:22-139.178.89.65:60692.service: Deactivated successfully. Sep 12 16:51:12.776820 systemd[1]: session-5.scope: Deactivated successfully. Sep 12 16:51:12.780174 systemd-logind[1950]: Session 5 logged out. Waiting for processes to exit. Sep 12 16:51:12.782131 systemd-logind[1950]: Removed session 5. Sep 12 16:51:12.805337 systemd[1]: Started sshd@5-172.31.21.42:22-139.178.89.65:60702.service - OpenSSH per-connection server daemon (139.178.89.65:60702). Sep 12 16:51:12.992759 sshd[2259]: Accepted publickey for core from 139.178.89.65 port 60702 ssh2: RSA SHA256:UtlJgM7ARb7wxMu1nBhWJ04sNPurn7zs7fZADhw2VQM Sep 12 16:51:12.995608 sshd-session[2259]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 16:51:13.004028 systemd-logind[1950]: New session 6 of user core. Sep 12 16:51:13.011060 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 12 16:51:13.116540 sudo[2263]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 12 16:51:13.117741 sudo[2263]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 16:51:13.125341 sudo[2263]: pam_unix(sudo:session): session closed for user root Sep 12 16:51:13.143334 sudo[2262]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 12 16:51:13.144283 sudo[2262]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 16:51:13.172370 systemd[1]: Starting audit-rules.service - Load Audit Rules... 
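
The kubelet failure above is self-describing: /var/lib/kubelet/config.yaml does not exist yet (it is normally written by kubeadm or whatever provisions the node), so the process exits 1 and systemd keeps scheduling restarts. A trivial pre-flight check for that condition, mirroring the error string in the log, might look like:

    import os
    import sys

    KUBELET_CONFIG = "/var/lib/kubelet/config.yaml"

    def main() -> int:
        # Mirrors the failure in the log: the unit exits 1 while this file is absent.
        if not os.path.isfile(KUBELET_CONFIG):
            print(
                f"kubelet config missing: {KUBELET_CONFIG} "
                "(expected to be written by kubeadm/provisioning before kubelet starts)",
                file=sys.stderr,
            )
            return 1
        print("kubelet config present:", KUBELET_CONFIG)
        return 0

    if __name__ == "__main__":
        sys.exit(main())
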
Sep 12 16:51:13.219845 augenrules[2285]: No rules Sep 12 16:51:13.221550 systemd[1]: audit-rules.service: Deactivated successfully. Sep 12 16:51:13.222072 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 12 16:51:13.225254 sudo[2262]: pam_unix(sudo:session): session closed for user root Sep 12 16:51:13.248634 sshd[2261]: Connection closed by 139.178.89.65 port 60702 Sep 12 16:51:13.249421 sshd-session[2259]: pam_unix(sshd:session): session closed for user core Sep 12 16:51:13.256692 systemd[1]: sshd@5-172.31.21.42:22-139.178.89.65:60702.service: Deactivated successfully. Sep 12 16:51:13.260986 systemd[1]: session-6.scope: Deactivated successfully. Sep 12 16:51:13.263162 systemd-logind[1950]: Session 6 logged out. Waiting for processes to exit. Sep 12 16:51:13.264846 systemd-logind[1950]: Removed session 6. Sep 12 16:51:13.293340 systemd[1]: Started sshd@6-172.31.21.42:22-139.178.89.65:60710.service - OpenSSH per-connection server daemon (139.178.89.65:60710). Sep 12 16:51:13.472360 sshd[2294]: Accepted publickey for core from 139.178.89.65 port 60710 ssh2: RSA SHA256:UtlJgM7ARb7wxMu1nBhWJ04sNPurn7zs7fZADhw2VQM Sep 12 16:51:13.474994 sshd-session[2294]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 16:51:13.483114 systemd-logind[1950]: New session 7 of user core. Sep 12 16:51:13.492033 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 12 16:51:13.594552 sudo[2297]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 12 16:51:13.595393 sudo[2297]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 16:51:14.172289 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 12 16:51:14.181301 (dockerd)[2313]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 12 16:51:14.589063 dockerd[2313]: time="2025-09-12T16:51:14.588508182Z" level=info msg="Starting up" Sep 12 16:51:14.802125 systemd[1]: var-lib-docker-metacopy\x2dcheck3106135751-merged.mount: Deactivated successfully. Sep 12 16:51:14.814218 dockerd[2313]: time="2025-09-12T16:51:14.814141279Z" level=info msg="Loading containers: start." Sep 12 16:51:15.060968 kernel: Initializing XFRM netlink socket Sep 12 16:51:15.097888 (udev-worker)[2339]: Network interface NamePolicy= disabled on kernel command line. Sep 12 16:51:15.190683 systemd-networkd[1882]: docker0: Link UP Sep 12 16:51:15.232147 dockerd[2313]: time="2025-09-12T16:51:15.232001320Z" level=info msg="Loading containers: done." Sep 12 16:51:15.268603 dockerd[2313]: time="2025-09-12T16:51:15.268540453Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 12 16:51:15.268874 dockerd[2313]: time="2025-09-12T16:51:15.268676517Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Sep 12 16:51:15.269056 dockerd[2313]: time="2025-09-12T16:51:15.269016755Z" level=info msg="Daemon has completed initialization" Sep 12 16:51:15.321842 dockerd[2313]: time="2025-09-12T16:51:15.321659944Z" level=info msg="API listen on /run/docker.sock" Sep 12 16:51:15.322247 systemd[1]: Started docker.service - Docker Application Container Engine. 
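
dockerd finishes with "API listen on /run/docker.sock". The Engine API can be exercised over that Unix socket with nothing but raw HTTP, as sketched below with a GET /version request; opening the socket needs root or docker-group membership, and the HTTP/1.0 request line is used so the reply arrives unchunked.

    import json
    import socket

    DOCKER_SOCK = "/run/docker.sock"

    def docker_version(path: str = DOCKER_SOCK) -> dict:
        # Raw HTTP over the Unix socket; avoids any third-party client library.
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        try:
            s.connect(path)
            s.sendall(b"GET /version HTTP/1.0\r\nHost: docker\r\n\r\n")
            raw = b""
            while chunk := s.recv(4096):
                raw += chunk
        finally:
            s.close()
        _headers, _, body = raw.partition(b"\r\n\r\n")
        return json.loads(body)

    if __name__ == "__main__":
        info = docker_version()
        print(info.get("Version"), info.get("ApiVersion"))
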
Sep 12 16:51:16.497193 containerd[1964]: time="2025-09-12T16:51:16.496792644Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\"" Sep 12 16:51:17.168156 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1524430876.mount: Deactivated successfully. Sep 12 16:51:19.144321 containerd[1964]: time="2025-09-12T16:51:19.144236299Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 16:51:19.146505 containerd[1964]: time="2025-09-12T16:51:19.146406000Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.9: active requests=0, bytes read=26363685" Sep 12 16:51:19.148838 containerd[1964]: time="2025-09-12T16:51:19.147600525Z" level=info msg="ImageCreate event name:\"sha256:02ea53851f07db91ed471dab1ab11541f5c294802371cd8f0cfd423cd5c71002\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 16:51:19.153478 containerd[1964]: time="2025-09-12T16:51:19.153417852Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 16:51:19.155859 containerd[1964]: time="2025-09-12T16:51:19.155778652Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.9\" with image id \"sha256:02ea53851f07db91ed471dab1ab11541f5c294802371cd8f0cfd423cd5c71002\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\", size \"26360284\" in 2.658890872s" Sep 12 16:51:19.156030 containerd[1964]: time="2025-09-12T16:51:19.156000715Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:02ea53851f07db91ed471dab1ab11541f5c294802371cd8f0cfd423cd5c71002\"" Sep 12 16:51:19.157043 containerd[1964]: time="2025-09-12T16:51:19.156996348Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\"" Sep 12 16:51:20.995838 containerd[1964]: time="2025-09-12T16:51:20.994278239Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 16:51:20.996389 containerd[1964]: time="2025-09-12T16:51:20.996349514Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.9: active requests=0, bytes read=22531200" Sep 12 16:51:20.997121 containerd[1964]: time="2025-09-12T16:51:20.997083897Z" level=info msg="ImageCreate event name:\"sha256:f0bcbad5082c944520b370596a2384affda710b9d7daf84e8a48352699af8e4b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 16:51:21.003060 containerd[1964]: time="2025-09-12T16:51:21.003006421Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 16:51:21.005502 containerd[1964]: time="2025-09-12T16:51:21.005453508Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.9\" with image id \"sha256:f0bcbad5082c944520b370596a2384affda710b9d7daf84e8a48352699af8e4b\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\", size \"24099975\" in 1.848400275s" Sep 12 
16:51:21.005703 containerd[1964]: time="2025-09-12T16:51:21.005670960Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:f0bcbad5082c944520b370596a2384affda710b9d7daf84e8a48352699af8e4b\"" Sep 12 16:51:21.006707 containerd[1964]: time="2025-09-12T16:51:21.006654132Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\"" Sep 12 16:51:22.628902 containerd[1964]: time="2025-09-12T16:51:22.628297503Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 16:51:22.630530 containerd[1964]: time="2025-09-12T16:51:22.630460288Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.9: active requests=0, bytes read=17484324" Sep 12 16:51:22.631834 containerd[1964]: time="2025-09-12T16:51:22.631504954Z" level=info msg="ImageCreate event name:\"sha256:1d625baf81b59592006d97a6741bc947698ed222b612ac10efa57b7aa96d2a27\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 16:51:22.637844 containerd[1964]: time="2025-09-12T16:51:22.637311957Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 16:51:22.640352 containerd[1964]: time="2025-09-12T16:51:22.639756918Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.9\" with image id \"sha256:1d625baf81b59592006d97a6741bc947698ed222b612ac10efa57b7aa96d2a27\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\", size \"19053117\" in 1.632862955s" Sep 12 16:51:22.640352 containerd[1964]: time="2025-09-12T16:51:22.639828414Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:1d625baf81b59592006d97a6741bc947698ed222b612ac10efa57b7aa96d2a27\"" Sep 12 16:51:22.641662 containerd[1964]: time="2025-09-12T16:51:22.641594857Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\"" Sep 12 16:51:22.792565 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 12 16:51:22.803014 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 16:51:23.168385 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 16:51:23.178396 (kubelet)[2577]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 16:51:23.263698 kubelet[2577]: E0912 16:51:23.263603 2577 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 16:51:23.270730 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 16:51:23.271291 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 16:51:23.272407 systemd[1]: kubelet.service: Consumed 325ms CPU time, 107.6M memory peak. Sep 12 16:51:24.316180 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2653265379.mount: Deactivated successfully. 
Sep 12 16:51:24.855183 containerd[1964]: time="2025-09-12T16:51:24.853742669Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 16:51:24.855183 containerd[1964]: time="2025-09-12T16:51:24.855113767Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.9: active requests=0, bytes read=27417817" Sep 12 16:51:24.856054 containerd[1964]: time="2025-09-12T16:51:24.856009186Z" level=info msg="ImageCreate event name:\"sha256:72b57ec14d31e8422925ef4c3eff44822cdc04a11fd30d13824f1897d83a16d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 16:51:24.859274 containerd[1964]: time="2025-09-12T16:51:24.859210046Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 16:51:24.861055 containerd[1964]: time="2025-09-12T16:51:24.860990199Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.9\" with image id \"sha256:72b57ec14d31e8422925ef4c3eff44822cdc04a11fd30d13824f1897d83a16d4\", repo tag \"registry.k8s.io/kube-proxy:v1.32.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\", size \"27416836\" in 2.219334496s" Sep 12 16:51:24.861055 containerd[1964]: time="2025-09-12T16:51:24.861044431Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:72b57ec14d31e8422925ef4c3eff44822cdc04a11fd30d13824f1897d83a16d4\"" Sep 12 16:51:24.861945 containerd[1964]: time="2025-09-12T16:51:24.861859518Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 12 16:51:25.423019 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3388335175.mount: Deactivated successfully. 
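
The pull entries report both bytes read and wall-clock time, so rough pull throughput falls straight out of them; for the kube-proxy image above that is 27,417,817 bytes in about 2.22 s, roughly 11.8 MiB/s. The short calculation below reuses the figures logged for the kube-apiserver and kube-proxy pulls in this section.

    # Back-of-envelope pull rates from the byte counts and durations logged above.
    # (bytes read, seconds) pairs copied from the kube-apiserver and kube-proxy pulls.
    pulls = {
        "kube-apiserver:v1.32.9": (26_363_685, 2.658890872),
        "kube-proxy:v1.32.9": (27_417_817, 2.219334496),
    }

    for image, (nbytes, secs) in pulls.items():
        mib_per_s = nbytes / secs / (1024 * 1024)
        print(f"{image}: {mib_per_s:.1f} MiB/s")
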
Sep 12 16:51:26.956960 containerd[1964]: time="2025-09-12T16:51:26.956889508Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 16:51:26.964477 containerd[1964]: time="2025-09-12T16:51:26.964380664Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951622" Sep 12 16:51:26.977336 containerd[1964]: time="2025-09-12T16:51:26.977251900Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 16:51:26.991504 containerd[1964]: time="2025-09-12T16:51:26.991403108Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 16:51:26.995581 containerd[1964]: time="2025-09-12T16:51:26.995514059Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 2.133593959s" Sep 12 16:51:26.995581 containerd[1964]: time="2025-09-12T16:51:26.995581737Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Sep 12 16:51:26.996519 containerd[1964]: time="2025-09-12T16:51:26.996438425Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 12 16:51:27.477626 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1630088221.mount: Deactivated successfully. 
Sep 12 16:51:27.484280 containerd[1964]: time="2025-09-12T16:51:27.484178096Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 16:51:27.486063 containerd[1964]: time="2025-09-12T16:51:27.485846210Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Sep 12 16:51:27.487137 containerd[1964]: time="2025-09-12T16:51:27.487082480Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 16:51:27.492655 containerd[1964]: time="2025-09-12T16:51:27.492572031Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 16:51:27.494567 containerd[1964]: time="2025-09-12T16:51:27.494328785Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 497.731089ms" Sep 12 16:51:27.494567 containerd[1964]: time="2025-09-12T16:51:27.494381852Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Sep 12 16:51:27.495326 containerd[1964]: time="2025-09-12T16:51:27.495131062Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Sep 12 16:51:28.093379 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4117443876.mount: Deactivated successfully. Sep 12 16:51:30.828091 containerd[1964]: time="2025-09-12T16:51:30.828029029Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 16:51:30.830821 containerd[1964]: time="2025-09-12T16:51:30.830274283Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67943165" Sep 12 16:51:30.831083 containerd[1964]: time="2025-09-12T16:51:30.831044156Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 16:51:30.837342 containerd[1964]: time="2025-09-12T16:51:30.837293531Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 16:51:30.840718 containerd[1964]: time="2025-09-12T16:51:30.840016864Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 3.344724274s" Sep 12 16:51:30.840718 containerd[1964]: time="2025-09-12T16:51:30.840079500Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Sep 12 16:51:33.522001 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Sep 12 16:51:33.532007 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 16:51:33.991258 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 16:51:33.996649 (kubelet)[2730]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 16:51:34.074827 kubelet[2730]: E0912 16:51:34.073110 2730 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 16:51:34.078135 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 16:51:34.078624 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 16:51:34.080923 systemd[1]: kubelet.service: Consumed 279ms CPU time, 106.8M memory peak. Sep 12 16:51:39.125577 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Sep 12 16:51:39.299342 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 16:51:39.300032 systemd[1]: kubelet.service: Consumed 279ms CPU time, 106.8M memory peak. Sep 12 16:51:39.313915 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 16:51:39.380310 systemd[1]: Reload requested from client PID 2748 ('systemctl') (unit session-7.scope)... Sep 12 16:51:39.380611 systemd[1]: Reloading... Sep 12 16:51:39.637255 zram_generator::config[2796]: No configuration found. Sep 12 16:51:39.864493 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 16:51:40.089578 systemd[1]: Reloading finished in 708 ms. Sep 12 16:51:40.177089 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 16:51:40.179739 (kubelet)[2847]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 16:51:40.186759 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 16:51:40.188886 systemd[1]: kubelet.service: Deactivated successfully. Sep 12 16:51:40.189317 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 16:51:40.189398 systemd[1]: kubelet.service: Consumed 223ms CPU time, 96.1M memory peak. Sep 12 16:51:40.198411 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 16:51:40.528454 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 16:51:40.542418 (kubelet)[2859]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 16:51:40.627635 kubelet[2859]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 16:51:40.627635 kubelet[2859]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Sep 12 16:51:40.627635 kubelet[2859]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 16:51:40.628324 kubelet[2859]: I0912 16:51:40.627743 2859 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 12 16:51:41.947311 kubelet[2859]: I0912 16:51:41.946954 2859 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 12 16:51:41.947311 kubelet[2859]: I0912 16:51:41.947014 2859 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 12 16:51:41.948069 kubelet[2859]: I0912 16:51:41.947516 2859 server.go:954] "Client rotation is on, will bootstrap in background" Sep 12 16:51:41.992915 kubelet[2859]: E0912 16:51:41.992842 2859 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.21.42:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.21.42:6443: connect: connection refused" logger="UnhandledError" Sep 12 16:51:41.996756 kubelet[2859]: I0912 16:51:41.996582 2859 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 16:51:42.006995 kubelet[2859]: E0912 16:51:42.006934 2859 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 12 16:51:42.006995 kubelet[2859]: I0912 16:51:42.006994 2859 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 12 16:51:42.013290 kubelet[2859]: I0912 16:51:42.013235 2859 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 12 16:51:42.015011 kubelet[2859]: I0912 16:51:42.014930 2859 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 12 16:51:42.015327 kubelet[2859]: I0912 16:51:42.015001 2859 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-21-42","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 12 16:51:42.015524 kubelet[2859]: I0912 16:51:42.015470 2859 topology_manager.go:138] "Creating topology manager with none policy" Sep 12 16:51:42.015524 kubelet[2859]: I0912 16:51:42.015493 2859 container_manager_linux.go:304] "Creating device plugin manager" Sep 12 16:51:42.015921 kubelet[2859]: I0912 16:51:42.015875 2859 state_mem.go:36] "Initialized new in-memory state store" Sep 12 16:51:42.027038 kubelet[2859]: I0912 16:51:42.026972 2859 kubelet.go:446] "Attempting to sync node with API server" Sep 12 16:51:42.027038 kubelet[2859]: I0912 16:51:42.027031 2859 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 12 16:51:42.027863 kubelet[2859]: I0912 16:51:42.027070 2859 kubelet.go:352] "Adding apiserver pod source" Sep 12 16:51:42.027863 kubelet[2859]: I0912 16:51:42.027096 2859 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 12 16:51:42.034607 kubelet[2859]: W0912 16:51:42.034428 2859 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.21.42:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.21.42:6443: connect: connection refused Sep 12 16:51:42.035021 kubelet[2859]: E0912 16:51:42.034544 2859 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.21.42:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.21.42:6443: connect: connection refused" logger="UnhandledError" Sep 12 16:51:42.036100 kubelet[2859]: I0912 
16:51:42.036060 2859 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Sep 12 16:51:42.038840 kubelet[2859]: I0912 16:51:42.037678 2859 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 12 16:51:42.038840 kubelet[2859]: W0912 16:51:42.037967 2859 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 12 16:51:42.041583 kubelet[2859]: W0912 16:51:42.041443 2859 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.21.42:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-21-42&limit=500&resourceVersion=0": dial tcp 172.31.21.42:6443: connect: connection refused Sep 12 16:51:42.041878 kubelet[2859]: E0912 16:51:42.041596 2859 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.21.42:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-21-42&limit=500&resourceVersion=0\": dial tcp 172.31.21.42:6443: connect: connection refused" logger="UnhandledError" Sep 12 16:51:42.044068 kubelet[2859]: I0912 16:51:42.043979 2859 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 12 16:51:42.044068 kubelet[2859]: I0912 16:51:42.044075 2859 server.go:1287] "Started kubelet" Sep 12 16:51:42.047599 kubelet[2859]: I0912 16:51:42.046884 2859 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 12 16:51:42.050469 kubelet[2859]: I0912 16:51:42.050413 2859 server.go:479] "Adding debug handlers to kubelet server" Sep 12 16:51:42.053185 kubelet[2859]: I0912 16:51:42.053064 2859 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 16:51:42.057031 kubelet[2859]: I0912 16:51:42.056980 2859 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 16:51:42.057584 kubelet[2859]: I0912 16:51:42.057555 2859 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 12 16:51:42.061466 kubelet[2859]: E0912 16:51:42.060950 2859 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.21.42:6443/api/v1/namespaces/default/events\": dial tcp 172.31.21.42:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-21-42.1864971c387ea50d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-21-42,UID:ip-172-31-21-42,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-21-42,},FirstTimestamp:2025-09-12 16:51:42.044038413 +0000 UTC m=+1.494437317,LastTimestamp:2025-09-12 16:51:42.044038413 +0000 UTC m=+1.494437317,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-21-42,}" Sep 12 16:51:42.061714 kubelet[2859]: I0912 16:51:42.061627 2859 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 16:51:42.069372 kubelet[2859]: I0912 16:51:42.069101 2859 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 12 16:51:42.070855 kubelet[2859]: E0912 16:51:42.070116 2859 kubelet_node_status.go:466] "Error getting the current node 
from lister" err="node \"ip-172-31-21-42\" not found" Sep 12 16:51:42.070855 kubelet[2859]: I0912 16:51:42.070396 2859 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 12 16:51:42.070855 kubelet[2859]: I0912 16:51:42.070501 2859 reconciler.go:26] "Reconciler: start to sync state" Sep 12 16:51:42.072405 kubelet[2859]: W0912 16:51:42.072317 2859 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.21.42:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.21.42:6443: connect: connection refused Sep 12 16:51:42.072570 kubelet[2859]: E0912 16:51:42.072424 2859 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.21.42:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.21.42:6443: connect: connection refused" logger="UnhandledError" Sep 12 16:51:42.072632 kubelet[2859]: E0912 16:51:42.072608 2859 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 12 16:51:42.073971 kubelet[2859]: E0912 16:51:42.073765 2859 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.21.42:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-42?timeout=10s\": dial tcp 172.31.21.42:6443: connect: connection refused" interval="200ms" Sep 12 16:51:42.078859 kubelet[2859]: I0912 16:51:42.077420 2859 factory.go:221] Registration of the containerd container factory successfully Sep 12 16:51:42.078859 kubelet[2859]: I0912 16:51:42.077463 2859 factory.go:221] Registration of the systemd container factory successfully Sep 12 16:51:42.078859 kubelet[2859]: I0912 16:51:42.077649 2859 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 16:51:42.110274 kubelet[2859]: I0912 16:51:42.110219 2859 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 12 16:51:42.110274 kubelet[2859]: I0912 16:51:42.110257 2859 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 12 16:51:42.110483 kubelet[2859]: I0912 16:51:42.110291 2859 state_mem.go:36] "Initialized new in-memory state store" Sep 12 16:51:42.112655 kubelet[2859]: I0912 16:51:42.112598 2859 policy_none.go:49] "None policy: Start" Sep 12 16:51:42.112655 kubelet[2859]: I0912 16:51:42.112658 2859 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 12 16:51:42.112942 kubelet[2859]: I0912 16:51:42.112685 2859 state_mem.go:35] "Initializing new in-memory state store" Sep 12 16:51:42.116742 kubelet[2859]: I0912 16:51:42.116670 2859 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 12 16:51:42.123844 kubelet[2859]: I0912 16:51:42.123765 2859 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 12 16:51:42.124875 kubelet[2859]: I0912 16:51:42.123831 2859 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 12 16:51:42.125015 kubelet[2859]: I0912 16:51:42.124933 2859 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Sep 12 16:51:42.125015 kubelet[2859]: I0912 16:51:42.124955 2859 kubelet.go:2382] "Starting kubelet main sync loop" Sep 12 16:51:42.125116 kubelet[2859]: E0912 16:51:42.125026 2859 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 16:51:42.130396 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 12 16:51:42.133489 kubelet[2859]: W0912 16:51:42.133399 2859 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.21.42:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.21.42:6443: connect: connection refused Sep 12 16:51:42.133634 kubelet[2859]: E0912 16:51:42.133500 2859 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.21.42:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.21.42:6443: connect: connection refused" logger="UnhandledError" Sep 12 16:51:42.152689 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 12 16:51:42.161229 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 12 16:51:42.170738 kubelet[2859]: E0912 16:51:42.170688 2859 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-21-42\" not found" Sep 12 16:51:42.173220 kubelet[2859]: I0912 16:51:42.172405 2859 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 12 16:51:42.173220 kubelet[2859]: I0912 16:51:42.172695 2859 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 12 16:51:42.173220 kubelet[2859]: I0912 16:51:42.172715 2859 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 16:51:42.175254 kubelet[2859]: I0912 16:51:42.175166 2859 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 16:51:42.175901 kubelet[2859]: E0912 16:51:42.175411 2859 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 12 16:51:42.175901 kubelet[2859]: E0912 16:51:42.175474 2859 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-21-42\" not found" Sep 12 16:51:42.245022 systemd[1]: Created slice kubepods-burstable-pod6f2b3b21899c609fe7f11bb7ec9c1807.slice - libcontainer container kubepods-burstable-pod6f2b3b21899c609fe7f11bb7ec9c1807.slice. 
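The kubepods slices created above follow the systemd cgroup driver's naming scheme, kubepods[-<qos>]-pod<uid>.slice, with any dashes in the pod UID replaced by underscores (the static-pod UIDs in this log are config hashes and contain none). A sketch of that composition, for illustration only:

```python
# Illustrative sketch of the slice names seen above when the kubelet uses the
# systemd cgroup driver.
def pod_slice_name(pod_uid: str, qos_class: str = "burstable") -> str:
    uid = pod_uid.replace("-", "_")
    if qos_class == "guaranteed":              # guaranteed pods sit directly under kubepods.slice
        return f"kubepods-pod{uid}.slice"
    return f"kubepods-{qos_class}-pod{uid}.slice"

print(pod_slice_name("6f2b3b21899c609fe7f11bb7ec9c1807"))
# kubepods-burstable-pod6f2b3b21899c609fe7f11bb7ec9c1807.slice
```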
Sep 12 16:51:42.263486 kubelet[2859]: E0912 16:51:42.263101 2859 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-42\" not found" node="ip-172-31-21-42" Sep 12 16:51:42.271549 kubelet[2859]: I0912 16:51:42.271099 2859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6f2b3b21899c609fe7f11bb7ec9c1807-k8s-certs\") pod \"kube-apiserver-ip-172-31-21-42\" (UID: \"6f2b3b21899c609fe7f11bb7ec9c1807\") " pod="kube-system/kube-apiserver-ip-172-31-21-42" Sep 12 16:51:42.271549 kubelet[2859]: I0912 16:51:42.271154 2859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6f2b3b21899c609fe7f11bb7ec9c1807-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-21-42\" (UID: \"6f2b3b21899c609fe7f11bb7ec9c1807\") " pod="kube-system/kube-apiserver-ip-172-31-21-42" Sep 12 16:51:42.271549 kubelet[2859]: I0912 16:51:42.271195 2859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1942d003b1048d7b9dff4826c8a67323-k8s-certs\") pod \"kube-controller-manager-ip-172-31-21-42\" (UID: \"1942d003b1048d7b9dff4826c8a67323\") " pod="kube-system/kube-controller-manager-ip-172-31-21-42" Sep 12 16:51:42.271549 kubelet[2859]: I0912 16:51:42.271233 2859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1942d003b1048d7b9dff4826c8a67323-kubeconfig\") pod \"kube-controller-manager-ip-172-31-21-42\" (UID: \"1942d003b1048d7b9dff4826c8a67323\") " pod="kube-system/kube-controller-manager-ip-172-31-21-42" Sep 12 16:51:42.271549 kubelet[2859]: I0912 16:51:42.271268 2859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1942d003b1048d7b9dff4826c8a67323-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-21-42\" (UID: \"1942d003b1048d7b9dff4826c8a67323\") " pod="kube-system/kube-controller-manager-ip-172-31-21-42" Sep 12 16:51:42.271948 kubelet[2859]: I0912 16:51:42.271307 2859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/80ecfc9369da87a94992e5242510bcf1-kubeconfig\") pod \"kube-scheduler-ip-172-31-21-42\" (UID: \"80ecfc9369da87a94992e5242510bcf1\") " pod="kube-system/kube-scheduler-ip-172-31-21-42" Sep 12 16:51:42.271948 kubelet[2859]: I0912 16:51:42.271344 2859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6f2b3b21899c609fe7f11bb7ec9c1807-ca-certs\") pod \"kube-apiserver-ip-172-31-21-42\" (UID: \"6f2b3b21899c609fe7f11bb7ec9c1807\") " pod="kube-system/kube-apiserver-ip-172-31-21-42" Sep 12 16:51:42.271948 kubelet[2859]: I0912 16:51:42.271376 2859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1942d003b1048d7b9dff4826c8a67323-ca-certs\") pod \"kube-controller-manager-ip-172-31-21-42\" (UID: \"1942d003b1048d7b9dff4826c8a67323\") " pod="kube-system/kube-controller-manager-ip-172-31-21-42" Sep 12 16:51:42.271948 kubelet[2859]: I0912 
16:51:42.271422 2859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1942d003b1048d7b9dff4826c8a67323-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-21-42\" (UID: \"1942d003b1048d7b9dff4826c8a67323\") " pod="kube-system/kube-controller-manager-ip-172-31-21-42" Sep 12 16:51:42.271650 systemd[1]: Created slice kubepods-burstable-pod1942d003b1048d7b9dff4826c8a67323.slice - libcontainer container kubepods-burstable-pod1942d003b1048d7b9dff4826c8a67323.slice. Sep 12 16:51:42.275208 kubelet[2859]: E0912 16:51:42.275140 2859 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.21.42:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-42?timeout=10s\": dial tcp 172.31.21.42:6443: connect: connection refused" interval="400ms" Sep 12 16:51:42.278955 kubelet[2859]: E0912 16:51:42.278578 2859 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-42\" not found" node="ip-172-31-21-42" Sep 12 16:51:42.279976 kubelet[2859]: I0912 16:51:42.279912 2859 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-21-42" Sep 12 16:51:42.282776 kubelet[2859]: E0912 16:51:42.282667 2859 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.21.42:6443/api/v1/nodes\": dial tcp 172.31.21.42:6443: connect: connection refused" node="ip-172-31-21-42" Sep 12 16:51:42.284062 systemd[1]: Created slice kubepods-burstable-pod80ecfc9369da87a94992e5242510bcf1.slice - libcontainer container kubepods-burstable-pod80ecfc9369da87a94992e5242510bcf1.slice. Sep 12 16:51:42.287859 kubelet[2859]: E0912 16:51:42.287597 2859 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-42\" not found" node="ip-172-31-21-42" Sep 12 16:51:42.485822 kubelet[2859]: I0912 16:51:42.485259 2859 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-21-42" Sep 12 16:51:42.485822 kubelet[2859]: E0912 16:51:42.485713 2859 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.21.42:6443/api/v1/nodes\": dial tcp 172.31.21.42:6443: connect: connection refused" node="ip-172-31-21-42" Sep 12 16:51:42.565162 containerd[1964]: time="2025-09-12T16:51:42.564983223Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-21-42,Uid:6f2b3b21899c609fe7f11bb7ec9c1807,Namespace:kube-system,Attempt:0,}" Sep 12 16:51:42.580825 containerd[1964]: time="2025-09-12T16:51:42.580450109Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-21-42,Uid:1942d003b1048d7b9dff4826c8a67323,Namespace:kube-system,Attempt:0,}" Sep 12 16:51:42.589190 containerd[1964]: time="2025-09-12T16:51:42.589116928Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-21-42,Uid:80ecfc9369da87a94992e5242510bcf1,Namespace:kube-system,Attempt:0,}" Sep 12 16:51:42.676442 kubelet[2859]: E0912 16:51:42.676370 2859 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.21.42:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-42?timeout=10s\": dial tcp 172.31.21.42:6443: connect: connection refused" interval="800ms" Sep 12 16:51:42.888983 kubelet[2859]: I0912 16:51:42.888845 2859 kubelet_node_status.go:75] "Attempting 
to register node" node="ip-172-31-21-42" Sep 12 16:51:42.889639 kubelet[2859]: E0912 16:51:42.889544 2859 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.21.42:6443/api/v1/nodes\": dial tcp 172.31.21.42:6443: connect: connection refused" node="ip-172-31-21-42" Sep 12 16:51:42.942354 kubelet[2859]: W0912 16:51:42.942302 2859 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.21.42:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-21-42&limit=500&resourceVersion=0": dial tcp 172.31.21.42:6443: connect: connection refused Sep 12 16:51:42.942532 kubelet[2859]: E0912 16:51:42.942374 2859 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.21.42:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-21-42&limit=500&resourceVersion=0\": dial tcp 172.31.21.42:6443: connect: connection refused" logger="UnhandledError" Sep 12 16:51:43.007372 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2644397001.mount: Deactivated successfully. Sep 12 16:51:43.014289 containerd[1964]: time="2025-09-12T16:51:43.014213241Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 16:51:43.017850 containerd[1964]: time="2025-09-12T16:51:43.017499139Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 16:51:43.020673 containerd[1964]: time="2025-09-12T16:51:43.020557560Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Sep 12 16:51:43.021542 containerd[1964]: time="2025-09-12T16:51:43.021486320Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 12 16:51:43.023933 containerd[1964]: time="2025-09-12T16:51:43.023860603Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 16:51:43.028832 containerd[1964]: time="2025-09-12T16:51:43.026931425Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 12 16:51:43.031238 containerd[1964]: time="2025-09-12T16:51:43.031178848Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 16:51:43.036423 containerd[1964]: time="2025-09-12T16:51:43.036361190Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 455.806881ms" Sep 12 16:51:43.038360 containerd[1964]: time="2025-09-12T16:51:43.038311697Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" 
value:\"pinned\"}" Sep 12 16:51:43.043307 containerd[1964]: time="2025-09-12T16:51:43.043252370Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 478.157802ms" Sep 12 16:51:43.045854 containerd[1964]: time="2025-09-12T16:51:43.045736579Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 456.504778ms" Sep 12 16:51:43.182277 kubelet[2859]: W0912 16:51:43.182085 2859 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.21.42:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.21.42:6443: connect: connection refused Sep 12 16:51:43.182277 kubelet[2859]: E0912 16:51:43.182189 2859 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.21.42:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.21.42:6443: connect: connection refused" logger="UnhandledError" Sep 12 16:51:43.216614 kubelet[2859]: W0912 16:51:43.216427 2859 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.21.42:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.21.42:6443: connect: connection refused Sep 12 16:51:43.216614 kubelet[2859]: E0912 16:51:43.216504 2859 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.21.42:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.21.42:6443: connect: connection refused" logger="UnhandledError" Sep 12 16:51:43.246996 containerd[1964]: time="2025-09-12T16:51:43.246616050Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 16:51:43.246996 containerd[1964]: time="2025-09-12T16:51:43.246765849Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 16:51:43.246996 containerd[1964]: time="2025-09-12T16:51:43.246832026Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 16:51:43.248365 containerd[1964]: time="2025-09-12T16:51:43.247406441Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 16:51:43.254954 containerd[1964]: time="2025-09-12T16:51:43.254442281Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 16:51:43.254954 containerd[1964]: time="2025-09-12T16:51:43.254599140Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 16:51:43.254954 containerd[1964]: time="2025-09-12T16:51:43.254661595Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 16:51:43.256481 containerd[1964]: time="2025-09-12T16:51:43.254858745Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 16:51:43.256917 containerd[1964]: time="2025-09-12T16:51:43.256557462Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 16:51:43.256917 containerd[1964]: time="2025-09-12T16:51:43.256731814Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 16:51:43.258221 containerd[1964]: time="2025-09-12T16:51:43.257725994Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 16:51:43.258221 containerd[1964]: time="2025-09-12T16:51:43.258000007Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 16:51:43.299176 systemd[1]: Started cri-containerd-ae44dd4f4520c16381e0d2d80eae957f925e2d9af29c500a0a5d456389b1a9f2.scope - libcontainer container ae44dd4f4520c16381e0d2d80eae957f925e2d9af29c500a0a5d456389b1a9f2. Sep 12 16:51:43.338092 systemd[1]: Started cri-containerd-403ce9e6ebf30bb301e9052d2f6828f45c0718a210eba51a6336671ff6916bed.scope - libcontainer container 403ce9e6ebf30bb301e9052d2f6828f45c0718a210eba51a6336671ff6916bed. Sep 12 16:51:43.356117 systemd[1]: Started cri-containerd-d22e13e24945113cd5cde06d2cc144815ac2a907f3ba142baf196df9fef8e3dd.scope - libcontainer container d22e13e24945113cd5cde06d2cc144815ac2a907f3ba142baf196df9fef8e3dd. 
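Each cri-containerd-<container-id>.scope unit started above wraps one sandbox or container launched by containerd. A hypothetical helper for cross-checking those IDs against the CRI view, assuming crictl is installed and containerd listens on its conventional /run/containerd/containerd.sock socket:

```python
# Hypothetical cross-check: list sandboxes and containers via crictl so their IDs can
# be matched against the cri-containerd-<id>.scope units above. Assumes crictl is
# present and containerd uses its conventional socket path (not stated in this log).
import subprocess

RUNTIME_ENDPOINT = "unix:///run/containerd/containerd.sock"

def crictl(*args: str) -> str:
    cmd = ["crictl", "--runtime-endpoint", RUNTIME_ENDPOINT, *args]
    return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

if __name__ == "__main__":
    print(crictl("pods"))       # pod sandboxes, e.g. the three control-plane pods above
    print(crictl("ps", "-a"))   # containers, running and exited
```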
Sep 12 16:51:43.460326 containerd[1964]: time="2025-09-12T16:51:43.459950846Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-21-42,Uid:80ecfc9369da87a94992e5242510bcf1,Namespace:kube-system,Attempt:0,} returns sandbox id \"403ce9e6ebf30bb301e9052d2f6828f45c0718a210eba51a6336671ff6916bed\"" Sep 12 16:51:43.462660 containerd[1964]: time="2025-09-12T16:51:43.462488170Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-21-42,Uid:1942d003b1048d7b9dff4826c8a67323,Namespace:kube-system,Attempt:0,} returns sandbox id \"ae44dd4f4520c16381e0d2d80eae957f925e2d9af29c500a0a5d456389b1a9f2\"" Sep 12 16:51:43.472523 kubelet[2859]: W0912 16:51:43.472439 2859 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.21.42:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.21.42:6443: connect: connection refused Sep 12 16:51:43.472668 kubelet[2859]: E0912 16:51:43.472540 2859 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.21.42:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.21.42:6443: connect: connection refused" logger="UnhandledError" Sep 12 16:51:43.474729 containerd[1964]: time="2025-09-12T16:51:43.474536119Z" level=info msg="CreateContainer within sandbox \"ae44dd4f4520c16381e0d2d80eae957f925e2d9af29c500a0a5d456389b1a9f2\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 12 16:51:43.475129 containerd[1964]: time="2025-09-12T16:51:43.474624351Z" level=info msg="CreateContainer within sandbox \"403ce9e6ebf30bb301e9052d2f6828f45c0718a210eba51a6336671ff6916bed\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 12 16:51:43.478005 kubelet[2859]: E0912 16:51:43.477947 2859 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.21.42:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-42?timeout=10s\": dial tcp 172.31.21.42:6443: connect: connection refused" interval="1.6s" Sep 12 16:51:43.501439 containerd[1964]: time="2025-09-12T16:51:43.501275222Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-21-42,Uid:6f2b3b21899c609fe7f11bb7ec9c1807,Namespace:kube-system,Attempt:0,} returns sandbox id \"d22e13e24945113cd5cde06d2cc144815ac2a907f3ba142baf196df9fef8e3dd\"" Sep 12 16:51:43.504239 containerd[1964]: time="2025-09-12T16:51:43.504172522Z" level=info msg="CreateContainer within sandbox \"403ce9e6ebf30bb301e9052d2f6828f45c0718a210eba51a6336671ff6916bed\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"26acf622b700d54dcc58745cfb8d5aa0e16616bd1aded0861f65168a0398fae0\"" Sep 12 16:51:43.505911 containerd[1964]: time="2025-09-12T16:51:43.505298685Z" level=info msg="StartContainer for \"26acf622b700d54dcc58745cfb8d5aa0e16616bd1aded0861f65168a0398fae0\"" Sep 12 16:51:43.510404 containerd[1964]: time="2025-09-12T16:51:43.509951418Z" level=info msg="CreateContainer within sandbox \"ae44dd4f4520c16381e0d2d80eae957f925e2d9af29c500a0a5d456389b1a9f2\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a3f4980bfa60aec7330fe08766406341c0417acdbb131a1af3ca5567aee01a04\"" Sep 12 16:51:43.510404 containerd[1964]: time="2025-09-12T16:51:43.510148484Z" level=info msg="CreateContainer within sandbox 
\"d22e13e24945113cd5cde06d2cc144815ac2a907f3ba142baf196df9fef8e3dd\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 12 16:51:43.512257 containerd[1964]: time="2025-09-12T16:51:43.512191017Z" level=info msg="StartContainer for \"a3f4980bfa60aec7330fe08766406341c0417acdbb131a1af3ca5567aee01a04\"" Sep 12 16:51:43.543525 containerd[1964]: time="2025-09-12T16:51:43.542978772Z" level=info msg="CreateContainer within sandbox \"d22e13e24945113cd5cde06d2cc144815ac2a907f3ba142baf196df9fef8e3dd\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"c658f1a65aa3bbb11d76c68220d24d699c5353136615761537ca9d11b11b403f\"" Sep 12 16:51:43.544257 containerd[1964]: time="2025-09-12T16:51:43.544200070Z" level=info msg="StartContainer for \"c658f1a65aa3bbb11d76c68220d24d699c5353136615761537ca9d11b11b403f\"" Sep 12 16:51:43.566467 systemd[1]: Started cri-containerd-26acf622b700d54dcc58745cfb8d5aa0e16616bd1aded0861f65168a0398fae0.scope - libcontainer container 26acf622b700d54dcc58745cfb8d5aa0e16616bd1aded0861f65168a0398fae0. Sep 12 16:51:43.610097 systemd[1]: Started cri-containerd-a3f4980bfa60aec7330fe08766406341c0417acdbb131a1af3ca5567aee01a04.scope - libcontainer container a3f4980bfa60aec7330fe08766406341c0417acdbb131a1af3ca5567aee01a04. Sep 12 16:51:43.647142 systemd[1]: Started cri-containerd-c658f1a65aa3bbb11d76c68220d24d699c5353136615761537ca9d11b11b403f.scope - libcontainer container c658f1a65aa3bbb11d76c68220d24d699c5353136615761537ca9d11b11b403f. Sep 12 16:51:43.696129 kubelet[2859]: I0912 16:51:43.696087 2859 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-21-42" Sep 12 16:51:43.697225 kubelet[2859]: E0912 16:51:43.697168 2859 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.21.42:6443/api/v1/nodes\": dial tcp 172.31.21.42:6443: connect: connection refused" node="ip-172-31-21-42" Sep 12 16:51:43.716457 containerd[1964]: time="2025-09-12T16:51:43.714660143Z" level=info msg="StartContainer for \"26acf622b700d54dcc58745cfb8d5aa0e16616bd1aded0861f65168a0398fae0\" returns successfully" Sep 12 16:51:43.756284 containerd[1964]: time="2025-09-12T16:51:43.755707576Z" level=info msg="StartContainer for \"a3f4980bfa60aec7330fe08766406341c0417acdbb131a1af3ca5567aee01a04\" returns successfully" Sep 12 16:51:43.797207 containerd[1964]: time="2025-09-12T16:51:43.797038544Z" level=info msg="StartContainer for \"c658f1a65aa3bbb11d76c68220d24d699c5353136615761537ca9d11b11b403f\" returns successfully" Sep 12 16:51:44.151601 kubelet[2859]: E0912 16:51:44.151042 2859 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-42\" not found" node="ip-172-31-21-42" Sep 12 16:51:44.155239 kubelet[2859]: E0912 16:51:44.154852 2859 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-42\" not found" node="ip-172-31-21-42" Sep 12 16:51:44.160736 kubelet[2859]: E0912 16:51:44.160704 2859 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-42\" not found" node="ip-172-31-21-42" Sep 12 16:51:45.161586 kubelet[2859]: E0912 16:51:45.161505 2859 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-42\" not found" node="ip-172-31-21-42" Sep 12 16:51:45.164261 kubelet[2859]: E0912 16:51:45.162667 2859 kubelet.go:3190] "No need to create a mirror 
pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-42\" not found" node="ip-172-31-21-42" Sep 12 16:51:45.300018 kubelet[2859]: I0912 16:51:45.299590 2859 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-21-42" Sep 12 16:51:46.035432 kubelet[2859]: E0912 16:51:46.034903 2859 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-42\" not found" node="ip-172-31-21-42" Sep 12 16:51:46.164083 kubelet[2859]: E0912 16:51:46.164040 2859 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-42\" not found" node="ip-172-31-21-42" Sep 12 16:51:48.325407 kubelet[2859]: E0912 16:51:48.325342 2859 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-21-42\" not found" node="ip-172-31-21-42" Sep 12 16:51:48.425373 kubelet[2859]: I0912 16:51:48.424993 2859 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-21-42" Sep 12 16:51:48.425373 kubelet[2859]: E0912 16:51:48.425052 2859 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ip-172-31-21-42\": node \"ip-172-31-21-42\" not found" Sep 12 16:51:48.473086 kubelet[2859]: I0912 16:51:48.473019 2859 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-21-42" Sep 12 16:51:48.504436 kubelet[2859]: E0912 16:51:48.504074 2859 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-21-42\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-21-42" Sep 12 16:51:48.504436 kubelet[2859]: I0912 16:51:48.504121 2859 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-21-42" Sep 12 16:51:48.514515 kubelet[2859]: E0912 16:51:48.514198 2859 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-21-42\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-21-42" Sep 12 16:51:48.514515 kubelet[2859]: I0912 16:51:48.514250 2859 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-21-42" Sep 12 16:51:48.524250 kubelet[2859]: E0912 16:51:48.524202 2859 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-21-42\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-21-42" Sep 12 16:51:49.039271 kubelet[2859]: I0912 16:51:49.039208 2859 apiserver.go:52] "Watching apiserver" Sep 12 16:51:49.070911 kubelet[2859]: I0912 16:51:49.070717 2859 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 12 16:51:50.544436 systemd[1]: Reload requested from client PID 3136 ('systemctl') (unit session-7.scope)... Sep 12 16:51:50.544462 systemd[1]: Reloading... Sep 12 16:51:50.762856 zram_generator::config[3184]: No configuration found. Sep 12 16:51:50.982257 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 16:51:51.242914 systemd[1]: Reloading finished in 697 ms. 
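The "no PriorityClass with name system-node-critical was found" failures above are transient: the built-in priority classes exist once the API server has finished bootstrapping, after which mirror-pod creation goes through (as the "already exists" message further down shows). A hypothetical check, assuming kubectl and a working admin kubeconfig are available on the node:

```python
# Hypothetical check for the built-in priority classes whose absence caused the
# "Failed creating a mirror pod" errors above. Assumes kubectl and a usable kubeconfig.
import subprocess

def priorityclass_exists(name: str) -> bool:
    result = subprocess.run(
        ["kubectl", "get", "priorityclass", name, "-o", "name"],
        capture_output=True, text=True,
    )
    return result.returncode == 0

for pc in ("system-node-critical", "system-cluster-critical"):
    print(pc, "present" if priorityclass_exists(pc) else "missing")
```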
Sep 12 16:51:51.283155 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 16:51:51.307982 systemd[1]: kubelet.service: Deactivated successfully. Sep 12 16:51:51.308777 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 16:51:51.309027 systemd[1]: kubelet.service: Consumed 2.252s CPU time, 130.4M memory peak. Sep 12 16:51:51.316639 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 16:51:51.643288 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 16:51:51.658655 (kubelet)[3241]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 16:51:51.802855 kubelet[3241]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 16:51:51.803843 kubelet[3241]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 12 16:51:51.803843 kubelet[3241]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 16:51:51.803843 kubelet[3241]: I0912 16:51:51.803477 3241 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 12 16:51:51.822101 kubelet[3241]: I0912 16:51:51.821976 3241 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 12 16:51:51.822101 kubelet[3241]: I0912 16:51:51.822045 3241 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 12 16:51:51.823973 kubelet[3241]: I0912 16:51:51.822946 3241 server.go:954] "Client rotation is on, will bootstrap in background" Sep 12 16:51:51.832944 kubelet[3241]: I0912 16:51:51.832753 3241 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 12 16:51:51.849493 kubelet[3241]: I0912 16:51:51.848585 3241 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 16:51:51.849962 sudo[3256]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 12 16:51:51.850681 sudo[3256]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Sep 12 16:51:51.865052 kubelet[3241]: E0912 16:51:51.863912 3241 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 12 16:51:51.865052 kubelet[3241]: I0912 16:51:51.864006 3241 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 12 16:51:51.886888 kubelet[3241]: I0912 16:51:51.885781 3241 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 12 16:51:51.889421 kubelet[3241]: I0912 16:51:51.889332 3241 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 12 16:51:51.889924 kubelet[3241]: I0912 16:51:51.889572 3241 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-21-42","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 12 16:51:51.890188 kubelet[3241]: I0912 16:51:51.890166 3241 topology_manager.go:138] "Creating topology manager with none policy" Sep 12 16:51:51.892065 kubelet[3241]: I0912 16:51:51.890278 3241 container_manager_linux.go:304] "Creating device plugin manager" Sep 12 16:51:51.892065 kubelet[3241]: I0912 16:51:51.890375 3241 state_mem.go:36] "Initialized new in-memory state store" Sep 12 16:51:51.895715 kubelet[3241]: I0912 16:51:51.893858 3241 kubelet.go:446] "Attempting to sync node with API server" Sep 12 16:51:51.895715 kubelet[3241]: I0912 16:51:51.893894 3241 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 12 16:51:51.895715 kubelet[3241]: I0912 16:51:51.893927 3241 kubelet.go:352] "Adding apiserver pod source" Sep 12 16:51:51.895715 kubelet[3241]: I0912 16:51:51.893947 3241 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 12 16:51:51.897793 kubelet[3241]: I0912 16:51:51.897429 3241 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Sep 12 16:51:51.901274 kubelet[3241]: I0912 16:51:51.898969 3241 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 12 16:51:51.903721 kubelet[3241]: I0912 16:51:51.903515 3241 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 12 16:51:51.904562 kubelet[3241]: I0912 16:51:51.904534 3241 server.go:1287] "Started kubelet" Sep 12 16:51:51.916864 kubelet[3241]: I0912 16:51:51.916370 3241 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 16:51:51.917389 kubelet[3241]: I0912 
16:51:51.917067 3241 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 16:51:51.917389 kubelet[3241]: I0912 16:51:51.917184 3241 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 12 16:51:51.919827 kubelet[3241]: I0912 16:51:51.918985 3241 server.go:479] "Adding debug handlers to kubelet server" Sep 12 16:51:51.923512 kubelet[3241]: I0912 16:51:51.923464 3241 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 12 16:51:51.927161 kubelet[3241]: E0912 16:51:51.925081 3241 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 12 16:51:51.927161 kubelet[3241]: I0912 16:51:51.925563 3241 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 16:51:51.945004 kubelet[3241]: I0912 16:51:51.944970 3241 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 12 16:51:51.946646 kubelet[3241]: E0912 16:51:51.946096 3241 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-21-42\" not found" Sep 12 16:51:51.949315 kubelet[3241]: I0912 16:51:51.949266 3241 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 12 16:51:51.957053 kubelet[3241]: I0912 16:51:51.956951 3241 reconciler.go:26] "Reconciler: start to sync state" Sep 12 16:51:51.966204 kubelet[3241]: I0912 16:51:51.965614 3241 factory.go:221] Registration of the systemd container factory successfully Sep 12 16:51:51.966204 kubelet[3241]: I0912 16:51:51.965850 3241 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 16:51:52.039829 kubelet[3241]: I0912 16:51:52.038166 3241 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 12 16:51:52.043527 kubelet[3241]: I0912 16:51:52.042349 3241 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 12 16:51:52.043527 kubelet[3241]: I0912 16:51:52.042421 3241 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 12 16:51:52.043527 kubelet[3241]: I0912 16:51:52.042453 3241 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
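Above, the restarted kubelet registers the systemd container factory but the crio factory fails because /var/run/crio/crio.sock is absent on this node. A quick check of which runtime sockets actually exist (the crio path is quoted from the error; the containerd path is an assumed default, not taken from this log):

```python
# Check which container-runtime sockets exist on the node.
import os

sockets = {
    "crio": "/var/run/crio/crio.sock",                # absent here, hence the factory error above
    "containerd": "/run/containerd/containerd.sock",  # assumed default location
}

for name, path in sockets.items():
    print(f"{name:<10} {path}: {'present' if os.path.exists(path) else 'missing'}")
```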
Sep 12 16:51:52.043527 kubelet[3241]: I0912 16:51:52.042470 3241 kubelet.go:2382] "Starting kubelet main sync loop" Sep 12 16:51:52.043527 kubelet[3241]: E0912 16:51:52.042545 3241 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 16:51:52.065651 kubelet[3241]: I0912 16:51:52.064747 3241 factory.go:221] Registration of the containerd container factory successfully Sep 12 16:51:52.143457 kubelet[3241]: E0912 16:51:52.143402 3241 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 12 16:51:52.223117 kubelet[3241]: I0912 16:51:52.222772 3241 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 12 16:51:52.223117 kubelet[3241]: I0912 16:51:52.222824 3241 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 12 16:51:52.223117 kubelet[3241]: I0912 16:51:52.222859 3241 state_mem.go:36] "Initialized new in-memory state store" Sep 12 16:51:52.223360 kubelet[3241]: I0912 16:51:52.223127 3241 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 12 16:51:52.223360 kubelet[3241]: I0912 16:51:52.223162 3241 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 12 16:51:52.223360 kubelet[3241]: I0912 16:51:52.223202 3241 policy_none.go:49] "None policy: Start" Sep 12 16:51:52.223360 kubelet[3241]: I0912 16:51:52.223219 3241 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 12 16:51:52.223360 kubelet[3241]: I0912 16:51:52.223239 3241 state_mem.go:35] "Initializing new in-memory state store" Sep 12 16:51:52.223691 kubelet[3241]: I0912 16:51:52.223412 3241 state_mem.go:75] "Updated machine memory state" Sep 12 16:51:52.231752 kubelet[3241]: I0912 16:51:52.231550 3241 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 12 16:51:52.235323 kubelet[3241]: I0912 16:51:52.234721 3241 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 12 16:51:52.235323 kubelet[3241]: I0912 16:51:52.234772 3241 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 16:51:52.235528 kubelet[3241]: I0912 16:51:52.235393 3241 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 16:51:52.250763 kubelet[3241]: E0912 16:51:52.249557 3241 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 12 16:51:52.345206 kubelet[3241]: I0912 16:51:52.345073 3241 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-21-42" Sep 12 16:51:52.347736 kubelet[3241]: I0912 16:51:52.346380 3241 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-21-42" Sep 12 16:51:52.347736 kubelet[3241]: I0912 16:51:52.347047 3241 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-21-42" Sep 12 16:51:52.364072 kubelet[3241]: I0912 16:51:52.363931 3241 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1942d003b1048d7b9dff4826c8a67323-ca-certs\") pod \"kube-controller-manager-ip-172-31-21-42\" (UID: \"1942d003b1048d7b9dff4826c8a67323\") " pod="kube-system/kube-controller-manager-ip-172-31-21-42" Sep 12 16:51:52.364238 kubelet[3241]: I0912 16:51:52.364130 3241 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1942d003b1048d7b9dff4826c8a67323-k8s-certs\") pod \"kube-controller-manager-ip-172-31-21-42\" (UID: \"1942d003b1048d7b9dff4826c8a67323\") " pod="kube-system/kube-controller-manager-ip-172-31-21-42" Sep 12 16:51:52.364295 kubelet[3241]: I0912 16:51:52.364250 3241 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1942d003b1048d7b9dff4826c8a67323-kubeconfig\") pod \"kube-controller-manager-ip-172-31-21-42\" (UID: \"1942d003b1048d7b9dff4826c8a67323\") " pod="kube-system/kube-controller-manager-ip-172-31-21-42" Sep 12 16:51:52.364393 kubelet[3241]: I0912 16:51:52.364305 3241 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6f2b3b21899c609fe7f11bb7ec9c1807-k8s-certs\") pod \"kube-apiserver-ip-172-31-21-42\" (UID: \"6f2b3b21899c609fe7f11bb7ec9c1807\") " pod="kube-system/kube-apiserver-ip-172-31-21-42" Sep 12 16:51:52.364504 kubelet[3241]: I0912 16:51:52.364422 3241 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6f2b3b21899c609fe7f11bb7ec9c1807-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-21-42\" (UID: \"6f2b3b21899c609fe7f11bb7ec9c1807\") " pod="kube-system/kube-apiserver-ip-172-31-21-42" Sep 12 16:51:52.365345 kubelet[3241]: I0912 16:51:52.365222 3241 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1942d003b1048d7b9dff4826c8a67323-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-21-42\" (UID: \"1942d003b1048d7b9dff4826c8a67323\") " pod="kube-system/kube-controller-manager-ip-172-31-21-42" Sep 12 16:51:52.365481 kubelet[3241]: I0912 16:51:52.365399 3241 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1942d003b1048d7b9dff4826c8a67323-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-21-42\" (UID: \"1942d003b1048d7b9dff4826c8a67323\") " pod="kube-system/kube-controller-manager-ip-172-31-21-42" Sep 12 16:51:52.365619 kubelet[3241]: I0912 16:51:52.365581 3241 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/80ecfc9369da87a94992e5242510bcf1-kubeconfig\") pod \"kube-scheduler-ip-172-31-21-42\" (UID: \"80ecfc9369da87a94992e5242510bcf1\") " pod="kube-system/kube-scheduler-ip-172-31-21-42" Sep 12 16:51:52.366204 kubelet[3241]: I0912 16:51:52.365676 3241 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6f2b3b21899c609fe7f11bb7ec9c1807-ca-certs\") pod \"kube-apiserver-ip-172-31-21-42\" (UID: \"6f2b3b21899c609fe7f11bb7ec9c1807\") " pod="kube-system/kube-apiserver-ip-172-31-21-42" Sep 12 16:51:52.404822 kubelet[3241]: I0912 16:51:52.404542 3241 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-21-42" Sep 12 16:51:52.420370 kubelet[3241]: I0912 16:51:52.420007 3241 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-21-42" Sep 12 16:51:52.420370 kubelet[3241]: I0912 16:51:52.420118 3241 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-21-42" Sep 12 16:51:52.859978 sudo[3256]: pam_unix(sudo:session): session closed for user root Sep 12 16:51:52.909445 kubelet[3241]: I0912 16:51:52.909394 3241 apiserver.go:52] "Watching apiserver" Sep 12 16:51:52.951392 kubelet[3241]: I0912 16:51:52.951304 3241 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 12 16:51:53.125995 kubelet[3241]: I0912 16:51:53.125215 3241 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-21-42" Sep 12 16:51:53.143232 kubelet[3241]: E0912 16:51:53.143188 3241 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-21-42\" already exists" pod="kube-system/kube-apiserver-ip-172-31-21-42" Sep 12 16:51:53.224831 kubelet[3241]: I0912 16:51:53.224082 3241 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-21-42" podStartSLOduration=1.22405866 podStartE2EDuration="1.22405866s" podCreationTimestamp="2025-09-12 16:51:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 16:51:53.177491431 +0000 UTC m=+1.508968466" watchObservedRunningTime="2025-09-12 16:51:53.22405866 +0000 UTC m=+1.555535683" Sep 12 16:51:53.269589 kubelet[3241]: I0912 16:51:53.269399 3241 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-21-42" podStartSLOduration=1.269375787 podStartE2EDuration="1.269375787s" podCreationTimestamp="2025-09-12 16:51:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 16:51:53.226127906 +0000 UTC m=+1.557604965" watchObservedRunningTime="2025-09-12 16:51:53.269375787 +0000 UTC m=+1.600852810" Sep 12 16:51:53.328066 kubelet[3241]: I0912 16:51:53.327700 3241 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-21-42" podStartSLOduration=1.327678725 podStartE2EDuration="1.327678725s" podCreationTimestamp="2025-09-12 16:51:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 16:51:53.271094122 +0000 UTC m=+1.602571145" 
watchObservedRunningTime="2025-09-12 16:51:53.327678725 +0000 UTC m=+1.659155772" Sep 12 16:51:53.557890 update_engine[1951]: I20250912 16:51:53.556944 1951 update_attempter.cc:509] Updating boot flags... Sep 12 16:51:53.696976 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (3298) Sep 12 16:51:54.193621 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (3298) Sep 12 16:51:56.404494 kubelet[3241]: I0912 16:51:56.404443 3241 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 12 16:51:56.406788 containerd[1964]: time="2025-09-12T16:51:56.406658579Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 12 16:51:56.410645 kubelet[3241]: I0912 16:51:56.407058 3241 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 12 16:51:56.414013 sudo[2297]: pam_unix(sudo:session): session closed for user root Sep 12 16:51:56.437792 sshd[2296]: Connection closed by 139.178.89.65 port 60710 Sep 12 16:51:56.438624 sshd-session[2294]: pam_unix(sshd:session): session closed for user core Sep 12 16:51:56.445312 systemd[1]: sshd@6-172.31.21.42:22-139.178.89.65:60710.service: Deactivated successfully. Sep 12 16:51:56.452250 systemd[1]: session-7.scope: Deactivated successfully. Sep 12 16:51:56.453985 systemd[1]: session-7.scope: Consumed 12.564s CPU time, 264.9M memory peak. Sep 12 16:51:56.457139 systemd-logind[1950]: Session 7 logged out. Waiting for processes to exit. Sep 12 16:51:56.459458 systemd-logind[1950]: Removed session 7. Sep 12 16:51:57.000779 systemd[1]: Created slice kubepods-besteffort-pod709ebe0e_35d7_4cd4_b82d_11961aeca4f5.slice - libcontainer container kubepods-besteffort-pod709ebe0e_35d7_4cd4_b82d_11961aeca4f5.slice. 
Sep 12 16:51:57.007049 kubelet[3241]: I0912 16:51:57.006789 3241 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/709ebe0e-35d7-4cd4-b82d-11961aeca4f5-xtables-lock\") pod \"kube-proxy-tmj9v\" (UID: \"709ebe0e-35d7-4cd4-b82d-11961aeca4f5\") " pod="kube-system/kube-proxy-tmj9v" Sep 12 16:51:57.007049 kubelet[3241]: I0912 16:51:57.006871 3241 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/709ebe0e-35d7-4cd4-b82d-11961aeca4f5-kube-proxy\") pod \"kube-proxy-tmj9v\" (UID: \"709ebe0e-35d7-4cd4-b82d-11961aeca4f5\") " pod="kube-system/kube-proxy-tmj9v" Sep 12 16:51:57.007049 kubelet[3241]: I0912 16:51:57.006908 3241 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/709ebe0e-35d7-4cd4-b82d-11961aeca4f5-lib-modules\") pod \"kube-proxy-tmj9v\" (UID: \"709ebe0e-35d7-4cd4-b82d-11961aeca4f5\") " pod="kube-system/kube-proxy-tmj9v" Sep 12 16:51:57.007049 kubelet[3241]: I0912 16:51:57.006945 3241 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkvtp\" (UniqueName: \"kubernetes.io/projected/709ebe0e-35d7-4cd4-b82d-11961aeca4f5-kube-api-access-gkvtp\") pod \"kube-proxy-tmj9v\" (UID: \"709ebe0e-35d7-4cd4-b82d-11961aeca4f5\") " pod="kube-system/kube-proxy-tmj9v" Sep 12 16:51:57.044634 systemd[1]: Created slice kubepods-burstable-podd2050f2f_0f27_469d_8312_57577bc96f50.slice - libcontainer container kubepods-burstable-podd2050f2f_0f27_469d_8312_57577bc96f50.slice. Sep 12 16:51:57.108134 kubelet[3241]: I0912 16:51:57.108080 3241 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d2050f2f-0f27-469d-8312-57577bc96f50-cilium-cgroup\") pod \"cilium-pcrsx\" (UID: \"d2050f2f-0f27-469d-8312-57577bc96f50\") " pod="kube-system/cilium-pcrsx" Sep 12 16:51:57.108993 kubelet[3241]: I0912 16:51:57.108602 3241 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d2050f2f-0f27-469d-8312-57577bc96f50-xtables-lock\") pod \"cilium-pcrsx\" (UID: \"d2050f2f-0f27-469d-8312-57577bc96f50\") " pod="kube-system/cilium-pcrsx" Sep 12 16:51:57.109260 kubelet[3241]: I0912 16:51:57.109207 3241 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d2050f2f-0f27-469d-8312-57577bc96f50-hubble-tls\") pod \"cilium-pcrsx\" (UID: \"d2050f2f-0f27-469d-8312-57577bc96f50\") " pod="kube-system/cilium-pcrsx" Sep 12 16:51:57.109490 kubelet[3241]: I0912 16:51:57.109415 3241 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xs67f\" (UniqueName: \"kubernetes.io/projected/d2050f2f-0f27-469d-8312-57577bc96f50-kube-api-access-xs67f\") pod \"cilium-pcrsx\" (UID: \"d2050f2f-0f27-469d-8312-57577bc96f50\") " pod="kube-system/cilium-pcrsx" Sep 12 16:51:57.109938 kubelet[3241]: I0912 16:51:57.109907 3241 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d2050f2f-0f27-469d-8312-57577bc96f50-cilium-config-path\") pod \"cilium-pcrsx\" (UID: 
\"d2050f2f-0f27-469d-8312-57577bc96f50\") " pod="kube-system/cilium-pcrsx" Sep 12 16:51:57.110274 kubelet[3241]: I0912 16:51:57.110186 3241 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d2050f2f-0f27-469d-8312-57577bc96f50-host-proc-sys-kernel\") pod \"cilium-pcrsx\" (UID: \"d2050f2f-0f27-469d-8312-57577bc96f50\") " pod="kube-system/cilium-pcrsx" Sep 12 16:51:57.110560 kubelet[3241]: I0912 16:51:57.110414 3241 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d2050f2f-0f27-469d-8312-57577bc96f50-bpf-maps\") pod \"cilium-pcrsx\" (UID: \"d2050f2f-0f27-469d-8312-57577bc96f50\") " pod="kube-system/cilium-pcrsx" Sep 12 16:51:57.110791 kubelet[3241]: I0912 16:51:57.110681 3241 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d2050f2f-0f27-469d-8312-57577bc96f50-hostproc\") pod \"cilium-pcrsx\" (UID: \"d2050f2f-0f27-469d-8312-57577bc96f50\") " pod="kube-system/cilium-pcrsx" Sep 12 16:51:57.111053 kubelet[3241]: I0912 16:51:57.110942 3241 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d2050f2f-0f27-469d-8312-57577bc96f50-etc-cni-netd\") pod \"cilium-pcrsx\" (UID: \"d2050f2f-0f27-469d-8312-57577bc96f50\") " pod="kube-system/cilium-pcrsx" Sep 12 16:51:57.111237 kubelet[3241]: I0912 16:51:57.110992 3241 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d2050f2f-0f27-469d-8312-57577bc96f50-host-proc-sys-net\") pod \"cilium-pcrsx\" (UID: \"d2050f2f-0f27-469d-8312-57577bc96f50\") " pod="kube-system/cilium-pcrsx" Sep 12 16:51:57.111631 kubelet[3241]: I0912 16:51:57.111407 3241 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d2050f2f-0f27-469d-8312-57577bc96f50-cilium-run\") pod \"cilium-pcrsx\" (UID: \"d2050f2f-0f27-469d-8312-57577bc96f50\") " pod="kube-system/cilium-pcrsx" Sep 12 16:51:57.111631 kubelet[3241]: I0912 16:51:57.111498 3241 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d2050f2f-0f27-469d-8312-57577bc96f50-lib-modules\") pod \"cilium-pcrsx\" (UID: \"d2050f2f-0f27-469d-8312-57577bc96f50\") " pod="kube-system/cilium-pcrsx" Sep 12 16:51:57.111631 kubelet[3241]: I0912 16:51:57.111563 3241 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d2050f2f-0f27-469d-8312-57577bc96f50-clustermesh-secrets\") pod \"cilium-pcrsx\" (UID: \"d2050f2f-0f27-469d-8312-57577bc96f50\") " pod="kube-system/cilium-pcrsx" Sep 12 16:51:57.112098 kubelet[3241]: I0912 16:51:57.111599 3241 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d2050f2f-0f27-469d-8312-57577bc96f50-cni-path\") pod \"cilium-pcrsx\" (UID: \"d2050f2f-0f27-469d-8312-57577bc96f50\") " pod="kube-system/cilium-pcrsx" Sep 12 16:51:57.317257 containerd[1964]: time="2025-09-12T16:51:57.317092318Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-proxy-tmj9v,Uid:709ebe0e-35d7-4cd4-b82d-11961aeca4f5,Namespace:kube-system,Attempt:0,}" Sep 12 16:51:57.353571 containerd[1964]: time="2025-09-12T16:51:57.353505580Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pcrsx,Uid:d2050f2f-0f27-469d-8312-57577bc96f50,Namespace:kube-system,Attempt:0,}" Sep 12 16:51:57.364832 containerd[1964]: time="2025-09-12T16:51:57.364556695Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 16:51:57.365215 containerd[1964]: time="2025-09-12T16:51:57.365025733Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 16:51:57.365215 containerd[1964]: time="2025-09-12T16:51:57.365135324Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 16:51:57.365563 containerd[1964]: time="2025-09-12T16:51:57.365452763Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 16:51:57.409518 systemd[1]: Started cri-containerd-837e99e17c520d806cf809d88f8e89e973b3c7eda5797af20a67760fab1002ab.scope - libcontainer container 837e99e17c520d806cf809d88f8e89e973b3c7eda5797af20a67760fab1002ab. Sep 12 16:51:57.452953 containerd[1964]: time="2025-09-12T16:51:57.448358648Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 16:51:57.452953 containerd[1964]: time="2025-09-12T16:51:57.448477063Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 16:51:57.452953 containerd[1964]: time="2025-09-12T16:51:57.448512901Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 16:51:57.452953 containerd[1964]: time="2025-09-12T16:51:57.453190318Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 16:51:57.458871 systemd[1]: Created slice kubepods-besteffort-pod435b759a_5e77_43bd_b2df_82d84b61f758.slice - libcontainer container kubepods-besteffort-pod435b759a_5e77_43bd_b2df_82d84b61f758.slice. Sep 12 16:51:57.516925 kubelet[3241]: I0912 16:51:57.514346 3241 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/435b759a-5e77-43bd-b2df-82d84b61f758-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-gztxg\" (UID: \"435b759a-5e77-43bd-b2df-82d84b61f758\") " pod="kube-system/cilium-operator-6c4d7847fc-gztxg" Sep 12 16:51:57.516925 kubelet[3241]: I0912 16:51:57.514444 3241 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x726p\" (UniqueName: \"kubernetes.io/projected/435b759a-5e77-43bd-b2df-82d84b61f758-kube-api-access-x726p\") pod \"cilium-operator-6c4d7847fc-gztxg\" (UID: \"435b759a-5e77-43bd-b2df-82d84b61f758\") " pod="kube-system/cilium-operator-6c4d7847fc-gztxg" Sep 12 16:51:57.531116 systemd[1]: Started cri-containerd-30d753d4fe70401fa0dd1ac27deb9957a3bc710cd56b002448e1f6fcc515c0a4.scope - libcontainer container 30d753d4fe70401fa0dd1ac27deb9957a3bc710cd56b002448e1f6fcc515c0a4. 
Sep 12 16:51:57.610821 containerd[1964]: time="2025-09-12T16:51:57.610659731Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pcrsx,Uid:d2050f2f-0f27-469d-8312-57577bc96f50,Namespace:kube-system,Attempt:0,} returns sandbox id \"30d753d4fe70401fa0dd1ac27deb9957a3bc710cd56b002448e1f6fcc515c0a4\"" Sep 12 16:51:57.620570 containerd[1964]: time="2025-09-12T16:51:57.620478286Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tmj9v,Uid:709ebe0e-35d7-4cd4-b82d-11961aeca4f5,Namespace:kube-system,Attempt:0,} returns sandbox id \"837e99e17c520d806cf809d88f8e89e973b3c7eda5797af20a67760fab1002ab\"" Sep 12 16:51:57.628031 containerd[1964]: time="2025-09-12T16:51:57.627968793Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 12 16:51:57.634826 containerd[1964]: time="2025-09-12T16:51:57.632112760Z" level=info msg="CreateContainer within sandbox \"837e99e17c520d806cf809d88f8e89e973b3c7eda5797af20a67760fab1002ab\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 12 16:51:57.671561 containerd[1964]: time="2025-09-12T16:51:57.671479474Z" level=info msg="CreateContainer within sandbox \"837e99e17c520d806cf809d88f8e89e973b3c7eda5797af20a67760fab1002ab\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"3a424147b0a97fa9498e8f957a9efc046b3c9829fa68e1bbd915030af133bcd9\"" Sep 12 16:51:57.673367 containerd[1964]: time="2025-09-12T16:51:57.673263397Z" level=info msg="StartContainer for \"3a424147b0a97fa9498e8f957a9efc046b3c9829fa68e1bbd915030af133bcd9\"" Sep 12 16:51:57.723388 systemd[1]: Started cri-containerd-3a424147b0a97fa9498e8f957a9efc046b3c9829fa68e1bbd915030af133bcd9.scope - libcontainer container 3a424147b0a97fa9498e8f957a9efc046b3c9829fa68e1bbd915030af133bcd9. Sep 12 16:51:57.769399 containerd[1964]: time="2025-09-12T16:51:57.769320859Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-gztxg,Uid:435b759a-5e77-43bd-b2df-82d84b61f758,Namespace:kube-system,Attempt:0,}" Sep 12 16:51:57.799077 containerd[1964]: time="2025-09-12T16:51:57.797286531Z" level=info msg="StartContainer for \"3a424147b0a97fa9498e8f957a9efc046b3c9829fa68e1bbd915030af133bcd9\" returns successfully" Sep 12 16:51:57.826087 containerd[1964]: time="2025-09-12T16:51:57.824126388Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 16:51:57.826087 containerd[1964]: time="2025-09-12T16:51:57.824251683Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 16:51:57.826087 containerd[1964]: time="2025-09-12T16:51:57.824288145Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 16:51:57.826087 containerd[1964]: time="2025-09-12T16:51:57.824425218Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 16:51:57.865184 systemd[1]: Started cri-containerd-242e7fbd2ce07db33a25decb0c9c5f78783c7ac77310613706e7bea244591d98.scope - libcontainer container 242e7fbd2ce07db33a25decb0c9c5f78783c7ac77310613706e7bea244591d98. 
Sep 12 16:51:57.958346 containerd[1964]: time="2025-09-12T16:51:57.958144012Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-gztxg,Uid:435b759a-5e77-43bd-b2df-82d84b61f758,Namespace:kube-system,Attempt:0,} returns sandbox id \"242e7fbd2ce07db33a25decb0c9c5f78783c7ac77310613706e7bea244591d98\"" Sep 12 16:51:58.899587 kubelet[3241]: I0912 16:51:58.899486 3241 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-tmj9v" podStartSLOduration=2.899463464 podStartE2EDuration="2.899463464s" podCreationTimestamp="2025-09-12 16:51:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 16:51:58.198546899 +0000 UTC m=+6.530023959" watchObservedRunningTime="2025-09-12 16:51:58.899463464 +0000 UTC m=+7.230940487" Sep 12 16:52:08.396829 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1727968552.mount: Deactivated successfully. Sep 12 16:52:10.888613 containerd[1964]: time="2025-09-12T16:52:10.888548810Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 16:52:10.891506 containerd[1964]: time="2025-09-12T16:52:10.891417307Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Sep 12 16:52:10.892843 containerd[1964]: time="2025-09-12T16:52:10.892372841Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 16:52:10.898558 containerd[1964]: time="2025-09-12T16:52:10.898493536Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 13.270457965s" Sep 12 16:52:10.898715 containerd[1964]: time="2025-09-12T16:52:10.898558848Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Sep 12 16:52:10.900923 containerd[1964]: time="2025-09-12T16:52:10.900045948Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 12 16:52:10.902643 containerd[1964]: time="2025-09-12T16:52:10.902570377Z" level=info msg="CreateContainer within sandbox \"30d753d4fe70401fa0dd1ac27deb9957a3bc710cd56b002448e1f6fcc515c0a4\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 12 16:52:10.932890 containerd[1964]: time="2025-09-12T16:52:10.931678444Z" level=info msg="CreateContainer within sandbox \"30d753d4fe70401fa0dd1ac27deb9957a3bc710cd56b002448e1f6fcc515c0a4\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"a00663801ce2e28aef52f98b5bb0068626dc331a27b20fd211771f6c14521044\"" Sep 12 16:52:10.932548 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3926029264.mount: Deactivated successfully. 
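
The pull that completes above reports 157646710 bytes read over 13.270457965s for the cilium image. As a rough sanity check only (assuming "bytes read" approximates what was actually transferred, which containerd does not promise, and ignoring that registry blobs are compressed), that works out to roughly 12 MB/s:

    bytes_read = 157_646_710        # "active requests=0, bytes read=157646710"
    pull_seconds = 13.270457965     # "... in 13.270457965s"

    rate = bytes_read / pull_seconds
    print(f"{rate / 1e6:.1f} MB/s, {rate / 2**20:.1f} MiB/s")   # ~11.9 MB/s, ~11.3 MiB/s
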
Sep 12 16:52:10.936657 containerd[1964]: time="2025-09-12T16:52:10.936416264Z" level=info msg="StartContainer for \"a00663801ce2e28aef52f98b5bb0068626dc331a27b20fd211771f6c14521044\"" Sep 12 16:52:10.996111 systemd[1]: Started cri-containerd-a00663801ce2e28aef52f98b5bb0068626dc331a27b20fd211771f6c14521044.scope - libcontainer container a00663801ce2e28aef52f98b5bb0068626dc331a27b20fd211771f6c14521044. Sep 12 16:52:11.052533 containerd[1964]: time="2025-09-12T16:52:11.052445598Z" level=info msg="StartContainer for \"a00663801ce2e28aef52f98b5bb0068626dc331a27b20fd211771f6c14521044\" returns successfully" Sep 12 16:52:11.084265 systemd[1]: cri-containerd-a00663801ce2e28aef52f98b5bb0068626dc331a27b20fd211771f6c14521044.scope: Deactivated successfully. Sep 12 16:52:11.921904 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a00663801ce2e28aef52f98b5bb0068626dc331a27b20fd211771f6c14521044-rootfs.mount: Deactivated successfully. Sep 12 16:52:12.111692 containerd[1964]: time="2025-09-12T16:52:12.111466195Z" level=info msg="shim disconnected" id=a00663801ce2e28aef52f98b5bb0068626dc331a27b20fd211771f6c14521044 namespace=k8s.io Sep 12 16:52:12.111692 containerd[1964]: time="2025-09-12T16:52:12.111545327Z" level=warning msg="cleaning up after shim disconnected" id=a00663801ce2e28aef52f98b5bb0068626dc331a27b20fd211771f6c14521044 namespace=k8s.io Sep 12 16:52:12.111692 containerd[1964]: time="2025-09-12T16:52:12.111566926Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 16:52:12.237683 containerd[1964]: time="2025-09-12T16:52:12.236718203Z" level=info msg="CreateContainer within sandbox \"30d753d4fe70401fa0dd1ac27deb9957a3bc710cd56b002448e1f6fcc515c0a4\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 12 16:52:12.268212 containerd[1964]: time="2025-09-12T16:52:12.266124979Z" level=info msg="CreateContainer within sandbox \"30d753d4fe70401fa0dd1ac27deb9957a3bc710cd56b002448e1f6fcc515c0a4\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"6dad778a090795ed6b036a651e5732e347a2eccf561741b9d0c3caba3f7dce8a\"" Sep 12 16:52:12.271844 containerd[1964]: time="2025-09-12T16:52:12.268656516Z" level=info msg="StartContainer for \"6dad778a090795ed6b036a651e5732e347a2eccf561741b9d0c3caba3f7dce8a\"" Sep 12 16:52:12.351166 systemd[1]: Started cri-containerd-6dad778a090795ed6b036a651e5732e347a2eccf561741b9d0c3caba3f7dce8a.scope - libcontainer container 6dad778a090795ed6b036a651e5732e347a2eccf561741b9d0c3caba3f7dce8a. Sep 12 16:52:12.407765 containerd[1964]: time="2025-09-12T16:52:12.407059092Z" level=info msg="StartContainer for \"6dad778a090795ed6b036a651e5732e347a2eccf561741b9d0c3caba3f7dce8a\" returns successfully" Sep 12 16:52:12.449781 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 12 16:52:12.451248 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 12 16:52:12.451665 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Sep 12 16:52:12.462113 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 12 16:52:12.464132 systemd[1]: cri-containerd-6dad778a090795ed6b036a651e5732e347a2eccf561741b9d0c3caba3f7dce8a.scope: Deactivated successfully. Sep 12 16:52:12.508063 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Sep 12 16:52:12.511206 containerd[1964]: time="2025-09-12T16:52:12.510661676Z" level=info msg="shim disconnected" id=6dad778a090795ed6b036a651e5732e347a2eccf561741b9d0c3caba3f7dce8a namespace=k8s.io Sep 12 16:52:12.511206 containerd[1964]: time="2025-09-12T16:52:12.510773044Z" level=warning msg="cleaning up after shim disconnected" id=6dad778a090795ed6b036a651e5732e347a2eccf561741b9d0c3caba3f7dce8a namespace=k8s.io Sep 12 16:52:12.511206 containerd[1964]: time="2025-09-12T16:52:12.510840806Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 16:52:12.922116 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6dad778a090795ed6b036a651e5732e347a2eccf561741b9d0c3caba3f7dce8a-rootfs.mount: Deactivated successfully. Sep 12 16:52:13.240193 containerd[1964]: time="2025-09-12T16:52:13.239681597Z" level=info msg="CreateContainer within sandbox \"30d753d4fe70401fa0dd1ac27deb9957a3bc710cd56b002448e1f6fcc515c0a4\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 12 16:52:13.278317 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2447925598.mount: Deactivated successfully. Sep 12 16:52:13.300947 containerd[1964]: time="2025-09-12T16:52:13.299610195Z" level=info msg="CreateContainer within sandbox \"30d753d4fe70401fa0dd1ac27deb9957a3bc710cd56b002448e1f6fcc515c0a4\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ef65e4f65a781c33b9abf3936a0d8286d4096baf0a486b0b67f07dfcbacd70c1\"" Sep 12 16:52:13.300646 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1982953841.mount: Deactivated successfully. Sep 12 16:52:13.309114 containerd[1964]: time="2025-09-12T16:52:13.309058833Z" level=info msg="StartContainer for \"ef65e4f65a781c33b9abf3936a0d8286d4096baf0a486b0b67f07dfcbacd70c1\"" Sep 12 16:52:13.366106 systemd[1]: Started cri-containerd-ef65e4f65a781c33b9abf3936a0d8286d4096baf0a486b0b67f07dfcbacd70c1.scope - libcontainer container ef65e4f65a781c33b9abf3936a0d8286d4096baf0a486b0b67f07dfcbacd70c1. Sep 12 16:52:13.427782 containerd[1964]: time="2025-09-12T16:52:13.427574706Z" level=info msg="StartContainer for \"ef65e4f65a781c33b9abf3936a0d8286d4096baf0a486b0b67f07dfcbacd70c1\" returns successfully" Sep 12 16:52:13.438560 systemd[1]: cri-containerd-ef65e4f65a781c33b9abf3936a0d8286d4096baf0a486b0b67f07dfcbacd70c1.scope: Deactivated successfully. 
Sep 12 16:52:13.484615 containerd[1964]: time="2025-09-12T16:52:13.484511373Z" level=info msg="shim disconnected" id=ef65e4f65a781c33b9abf3936a0d8286d4096baf0a486b0b67f07dfcbacd70c1 namespace=k8s.io Sep 12 16:52:13.484615 containerd[1964]: time="2025-09-12T16:52:13.484594730Z" level=warning msg="cleaning up after shim disconnected" id=ef65e4f65a781c33b9abf3936a0d8286d4096baf0a486b0b67f07dfcbacd70c1 namespace=k8s.io Sep 12 16:52:13.484615 containerd[1964]: time="2025-09-12T16:52:13.484616725Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 16:52:14.248693 containerd[1964]: time="2025-09-12T16:52:14.248619489Z" level=info msg="CreateContainer within sandbox \"30d753d4fe70401fa0dd1ac27deb9957a3bc710cd56b002448e1f6fcc515c0a4\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 12 16:52:14.275200 containerd[1964]: time="2025-09-12T16:52:14.273337166Z" level=info msg="CreateContainer within sandbox \"30d753d4fe70401fa0dd1ac27deb9957a3bc710cd56b002448e1f6fcc515c0a4\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"974dd20e7b7c147fce4db2238cc6c123504803208a8c12a01b2dccc6455a5e77\"" Sep 12 16:52:14.276321 containerd[1964]: time="2025-09-12T16:52:14.276175528Z" level=info msg="StartContainer for \"974dd20e7b7c147fce4db2238cc6c123504803208a8c12a01b2dccc6455a5e77\"" Sep 12 16:52:14.345144 systemd[1]: Started cri-containerd-974dd20e7b7c147fce4db2238cc6c123504803208a8c12a01b2dccc6455a5e77.scope - libcontainer container 974dd20e7b7c147fce4db2238cc6c123504803208a8c12a01b2dccc6455a5e77. Sep 12 16:52:14.392489 systemd[1]: cri-containerd-974dd20e7b7c147fce4db2238cc6c123504803208a8c12a01b2dccc6455a5e77.scope: Deactivated successfully. Sep 12 16:52:14.400745 containerd[1964]: time="2025-09-12T16:52:14.400457895Z" level=info msg="StartContainer for \"974dd20e7b7c147fce4db2238cc6c123504803208a8c12a01b2dccc6455a5e77\" returns successfully" Sep 12 16:52:14.402646 containerd[1964]: time="2025-09-12T16:52:14.401976330Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd2050f2f_0f27_469d_8312_57577bc96f50.slice/cri-containerd-974dd20e7b7c147fce4db2238cc6c123504803208a8c12a01b2dccc6455a5e77.scope/memory.events\": no such file or directory" Sep 12 16:52:14.451857 containerd[1964]: time="2025-09-12T16:52:14.451615534Z" level=info msg="shim disconnected" id=974dd20e7b7c147fce4db2238cc6c123504803208a8c12a01b2dccc6455a5e77 namespace=k8s.io Sep 12 16:52:14.451857 containerd[1964]: time="2025-09-12T16:52:14.451719986Z" level=warning msg="cleaning up after shim disconnected" id=974dd20e7b7c147fce4db2238cc6c123504803208a8c12a01b2dccc6455a5e77 namespace=k8s.io Sep 12 16:52:14.451857 containerd[1964]: time="2025-09-12T16:52:14.451739087Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 16:52:14.922304 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-974dd20e7b7c147fce4db2238cc6c123504803208a8c12a01b2dccc6455a5e77-rootfs.mount: Deactivated successfully. 
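
Between 16:52:10 and 16:52:14 the same four-step pattern repeats for each Cilium init container (mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state): CreateContainer returns a container id, StartContainer returns successfully, the scope is deactivated, and the shim disconnects; only then does cilium-agent start below. A sketch of how that ordering could be recovered from the containerd messages, with regexes written against the exact message text shown here and nothing more general:

    import re

    # "... &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ef65...\""
    CREATED_RE = re.compile(
        r'Name:(?P<name>[\w-]+),Attempt:\d+,} returns container id \\?"(?P<cid>[0-9a-f]{64})')
    # "StartContainer for \"ef65...\" returns successfully"
    STARTED_RE = re.compile(
        r'StartContainer for \\?"(?P<cid>[0-9a-f]{64})\\?" returns successfully')

    def start_order(log_text: str):
        """Container names in the order their StartContainer call returned successfully."""
        names = {m["cid"]: m["name"] for m in CREATED_RE.finditer(log_text)}
        return [names.get(m["cid"], "?") for m in STARTED_RE.finditer(log_text)]
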
Sep 12 16:52:15.257409 containerd[1964]: time="2025-09-12T16:52:15.256721259Z" level=info msg="CreateContainer within sandbox \"30d753d4fe70401fa0dd1ac27deb9957a3bc710cd56b002448e1f6fcc515c0a4\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 12 16:52:15.316262 containerd[1964]: time="2025-09-12T16:52:15.316006275Z" level=info msg="CreateContainer within sandbox \"30d753d4fe70401fa0dd1ac27deb9957a3bc710cd56b002448e1f6fcc515c0a4\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e3506697fd20e447900a227aabf3daba682c507bcc19cc97d7cfd1033baee1c7\"" Sep 12 16:52:15.318718 containerd[1964]: time="2025-09-12T16:52:15.316740562Z" level=info msg="StartContainer for \"e3506697fd20e447900a227aabf3daba682c507bcc19cc97d7cfd1033baee1c7\"" Sep 12 16:52:15.393121 systemd[1]: Started cri-containerd-e3506697fd20e447900a227aabf3daba682c507bcc19cc97d7cfd1033baee1c7.scope - libcontainer container e3506697fd20e447900a227aabf3daba682c507bcc19cc97d7cfd1033baee1c7. Sep 12 16:52:15.468859 containerd[1964]: time="2025-09-12T16:52:15.467947657Z" level=info msg="StartContainer for \"e3506697fd20e447900a227aabf3daba682c507bcc19cc97d7cfd1033baee1c7\" returns successfully" Sep 12 16:52:15.699177 kubelet[3241]: I0912 16:52:15.698674 3241 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 12 16:52:15.808280 kubelet[3241]: I0912 16:52:15.806422 3241 status_manager.go:890] "Failed to get status for pod" podUID="37b29caa-9f37-46e7-bb48-d1a5cd7e3a98" pod="kube-system/coredns-668d6bf9bc-tdvmr" err="pods \"coredns-668d6bf9bc-tdvmr\" is forbidden: User \"system:node:ip-172-31-21-42\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-21-42' and this object" Sep 12 16:52:15.810084 systemd[1]: Created slice kubepods-burstable-pod37b29caa_9f37_46e7_bb48_d1a5cd7e3a98.slice - libcontainer container kubepods-burstable-pod37b29caa_9f37_46e7_bb48_d1a5cd7e3a98.slice. Sep 12 16:52:15.819144 kubelet[3241]: W0912 16:52:15.819030 3241 reflector.go:569] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ip-172-31-21-42" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-21-42' and this object Sep 12 16:52:15.819144 kubelet[3241]: E0912 16:52:15.819117 3241 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:ip-172-31-21-42\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-21-42' and this object" logger="UnhandledError" Sep 12 16:52:15.838985 systemd[1]: Created slice kubepods-burstable-pod5a6a050a_8395_4720_a1e9_38b0e610e595.slice - libcontainer container kubepods-burstable-pod5a6a050a_8395_4720_a1e9_38b0e610e595.slice. 
Sep 12 16:52:15.846154 kubelet[3241]: I0912 16:52:15.845382 3241 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/37b29caa-9f37-46e7-bb48-d1a5cd7e3a98-config-volume\") pod \"coredns-668d6bf9bc-tdvmr\" (UID: \"37b29caa-9f37-46e7-bb48-d1a5cd7e3a98\") " pod="kube-system/coredns-668d6bf9bc-tdvmr" Sep 12 16:52:15.846154 kubelet[3241]: I0912 16:52:15.845485 3241 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96fmz\" (UniqueName: \"kubernetes.io/projected/37b29caa-9f37-46e7-bb48-d1a5cd7e3a98-kube-api-access-96fmz\") pod \"coredns-668d6bf9bc-tdvmr\" (UID: \"37b29caa-9f37-46e7-bb48-d1a5cd7e3a98\") " pod="kube-system/coredns-668d6bf9bc-tdvmr" Sep 12 16:52:15.846154 kubelet[3241]: I0912 16:52:15.845531 3241 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5a6a050a-8395-4720-a1e9-38b0e610e595-config-volume\") pod \"coredns-668d6bf9bc-69r2x\" (UID: \"5a6a050a-8395-4720-a1e9-38b0e610e595\") " pod="kube-system/coredns-668d6bf9bc-69r2x" Sep 12 16:52:15.846154 kubelet[3241]: I0912 16:52:15.845573 3241 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78477\" (UniqueName: \"kubernetes.io/projected/5a6a050a-8395-4720-a1e9-38b0e610e595-kube-api-access-78477\") pod \"coredns-668d6bf9bc-69r2x\" (UID: \"5a6a050a-8395-4720-a1e9-38b0e610e595\") " pod="kube-system/coredns-668d6bf9bc-69r2x" Sep 12 16:52:15.925436 systemd[1]: run-containerd-runc-k8s.io-e3506697fd20e447900a227aabf3daba682c507bcc19cc97d7cfd1033baee1c7-runc.aft1zV.mount: Deactivated successfully. 
Sep 12 16:52:16.325856 kubelet[3241]: I0912 16:52:16.325729 3241 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-pcrsx" podStartSLOduration=7.053080935 podStartE2EDuration="20.32570327s" podCreationTimestamp="2025-09-12 16:51:56 +0000 UTC" firstStartedPulling="2025-09-12 16:51:57.627168617 +0000 UTC m=+5.958645640" lastFinishedPulling="2025-09-12 16:52:10.899790964 +0000 UTC m=+19.231267975" observedRunningTime="2025-09-12 16:52:16.320378801 +0000 UTC m=+24.651855848" watchObservedRunningTime="2025-09-12 16:52:16.32570327 +0000 UTC m=+24.657180293" Sep 12 16:52:16.680606 containerd[1964]: time="2025-09-12T16:52:16.679655328Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 16:52:16.682684 containerd[1964]: time="2025-09-12T16:52:16.682520056Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Sep 12 16:52:16.683733 containerd[1964]: time="2025-09-12T16:52:16.683688240Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 16:52:16.690933 containerd[1964]: time="2025-09-12T16:52:16.690265895Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 5.790149773s" Sep 12 16:52:16.690933 containerd[1964]: time="2025-09-12T16:52:16.690331184Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Sep 12 16:52:16.697153 containerd[1964]: time="2025-09-12T16:52:16.696861331Z" level=info msg="CreateContainer within sandbox \"242e7fbd2ce07db33a25decb0c9c5f78783c7ac77310613706e7bea244591d98\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 12 16:52:16.723723 containerd[1964]: time="2025-09-12T16:52:16.723605705Z" level=info msg="CreateContainer within sandbox \"242e7fbd2ce07db33a25decb0c9c5f78783c7ac77310613706e7bea244591d98\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"c5f7df82322ca72a152844482e96a09d632e0c0b9cf792ade991582edeb8fdae\"" Sep 12 16:52:16.727539 containerd[1964]: time="2025-09-12T16:52:16.726374949Z" level=info msg="StartContainer for \"c5f7df82322ca72a152844482e96a09d632e0c0b9cf792ade991582edeb8fdae\"" Sep 12 16:52:16.762021 containerd[1964]: time="2025-09-12T16:52:16.761924523Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-69r2x,Uid:5a6a050a-8395-4720-a1e9-38b0e610e595,Namespace:kube-system,Attempt:0,}" Sep 12 16:52:16.824183 systemd[1]: Started cri-containerd-c5f7df82322ca72a152844482e96a09d632e0c0b9cf792ade991582edeb8fdae.scope - libcontainer container c5f7df82322ca72a152844482e96a09d632e0c0b9cf792ade991582edeb8fdae. 
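
For cilium-pcrsx above, podStartSLOduration is simply the end-to-end startup time with the image-pull window (lastFinishedPulling minus firstStartedPulling) taken out; the monotonic m=+ offsets in the entry reproduce the logged value exactly. A quick check (this only re-derives the number the latency tracker already printed, it is not the kubelet's implementation):

    # Values copied from the pod_startup_latency_tracker entry for cilium-pcrsx above.
    e2e_duration          = 20.32570327      # podStartE2EDuration, seconds
    first_started_pulling = 5.958645640      # firstStartedPulling, m=+5.958645640
    last_finished_pulling = 19.231267975     # lastFinishedPulling, m=+19.231267975

    pull_window = last_finished_pulling - first_started_pulling   # ~13.27 s
    slo = e2e_duration - pull_window
    print(f"{slo:.9f}")   # 7.053080935, matching podStartSLOduration
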
Sep 12 16:52:16.969970 containerd[1964]: time="2025-09-12T16:52:16.969606894Z" level=info msg="StartContainer for \"c5f7df82322ca72a152844482e96a09d632e0c0b9cf792ade991582edeb8fdae\" returns successfully" Sep 12 16:52:17.032139 containerd[1964]: time="2025-09-12T16:52:17.031465972Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-tdvmr,Uid:37b29caa-9f37-46e7-bb48-d1a5cd7e3a98,Namespace:kube-system,Attempt:0,}" Sep 12 16:52:17.309066 kubelet[3241]: I0912 16:52:17.308832 3241 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-gztxg" podStartSLOduration=1.5786287799999998 podStartE2EDuration="20.30877648s" podCreationTimestamp="2025-09-12 16:51:57 +0000 UTC" firstStartedPulling="2025-09-12 16:51:57.963407995 +0000 UTC m=+6.294885018" lastFinishedPulling="2025-09-12 16:52:16.693555707 +0000 UTC m=+25.025032718" observedRunningTime="2025-09-12 16:52:17.308286311 +0000 UTC m=+25.639763358" watchObservedRunningTime="2025-09-12 16:52:17.30877648 +0000 UTC m=+25.640253503" Sep 12 16:52:21.178637 systemd-networkd[1882]: cilium_host: Link UP Sep 12 16:52:21.179050 systemd-networkd[1882]: cilium_net: Link UP Sep 12 16:52:21.179057 systemd-networkd[1882]: cilium_net: Gained carrier Sep 12 16:52:21.179464 systemd-networkd[1882]: cilium_host: Gained carrier Sep 12 16:52:21.182867 (udev-worker)[4261]: Network interface NamePolicy= disabled on kernel command line. Sep 12 16:52:21.185045 (udev-worker)[4263]: Network interface NamePolicy= disabled on kernel command line. Sep 12 16:52:21.274449 systemd-networkd[1882]: cilium_net: Gained IPv6LL Sep 12 16:52:21.391963 systemd-networkd[1882]: cilium_vxlan: Link UP Sep 12 16:52:21.391976 systemd-networkd[1882]: cilium_vxlan: Gained carrier Sep 12 16:52:21.570097 systemd-networkd[1882]: cilium_host: Gained IPv6LL Sep 12 16:52:22.005042 kernel: NET: Registered PF_ALG protocol family Sep 12 16:52:23.130648 systemd-networkd[1882]: cilium_vxlan: Gained IPv6LL Sep 12 16:52:23.450706 (udev-worker)[4273]: Network interface NamePolicy= disabled on kernel command line. 
Sep 12 16:52:23.455065 systemd-networkd[1882]: lxc_health: Link UP Sep 12 16:52:23.467096 systemd-networkd[1882]: lxc_health: Gained carrier Sep 12 16:52:23.906271 systemd-networkd[1882]: lxcf028ff9c2e2f: Link UP Sep 12 16:52:23.916123 kernel: eth0: renamed from tmp611c0 Sep 12 16:52:23.920193 systemd-networkd[1882]: lxcf028ff9c2e2f: Gained carrier Sep 12 16:52:24.134019 systemd-networkd[1882]: lxce4c449f167a8: Link UP Sep 12 16:52:24.153003 kernel: eth0: renamed from tmp2e865 Sep 12 16:52:24.155223 systemd-networkd[1882]: lxce4c449f167a8: Gained carrier Sep 12 16:52:25.114063 systemd-networkd[1882]: lxc_health: Gained IPv6LL Sep 12 16:52:25.882042 systemd-networkd[1882]: lxcf028ff9c2e2f: Gained IPv6LL Sep 12 16:52:26.074264 systemd-networkd[1882]: lxce4c449f167a8: Gained IPv6LL Sep 12 16:52:26.800208 kubelet[3241]: I0912 16:52:26.798953 3241 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 16:52:28.862239 ntpd[1945]: Listen normally on 8 cilium_host 192.168.0.242:123 Sep 12 16:52:28.863221 ntpd[1945]: 12 Sep 16:52:28 ntpd[1945]: Listen normally on 8 cilium_host 192.168.0.242:123 Sep 12 16:52:28.863461 ntpd[1945]: Listen normally on 9 cilium_net [fe80::74a8:28ff:fe3c:a1ae%4]:123 Sep 12 16:52:28.864555 ntpd[1945]: 12 Sep 16:52:28 ntpd[1945]: Listen normally on 9 cilium_net [fe80::74a8:28ff:fe3c:a1ae%4]:123 Sep 12 16:52:28.864555 ntpd[1945]: 12 Sep 16:52:28 ntpd[1945]: Listen normally on 10 cilium_host [fe80::486b:45ff:fe8a:97aa%5]:123 Sep 12 16:52:28.864555 ntpd[1945]: 12 Sep 16:52:28 ntpd[1945]: Listen normally on 11 cilium_vxlan [fe80::b089:53ff:fe27:ef0%6]:123 Sep 12 16:52:28.864555 ntpd[1945]: 12 Sep 16:52:28 ntpd[1945]: Listen normally on 12 lxc_health [fe80::4b4:2cff:fe1b:b4cc%8]:123 Sep 12 16:52:28.864555 ntpd[1945]: 12 Sep 16:52:28 ntpd[1945]: Listen normally on 13 lxcf028ff9c2e2f [fe80::f4ca:deff:fef6:81c1%10]:123 Sep 12 16:52:28.864555 ntpd[1945]: 12 Sep 16:52:28 ntpd[1945]: Listen normally on 14 lxce4c449f167a8 [fe80::6c07:afff:fe3f:c6f1%12]:123 Sep 12 16:52:28.863703 ntpd[1945]: Listen normally on 10 cilium_host [fe80::486b:45ff:fe8a:97aa%5]:123 Sep 12 16:52:28.863889 ntpd[1945]: Listen normally on 11 cilium_vxlan [fe80::b089:53ff:fe27:ef0%6]:123 Sep 12 16:52:28.863976 ntpd[1945]: Listen normally on 12 lxc_health [fe80::4b4:2cff:fe1b:b4cc%8]:123 Sep 12 16:52:28.864049 ntpd[1945]: Listen normally on 13 lxcf028ff9c2e2f [fe80::f4ca:deff:fef6:81c1%10]:123 Sep 12 16:52:28.864120 ntpd[1945]: Listen normally on 14 lxce4c449f167a8 [fe80::6c07:afff:fe3f:c6f1%12]:123 Sep 12 16:52:33.341914 containerd[1964]: time="2025-09-12T16:52:33.341699963Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 16:52:33.344567 containerd[1964]: time="2025-09-12T16:52:33.342539315Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 16:52:33.344567 containerd[1964]: time="2025-09-12T16:52:33.342581732Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 16:52:33.344567 containerd[1964]: time="2025-09-12T16:52:33.342924995Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 16:52:33.391417 systemd[1]: Started cri-containerd-611c004c9803f04a9fc6d3ca3d519a9e40e97f06adc0f6b018b82ba09a4f23f2.scope - libcontainer container 611c004c9803f04a9fc6d3ca3d519a9e40e97f06adc0f6b018b82ba09a4f23f2. Sep 12 16:52:33.448828 containerd[1964]: time="2025-09-12T16:52:33.448292654Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 16:52:33.448828 containerd[1964]: time="2025-09-12T16:52:33.448412846Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 16:52:33.448828 containerd[1964]: time="2025-09-12T16:52:33.448450316Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 16:52:33.451227 containerd[1964]: time="2025-09-12T16:52:33.450931632Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 16:52:33.539127 systemd[1]: run-containerd-runc-k8s.io-2e8657af0d5b695ad197e5991350e5c50a5d57566eaad3bb783c225a70686f41-runc.1Sb29H.mount: Deactivated successfully. Sep 12 16:52:33.553409 systemd[1]: Started cri-containerd-2e8657af0d5b695ad197e5991350e5c50a5d57566eaad3bb783c225a70686f41.scope - libcontainer container 2e8657af0d5b695ad197e5991350e5c50a5d57566eaad3bb783c225a70686f41. Sep 12 16:52:33.565512 containerd[1964]: time="2025-09-12T16:52:33.564877061Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-69r2x,Uid:5a6a050a-8395-4720-a1e9-38b0e610e595,Namespace:kube-system,Attempt:0,} returns sandbox id \"611c004c9803f04a9fc6d3ca3d519a9e40e97f06adc0f6b018b82ba09a4f23f2\"" Sep 12 16:52:33.574593 containerd[1964]: time="2025-09-12T16:52:33.574407160Z" level=info msg="CreateContainer within sandbox \"611c004c9803f04a9fc6d3ca3d519a9e40e97f06adc0f6b018b82ba09a4f23f2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 16:52:33.598936 containerd[1964]: time="2025-09-12T16:52:33.598755845Z" level=info msg="CreateContainer within sandbox \"611c004c9803f04a9fc6d3ca3d519a9e40e97f06adc0f6b018b82ba09a4f23f2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7bc0cc6a23aac2a92558ea06d0100f9c3607fc56b946384ceee9236dcdf72576\"" Sep 12 16:52:33.601686 containerd[1964]: time="2025-09-12T16:52:33.601615014Z" level=info msg="StartContainer for \"7bc0cc6a23aac2a92558ea06d0100f9c3607fc56b946384ceee9236dcdf72576\"" Sep 12 16:52:33.663155 systemd[1]: Started cri-containerd-7bc0cc6a23aac2a92558ea06d0100f9c3607fc56b946384ceee9236dcdf72576.scope - libcontainer container 7bc0cc6a23aac2a92558ea06d0100f9c3607fc56b946384ceee9236dcdf72576. 
Sep 12 16:52:33.712141 containerd[1964]: time="2025-09-12T16:52:33.712080323Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-tdvmr,Uid:37b29caa-9f37-46e7-bb48-d1a5cd7e3a98,Namespace:kube-system,Attempt:0,} returns sandbox id \"2e8657af0d5b695ad197e5991350e5c50a5d57566eaad3bb783c225a70686f41\"" Sep 12 16:52:33.720233 containerd[1964]: time="2025-09-12T16:52:33.720163459Z" level=info msg="CreateContainer within sandbox \"2e8657af0d5b695ad197e5991350e5c50a5d57566eaad3bb783c225a70686f41\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 16:52:33.754172 containerd[1964]: time="2025-09-12T16:52:33.754096305Z" level=info msg="CreateContainer within sandbox \"2e8657af0d5b695ad197e5991350e5c50a5d57566eaad3bb783c225a70686f41\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"777a1c21a9b75f66eb9d672ae6f000261a61e5f59696d091a067ed79de2dbfc0\"" Sep 12 16:52:33.757402 containerd[1964]: time="2025-09-12T16:52:33.757324719Z" level=info msg="StartContainer for \"777a1c21a9b75f66eb9d672ae6f000261a61e5f59696d091a067ed79de2dbfc0\"" Sep 12 16:52:33.824263 containerd[1964]: time="2025-09-12T16:52:33.824184068Z" level=info msg="StartContainer for \"7bc0cc6a23aac2a92558ea06d0100f9c3607fc56b946384ceee9236dcdf72576\" returns successfully" Sep 12 16:52:33.853301 systemd[1]: Started cri-containerd-777a1c21a9b75f66eb9d672ae6f000261a61e5f59696d091a067ed79de2dbfc0.scope - libcontainer container 777a1c21a9b75f66eb9d672ae6f000261a61e5f59696d091a067ed79de2dbfc0. Sep 12 16:52:33.942237 containerd[1964]: time="2025-09-12T16:52:33.942159467Z" level=info msg="StartContainer for \"777a1c21a9b75f66eb9d672ae6f000261a61e5f59696d091a067ed79de2dbfc0\" returns successfully" Sep 12 16:52:34.368450 kubelet[3241]: I0912 16:52:34.367149 3241 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-tdvmr" podStartSLOduration=37.36712752 podStartE2EDuration="37.36712752s" podCreationTimestamp="2025-09-12 16:51:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 16:52:34.363961333 +0000 UTC m=+42.695438393" watchObservedRunningTime="2025-09-12 16:52:34.36712752 +0000 UTC m=+42.698604543" Sep 12 16:52:42.972311 systemd[1]: Started sshd@7-172.31.21.42:22-139.178.89.65:56500.service - OpenSSH per-connection server daemon (139.178.89.65:56500). Sep 12 16:52:43.165625 sshd[4793]: Accepted publickey for core from 139.178.89.65 port 56500 ssh2: RSA SHA256:UtlJgM7ARb7wxMu1nBhWJ04sNPurn7zs7fZADhw2VQM Sep 12 16:52:43.168202 sshd-session[4793]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 16:52:43.176364 systemd-logind[1950]: New session 8 of user core. Sep 12 16:52:43.188115 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 12 16:52:43.497040 sshd[4795]: Connection closed by 139.178.89.65 port 56500 Sep 12 16:52:43.498253 sshd-session[4793]: pam_unix(sshd:session): session closed for user core Sep 12 16:52:43.504737 systemd-logind[1950]: Session 8 logged out. Waiting for processes to exit. Sep 12 16:52:43.506178 systemd[1]: sshd@7-172.31.21.42:22-139.178.89.65:56500.service: Deactivated successfully. Sep 12 16:52:43.510895 systemd[1]: session-8.scope: Deactivated successfully. Sep 12 16:52:43.515349 systemd-logind[1950]: Removed session 8. 
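
Several of the startup-duration entries in this log (the static control-plane pods, kube-proxy-tmj9v, and coredns-668d6bf9bc-tdvmr just above) carry firstStartedPulling and lastFinishedPulling of 0001-01-01 00:00:00 +0000 UTC, Go's zero time, meaning no image pull was observed, so the SLO and E2E durations coincide. A small defensive sketch for anyone post-processing these entries (the helper and its handling of the sentinel are assumptions for this example):

    from datetime import datetime, timezone

    GO_ZERO = datetime(1, 1, 1, tzinfo=timezone.utc)   # "0001-01-01 00:00:00 +0000 UTC"

    def pull_window_seconds(first_started: datetime, last_finished: datetime) -> float:
        """Image-pull window in seconds, treating Go's zero time as 'no pull observed'."""
        if GO_ZERO in (first_started, last_finished):
            return 0.0
        return (last_finished - first_started).total_seconds()

    # For coredns-668d6bf9bc-tdvmr the window is 0.0, so SLO == E2E (~37.37 s in the log).
    print(pull_window_seconds(GO_ZERO, GO_ZERO))
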
Sep 12 16:52:48.541372 systemd[1]: Started sshd@8-172.31.21.42:22-139.178.89.65:56508.service - OpenSSH per-connection server daemon (139.178.89.65:56508). Sep 12 16:52:48.732735 sshd[4808]: Accepted publickey for core from 139.178.89.65 port 56508 ssh2: RSA SHA256:UtlJgM7ARb7wxMu1nBhWJ04sNPurn7zs7fZADhw2VQM Sep 12 16:52:48.735259 sshd-session[4808]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 16:52:48.743364 systemd-logind[1950]: New session 9 of user core. Sep 12 16:52:48.752086 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 12 16:52:49.007922 sshd[4810]: Connection closed by 139.178.89.65 port 56508 Sep 12 16:52:49.008750 sshd-session[4808]: pam_unix(sshd:session): session closed for user core Sep 12 16:52:49.015225 systemd[1]: sshd@8-172.31.21.42:22-139.178.89.65:56508.service: Deactivated successfully. Sep 12 16:52:49.020248 systemd[1]: session-9.scope: Deactivated successfully. Sep 12 16:52:49.022041 systemd-logind[1950]: Session 9 logged out. Waiting for processes to exit. Sep 12 16:52:49.025349 systemd-logind[1950]: Removed session 9. Sep 12 16:52:54.056330 systemd[1]: Started sshd@9-172.31.21.42:22-139.178.89.65:44644.service - OpenSSH per-connection server daemon (139.178.89.65:44644). Sep 12 16:52:54.233205 sshd[4825]: Accepted publickey for core from 139.178.89.65 port 44644 ssh2: RSA SHA256:UtlJgM7ARb7wxMu1nBhWJ04sNPurn7zs7fZADhw2VQM Sep 12 16:52:54.235696 sshd-session[4825]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 16:52:54.243875 systemd-logind[1950]: New session 10 of user core. Sep 12 16:52:54.256134 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 12 16:52:54.497527 sshd[4827]: Connection closed by 139.178.89.65 port 44644 Sep 12 16:52:54.498655 sshd-session[4825]: pam_unix(sshd:session): session closed for user core Sep 12 16:52:54.505564 systemd[1]: sshd@9-172.31.21.42:22-139.178.89.65:44644.service: Deactivated successfully. Sep 12 16:52:54.510242 systemd[1]: session-10.scope: Deactivated successfully. Sep 12 16:52:54.512565 systemd-logind[1950]: Session 10 logged out. Waiting for processes to exit. Sep 12 16:52:54.514988 systemd-logind[1950]: Removed session 10. Sep 12 16:52:59.548320 systemd[1]: Started sshd@10-172.31.21.42:22-139.178.89.65:44650.service - OpenSSH per-connection server daemon (139.178.89.65:44650). Sep 12 16:52:59.732300 sshd[4844]: Accepted publickey for core from 139.178.89.65 port 44650 ssh2: RSA SHA256:UtlJgM7ARb7wxMu1nBhWJ04sNPurn7zs7fZADhw2VQM Sep 12 16:52:59.735369 sshd-session[4844]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 16:52:59.746287 systemd-logind[1950]: New session 11 of user core. Sep 12 16:52:59.755166 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 12 16:53:00.012625 sshd[4846]: Connection closed by 139.178.89.65 port 44650 Sep 12 16:53:00.013202 sshd-session[4844]: pam_unix(sshd:session): session closed for user core Sep 12 16:53:00.022681 systemd[1]: sshd@10-172.31.21.42:22-139.178.89.65:44650.service: Deactivated successfully. Sep 12 16:53:00.030033 systemd[1]: session-11.scope: Deactivated successfully. Sep 12 16:53:00.033717 systemd-logind[1950]: Session 11 logged out. Waiting for processes to exit. Sep 12 16:53:00.055411 systemd[1]: Started sshd@11-172.31.21.42:22-139.178.89.65:50284.service - OpenSSH per-connection server daemon (139.178.89.65:50284). Sep 12 16:53:00.057418 systemd-logind[1950]: Removed session 11. 
Sep 12 16:53:00.257631 sshd[4858]: Accepted publickey for core from 139.178.89.65 port 50284 ssh2: RSA SHA256:UtlJgM7ARb7wxMu1nBhWJ04sNPurn7zs7fZADhw2VQM Sep 12 16:53:00.260292 sshd-session[4858]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 16:53:00.270215 systemd-logind[1950]: New session 12 of user core. Sep 12 16:53:00.282094 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 12 16:53:00.619156 sshd[4861]: Connection closed by 139.178.89.65 port 50284 Sep 12 16:53:00.620374 sshd-session[4858]: pam_unix(sshd:session): session closed for user core Sep 12 16:53:00.632585 systemd[1]: sshd@11-172.31.21.42:22-139.178.89.65:50284.service: Deactivated successfully. Sep 12 16:53:00.643103 systemd[1]: session-12.scope: Deactivated successfully. Sep 12 16:53:00.666964 systemd-logind[1950]: Session 12 logged out. Waiting for processes to exit. Sep 12 16:53:00.673358 systemd[1]: Started sshd@12-172.31.21.42:22-139.178.89.65:50290.service - OpenSSH per-connection server daemon (139.178.89.65:50290). Sep 12 16:53:00.679332 systemd-logind[1950]: Removed session 12. Sep 12 16:53:00.876059 sshd[4870]: Accepted publickey for core from 139.178.89.65 port 50290 ssh2: RSA SHA256:UtlJgM7ARb7wxMu1nBhWJ04sNPurn7zs7fZADhw2VQM Sep 12 16:53:00.878774 sshd-session[4870]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 16:53:00.887680 systemd-logind[1950]: New session 13 of user core. Sep 12 16:53:00.897072 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 12 16:53:01.147392 sshd[4873]: Connection closed by 139.178.89.65 port 50290 Sep 12 16:53:01.146248 sshd-session[4870]: pam_unix(sshd:session): session closed for user core Sep 12 16:53:01.153420 systemd[1]: sshd@12-172.31.21.42:22-139.178.89.65:50290.service: Deactivated successfully. Sep 12 16:53:01.158557 systemd[1]: session-13.scope: Deactivated successfully. Sep 12 16:53:01.162283 systemd-logind[1950]: Session 13 logged out. Waiting for processes to exit. Sep 12 16:53:01.164369 systemd-logind[1950]: Removed session 13. Sep 12 16:53:06.190517 systemd[1]: Started sshd@13-172.31.21.42:22-139.178.89.65:50304.service - OpenSSH per-connection server daemon (139.178.89.65:50304). Sep 12 16:53:06.383744 sshd[4885]: Accepted publickey for core from 139.178.89.65 port 50304 ssh2: RSA SHA256:UtlJgM7ARb7wxMu1nBhWJ04sNPurn7zs7fZADhw2VQM Sep 12 16:53:06.386667 sshd-session[4885]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 16:53:06.396852 systemd-logind[1950]: New session 14 of user core. Sep 12 16:53:06.405163 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 12 16:53:06.660716 sshd[4887]: Connection closed by 139.178.89.65 port 50304 Sep 12 16:53:06.661704 sshd-session[4885]: pam_unix(sshd:session): session closed for user core Sep 12 16:53:06.668992 systemd[1]: sshd@13-172.31.21.42:22-139.178.89.65:50304.service: Deactivated successfully. Sep 12 16:53:06.672787 systemd[1]: session-14.scope: Deactivated successfully. Sep 12 16:53:06.674942 systemd-logind[1950]: Session 14 logged out. Waiting for processes to exit. Sep 12 16:53:06.677893 systemd-logind[1950]: Removed session 14. Sep 12 16:53:11.713273 systemd[1]: Started sshd@14-172.31.21.42:22-139.178.89.65:52606.service - OpenSSH per-connection server daemon (139.178.89.65:52606). 
Sep 12 16:53:11.899792 sshd[4899]: Accepted publickey for core from 139.178.89.65 port 52606 ssh2: RSA SHA256:UtlJgM7ARb7wxMu1nBhWJ04sNPurn7zs7fZADhw2VQM Sep 12 16:53:11.902955 sshd-session[4899]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 16:53:11.912338 systemd-logind[1950]: New session 15 of user core. Sep 12 16:53:11.921091 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 12 16:53:12.176838 sshd[4901]: Connection closed by 139.178.89.65 port 52606 Sep 12 16:53:12.175568 sshd-session[4899]: pam_unix(sshd:session): session closed for user core Sep 12 16:53:12.182494 systemd[1]: sshd@14-172.31.21.42:22-139.178.89.65:52606.service: Deactivated successfully. Sep 12 16:53:12.186902 systemd[1]: session-15.scope: Deactivated successfully. Sep 12 16:53:12.190634 systemd-logind[1950]: Session 15 logged out. Waiting for processes to exit. Sep 12 16:53:12.193495 systemd-logind[1950]: Removed session 15. Sep 12 16:53:17.219323 systemd[1]: Started sshd@15-172.31.21.42:22-139.178.89.65:52620.service - OpenSSH per-connection server daemon (139.178.89.65:52620). Sep 12 16:53:17.415634 sshd[4915]: Accepted publickey for core from 139.178.89.65 port 52620 ssh2: RSA SHA256:UtlJgM7ARb7wxMu1nBhWJ04sNPurn7zs7fZADhw2VQM Sep 12 16:53:17.418503 sshd-session[4915]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 16:53:17.428470 systemd-logind[1950]: New session 16 of user core. Sep 12 16:53:17.438192 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 12 16:53:17.700623 sshd[4917]: Connection closed by 139.178.89.65 port 52620 Sep 12 16:53:17.701967 sshd-session[4915]: pam_unix(sshd:session): session closed for user core Sep 12 16:53:17.712968 systemd[1]: sshd@15-172.31.21.42:22-139.178.89.65:52620.service: Deactivated successfully. Sep 12 16:53:17.713050 systemd-logind[1950]: Session 16 logged out. Waiting for processes to exit. Sep 12 16:53:17.719938 systemd[1]: session-16.scope: Deactivated successfully. Sep 12 16:53:17.742088 systemd-logind[1950]: Removed session 16. Sep 12 16:53:17.751440 systemd[1]: Started sshd@16-172.31.21.42:22-139.178.89.65:52626.service - OpenSSH per-connection server daemon (139.178.89.65:52626). Sep 12 16:53:17.945688 sshd[4928]: Accepted publickey for core from 139.178.89.65 port 52626 ssh2: RSA SHA256:UtlJgM7ARb7wxMu1nBhWJ04sNPurn7zs7fZADhw2VQM Sep 12 16:53:17.948598 sshd-session[4928]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 16:53:17.960114 systemd-logind[1950]: New session 17 of user core. Sep 12 16:53:17.969147 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 12 16:53:18.311452 sshd[4931]: Connection closed by 139.178.89.65 port 52626 Sep 12 16:53:18.312088 sshd-session[4928]: pam_unix(sshd:session): session closed for user core Sep 12 16:53:18.320585 systemd[1]: sshd@16-172.31.21.42:22-139.178.89.65:52626.service: Deactivated successfully. Sep 12 16:53:18.324424 systemd[1]: session-17.scope: Deactivated successfully. Sep 12 16:53:18.328529 systemd-logind[1950]: Session 17 logged out. Waiting for processes to exit. Sep 12 16:53:18.330670 systemd-logind[1950]: Removed session 17. Sep 12 16:53:18.357913 systemd[1]: Started sshd@17-172.31.21.42:22-139.178.89.65:52634.service - OpenSSH per-connection server daemon (139.178.89.65:52634). 
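Note: the sshd entries above repeat one pattern: a per-connection sshd@… service starts, pam_unix opens a session for core, systemd-logind registers a numbered session, and shortly afterwards the connection closes and both units are deactivated. As a rough illustration (not part of this log), the open/close events can be paired by the sshd-session PID to measure how long each session lasted; the pattern and the assumed year are inferred only from the lines above.

import re
import sys
from datetime import datetime

# Pair pam_unix "session opened"/"session closed" events by sshd-session PID
# and report how long each SSH session lasted. The field layout is inferred
# from the journal excerpt above; adjust the pattern if your output differs.
EVENT = re.compile(
    r"(?P<ts>\w{3} +\d{1,2} \d{2}:\d{2}:\d{2}\.\d+) "
    r"sshd-session\[(?P<pid>\d+)\]: pam_unix\(sshd:session\): "
    r"session (?P<what>opened|closed) for user (?P<user>\w+)"
)

def parse_ts(ts: str, year: int = 2025) -> datetime:
    # These short journal timestamps carry no year, so one is assumed here.
    return datetime.strptime(f"{year} {ts}", "%Y %b %d %H:%M:%S.%f")

def session_durations(text: str):
    opened = {}
    for m in EVENT.finditer(text):
        pid, ts = m["pid"], parse_ts(m["ts"])
        if m["what"] == "opened":
            opened[pid] = (m["user"], ts)
        elif pid in opened:
            user, start = opened.pop(pid)
            yield pid, user, (ts - start).total_seconds()

if __name__ == "__main__":
    for pid, user, secs in session_durations(sys.stdin.read()):
        print(f"sshd-session[{pid}] user={user} duration={secs:.3f}s")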
Sep 12 16:53:18.537239 sshd[4941]: Accepted publickey for core from 139.178.89.65 port 52634 ssh2: RSA SHA256:UtlJgM7ARb7wxMu1nBhWJ04sNPurn7zs7fZADhw2VQM Sep 12 16:53:18.539665 sshd-session[4941]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 16:53:18.547885 systemd-logind[1950]: New session 18 of user core. Sep 12 16:53:18.555055 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 12 16:53:19.608238 sshd[4943]: Connection closed by 139.178.89.65 port 52634 Sep 12 16:53:19.611296 sshd-session[4941]: pam_unix(sshd:session): session closed for user core Sep 12 16:53:19.621764 systemd[1]: sshd@17-172.31.21.42:22-139.178.89.65:52634.service: Deactivated successfully. Sep 12 16:53:19.631455 systemd[1]: session-18.scope: Deactivated successfully. Sep 12 16:53:19.638605 systemd-logind[1950]: Session 18 logged out. Waiting for processes to exit. Sep 12 16:53:19.659510 systemd[1]: Started sshd@18-172.31.21.42:22-139.178.89.65:52636.service - OpenSSH per-connection server daemon (139.178.89.65:52636). Sep 12 16:53:19.663523 systemd-logind[1950]: Removed session 18. Sep 12 16:53:19.862990 sshd[4959]: Accepted publickey for core from 139.178.89.65 port 52636 ssh2: RSA SHA256:UtlJgM7ARb7wxMu1nBhWJ04sNPurn7zs7fZADhw2VQM Sep 12 16:53:19.865399 sshd-session[4959]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 16:53:19.873972 systemd-logind[1950]: New session 19 of user core. Sep 12 16:53:19.888135 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 12 16:53:20.403839 sshd[4962]: Connection closed by 139.178.89.65 port 52636 Sep 12 16:53:20.404280 sshd-session[4959]: pam_unix(sshd:session): session closed for user core Sep 12 16:53:20.411340 systemd[1]: sshd@18-172.31.21.42:22-139.178.89.65:52636.service: Deactivated successfully. Sep 12 16:53:20.418141 systemd[1]: session-19.scope: Deactivated successfully. Sep 12 16:53:20.419755 systemd-logind[1950]: Session 19 logged out. Waiting for processes to exit. Sep 12 16:53:20.421552 systemd-logind[1950]: Removed session 19. Sep 12 16:53:20.445486 systemd[1]: Started sshd@19-172.31.21.42:22-139.178.89.65:46502.service - OpenSSH per-connection server daemon (139.178.89.65:46502). Sep 12 16:53:20.635273 sshd[4972]: Accepted publickey for core from 139.178.89.65 port 46502 ssh2: RSA SHA256:UtlJgM7ARb7wxMu1nBhWJ04sNPurn7zs7fZADhw2VQM Sep 12 16:53:20.637738 sshd-session[4972]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 16:53:20.647349 systemd-logind[1950]: New session 20 of user core. Sep 12 16:53:20.658099 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 12 16:53:20.901015 sshd[4974]: Connection closed by 139.178.89.65 port 46502 Sep 12 16:53:20.901885 sshd-session[4972]: pam_unix(sshd:session): session closed for user core Sep 12 16:53:20.908485 systemd[1]: sshd@19-172.31.21.42:22-139.178.89.65:46502.service: Deactivated successfully. Sep 12 16:53:20.914224 systemd[1]: session-20.scope: Deactivated successfully. Sep 12 16:53:20.916414 systemd-logind[1950]: Session 20 logged out. Waiting for processes to exit. Sep 12 16:53:20.918438 systemd-logind[1950]: Removed session 20. Sep 12 16:53:25.946324 systemd[1]: Started sshd@20-172.31.21.42:22-139.178.89.65:46510.service - OpenSSH per-connection server daemon (139.178.89.65:46510). 
Sep 12 16:53:26.142834 sshd[4986]: Accepted publickey for core from 139.178.89.65 port 46510 ssh2: RSA SHA256:UtlJgM7ARb7wxMu1nBhWJ04sNPurn7zs7fZADhw2VQM Sep 12 16:53:26.145322 sshd-session[4986]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 16:53:26.154066 systemd-logind[1950]: New session 21 of user core. Sep 12 16:53:26.159116 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 12 16:53:26.407926 sshd[4988]: Connection closed by 139.178.89.65 port 46510 Sep 12 16:53:26.408756 sshd-session[4986]: pam_unix(sshd:session): session closed for user core Sep 12 16:53:26.415493 systemd[1]: sshd@20-172.31.21.42:22-139.178.89.65:46510.service: Deactivated successfully. Sep 12 16:53:26.419361 systemd[1]: session-21.scope: Deactivated successfully. Sep 12 16:53:26.420894 systemd-logind[1950]: Session 21 logged out. Waiting for processes to exit. Sep 12 16:53:26.423921 systemd-logind[1950]: Removed session 21. Sep 12 16:53:31.455021 systemd[1]: Started sshd@21-172.31.21.42:22-139.178.89.65:53814.service - OpenSSH per-connection server daemon (139.178.89.65:53814). Sep 12 16:53:31.655021 sshd[5005]: Accepted publickey for core from 139.178.89.65 port 53814 ssh2: RSA SHA256:UtlJgM7ARb7wxMu1nBhWJ04sNPurn7zs7fZADhw2VQM Sep 12 16:53:31.657444 sshd-session[5005]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 16:53:31.666314 systemd-logind[1950]: New session 22 of user core. Sep 12 16:53:31.674074 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 12 16:53:31.918878 sshd[5007]: Connection closed by 139.178.89.65 port 53814 Sep 12 16:53:31.919706 sshd-session[5005]: pam_unix(sshd:session): session closed for user core Sep 12 16:53:31.925139 systemd-logind[1950]: Session 22 logged out. Waiting for processes to exit. Sep 12 16:53:31.927432 systemd[1]: sshd@21-172.31.21.42:22-139.178.89.65:53814.service: Deactivated successfully. Sep 12 16:53:31.931213 systemd[1]: session-22.scope: Deactivated successfully. Sep 12 16:53:31.934822 systemd-logind[1950]: Removed session 22. Sep 12 16:53:36.967298 systemd[1]: Started sshd@22-172.31.21.42:22-139.178.89.65:53824.service - OpenSSH per-connection server daemon (139.178.89.65:53824). Sep 12 16:53:37.154108 sshd[5019]: Accepted publickey for core from 139.178.89.65 port 53824 ssh2: RSA SHA256:UtlJgM7ARb7wxMu1nBhWJ04sNPurn7zs7fZADhw2VQM Sep 12 16:53:37.156592 sshd-session[5019]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 16:53:37.167011 systemd-logind[1950]: New session 23 of user core. Sep 12 16:53:37.172104 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 12 16:53:37.414325 sshd[5021]: Connection closed by 139.178.89.65 port 53824 Sep 12 16:53:37.413570 sshd-session[5019]: pam_unix(sshd:session): session closed for user core Sep 12 16:53:37.420231 systemd[1]: sshd@22-172.31.21.42:22-139.178.89.65:53824.service: Deactivated successfully. Sep 12 16:53:37.424505 systemd[1]: session-23.scope: Deactivated successfully. Sep 12 16:53:37.428206 systemd-logind[1950]: Session 23 logged out. Waiting for processes to exit. Sep 12 16:53:37.430511 systemd-logind[1950]: Removed session 23. Sep 12 16:53:42.463297 systemd[1]: Started sshd@23-172.31.21.42:22-139.178.89.65:50446.service - OpenSSH per-connection server daemon (139.178.89.65:50446). 
Sep 12 16:53:42.639870 sshd[5033]: Accepted publickey for core from 139.178.89.65 port 50446 ssh2: RSA SHA256:UtlJgM7ARb7wxMu1nBhWJ04sNPurn7zs7fZADhw2VQM Sep 12 16:53:42.642571 sshd-session[5033]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 16:53:42.652479 systemd-logind[1950]: New session 24 of user core. Sep 12 16:53:42.659121 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 12 16:53:42.922425 sshd[5035]: Connection closed by 139.178.89.65 port 50446 Sep 12 16:53:42.923510 sshd-session[5033]: pam_unix(sshd:session): session closed for user core Sep 12 16:53:42.928963 systemd-logind[1950]: Session 24 logged out. Waiting for processes to exit. Sep 12 16:53:42.930195 systemd[1]: sshd@23-172.31.21.42:22-139.178.89.65:50446.service: Deactivated successfully. Sep 12 16:53:42.933259 systemd[1]: session-24.scope: Deactivated successfully. Sep 12 16:53:42.938657 systemd-logind[1950]: Removed session 24. Sep 12 16:53:42.961418 systemd[1]: Started sshd@24-172.31.21.42:22-139.178.89.65:50454.service - OpenSSH per-connection server daemon (139.178.89.65:50454). Sep 12 16:53:43.142092 sshd[5047]: Accepted publickey for core from 139.178.89.65 port 50454 ssh2: RSA SHA256:UtlJgM7ARb7wxMu1nBhWJ04sNPurn7zs7fZADhw2VQM Sep 12 16:53:43.144635 sshd-session[5047]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 16:53:43.158643 systemd-logind[1950]: New session 25 of user core. Sep 12 16:53:43.168953 systemd[1]: Started session-25.scope - Session 25 of User core. Sep 12 16:53:47.031835 kubelet[3241]: I0912 16:53:47.030079 3241 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-69r2x" podStartSLOduration=110.030055869 podStartE2EDuration="1m50.030055869s" podCreationTimestamp="2025-09-12 16:51:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 16:52:34.428245396 +0000 UTC m=+42.759722431" watchObservedRunningTime="2025-09-12 16:53:47.030055869 +0000 UTC m=+115.361532880" Sep 12 16:53:47.071518 containerd[1964]: time="2025-09-12T16:53:47.069791155Z" level=info msg="StopContainer for \"c5f7df82322ca72a152844482e96a09d632e0c0b9cf792ade991582edeb8fdae\" with timeout 30 (s)" Sep 12 16:53:47.071518 containerd[1964]: time="2025-09-12T16:53:47.071300333Z" level=info msg="Stop container \"c5f7df82322ca72a152844482e96a09d632e0c0b9cf792ade991582edeb8fdae\" with signal terminated" Sep 12 16:53:47.102172 systemd[1]: cri-containerd-c5f7df82322ca72a152844482e96a09d632e0c0b9cf792ade991582edeb8fdae.scope: Deactivated successfully. 
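Note: the kubelet lines in this stretch carry klog-style structured trailers such as podStartSLOduration=110.030055869 and podStartE2EDuration="1m50.030055869s". A minimal sketch of pulling those key=value fields into a dict, assuming only the quoting visible above (no escaped quotes inside quoted values):

import re

# Extract key=value and key="value" pairs from a klog-style structured line.
# Only the quoting seen in the excerpt above is handled.
PAIR = re.compile(r'(\w+)=("([^"]*)"|\S+)')

def klog_fields(line: str) -> dict:
    fields = {}
    for key, raw, quoted in PAIR.findall(line):
        fields[key] = quoted if raw.startswith('"') else raw
    return fields

line = (
    'pod="kube-system/coredns-668d6bf9bc-69r2x" '
    'podStartSLOduration=110.030055869 podStartE2EDuration="1m50.030055869s"'
)
print(klog_fields(line)["podStartE2EDuration"])  # -> 1m50.030055869s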
Sep 12 16:53:47.134396 containerd[1964]: time="2025-09-12T16:53:47.134148574Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 12 16:53:47.146438 containerd[1964]: time="2025-09-12T16:53:47.146320749Z" level=info msg="StopContainer for \"e3506697fd20e447900a227aabf3daba682c507bcc19cc97d7cfd1033baee1c7\" with timeout 2 (s)" Sep 12 16:53:47.147426 containerd[1964]: time="2025-09-12T16:53:47.147381671Z" level=info msg="Stop container \"e3506697fd20e447900a227aabf3daba682c507bcc19cc97d7cfd1033baee1c7\" with signal terminated" Sep 12 16:53:47.172056 systemd-networkd[1882]: lxc_health: Link DOWN Sep 12 16:53:47.172077 systemd-networkd[1882]: lxc_health: Lost carrier Sep 12 16:53:47.187441 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c5f7df82322ca72a152844482e96a09d632e0c0b9cf792ade991582edeb8fdae-rootfs.mount: Deactivated successfully. Sep 12 16:53:47.207967 containerd[1964]: time="2025-09-12T16:53:47.207862317Z" level=info msg="shim disconnected" id=c5f7df82322ca72a152844482e96a09d632e0c0b9cf792ade991582edeb8fdae namespace=k8s.io Sep 12 16:53:47.208369 containerd[1964]: time="2025-09-12T16:53:47.208329903Z" level=warning msg="cleaning up after shim disconnected" id=c5f7df82322ca72a152844482e96a09d632e0c0b9cf792ade991582edeb8fdae namespace=k8s.io Sep 12 16:53:47.208601 containerd[1964]: time="2025-09-12T16:53:47.208466675Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 16:53:47.217061 systemd[1]: cri-containerd-e3506697fd20e447900a227aabf3daba682c507bcc19cc97d7cfd1033baee1c7.scope: Deactivated successfully. Sep 12 16:53:47.218871 systemd[1]: cri-containerd-e3506697fd20e447900a227aabf3daba682c507bcc19cc97d7cfd1033baee1c7.scope: Consumed 15.937s CPU time, 125.7M memory peak, 128K read from disk, 12.9M written to disk. Sep 12 16:53:47.255317 containerd[1964]: time="2025-09-12T16:53:47.255264262Z" level=info msg="StopContainer for \"c5f7df82322ca72a152844482e96a09d632e0c0b9cf792ade991582edeb8fdae\" returns successfully" Sep 12 16:53:47.257831 containerd[1964]: time="2025-09-12T16:53:47.257459331Z" level=info msg="StopPodSandbox for \"242e7fbd2ce07db33a25decb0c9c5f78783c7ac77310613706e7bea244591d98\"" Sep 12 16:53:47.257831 containerd[1964]: time="2025-09-12T16:53:47.257529254Z" level=info msg="Container to stop \"c5f7df82322ca72a152844482e96a09d632e0c0b9cf792ade991582edeb8fdae\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 16:53:47.264063 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-242e7fbd2ce07db33a25decb0c9c5f78783c7ac77310613706e7bea244591d98-shm.mount: Deactivated successfully. Sep 12 16:53:47.276647 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e3506697fd20e447900a227aabf3daba682c507bcc19cc97d7cfd1033baee1c7-rootfs.mount: Deactivated successfully. Sep 12 16:53:47.288857 systemd[1]: cri-containerd-242e7fbd2ce07db33a25decb0c9c5f78783c7ac77310613706e7bea244591d98.scope: Deactivated successfully. 
Sep 12 16:53:47.296381 containerd[1964]: time="2025-09-12T16:53:47.295877967Z" level=info msg="shim disconnected" id=e3506697fd20e447900a227aabf3daba682c507bcc19cc97d7cfd1033baee1c7 namespace=k8s.io Sep 12 16:53:47.296381 containerd[1964]: time="2025-09-12T16:53:47.295959332Z" level=warning msg="cleaning up after shim disconnected" id=e3506697fd20e447900a227aabf3daba682c507bcc19cc97d7cfd1033baee1c7 namespace=k8s.io Sep 12 16:53:47.296381 containerd[1964]: time="2025-09-12T16:53:47.296269723Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 16:53:47.308371 kubelet[3241]: E0912 16:53:47.308240 3241 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 12 16:53:47.336557 containerd[1964]: time="2025-09-12T16:53:47.336436373Z" level=info msg="StopContainer for \"e3506697fd20e447900a227aabf3daba682c507bcc19cc97d7cfd1033baee1c7\" returns successfully" Sep 12 16:53:47.337338 containerd[1964]: time="2025-09-12T16:53:47.337288379Z" level=info msg="StopPodSandbox for \"30d753d4fe70401fa0dd1ac27deb9957a3bc710cd56b002448e1f6fcc515c0a4\"" Sep 12 16:53:47.337425 containerd[1964]: time="2025-09-12T16:53:47.337359442Z" level=info msg="Container to stop \"ef65e4f65a781c33b9abf3936a0d8286d4096baf0a486b0b67f07dfcbacd70c1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 16:53:47.337425 containerd[1964]: time="2025-09-12T16:53:47.337387416Z" level=info msg="Container to stop \"974dd20e7b7c147fce4db2238cc6c123504803208a8c12a01b2dccc6455a5e77\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 16:53:47.337425 containerd[1964]: time="2025-09-12T16:53:47.337408619Z" level=info msg="Container to stop \"e3506697fd20e447900a227aabf3daba682c507bcc19cc97d7cfd1033baee1c7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 16:53:47.337928 containerd[1964]: time="2025-09-12T16:53:47.337431802Z" level=info msg="Container to stop \"a00663801ce2e28aef52f98b5bb0068626dc331a27b20fd211771f6c14521044\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 16:53:47.337928 containerd[1964]: time="2025-09-12T16:53:47.337453353Z" level=info msg="Container to stop \"6dad778a090795ed6b036a651e5732e347a2eccf561741b9d0c3caba3f7dce8a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 16:53:47.342390 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-30d753d4fe70401fa0dd1ac27deb9957a3bc710cd56b002448e1f6fcc515c0a4-shm.mount: Deactivated successfully. Sep 12 16:53:47.358475 systemd[1]: cri-containerd-30d753d4fe70401fa0dd1ac27deb9957a3bc710cd56b002448e1f6fcc515c0a4.scope: Deactivated successfully. 
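Note: when systemd tears down the cri-containerd-….scope units here, it logs a resource summary ("Consumed 15.937s CPU time, 125.7M memory peak, …"). A small sketch of turning that summary into numbers; only the fields seen in this log are handled, and treating the K/M/G suffixes as base-1024 multiples is an assumption.

import re

# Parse systemd's "Consumed ..." accounting summary for a unit into numbers.
SUMMARY = re.compile(
    r"(?P<unit>\S+): Consumed (?P<cpu>[\d.]+)s CPU time, "
    r"(?P<mem>[\d.]+)(?P<suffix>[KMG]) memory peak"
)
MULT = {"K": 1024, "M": 1024**2, "G": 1024**3}

def accounting(text: str):
    for m in SUMMARY.finditer(text):
        yield {
            "unit": m["unit"],
            "cpu_seconds": float(m["cpu"]),
            "memory_peak_bytes": float(m["mem"]) * MULT[m["suffix"]],
        }

print(list(accounting("session-25.scope: Consumed 3.114s CPU time, 23.7M memory peak.")))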
Sep 12 16:53:47.368626 containerd[1964]: time="2025-09-12T16:53:47.368479907Z" level=info msg="shim disconnected" id=242e7fbd2ce07db33a25decb0c9c5f78783c7ac77310613706e7bea244591d98 namespace=k8s.io Sep 12 16:53:47.368626 containerd[1964]: time="2025-09-12T16:53:47.368591118Z" level=warning msg="cleaning up after shim disconnected" id=242e7fbd2ce07db33a25decb0c9c5f78783c7ac77310613706e7bea244591d98 namespace=k8s.io Sep 12 16:53:47.369348 containerd[1964]: time="2025-09-12T16:53:47.368613473Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 16:53:47.401269 containerd[1964]: time="2025-09-12T16:53:47.400949474Z" level=info msg="TearDown network for sandbox \"242e7fbd2ce07db33a25decb0c9c5f78783c7ac77310613706e7bea244591d98\" successfully" Sep 12 16:53:47.401269 containerd[1964]: time="2025-09-12T16:53:47.401003345Z" level=info msg="StopPodSandbox for \"242e7fbd2ce07db33a25decb0c9c5f78783c7ac77310613706e7bea244591d98\" returns successfully" Sep 12 16:53:47.428969 containerd[1964]: time="2025-09-12T16:53:47.428866077Z" level=info msg="shim disconnected" id=30d753d4fe70401fa0dd1ac27deb9957a3bc710cd56b002448e1f6fcc515c0a4 namespace=k8s.io Sep 12 16:53:47.428969 containerd[1964]: time="2025-09-12T16:53:47.428963170Z" level=warning msg="cleaning up after shim disconnected" id=30d753d4fe70401fa0dd1ac27deb9957a3bc710cd56b002448e1f6fcc515c0a4 namespace=k8s.io Sep 12 16:53:47.429494 containerd[1964]: time="2025-09-12T16:53:47.428986233Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 16:53:47.458569 containerd[1964]: time="2025-09-12T16:53:47.458351433Z" level=info msg="TearDown network for sandbox \"30d753d4fe70401fa0dd1ac27deb9957a3bc710cd56b002448e1f6fcc515c0a4\" successfully" Sep 12 16:53:47.458569 containerd[1964]: time="2025-09-12T16:53:47.458425125Z" level=info msg="StopPodSandbox for \"30d753d4fe70401fa0dd1ac27deb9957a3bc710cd56b002448e1f6fcc515c0a4\" returns successfully" Sep 12 16:53:47.489292 kubelet[3241]: I0912 16:53:47.489221 3241 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/435b759a-5e77-43bd-b2df-82d84b61f758-cilium-config-path\") pod \"435b759a-5e77-43bd-b2df-82d84b61f758\" (UID: \"435b759a-5e77-43bd-b2df-82d84b61f758\") " Sep 12 16:53:47.489595 kubelet[3241]: I0912 16:53:47.489309 3241 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x726p\" (UniqueName: \"kubernetes.io/projected/435b759a-5e77-43bd-b2df-82d84b61f758-kube-api-access-x726p\") pod \"435b759a-5e77-43bd-b2df-82d84b61f758\" (UID: \"435b759a-5e77-43bd-b2df-82d84b61f758\") " Sep 12 16:53:47.495711 kubelet[3241]: I0912 16:53:47.495633 3241 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/435b759a-5e77-43bd-b2df-82d84b61f758-kube-api-access-x726p" (OuterVolumeSpecName: "kube-api-access-x726p") pod "435b759a-5e77-43bd-b2df-82d84b61f758" (UID: "435b759a-5e77-43bd-b2df-82d84b61f758"). InnerVolumeSpecName "kube-api-access-x726p". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 12 16:53:47.501735 kubelet[3241]: I0912 16:53:47.501660 3241 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/435b759a-5e77-43bd-b2df-82d84b61f758-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "435b759a-5e77-43bd-b2df-82d84b61f758" (UID: "435b759a-5e77-43bd-b2df-82d84b61f758"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 12 16:53:47.534535 kubelet[3241]: I0912 16:53:47.534490 3241 scope.go:117] "RemoveContainer" containerID="c5f7df82322ca72a152844482e96a09d632e0c0b9cf792ade991582edeb8fdae" Sep 12 16:53:47.544272 containerd[1964]: time="2025-09-12T16:53:47.543665397Z" level=info msg="RemoveContainer for \"c5f7df82322ca72a152844482e96a09d632e0c0b9cf792ade991582edeb8fdae\"" Sep 12 16:53:47.557118 systemd[1]: Removed slice kubepods-besteffort-pod435b759a_5e77_43bd_b2df_82d84b61f758.slice - libcontainer container kubepods-besteffort-pod435b759a_5e77_43bd_b2df_82d84b61f758.slice. Sep 12 16:53:47.565413 containerd[1964]: time="2025-09-12T16:53:47.563827853Z" level=info msg="RemoveContainer for \"c5f7df82322ca72a152844482e96a09d632e0c0b9cf792ade991582edeb8fdae\" returns successfully" Sep 12 16:53:47.566130 kubelet[3241]: I0912 16:53:47.564831 3241 scope.go:117] "RemoveContainer" containerID="c5f7df82322ca72a152844482e96a09d632e0c0b9cf792ade991582edeb8fdae" Sep 12 16:53:47.567072 containerd[1964]: time="2025-09-12T16:53:47.566995000Z" level=error msg="ContainerStatus for \"c5f7df82322ca72a152844482e96a09d632e0c0b9cf792ade991582edeb8fdae\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c5f7df82322ca72a152844482e96a09d632e0c0b9cf792ade991582edeb8fdae\": not found" Sep 12 16:53:47.567834 kubelet[3241]: E0912 16:53:47.567663 3241 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c5f7df82322ca72a152844482e96a09d632e0c0b9cf792ade991582edeb8fdae\": not found" containerID="c5f7df82322ca72a152844482e96a09d632e0c0b9cf792ade991582edeb8fdae" Sep 12 16:53:47.567952 kubelet[3241]: I0912 16:53:47.567724 3241 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c5f7df82322ca72a152844482e96a09d632e0c0b9cf792ade991582edeb8fdae"} err="failed to get container status \"c5f7df82322ca72a152844482e96a09d632e0c0b9cf792ade991582edeb8fdae\": rpc error: code = NotFound desc = an error occurred when try to find container \"c5f7df82322ca72a152844482e96a09d632e0c0b9cf792ade991582edeb8fdae\": not found" Sep 12 16:53:47.568035 kubelet[3241]: I0912 16:53:47.567975 3241 scope.go:117] "RemoveContainer" containerID="e3506697fd20e447900a227aabf3daba682c507bcc19cc97d7cfd1033baee1c7" Sep 12 16:53:47.574360 containerd[1964]: time="2025-09-12T16:53:47.574053748Z" level=info msg="RemoveContainer for \"e3506697fd20e447900a227aabf3daba682c507bcc19cc97d7cfd1033baee1c7\"" Sep 12 16:53:47.585741 containerd[1964]: time="2025-09-12T16:53:47.585684284Z" level=info msg="RemoveContainer for \"e3506697fd20e447900a227aabf3daba682c507bcc19cc97d7cfd1033baee1c7\" returns successfully" Sep 12 16:53:47.588062 kubelet[3241]: I0912 16:53:47.588012 3241 scope.go:117] "RemoveContainer" containerID="974dd20e7b7c147fce4db2238cc6c123504803208a8c12a01b2dccc6455a5e77" Sep 12 16:53:47.591694 kubelet[3241]: I0912 16:53:47.589648 3241 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d2050f2f-0f27-469d-8312-57577bc96f50-lib-modules\") pod \"d2050f2f-0f27-469d-8312-57577bc96f50\" (UID: \"d2050f2f-0f27-469d-8312-57577bc96f50\") " Sep 12 16:53:47.591694 kubelet[3241]: I0912 16:53:47.589703 3241 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/d2050f2f-0f27-469d-8312-57577bc96f50-host-proc-sys-net\") pod \"d2050f2f-0f27-469d-8312-57577bc96f50\" (UID: \"d2050f2f-0f27-469d-8312-57577bc96f50\") " Sep 12 16:53:47.591694 kubelet[3241]: I0912 16:53:47.589746 3241 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d2050f2f-0f27-469d-8312-57577bc96f50-cilium-cgroup\") pod \"d2050f2f-0f27-469d-8312-57577bc96f50\" (UID: \"d2050f2f-0f27-469d-8312-57577bc96f50\") " Sep 12 16:53:47.591694 kubelet[3241]: I0912 16:53:47.589767 3241 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d2050f2f-0f27-469d-8312-57577bc96f50-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "d2050f2f-0f27-469d-8312-57577bc96f50" (UID: "d2050f2f-0f27-469d-8312-57577bc96f50"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 16:53:47.591694 kubelet[3241]: I0912 16:53:47.589787 3241 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d2050f2f-0f27-469d-8312-57577bc96f50-cilium-config-path\") pod \"d2050f2f-0f27-469d-8312-57577bc96f50\" (UID: \"d2050f2f-0f27-469d-8312-57577bc96f50\") " Sep 12 16:53:47.591694 kubelet[3241]: I0912 16:53:47.589899 3241 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d2050f2f-0f27-469d-8312-57577bc96f50-bpf-maps\") pod \"d2050f2f-0f27-469d-8312-57577bc96f50\" (UID: \"d2050f2f-0f27-469d-8312-57577bc96f50\") " Sep 12 16:53:47.592169 kubelet[3241]: I0912 16:53:47.589971 3241 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d2050f2f-0f27-469d-8312-57577bc96f50-xtables-lock\") pod \"d2050f2f-0f27-469d-8312-57577bc96f50\" (UID: \"d2050f2f-0f27-469d-8312-57577bc96f50\") " Sep 12 16:53:47.592169 kubelet[3241]: I0912 16:53:47.590027 3241 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d2050f2f-0f27-469d-8312-57577bc96f50-hubble-tls\") pod \"d2050f2f-0f27-469d-8312-57577bc96f50\" (UID: \"d2050f2f-0f27-469d-8312-57577bc96f50\") " Sep 12 16:53:47.592169 kubelet[3241]: I0912 16:53:47.590063 3241 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d2050f2f-0f27-469d-8312-57577bc96f50-hostproc\") pod \"d2050f2f-0f27-469d-8312-57577bc96f50\" (UID: \"d2050f2f-0f27-469d-8312-57577bc96f50\") " Sep 12 16:53:47.592169 kubelet[3241]: I0912 16:53:47.590097 3241 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d2050f2f-0f27-469d-8312-57577bc96f50-cilium-run\") pod \"d2050f2f-0f27-469d-8312-57577bc96f50\" (UID: \"d2050f2f-0f27-469d-8312-57577bc96f50\") " Sep 12 16:53:47.592169 kubelet[3241]: I0912 16:53:47.590135 3241 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d2050f2f-0f27-469d-8312-57577bc96f50-clustermesh-secrets\") pod \"d2050f2f-0f27-469d-8312-57577bc96f50\" (UID: \"d2050f2f-0f27-469d-8312-57577bc96f50\") " Sep 12 16:53:47.592169 kubelet[3241]: I0912 16:53:47.590188 3241 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xs67f\" 
(UniqueName: \"kubernetes.io/projected/d2050f2f-0f27-469d-8312-57577bc96f50-kube-api-access-xs67f\") pod \"d2050f2f-0f27-469d-8312-57577bc96f50\" (UID: \"d2050f2f-0f27-469d-8312-57577bc96f50\") " Sep 12 16:53:47.592484 kubelet[3241]: I0912 16:53:47.590228 3241 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d2050f2f-0f27-469d-8312-57577bc96f50-cni-path\") pod \"d2050f2f-0f27-469d-8312-57577bc96f50\" (UID: \"d2050f2f-0f27-469d-8312-57577bc96f50\") " Sep 12 16:53:47.592484 kubelet[3241]: I0912 16:53:47.590262 3241 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d2050f2f-0f27-469d-8312-57577bc96f50-host-proc-sys-kernel\") pod \"d2050f2f-0f27-469d-8312-57577bc96f50\" (UID: \"d2050f2f-0f27-469d-8312-57577bc96f50\") " Sep 12 16:53:47.592484 kubelet[3241]: I0912 16:53:47.590298 3241 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d2050f2f-0f27-469d-8312-57577bc96f50-etc-cni-netd\") pod \"d2050f2f-0f27-469d-8312-57577bc96f50\" (UID: \"d2050f2f-0f27-469d-8312-57577bc96f50\") " Sep 12 16:53:47.592484 kubelet[3241]: I0912 16:53:47.590363 3241 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/435b759a-5e77-43bd-b2df-82d84b61f758-cilium-config-path\") on node \"ip-172-31-21-42\" DevicePath \"\"" Sep 12 16:53:47.592484 kubelet[3241]: I0912 16:53:47.590387 3241 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d2050f2f-0f27-469d-8312-57577bc96f50-lib-modules\") on node \"ip-172-31-21-42\" DevicePath \"\"" Sep 12 16:53:47.592484 kubelet[3241]: I0912 16:53:47.590409 3241 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-x726p\" (UniqueName: \"kubernetes.io/projected/435b759a-5e77-43bd-b2df-82d84b61f758-kube-api-access-x726p\") on node \"ip-172-31-21-42\" DevicePath \"\"" Sep 12 16:53:47.592780 kubelet[3241]: I0912 16:53:47.590448 3241 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d2050f2f-0f27-469d-8312-57577bc96f50-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "d2050f2f-0f27-469d-8312-57577bc96f50" (UID: "d2050f2f-0f27-469d-8312-57577bc96f50"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 16:53:47.592780 kubelet[3241]: I0912 16:53:47.590489 3241 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d2050f2f-0f27-469d-8312-57577bc96f50-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "d2050f2f-0f27-469d-8312-57577bc96f50" (UID: "d2050f2f-0f27-469d-8312-57577bc96f50"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 16:53:47.592780 kubelet[3241]: I0912 16:53:47.590524 3241 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d2050f2f-0f27-469d-8312-57577bc96f50-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "d2050f2f-0f27-469d-8312-57577bc96f50" (UID: "d2050f2f-0f27-469d-8312-57577bc96f50"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 16:53:47.594101 kubelet[3241]: I0912 16:53:47.593908 3241 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d2050f2f-0f27-469d-8312-57577bc96f50-hostproc" (OuterVolumeSpecName: "hostproc") pod "d2050f2f-0f27-469d-8312-57577bc96f50" (UID: "d2050f2f-0f27-469d-8312-57577bc96f50"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 16:53:47.594101 kubelet[3241]: I0912 16:53:47.593987 3241 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d2050f2f-0f27-469d-8312-57577bc96f50-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "d2050f2f-0f27-469d-8312-57577bc96f50" (UID: "d2050f2f-0f27-469d-8312-57577bc96f50"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 16:53:47.597068 kubelet[3241]: I0912 16:53:47.596936 3241 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d2050f2f-0f27-469d-8312-57577bc96f50-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "d2050f2f-0f27-469d-8312-57577bc96f50" (UID: "d2050f2f-0f27-469d-8312-57577bc96f50"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 16:53:47.597514 kubelet[3241]: I0912 16:53:47.597037 3241 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d2050f2f-0f27-469d-8312-57577bc96f50-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "d2050f2f-0f27-469d-8312-57577bc96f50" (UID: "d2050f2f-0f27-469d-8312-57577bc96f50"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 16:53:47.598574 kubelet[3241]: I0912 16:53:47.598090 3241 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d2050f2f-0f27-469d-8312-57577bc96f50-cni-path" (OuterVolumeSpecName: "cni-path") pod "d2050f2f-0f27-469d-8312-57577bc96f50" (UID: "d2050f2f-0f27-469d-8312-57577bc96f50"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 16:53:47.601754 kubelet[3241]: I0912 16:53:47.601679 3241 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d2050f2f-0f27-469d-8312-57577bc96f50-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "d2050f2f-0f27-469d-8312-57577bc96f50" (UID: "d2050f2f-0f27-469d-8312-57577bc96f50"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 16:53:47.603604 containerd[1964]: time="2025-09-12T16:53:47.603194507Z" level=info msg="RemoveContainer for \"974dd20e7b7c147fce4db2238cc6c123504803208a8c12a01b2dccc6455a5e77\"" Sep 12 16:53:47.604307 kubelet[3241]: I0912 16:53:47.604263 3241 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2050f2f-0f27-469d-8312-57577bc96f50-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "d2050f2f-0f27-469d-8312-57577bc96f50" (UID: "d2050f2f-0f27-469d-8312-57577bc96f50"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 12 16:53:47.605144 kubelet[3241]: I0912 16:53:47.605100 3241 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2050f2f-0f27-469d-8312-57577bc96f50-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "d2050f2f-0f27-469d-8312-57577bc96f50" (UID: "d2050f2f-0f27-469d-8312-57577bc96f50"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 12 16:53:47.609166 kubelet[3241]: I0912 16:53:47.609070 3241 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2050f2f-0f27-469d-8312-57577bc96f50-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "d2050f2f-0f27-469d-8312-57577bc96f50" (UID: "d2050f2f-0f27-469d-8312-57577bc96f50"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 12 16:53:47.610496 kubelet[3241]: I0912 16:53:47.610431 3241 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2050f2f-0f27-469d-8312-57577bc96f50-kube-api-access-xs67f" (OuterVolumeSpecName: "kube-api-access-xs67f") pod "d2050f2f-0f27-469d-8312-57577bc96f50" (UID: "d2050f2f-0f27-469d-8312-57577bc96f50"). InnerVolumeSpecName "kube-api-access-xs67f". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 12 16:53:47.611876 containerd[1964]: time="2025-09-12T16:53:47.611743956Z" level=info msg="RemoveContainer for \"974dd20e7b7c147fce4db2238cc6c123504803208a8c12a01b2dccc6455a5e77\" returns successfully" Sep 12 16:53:47.612413 kubelet[3241]: I0912 16:53:47.612153 3241 scope.go:117] "RemoveContainer" containerID="ef65e4f65a781c33b9abf3936a0d8286d4096baf0a486b0b67f07dfcbacd70c1" Sep 12 16:53:47.614789 containerd[1964]: time="2025-09-12T16:53:47.614737892Z" level=info msg="RemoveContainer for \"ef65e4f65a781c33b9abf3936a0d8286d4096baf0a486b0b67f07dfcbacd70c1\"" Sep 12 16:53:47.621406 containerd[1964]: time="2025-09-12T16:53:47.621333940Z" level=info msg="RemoveContainer for \"ef65e4f65a781c33b9abf3936a0d8286d4096baf0a486b0b67f07dfcbacd70c1\" returns successfully" Sep 12 16:53:47.622174 kubelet[3241]: I0912 16:53:47.621826 3241 scope.go:117] "RemoveContainer" containerID="6dad778a090795ed6b036a651e5732e347a2eccf561741b9d0c3caba3f7dce8a" Sep 12 16:53:47.624176 containerd[1964]: time="2025-09-12T16:53:47.624122994Z" level=info msg="RemoveContainer for \"6dad778a090795ed6b036a651e5732e347a2eccf561741b9d0c3caba3f7dce8a\"" Sep 12 16:53:47.630468 containerd[1964]: time="2025-09-12T16:53:47.630395133Z" level=info msg="RemoveContainer for \"6dad778a090795ed6b036a651e5732e347a2eccf561741b9d0c3caba3f7dce8a\" returns successfully" Sep 12 16:53:47.631550 kubelet[3241]: I0912 16:53:47.631028 3241 scope.go:117] "RemoveContainer" containerID="a00663801ce2e28aef52f98b5bb0068626dc331a27b20fd211771f6c14521044" Sep 12 16:53:47.633326 containerd[1964]: time="2025-09-12T16:53:47.633267208Z" level=info msg="RemoveContainer for \"a00663801ce2e28aef52f98b5bb0068626dc331a27b20fd211771f6c14521044\"" Sep 12 16:53:47.639313 containerd[1964]: time="2025-09-12T16:53:47.639257506Z" level=info msg="RemoveContainer for \"a00663801ce2e28aef52f98b5bb0068626dc331a27b20fd211771f6c14521044\" returns successfully" Sep 12 16:53:47.640023 kubelet[3241]: I0912 16:53:47.639873 3241 scope.go:117] "RemoveContainer" containerID="e3506697fd20e447900a227aabf3daba682c507bcc19cc97d7cfd1033baee1c7" Sep 12 16:53:47.640568 containerd[1964]: 
time="2025-09-12T16:53:47.640446688Z" level=error msg="ContainerStatus for \"e3506697fd20e447900a227aabf3daba682c507bcc19cc97d7cfd1033baee1c7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e3506697fd20e447900a227aabf3daba682c507bcc19cc97d7cfd1033baee1c7\": not found" Sep 12 16:53:47.640903 kubelet[3241]: E0912 16:53:47.640654 3241 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e3506697fd20e447900a227aabf3daba682c507bcc19cc97d7cfd1033baee1c7\": not found" containerID="e3506697fd20e447900a227aabf3daba682c507bcc19cc97d7cfd1033baee1c7" Sep 12 16:53:47.640903 kubelet[3241]: I0912 16:53:47.640697 3241 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e3506697fd20e447900a227aabf3daba682c507bcc19cc97d7cfd1033baee1c7"} err="failed to get container status \"e3506697fd20e447900a227aabf3daba682c507bcc19cc97d7cfd1033baee1c7\": rpc error: code = NotFound desc = an error occurred when try to find container \"e3506697fd20e447900a227aabf3daba682c507bcc19cc97d7cfd1033baee1c7\": not found" Sep 12 16:53:47.640903 kubelet[3241]: I0912 16:53:47.640736 3241 scope.go:117] "RemoveContainer" containerID="974dd20e7b7c147fce4db2238cc6c123504803208a8c12a01b2dccc6455a5e77" Sep 12 16:53:47.641490 containerd[1964]: time="2025-09-12T16:53:47.641435250Z" level=error msg="ContainerStatus for \"974dd20e7b7c147fce4db2238cc6c123504803208a8c12a01b2dccc6455a5e77\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"974dd20e7b7c147fce4db2238cc6c123504803208a8c12a01b2dccc6455a5e77\": not found" Sep 12 16:53:47.641901 kubelet[3241]: E0912 16:53:47.641859 3241 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"974dd20e7b7c147fce4db2238cc6c123504803208a8c12a01b2dccc6455a5e77\": not found" containerID="974dd20e7b7c147fce4db2238cc6c123504803208a8c12a01b2dccc6455a5e77" Sep 12 16:53:47.641998 kubelet[3241]: I0912 16:53:47.641912 3241 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"974dd20e7b7c147fce4db2238cc6c123504803208a8c12a01b2dccc6455a5e77"} err="failed to get container status \"974dd20e7b7c147fce4db2238cc6c123504803208a8c12a01b2dccc6455a5e77\": rpc error: code = NotFound desc = an error occurred when try to find container \"974dd20e7b7c147fce4db2238cc6c123504803208a8c12a01b2dccc6455a5e77\": not found" Sep 12 16:53:47.641998 kubelet[3241]: I0912 16:53:47.641949 3241 scope.go:117] "RemoveContainer" containerID="ef65e4f65a781c33b9abf3936a0d8286d4096baf0a486b0b67f07dfcbacd70c1" Sep 12 16:53:47.642446 containerd[1964]: time="2025-09-12T16:53:47.642327212Z" level=error msg="ContainerStatus for \"ef65e4f65a781c33b9abf3936a0d8286d4096baf0a486b0b67f07dfcbacd70c1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ef65e4f65a781c33b9abf3936a0d8286d4096baf0a486b0b67f07dfcbacd70c1\": not found" Sep 12 16:53:47.642843 kubelet[3241]: E0912 16:53:47.642628 3241 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ef65e4f65a781c33b9abf3936a0d8286d4096baf0a486b0b67f07dfcbacd70c1\": not found" containerID="ef65e4f65a781c33b9abf3936a0d8286d4096baf0a486b0b67f07dfcbacd70c1" Sep 12 16:53:47.642843 kubelet[3241]: I0912 16:53:47.642671 3241 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ef65e4f65a781c33b9abf3936a0d8286d4096baf0a486b0b67f07dfcbacd70c1"} err="failed to get container status \"ef65e4f65a781c33b9abf3936a0d8286d4096baf0a486b0b67f07dfcbacd70c1\": rpc error: code = NotFound desc = an error occurred when try to find container \"ef65e4f65a781c33b9abf3936a0d8286d4096baf0a486b0b67f07dfcbacd70c1\": not found" Sep 12 16:53:47.642843 kubelet[3241]: I0912 16:53:47.642702 3241 scope.go:117] "RemoveContainer" containerID="6dad778a090795ed6b036a651e5732e347a2eccf561741b9d0c3caba3f7dce8a" Sep 12 16:53:47.643762 containerd[1964]: time="2025-09-12T16:53:47.643274665Z" level=error msg="ContainerStatus for \"6dad778a090795ed6b036a651e5732e347a2eccf561741b9d0c3caba3f7dce8a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6dad778a090795ed6b036a651e5732e347a2eccf561741b9d0c3caba3f7dce8a\": not found" Sep 12 16:53:47.643933 kubelet[3241]: E0912 16:53:47.643581 3241 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6dad778a090795ed6b036a651e5732e347a2eccf561741b9d0c3caba3f7dce8a\": not found" containerID="6dad778a090795ed6b036a651e5732e347a2eccf561741b9d0c3caba3f7dce8a" Sep 12 16:53:47.643933 kubelet[3241]: I0912 16:53:47.643622 3241 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6dad778a090795ed6b036a651e5732e347a2eccf561741b9d0c3caba3f7dce8a"} err="failed to get container status \"6dad778a090795ed6b036a651e5732e347a2eccf561741b9d0c3caba3f7dce8a\": rpc error: code = NotFound desc = an error occurred when try to find container \"6dad778a090795ed6b036a651e5732e347a2eccf561741b9d0c3caba3f7dce8a\": not found" Sep 12 16:53:47.643933 kubelet[3241]: I0912 16:53:47.643651 3241 scope.go:117] "RemoveContainer" containerID="a00663801ce2e28aef52f98b5bb0068626dc331a27b20fd211771f6c14521044" Sep 12 16:53:47.644138 containerd[1964]: time="2025-09-12T16:53:47.644077242Z" level=error msg="ContainerStatus for \"a00663801ce2e28aef52f98b5bb0068626dc331a27b20fd211771f6c14521044\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a00663801ce2e28aef52f98b5bb0068626dc331a27b20fd211771f6c14521044\": not found" Sep 12 16:53:47.644373 kubelet[3241]: E0912 16:53:47.644339 3241 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a00663801ce2e28aef52f98b5bb0068626dc331a27b20fd211771f6c14521044\": not found" containerID="a00663801ce2e28aef52f98b5bb0068626dc331a27b20fd211771f6c14521044" Sep 12 16:53:47.644609 kubelet[3241]: I0912 16:53:47.644525 3241 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a00663801ce2e28aef52f98b5bb0068626dc331a27b20fd211771f6c14521044"} err="failed to get container status \"a00663801ce2e28aef52f98b5bb0068626dc331a27b20fd211771f6c14521044\": rpc error: code = NotFound desc = an error occurred when try to find container \"a00663801ce2e28aef52f98b5bb0068626dc331a27b20fd211771f6c14521044\": not found" Sep 12 16:53:47.691208 kubelet[3241]: I0912 16:53:47.690891 3241 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d2050f2f-0f27-469d-8312-57577bc96f50-cilium-config-path\") on node \"ip-172-31-21-42\" DevicePath \"\"" Sep 12 16:53:47.691208 
kubelet[3241]: I0912 16:53:47.690939 3241 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d2050f2f-0f27-469d-8312-57577bc96f50-bpf-maps\") on node \"ip-172-31-21-42\" DevicePath \"\"" Sep 12 16:53:47.691208 kubelet[3241]: I0912 16:53:47.690960 3241 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d2050f2f-0f27-469d-8312-57577bc96f50-xtables-lock\") on node \"ip-172-31-21-42\" DevicePath \"\"" Sep 12 16:53:47.691208 kubelet[3241]: I0912 16:53:47.690982 3241 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d2050f2f-0f27-469d-8312-57577bc96f50-hubble-tls\") on node \"ip-172-31-21-42\" DevicePath \"\"" Sep 12 16:53:47.691208 kubelet[3241]: I0912 16:53:47.691006 3241 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d2050f2f-0f27-469d-8312-57577bc96f50-hostproc\") on node \"ip-172-31-21-42\" DevicePath \"\"" Sep 12 16:53:47.691208 kubelet[3241]: I0912 16:53:47.691026 3241 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d2050f2f-0f27-469d-8312-57577bc96f50-cilium-run\") on node \"ip-172-31-21-42\" DevicePath \"\"" Sep 12 16:53:47.691208 kubelet[3241]: I0912 16:53:47.691045 3241 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d2050f2f-0f27-469d-8312-57577bc96f50-clustermesh-secrets\") on node \"ip-172-31-21-42\" DevicePath \"\"" Sep 12 16:53:47.691208 kubelet[3241]: I0912 16:53:47.691065 3241 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xs67f\" (UniqueName: \"kubernetes.io/projected/d2050f2f-0f27-469d-8312-57577bc96f50-kube-api-access-xs67f\") on node \"ip-172-31-21-42\" DevicePath \"\"" Sep 12 16:53:47.691704 kubelet[3241]: I0912 16:53:47.691087 3241 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d2050f2f-0f27-469d-8312-57577bc96f50-cni-path\") on node \"ip-172-31-21-42\" DevicePath \"\"" Sep 12 16:53:47.691704 kubelet[3241]: I0912 16:53:47.691110 3241 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d2050f2f-0f27-469d-8312-57577bc96f50-host-proc-sys-kernel\") on node \"ip-172-31-21-42\" DevicePath \"\"" Sep 12 16:53:47.691704 kubelet[3241]: I0912 16:53:47.691136 3241 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d2050f2f-0f27-469d-8312-57577bc96f50-etc-cni-netd\") on node \"ip-172-31-21-42\" DevicePath \"\"" Sep 12 16:53:47.691704 kubelet[3241]: I0912 16:53:47.691156 3241 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d2050f2f-0f27-469d-8312-57577bc96f50-host-proc-sys-net\") on node \"ip-172-31-21-42\" DevicePath \"\"" Sep 12 16:53:47.691704 kubelet[3241]: I0912 16:53:47.691176 3241 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d2050f2f-0f27-469d-8312-57577bc96f50-cilium-cgroup\") on node \"ip-172-31-21-42\" DevicePath \"\"" Sep 12 16:53:47.858175 systemd[1]: Removed slice kubepods-burstable-podd2050f2f_0f27_469d_8312_57577bc96f50.slice - libcontainer container kubepods-burstable-podd2050f2f_0f27_469d_8312_57577bc96f50.slice. 
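Note: the reconciler lines above come in pairs: "UnmountVolume started for volume …" followed, once TearDown succeeds, by "Volume detached for volume …". A hypothetical sanity check that every unmount that was started also reports a detach, using the \"-escaped quoting exactly as it appears in this journal text; grouping by pod UID is omitted for brevity.

import re
import sys

# Find any volume with an "UnmountVolume started" event but no matching
# "Volume detached" event in the captured journal text.
STARTED = re.compile(r'UnmountVolume started for volume \\?"([^"\\]+)\\?"')
DETACHED = re.compile(r'Volume detached for volume \\?"([^"\\]+)\\?"')

def undetached(text: str):
    return sorted(set(STARTED.findall(text)) - set(DETACHED.findall(text)))

if __name__ == "__main__":
    leftover = undetached(sys.stdin.read())
    print("volumes never reported detached:", leftover or "none")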
Sep 12 16:53:47.859202 systemd[1]: kubepods-burstable-podd2050f2f_0f27_469d_8312_57577bc96f50.slice: Consumed 16.103s CPU time, 126.1M memory peak, 128K read from disk, 12.9M written to disk. Sep 12 16:53:48.043212 kubelet[3241]: E0912 16:53:48.043152 3241 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-tdvmr" podUID="37b29caa-9f37-46e7-bb48-d1a5cd7e3a98" Sep 12 16:53:48.049958 kubelet[3241]: I0912 16:53:48.049616 3241 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="435b759a-5e77-43bd-b2df-82d84b61f758" path="/var/lib/kubelet/pods/435b759a-5e77-43bd-b2df-82d84b61f758/volumes" Sep 12 16:53:48.051198 kubelet[3241]: I0912 16:53:48.051157 3241 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2050f2f-0f27-469d-8312-57577bc96f50" path="/var/lib/kubelet/pods/d2050f2f-0f27-469d-8312-57577bc96f50/volumes" Sep 12 16:53:48.091185 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-242e7fbd2ce07db33a25decb0c9c5f78783c7ac77310613706e7bea244591d98-rootfs.mount: Deactivated successfully. Sep 12 16:53:48.091371 systemd[1]: var-lib-kubelet-pods-435b759a\x2d5e77\x2d43bd\x2db2df\x2d82d84b61f758-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dx726p.mount: Deactivated successfully. Sep 12 16:53:48.091512 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-30d753d4fe70401fa0dd1ac27deb9957a3bc710cd56b002448e1f6fcc515c0a4-rootfs.mount: Deactivated successfully. Sep 12 16:53:48.091664 systemd[1]: var-lib-kubelet-pods-d2050f2f\x2d0f27\x2d469d\x2d8312\x2d57577bc96f50-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxs67f.mount: Deactivated successfully. Sep 12 16:53:48.091847 systemd[1]: var-lib-kubelet-pods-d2050f2f\x2d0f27\x2d469d\x2d8312\x2d57577bc96f50-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 12 16:53:48.091994 systemd[1]: var-lib-kubelet-pods-d2050f2f\x2d0f27\x2d469d\x2d8312\x2d57577bc96f50-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 12 16:53:48.989929 sshd[5049]: Connection closed by 139.178.89.65 port 50454 Sep 12 16:53:48.991162 sshd-session[5047]: pam_unix(sshd:session): session closed for user core Sep 12 16:53:48.998166 systemd[1]: sshd@24-172.31.21.42:22-139.178.89.65:50454.service: Deactivated successfully. Sep 12 16:53:49.003674 systemd[1]: session-25.scope: Deactivated successfully. Sep 12 16:53:49.004955 systemd[1]: session-25.scope: Consumed 3.114s CPU time, 23.7M memory peak. Sep 12 16:53:49.006417 systemd-logind[1950]: Session 25 logged out. Waiting for processes to exit. Sep 12 16:53:49.008562 systemd-logind[1950]: Removed session 25. Sep 12 16:53:49.030387 systemd[1]: Started sshd@25-172.31.21.42:22-139.178.89.65:50464.service - OpenSSH per-connection server daemon (139.178.89.65:50464). Sep 12 16:53:49.228059 sshd[5207]: Accepted publickey for core from 139.178.89.65 port 50464 ssh2: RSA SHA256:UtlJgM7ARb7wxMu1nBhWJ04sNPurn7zs7fZADhw2VQM Sep 12 16:53:49.231596 sshd-session[5207]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 16:53:49.239472 systemd-logind[1950]: New session 26 of user core. Sep 12 16:53:49.251102 systemd[1]: Started session-26.scope - Session 26 of User core. 
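Note: the mount units being cleaned up above use systemd's escaped unit names (\x2d for '-', \x7e for '~', '-' standing in for '/'). The systemd-escape tool can reverse that mapping; the sketch below is a rough Python equivalent for recovering the kubelet volume paths, based only on the escaping visible in these unit names.

import re

# Reverse systemd's unit-name escaping to recover the mounted path, e.g.
# 'var-lib-kubelet-pods-435b759a\x2d5e77...x2dx726p.mount' back to
# '/var/lib/kubelet/pods/435b759a-5e77-.../kube-api-access-x726p'.
def unescape_mount_unit(unit: str) -> str:
    name = unit.removesuffix(".mount")
    # '/' was encoded as '-'; restore it before decoding \xHH so the literal
    # dashes recovered from '\x2d' are not touched.
    path = "/" + name.replace("-", "/")
    return re.sub(r"\\x([0-9a-fA-F]{2})", lambda m: chr(int(m.group(1), 16)), path)

print(unescape_mount_unit(
    r"var-lib-kubelet-pods-435b759a\x2d5e77\x2d43bd\x2db2df\x2d82d84b61f758"
    r"-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dx726p.mount"
))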
Sep 12 16:53:49.862221 ntpd[1945]: Deleting interface #12 lxc_health, fe80::4b4:2cff:fe1b:b4cc%8#123, interface stats: received=0, sent=0, dropped=0, active_time=81 secs Sep 12 16:53:49.862780 ntpd[1945]: 12 Sep 16:53:49 ntpd[1945]: Deleting interface #12 lxc_health, fe80::4b4:2cff:fe1b:b4cc%8#123, interface stats: received=0, sent=0, dropped=0, active_time=81 secs Sep 12 16:53:50.045144 kubelet[3241]: E0912 16:53:50.045078 3241 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-tdvmr" podUID="37b29caa-9f37-46e7-bb48-d1a5cd7e3a98" Sep 12 16:53:50.934724 sshd[5209]: Connection closed by 139.178.89.65 port 50464 Sep 12 16:53:50.939134 sshd-session[5207]: pam_unix(sshd:session): session closed for user core Sep 12 16:53:50.948450 systemd[1]: sshd@25-172.31.21.42:22-139.178.89.65:50464.service: Deactivated successfully. Sep 12 16:53:50.959390 systemd[1]: session-26.scope: Deactivated successfully. Sep 12 16:53:50.961666 systemd[1]: session-26.scope: Consumed 1.479s CPU time, 23.6M memory peak. Sep 12 16:53:50.965852 systemd-logind[1950]: Session 26 logged out. Waiting for processes to exit. Sep 12 16:53:50.973206 kubelet[3241]: I0912 16:53:50.973140 3241 memory_manager.go:355] "RemoveStaleState removing state" podUID="d2050f2f-0f27-469d-8312-57577bc96f50" containerName="cilium-agent" Sep 12 16:53:50.973206 kubelet[3241]: I0912 16:53:50.973191 3241 memory_manager.go:355] "RemoveStaleState removing state" podUID="435b759a-5e77-43bd-b2df-82d84b61f758" containerName="cilium-operator" Sep 12 16:53:50.995071 systemd-logind[1950]: Removed session 26. Sep 12 16:53:51.002574 systemd[1]: Started sshd@26-172.31.21.42:22-139.178.89.65:52844.service - OpenSSH per-connection server daemon (139.178.89.65:52844). Sep 12 16:53:51.025539 systemd[1]: Created slice kubepods-burstable-podb74110c1_6258_4870_93cf_d4e267d0d82f.slice - libcontainer container kubepods-burstable-podb74110c1_6258_4870_93cf_d4e267d0d82f.slice. 
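Note: the slice names in these lines encode the pod's QoS class and UID: kubepods-burstable-podb74110c1_6258_4870_93cf_d4e267d0d82f.slice corresponds to pod UID b74110c1-6258-4870-93cf-d4e267d0d82f, with the UID's dashes flattened to underscores. A small sketch of converting between the two forms, treating the convention as an observation of this host rather than a stable interface (only the two QoS classes seen in this log are covered):

import re

SLICE = re.compile(r"kubepods-(?P<qos>besteffort|burstable)-pod(?P<uid>[0-9a-f_]+)\.slice")

def slice_to_pod_uid(unit: str):
    # Recover (qos_class, pod_uid) from a kubepods slice name, or None.
    m = SLICE.search(unit)
    if not m:
        return None
    return m["qos"], m["uid"].replace("_", "-")

def pod_uid_to_slice(uid: str, qos: str = "burstable") -> str:
    # Build the slice name this host would use for a pod UID.
    return f"kubepods-{qos}-pod{uid.replace('-', '_')}.slice"

print(slice_to_pod_uid("kubepods-burstable-podb74110c1_6258_4870_93cf_d4e267d0d82f.slice"))
print(pod_uid_to_slice("d2050f2f-0f27-469d-8312-57577bc96f50"))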
Sep 12 16:53:51.113544 kubelet[3241]: I0912 16:53:51.113434 3241 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b74110c1-6258-4870-93cf-d4e267d0d82f-etc-cni-netd\") pod \"cilium-lnvxn\" (UID: \"b74110c1-6258-4870-93cf-d4e267d0d82f\") " pod="kube-system/cilium-lnvxn" Sep 12 16:53:51.113544 kubelet[3241]: I0912 16:53:51.113510 3241 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b74110c1-6258-4870-93cf-d4e267d0d82f-cni-path\") pod \"cilium-lnvxn\" (UID: \"b74110c1-6258-4870-93cf-d4e267d0d82f\") " pod="kube-system/cilium-lnvxn" Sep 12 16:53:51.113544 kubelet[3241]: I0912 16:53:51.113550 3241 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b74110c1-6258-4870-93cf-d4e267d0d82f-cilium-config-path\") pod \"cilium-lnvxn\" (UID: \"b74110c1-6258-4870-93cf-d4e267d0d82f\") " pod="kube-system/cilium-lnvxn" Sep 12 16:53:51.114461 kubelet[3241]: I0912 16:53:51.113591 3241 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b74110c1-6258-4870-93cf-d4e267d0d82f-hostproc\") pod \"cilium-lnvxn\" (UID: \"b74110c1-6258-4870-93cf-d4e267d0d82f\") " pod="kube-system/cilium-lnvxn" Sep 12 16:53:51.114461 kubelet[3241]: I0912 16:53:51.113627 3241 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b74110c1-6258-4870-93cf-d4e267d0d82f-clustermesh-secrets\") pod \"cilium-lnvxn\" (UID: \"b74110c1-6258-4870-93cf-d4e267d0d82f\") " pod="kube-system/cilium-lnvxn" Sep 12 16:53:51.114461 kubelet[3241]: I0912 16:53:51.113662 3241 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b74110c1-6258-4870-93cf-d4e267d0d82f-host-proc-sys-net\") pod \"cilium-lnvxn\" (UID: \"b74110c1-6258-4870-93cf-d4e267d0d82f\") " pod="kube-system/cilium-lnvxn" Sep 12 16:53:51.114461 kubelet[3241]: I0912 16:53:51.113703 3241 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b74110c1-6258-4870-93cf-d4e267d0d82f-host-proc-sys-kernel\") pod \"cilium-lnvxn\" (UID: \"b74110c1-6258-4870-93cf-d4e267d0d82f\") " pod="kube-system/cilium-lnvxn" Sep 12 16:53:51.114461 kubelet[3241]: I0912 16:53:51.113738 3241 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b74110c1-6258-4870-93cf-d4e267d0d82f-bpf-maps\") pod \"cilium-lnvxn\" (UID: \"b74110c1-6258-4870-93cf-d4e267d0d82f\") " pod="kube-system/cilium-lnvxn" Sep 12 16:53:51.114461 kubelet[3241]: I0912 16:53:51.113775 3241 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b74110c1-6258-4870-93cf-d4e267d0d82f-lib-modules\") pod \"cilium-lnvxn\" (UID: \"b74110c1-6258-4870-93cf-d4e267d0d82f\") " pod="kube-system/cilium-lnvxn" Sep 12 16:53:51.114757 kubelet[3241]: I0912 16:53:51.113832 3241 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/b74110c1-6258-4870-93cf-d4e267d0d82f-cilium-run\") pod \"cilium-lnvxn\" (UID: \"b74110c1-6258-4870-93cf-d4e267d0d82f\") " pod="kube-system/cilium-lnvxn" Sep 12 16:53:51.114757 kubelet[3241]: I0912 16:53:51.113873 3241 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b74110c1-6258-4870-93cf-d4e267d0d82f-cilium-cgroup\") pod \"cilium-lnvxn\" (UID: \"b74110c1-6258-4870-93cf-d4e267d0d82f\") " pod="kube-system/cilium-lnvxn" Sep 12 16:53:51.114757 kubelet[3241]: I0912 16:53:51.113912 3241 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbj2v\" (UniqueName: \"kubernetes.io/projected/b74110c1-6258-4870-93cf-d4e267d0d82f-kube-api-access-kbj2v\") pod \"cilium-lnvxn\" (UID: \"b74110c1-6258-4870-93cf-d4e267d0d82f\") " pod="kube-system/cilium-lnvxn" Sep 12 16:53:51.114757 kubelet[3241]: I0912 16:53:51.113950 3241 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b74110c1-6258-4870-93cf-d4e267d0d82f-xtables-lock\") pod \"cilium-lnvxn\" (UID: \"b74110c1-6258-4870-93cf-d4e267d0d82f\") " pod="kube-system/cilium-lnvxn" Sep 12 16:53:51.114757 kubelet[3241]: I0912 16:53:51.113984 3241 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b74110c1-6258-4870-93cf-d4e267d0d82f-cilium-ipsec-secrets\") pod \"cilium-lnvxn\" (UID: \"b74110c1-6258-4870-93cf-d4e267d0d82f\") " pod="kube-system/cilium-lnvxn" Sep 12 16:53:51.114757 kubelet[3241]: I0912 16:53:51.114026 3241 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b74110c1-6258-4870-93cf-d4e267d0d82f-hubble-tls\") pod \"cilium-lnvxn\" (UID: \"b74110c1-6258-4870-93cf-d4e267d0d82f\") " pod="kube-system/cilium-lnvxn" Sep 12 16:53:51.238098 sshd[5219]: Accepted publickey for core from 139.178.89.65 port 52844 ssh2: RSA SHA256:UtlJgM7ARb7wxMu1nBhWJ04sNPurn7zs7fZADhw2VQM Sep 12 16:53:51.241043 sshd-session[5219]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 16:53:51.286084 systemd-logind[1950]: New session 27 of user core. Sep 12 16:53:51.292127 systemd[1]: Started session-27.scope - Session 27 of User core. Sep 12 16:53:51.339735 containerd[1964]: time="2025-09-12T16:53:51.339679594Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lnvxn,Uid:b74110c1-6258-4870-93cf-d4e267d0d82f,Namespace:kube-system,Attempt:0,}" Sep 12 16:53:51.398316 containerd[1964]: time="2025-09-12T16:53:51.398161961Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 16:53:51.398894 containerd[1964]: time="2025-09-12T16:53:51.398522693Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 16:53:51.399247 containerd[1964]: time="2025-09-12T16:53:51.399153717Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 16:53:51.399680 containerd[1964]: time="2025-09-12T16:53:51.399605130Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 16:53:51.416076 sshd[5227]: Connection closed by 139.178.89.65 port 52844 Sep 12 16:53:51.418228 sshd-session[5219]: pam_unix(sshd:session): session closed for user core Sep 12 16:53:51.425453 systemd[1]: sshd@26-172.31.21.42:22-139.178.89.65:52844.service: Deactivated successfully. Sep 12 16:53:51.431828 systemd[1]: session-27.scope: Deactivated successfully. Sep 12 16:53:51.435312 systemd-logind[1950]: Session 27 logged out. Waiting for processes to exit. Sep 12 16:53:51.457115 systemd[1]: Started cri-containerd-92ca9b9b5aa102836119444872070961ae3ac99a943548534248e1e07f849340.scope - libcontainer container 92ca9b9b5aa102836119444872070961ae3ac99a943548534248e1e07f849340. Sep 12 16:53:51.459751 systemd[1]: Started sshd@27-172.31.21.42:22-139.178.89.65:52852.service - OpenSSH per-connection server daemon (139.178.89.65:52852). Sep 12 16:53:51.463695 systemd-logind[1950]: Removed session 27. Sep 12 16:53:51.516369 containerd[1964]: time="2025-09-12T16:53:51.516180353Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lnvxn,Uid:b74110c1-6258-4870-93cf-d4e267d0d82f,Namespace:kube-system,Attempt:0,} returns sandbox id \"92ca9b9b5aa102836119444872070961ae3ac99a943548534248e1e07f849340\"" Sep 12 16:53:51.526772 containerd[1964]: time="2025-09-12T16:53:51.526586566Z" level=info msg="CreateContainer within sandbox \"92ca9b9b5aa102836119444872070961ae3ac99a943548534248e1e07f849340\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 12 16:53:51.550831 containerd[1964]: time="2025-09-12T16:53:51.550756109Z" level=info msg="CreateContainer within sandbox \"92ca9b9b5aa102836119444872070961ae3ac99a943548534248e1e07f849340\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"1fe4c8d386f1076e68b63e45cc6a338443e373a62f7fa2fbf1cc7b8d278a3792\"" Sep 12 16:53:51.553002 containerd[1964]: time="2025-09-12T16:53:51.552780200Z" level=info msg="StartContainer for \"1fe4c8d386f1076e68b63e45cc6a338443e373a62f7fa2fbf1cc7b8d278a3792\"" Sep 12 16:53:51.605141 systemd[1]: Started cri-containerd-1fe4c8d386f1076e68b63e45cc6a338443e373a62f7fa2fbf1cc7b8d278a3792.scope - libcontainer container 1fe4c8d386f1076e68b63e45cc6a338443e373a62f7fa2fbf1cc7b8d278a3792. Sep 12 16:53:51.659348 containerd[1964]: time="2025-09-12T16:53:51.659261379Z" level=info msg="StartContainer for \"1fe4c8d386f1076e68b63e45cc6a338443e373a62f7fa2fbf1cc7b8d278a3792\" returns successfully" Sep 12 16:53:51.678224 systemd[1]: cri-containerd-1fe4c8d386f1076e68b63e45cc6a338443e373a62f7fa2fbf1cc7b8d278a3792.scope: Deactivated successfully. Sep 12 16:53:51.686852 sshd[5258]: Accepted publickey for core from 139.178.89.65 port 52852 ssh2: RSA SHA256:UtlJgM7ARb7wxMu1nBhWJ04sNPurn7zs7fZADhw2VQM Sep 12 16:53:51.690711 sshd-session[5258]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 16:53:51.705269 systemd-logind[1950]: New session 28 of user core. Sep 12 16:53:51.712208 systemd[1]: Started session-28.scope - Session 28 of User core. 
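Aside: every record in this capture follows the same `Mon DD HH:MM:SS.micros unit[pid]: message` prefix (kernel lines carry no pid), which makes the stream easy to post-process. A small parsing sketch under that assumption; the field names are my own:

```python
import re
from typing import Iterator, NamedTuple, Optional

# Matches e.g. "Sep 12 16:53:51.339735 containerd[1964]: time=..." or "Sep 12 16:50:52.276041 kernel: ..."
LINE_RE = re.compile(
    r"^(?P<ts>\w{3} \d{2} \d{2}:\d{2}:\d{2}\.\d+) "
    r"(?P<unit>[\w@.()-]+)(?:\[(?P<pid>\d+)\])?: "
    r"(?P<msg>.*)$"
)

class Record(NamedTuple):
    ts: str
    unit: str
    pid: Optional[int]
    msg: str

def parse(lines) -> Iterator[Record]:
    """Yield structured records; lines without the prefix (e.g. a record
    wrapped onto a continuation line) are simply skipped."""
    for line in lines:
        m = LINE_RE.match(line)
        if m:
            yield Record(m["ts"], m["unit"],
                         int(m["pid"]) if m["pid"] else None, m["msg"])
```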
Sep 12 16:53:51.751276 containerd[1964]: time="2025-09-12T16:53:51.751159590Z" level=info msg="shim disconnected" id=1fe4c8d386f1076e68b63e45cc6a338443e373a62f7fa2fbf1cc7b8d278a3792 namespace=k8s.io Sep 12 16:53:51.751532 containerd[1964]: time="2025-09-12T16:53:51.751284044Z" level=warning msg="cleaning up after shim disconnected" id=1fe4c8d386f1076e68b63e45cc6a338443e373a62f7fa2fbf1cc7b8d278a3792 namespace=k8s.io Sep 12 16:53:51.751532 containerd[1964]: time="2025-09-12T16:53:51.751306123Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 16:53:51.924422 containerd[1964]: time="2025-09-12T16:53:51.924256660Z" level=info msg="StopPodSandbox for \"242e7fbd2ce07db33a25decb0c9c5f78783c7ac77310613706e7bea244591d98\"" Sep 12 16:53:51.924422 containerd[1964]: time="2025-09-12T16:53:51.924401645Z" level=info msg="TearDown network for sandbox \"242e7fbd2ce07db33a25decb0c9c5f78783c7ac77310613706e7bea244591d98\" successfully" Sep 12 16:53:51.924422 containerd[1964]: time="2025-09-12T16:53:51.924425681Z" level=info msg="StopPodSandbox for \"242e7fbd2ce07db33a25decb0c9c5f78783c7ac77310613706e7bea244591d98\" returns successfully" Sep 12 16:53:51.926632 containerd[1964]: time="2025-09-12T16:53:51.926462859Z" level=info msg="RemovePodSandbox for \"242e7fbd2ce07db33a25decb0c9c5f78783c7ac77310613706e7bea244591d98\"" Sep 12 16:53:51.926632 containerd[1964]: time="2025-09-12T16:53:51.926529720Z" level=info msg="Forcibly stopping sandbox \"242e7fbd2ce07db33a25decb0c9c5f78783c7ac77310613706e7bea244591d98\"" Sep 12 16:53:51.926981 containerd[1964]: time="2025-09-12T16:53:51.926636646Z" level=info msg="TearDown network for sandbox \"242e7fbd2ce07db33a25decb0c9c5f78783c7ac77310613706e7bea244591d98\" successfully" Sep 12 16:53:51.937683 containerd[1964]: time="2025-09-12T16:53:51.935879921Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"242e7fbd2ce07db33a25decb0c9c5f78783c7ac77310613706e7bea244591d98\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 12 16:53:51.937683 containerd[1964]: time="2025-09-12T16:53:51.936020620Z" level=info msg="RemovePodSandbox \"242e7fbd2ce07db33a25decb0c9c5f78783c7ac77310613706e7bea244591d98\" returns successfully" Sep 12 16:53:51.937683 containerd[1964]: time="2025-09-12T16:53:51.936889050Z" level=info msg="StopPodSandbox for \"30d753d4fe70401fa0dd1ac27deb9957a3bc710cd56b002448e1f6fcc515c0a4\"" Sep 12 16:53:51.937683 containerd[1964]: time="2025-09-12T16:53:51.937082250Z" level=info msg="TearDown network for sandbox \"30d753d4fe70401fa0dd1ac27deb9957a3bc710cd56b002448e1f6fcc515c0a4\" successfully" Sep 12 16:53:51.937683 containerd[1964]: time="2025-09-12T16:53:51.937137802Z" level=info msg="StopPodSandbox for \"30d753d4fe70401fa0dd1ac27deb9957a3bc710cd56b002448e1f6fcc515c0a4\" returns successfully" Sep 12 16:53:51.938088 containerd[1964]: time="2025-09-12T16:53:51.937707655Z" level=info msg="RemovePodSandbox for \"30d753d4fe70401fa0dd1ac27deb9957a3bc710cd56b002448e1f6fcc515c0a4\"" Sep 12 16:53:51.938088 containerd[1964]: time="2025-09-12T16:53:51.937785009Z" level=info msg="Forcibly stopping sandbox \"30d753d4fe70401fa0dd1ac27deb9957a3bc710cd56b002448e1f6fcc515c0a4\"" Sep 12 16:53:51.938088 containerd[1964]: time="2025-09-12T16:53:51.937979170Z" level=info msg="TearDown network for sandbox \"30d753d4fe70401fa0dd1ac27deb9957a3bc710cd56b002448e1f6fcc515c0a4\" successfully" Sep 12 16:53:51.945508 containerd[1964]: time="2025-09-12T16:53:51.945437598Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"30d753d4fe70401fa0dd1ac27deb9957a3bc710cd56b002448e1f6fcc515c0a4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 16:53:51.945666 containerd[1964]: time="2025-09-12T16:53:51.945573001Z" level=info msg="RemovePodSandbox \"30d753d4fe70401fa0dd1ac27deb9957a3bc710cd56b002448e1f6fcc515c0a4\" returns successfully" Sep 12 16:53:52.045136 kubelet[3241]: E0912 16:53:52.043735 3241 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-tdvmr" podUID="37b29caa-9f37-46e7-bb48-d1a5cd7e3a98" Sep 12 16:53:52.309669 kubelet[3241]: E0912 16:53:52.309535 3241 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 12 16:53:52.575265 containerd[1964]: time="2025-09-12T16:53:52.574233315Z" level=info msg="CreateContainer within sandbox \"92ca9b9b5aa102836119444872070961ae3ac99a943548534248e1e07f849340\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 12 16:53:52.597633 containerd[1964]: time="2025-09-12T16:53:52.597558224Z" level=info msg="CreateContainer within sandbox \"92ca9b9b5aa102836119444872070961ae3ac99a943548534248e1e07f849340\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"f45fd8f4e5fbbefbf9f795d3b2d6f1c663ab63ae0ffdfe6e4c2eb1a0dba3b028\"" Sep 12 16:53:52.601737 containerd[1964]: time="2025-09-12T16:53:52.598762161Z" level=info msg="StartContainer for \"f45fd8f4e5fbbefbf9f795d3b2d6f1c663ab63ae0ffdfe6e4c2eb1a0dba3b028\"" Sep 12 16:53:52.662162 systemd[1]: Started cri-containerd-f45fd8f4e5fbbefbf9f795d3b2d6f1c663ab63ae0ffdfe6e4c2eb1a0dba3b028.scope - libcontainer container 
f45fd8f4e5fbbefbf9f795d3b2d6f1c663ab63ae0ffdfe6e4c2eb1a0dba3b028. Sep 12 16:53:52.715791 containerd[1964]: time="2025-09-12T16:53:52.714904160Z" level=info msg="StartContainer for \"f45fd8f4e5fbbefbf9f795d3b2d6f1c663ab63ae0ffdfe6e4c2eb1a0dba3b028\" returns successfully" Sep 12 16:53:52.731643 systemd[1]: cri-containerd-f45fd8f4e5fbbefbf9f795d3b2d6f1c663ab63ae0ffdfe6e4c2eb1a0dba3b028.scope: Deactivated successfully. Sep 12 16:53:52.768119 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f45fd8f4e5fbbefbf9f795d3b2d6f1c663ab63ae0ffdfe6e4c2eb1a0dba3b028-rootfs.mount: Deactivated successfully. Sep 12 16:53:52.780745 containerd[1964]: time="2025-09-12T16:53:52.780655596Z" level=info msg="shim disconnected" id=f45fd8f4e5fbbefbf9f795d3b2d6f1c663ab63ae0ffdfe6e4c2eb1a0dba3b028 namespace=k8s.io Sep 12 16:53:52.781406 containerd[1964]: time="2025-09-12T16:53:52.780884310Z" level=warning msg="cleaning up after shim disconnected" id=f45fd8f4e5fbbefbf9f795d3b2d6f1c663ab63ae0ffdfe6e4c2eb1a0dba3b028 namespace=k8s.io Sep 12 16:53:52.781406 containerd[1964]: time="2025-09-12T16:53:52.780908718Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 16:53:53.579941 containerd[1964]: time="2025-09-12T16:53:53.579655982Z" level=info msg="CreateContainer within sandbox \"92ca9b9b5aa102836119444872070961ae3ac99a943548534248e1e07f849340\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 12 16:53:53.613002 containerd[1964]: time="2025-09-12T16:53:53.612768794Z" level=info msg="CreateContainer within sandbox \"92ca9b9b5aa102836119444872070961ae3ac99a943548534248e1e07f849340\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"4707dbc9701cb162358390bde8f4b365801d599f1074fce0eb5f832748543b9f\"" Sep 12 16:53:53.613884 containerd[1964]: time="2025-09-12T16:53:53.613794851Z" level=info msg="StartContainer for \"4707dbc9701cb162358390bde8f4b365801d599f1074fce0eb5f832748543b9f\"" Sep 12 16:53:53.685367 systemd[1]: Started cri-containerd-4707dbc9701cb162358390bde8f4b365801d599f1074fce0eb5f832748543b9f.scope - libcontainer container 4707dbc9701cb162358390bde8f4b365801d599f1074fce0eb5f832748543b9f. Sep 12 16:53:53.751033 systemd[1]: cri-containerd-4707dbc9701cb162358390bde8f4b365801d599f1074fce0eb5f832748543b9f.scope: Deactivated successfully. Sep 12 16:53:53.753331 containerd[1964]: time="2025-09-12T16:53:53.753152096Z" level=info msg="StartContainer for \"4707dbc9701cb162358390bde8f4b365801d599f1074fce0eb5f832748543b9f\" returns successfully" Sep 12 16:53:53.810747 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4707dbc9701cb162358390bde8f4b365801d599f1074fce0eb5f832748543b9f-rootfs.mount: Deactivated successfully. 
Sep 12 16:53:53.817097 containerd[1964]: time="2025-09-12T16:53:53.817009105Z" level=info msg="shim disconnected" id=4707dbc9701cb162358390bde8f4b365801d599f1074fce0eb5f832748543b9f namespace=k8s.io Sep 12 16:53:53.817097 containerd[1964]: time="2025-09-12T16:53:53.817090974Z" level=warning msg="cleaning up after shim disconnected" id=4707dbc9701cb162358390bde8f4b365801d599f1074fce0eb5f832748543b9f namespace=k8s.io Sep 12 16:53:53.821354 containerd[1964]: time="2025-09-12T16:53:53.817113977Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 16:53:54.045197 kubelet[3241]: E0912 16:53:54.044783 3241 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-tdvmr" podUID="37b29caa-9f37-46e7-bb48-d1a5cd7e3a98" Sep 12 16:53:54.589967 containerd[1964]: time="2025-09-12T16:53:54.589888829Z" level=info msg="CreateContainer within sandbox \"92ca9b9b5aa102836119444872070961ae3ac99a943548534248e1e07f849340\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 12 16:53:54.621024 containerd[1964]: time="2025-09-12T16:53:54.620949240Z" level=info msg="CreateContainer within sandbox \"92ca9b9b5aa102836119444872070961ae3ac99a943548534248e1e07f849340\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"92ef29a38697c8c4dcb0ef53aede625179cd2af3bdf92410737e7faf96fb9765\"" Sep 12 16:53:54.622127 containerd[1964]: time="2025-09-12T16:53:54.622059014Z" level=info msg="StartContainer for \"92ef29a38697c8c4dcb0ef53aede625179cd2af3bdf92410737e7faf96fb9765\"" Sep 12 16:53:54.685140 systemd[1]: Started cri-containerd-92ef29a38697c8c4dcb0ef53aede625179cd2af3bdf92410737e7faf96fb9765.scope - libcontainer container 92ef29a38697c8c4dcb0ef53aede625179cd2af3bdf92410737e7faf96fb9765. Sep 12 16:53:54.730104 systemd[1]: cri-containerd-92ef29a38697c8c4dcb0ef53aede625179cd2af3bdf92410737e7faf96fb9765.scope: Deactivated successfully. Sep 12 16:53:54.732963 containerd[1964]: time="2025-09-12T16:53:54.731262722Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb74110c1_6258_4870_93cf_d4e267d0d82f.slice/cri-containerd-92ef29a38697c8c4dcb0ef53aede625179cd2af3bdf92410737e7faf96fb9765.scope/memory.events\": no such file or directory" Sep 12 16:53:54.738628 containerd[1964]: time="2025-09-12T16:53:54.738538754Z" level=info msg="StartContainer for \"92ef29a38697c8c4dcb0ef53aede625179cd2af3bdf92410737e7faf96fb9765\" returns successfully" Sep 12 16:53:54.773783 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-92ef29a38697c8c4dcb0ef53aede625179cd2af3bdf92410737e7faf96fb9765-rootfs.mount: Deactivated successfully. 
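The Create → Start → `scope: Deactivated` → `shim disconnected` cycle has now repeated for mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs and clean-cilium-state: the short-lived Cilium init containers run in order inside sandbox 92ca9b9b… before the long-running cilium-agent container below. A sketch that recovers that ordering from the containerd `CreateContainer … &ContainerMetadata{Name:…}` requests logged above (message wording taken from these entries; the function name is mine):

```python
import re

# Matches the containerd request messages above, e.g.
#   CreateContainer within sandbox \"92ca9b...\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}
CREATE_RE = re.compile(
    r'CreateContainer within sandbox \\?"(?P<sandbox>[0-9a-f]+)\\?" '
    r'for container &ContainerMetadata\{Name:(?P<name>[^,]+),Attempt:(?P<attempt>\d+),\}'
)

def container_sequence(log_text: str) -> list[tuple[str, str, int]]:
    """(sandbox id, container name, attempt) in the order kubelet requested them."""
    return [(m["sandbox"], m["name"], int(m["attempt"]))
            for m in CREATE_RE.finditer(log_text)]
```

Run over this stretch of the log it would yield mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state and cilium-agent (all Attempt:0) in sandbox 92ca9b9b…, plus the Attempt:1 kube-controller-manager and kube-scheduler recreations further down.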
Sep 12 16:53:54.784008 containerd[1964]: time="2025-09-12T16:53:54.783776618Z" level=info msg="shim disconnected" id=92ef29a38697c8c4dcb0ef53aede625179cd2af3bdf92410737e7faf96fb9765 namespace=k8s.io Sep 12 16:53:54.784274 containerd[1964]: time="2025-09-12T16:53:54.784001070Z" level=warning msg="cleaning up after shim disconnected" id=92ef29a38697c8c4dcb0ef53aede625179cd2af3bdf92410737e7faf96fb9765 namespace=k8s.io Sep 12 16:53:54.784274 containerd[1964]: time="2025-09-12T16:53:54.784057666Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 16:53:55.042872 kubelet[3241]: E0912 16:53:55.042770 3241 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-69r2x" podUID="5a6a050a-8395-4720-a1e9-38b0e610e595" Sep 12 16:53:55.303903 kubelet[3241]: I0912 16:53:55.303484 3241 setters.go:602] "Node became not ready" node="ip-172-31-21-42" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-12T16:53:55Z","lastTransitionTime":"2025-09-12T16:53:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Sep 12 16:53:55.596585 containerd[1964]: time="2025-09-12T16:53:55.596417597Z" level=info msg="CreateContainer within sandbox \"92ca9b9b5aa102836119444872070961ae3ac99a943548534248e1e07f849340\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 12 16:53:55.646609 containerd[1964]: time="2025-09-12T16:53:55.646054375Z" level=info msg="CreateContainer within sandbox \"92ca9b9b5aa102836119444872070961ae3ac99a943548534248e1e07f849340\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e75c742ff4d6ae7500a244d6e7e458775cd4f7541c0a272ca8db351bd0b4eab1\"" Sep 12 16:53:55.651854 containerd[1964]: time="2025-09-12T16:53:55.650570960Z" level=info msg="StartContainer for \"e75c742ff4d6ae7500a244d6e7e458775cd4f7541c0a272ca8db351bd0b4eab1\"" Sep 12 16:53:55.718126 systemd[1]: Started cri-containerd-e75c742ff4d6ae7500a244d6e7e458775cd4f7541c0a272ca8db351bd0b4eab1.scope - libcontainer container e75c742ff4d6ae7500a244d6e7e458775cd4f7541c0a272ca8db351bd0b4eab1. 
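The `setters.go:602` entry above is the kubelet flipping the node's Ready condition to False while the Cilium CNI is still initializing (the same `cni plugin not initialized` error seen throughout this stretch); the condition itself is embedded as JSON, so it can be lifted straight out of the log. A sketch assuming the quoting shown in that entry (helper name mine):

```python
import json
import re

CONDITION_RE = re.compile(
    r'"Node became not ready" node="(?P<node>[^"]+)" condition=(?P<cond>\{.*?\})'
)

def node_not_ready_events(log_text: str) -> list[tuple[str, dict]]:
    """(node name, parsed Ready condition) for each 'Node became not ready' entry."""
    return [(m["node"], json.loads(m["cond"]))
            for m in CONDITION_RE.finditer(log_text)]
    # For the entry above: ("ip-172-31-21-42", {..., "reason": "KubeletNotReady", ...})
```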
Sep 12 16:53:55.779679 containerd[1964]: time="2025-09-12T16:53:55.779484390Z" level=info msg="StartContainer for \"e75c742ff4d6ae7500a244d6e7e458775cd4f7541c0a272ca8db351bd0b4eab1\" returns successfully" Sep 12 16:53:56.044834 kubelet[3241]: E0912 16:53:56.044119 3241 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-tdvmr" podUID="37b29caa-9f37-46e7-bb48-d1a5cd7e3a98" Sep 12 16:53:56.588852 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Sep 12 16:53:57.043510 kubelet[3241]: E0912 16:53:57.043040 3241 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-69r2x" podUID="5a6a050a-8395-4720-a1e9-38b0e610e595" Sep 12 16:53:58.322955 systemd[1]: run-containerd-runc-k8s.io-e75c742ff4d6ae7500a244d6e7e458775cd4f7541c0a272ca8db351bd0b4eab1-runc.c43oat.mount: Deactivated successfully. Sep 12 16:54:00.948871 (udev-worker)[6066]: Network interface NamePolicy= disabled on kernel command line. Sep 12 16:54:00.950594 systemd-networkd[1882]: lxc_health: Link UP Sep 12 16:54:00.962462 (udev-worker)[6068]: Network interface NamePolicy= disabled on kernel command line. Sep 12 16:54:00.964373 systemd-networkd[1882]: lxc_health: Gained carrier Sep 12 16:54:01.378498 kubelet[3241]: I0912 16:54:01.378276 3241 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-lnvxn" podStartSLOduration=11.378250885 podStartE2EDuration="11.378250885s" podCreationTimestamp="2025-09-12 16:53:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 16:53:56.642226743 +0000 UTC m=+124.973703778" watchObservedRunningTime="2025-09-12 16:54:01.378250885 +0000 UTC m=+129.709727932" Sep 12 16:54:02.716020 systemd-networkd[1882]: lxc_health: Gained IPv6LL Sep 12 16:54:02.838734 systemd[1]: run-containerd-runc-k8s.io-e75c742ff4d6ae7500a244d6e7e458775cd4f7541c0a272ca8db351bd0b4eab1-runc.s1afvo.mount: Deactivated successfully. Sep 12 16:54:04.862280 ntpd[1945]: Listen normally on 15 lxc_health [fe80::80e:89ff:fe10:aeee%14]:123 Sep 12 16:54:04.863057 ntpd[1945]: 12 Sep 16:54:04 ntpd[1945]: Listen normally on 15 lxc_health [fe80::80e:89ff:fe10:aeee%14]:123 Sep 12 16:54:05.150697 systemd[1]: run-containerd-runc-k8s.io-e75c742ff4d6ae7500a244d6e7e458775cd4f7541c0a272ca8db351bd0b4eab1-runc.U6WEEB.mount: Deactivated successfully. Sep 12 16:54:07.560923 sshd[5324]: Connection closed by 139.178.89.65 port 52852 Sep 12 16:54:07.562582 sshd-session[5258]: pam_unix(sshd:session): session closed for user core Sep 12 16:54:07.570031 systemd[1]: sshd@27-172.31.21.42:22-139.178.89.65:52852.service: Deactivated successfully. Sep 12 16:54:07.577718 systemd[1]: session-28.scope: Deactivated successfully. Sep 12 16:54:07.582689 systemd-logind[1950]: Session 28 logged out. Waiting for processes to exit. Sep 12 16:54:07.586151 systemd-logind[1950]: Removed session 28. Sep 12 16:54:44.364501 systemd[1]: cri-containerd-a3f4980bfa60aec7330fe08766406341c0417acdbb131a1af3ca5567aee01a04.scope: Deactivated successfully. 
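On the `pod_startup_latency_tracker` entry above: podStartSLOduration and podStartE2EDuration are both 11.378250885s, presumably because no image pull was involved (the zeroed firstStartedPulling/lastFinishedPulling), so the figure is simply observedRunningTime minus podCreationTimestamp. A quick check of that arithmetic with the timestamps from the log:

```python
from datetime import datetime, timezone

created = datetime(2025, 9, 12, 16, 53, 50, tzinfo=timezone.utc)          # podCreationTimestamp
running = datetime(2025, 9, 12, 16, 54, 1, 378251, tzinfo=timezone.utc)   # observedRunningTime, rounded to µs

print((running - created).total_seconds())   # 11.378251 ≈ the reported 11.378250885s
```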
Sep 12 16:54:44.365871 systemd[1]: cri-containerd-a3f4980bfa60aec7330fe08766406341c0417acdbb131a1af3ca5567aee01a04.scope: Consumed 5.682s CPU time, 53.6M memory peak. Sep 12 16:54:44.409433 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a3f4980bfa60aec7330fe08766406341c0417acdbb131a1af3ca5567aee01a04-rootfs.mount: Deactivated successfully. Sep 12 16:54:44.420921 containerd[1964]: time="2025-09-12T16:54:44.420843550Z" level=info msg="shim disconnected" id=a3f4980bfa60aec7330fe08766406341c0417acdbb131a1af3ca5567aee01a04 namespace=k8s.io Sep 12 16:54:44.421890 containerd[1964]: time="2025-09-12T16:54:44.421598187Z" level=warning msg="cleaning up after shim disconnected" id=a3f4980bfa60aec7330fe08766406341c0417acdbb131a1af3ca5567aee01a04 namespace=k8s.io Sep 12 16:54:44.421890 containerd[1964]: time="2025-09-12T16:54:44.421633761Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 16:54:44.729477 kubelet[3241]: I0912 16:54:44.729088 3241 scope.go:117] "RemoveContainer" containerID="a3f4980bfa60aec7330fe08766406341c0417acdbb131a1af3ca5567aee01a04" Sep 12 16:54:44.733321 containerd[1964]: time="2025-09-12T16:54:44.733265773Z" level=info msg="CreateContainer within sandbox \"ae44dd4f4520c16381e0d2d80eae957f925e2d9af29c500a0a5d456389b1a9f2\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Sep 12 16:54:44.765713 containerd[1964]: time="2025-09-12T16:54:44.765511884Z" level=info msg="CreateContainer within sandbox \"ae44dd4f4520c16381e0d2d80eae957f925e2d9af29c500a0a5d456389b1a9f2\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"a321d37f368775544799c1464531ab340208539cf1e2e23dfa3950f06992a65b\"" Sep 12 16:54:44.767870 containerd[1964]: time="2025-09-12T16:54:44.766233505Z" level=info msg="StartContainer for \"a321d37f368775544799c1464531ab340208539cf1e2e23dfa3950f06992a65b\"" Sep 12 16:54:44.820411 systemd[1]: Started cri-containerd-a321d37f368775544799c1464531ab340208539cf1e2e23dfa3950f06992a65b.scope - libcontainer container a321d37f368775544799c1464531ab340208539cf1e2e23dfa3950f06992a65b. Sep 12 16:54:44.899235 containerd[1964]: time="2025-09-12T16:54:44.899143551Z" level=info msg="StartContainer for \"a321d37f368775544799c1464531ab340208539cf1e2e23dfa3950f06992a65b\" returns successfully" Sep 12 16:54:46.601613 kubelet[3241]: E0912 16:54:46.601536 3241 controller.go:195] "Failed to update lease" err="Put \"https://172.31.21.42:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-42?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Sep 12 16:54:48.361077 systemd[1]: cri-containerd-26acf622b700d54dcc58745cfb8d5aa0e16616bd1aded0861f65168a0398fae0.scope: Deactivated successfully. Sep 12 16:54:48.363375 systemd[1]: cri-containerd-26acf622b700d54dcc58745cfb8d5aa0e16616bd1aded0861f65168a0398fae0.scope: Consumed 4.458s CPU time, 22.9M memory peak. Sep 12 16:54:48.403691 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-26acf622b700d54dcc58745cfb8d5aa0e16616bd1aded0861f65168a0398fae0-rootfs.mount: Deactivated successfully. 
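systemd's accounting lines make the cost of the failed containers easy to read off: 5.682s CPU / 53.6M peak for the kube-controller-manager container replaced above, 4.458s / 22.9M for the kube-scheduler one here, and 1.479s / 23.6M for SSH session 26 earlier. A sketch that extracts those figures, assuming the exact "Consumed … CPU time, … memory peak" phrasing used in these entries:

```python
import re

CONSUMED_RE = re.compile(
    r"(?P<unit>\S+): Consumed (?P<cpu>[\d.]+)s CPU time, "
    r"(?P<mem>[\d.]+)(?P<mem_unit>[KMG]) memory peak"
)

def resource_usage(log_text: str) -> list[dict]:
    """CPU seconds and memory peak per systemd scope, from the accounting lines above."""
    return [{"unit": m["unit"], "cpu_s": float(m["cpu"]),
             "mem_peak": m["mem"] + m["mem_unit"]}
            for m in CONSUMED_RE.finditer(log_text)]
```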
Sep 12 16:54:48.418573 containerd[1964]: time="2025-09-12T16:54:48.418475358Z" level=info msg="shim disconnected" id=26acf622b700d54dcc58745cfb8d5aa0e16616bd1aded0861f65168a0398fae0 namespace=k8s.io Sep 12 16:54:48.418573 containerd[1964]: time="2025-09-12T16:54:48.418557672Z" level=warning msg="cleaning up after shim disconnected" id=26acf622b700d54dcc58745cfb8d5aa0e16616bd1aded0861f65168a0398fae0 namespace=k8s.io Sep 12 16:54:48.419375 containerd[1964]: time="2025-09-12T16:54:48.418581744Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 16:54:48.747320 kubelet[3241]: I0912 16:54:48.746499 3241 scope.go:117] "RemoveContainer" containerID="26acf622b700d54dcc58745cfb8d5aa0e16616bd1aded0861f65168a0398fae0" Sep 12 16:54:48.749453 containerd[1964]: time="2025-09-12T16:54:48.749397171Z" level=info msg="CreateContainer within sandbox \"403ce9e6ebf30bb301e9052d2f6828f45c0718a210eba51a6336671ff6916bed\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Sep 12 16:54:48.779514 containerd[1964]: time="2025-09-12T16:54:48.779425954Z" level=info msg="CreateContainer within sandbox \"403ce9e6ebf30bb301e9052d2f6828f45c0718a210eba51a6336671ff6916bed\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"ad5054c49c59c597845348ab708f42f2310a1293fc210717f29cc85fb413ebb2\"" Sep 12 16:54:48.781144 containerd[1964]: time="2025-09-12T16:54:48.780188383Z" level=info msg="StartContainer for \"ad5054c49c59c597845348ab708f42f2310a1293fc210717f29cc85fb413ebb2\"" Sep 12 16:54:48.840161 systemd[1]: Started cri-containerd-ad5054c49c59c597845348ab708f42f2310a1293fc210717f29cc85fb413ebb2.scope - libcontainer container ad5054c49c59c597845348ab708f42f2310a1293fc210717f29cc85fb413ebb2. Sep 12 16:54:48.906647 containerd[1964]: time="2025-09-12T16:54:48.906572436Z" level=info msg="StartContainer for \"ad5054c49c59c597845348ab708f42f2310a1293fc210717f29cc85fb413ebb2\" returns successfully" Sep 12 16:54:49.402772 systemd[1]: run-containerd-runc-k8s.io-ad5054c49c59c597845348ab708f42f2310a1293fc210717f29cc85fb413ebb2-runc.xBmW4x.mount: Deactivated successfully. Sep 12 16:54:56.602952 kubelet[3241]: E0912 16:54:56.602340 3241 controller.go:195] "Failed to update lease" err="Put \"https://172.31.21.42:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-42?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
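The two `Failed to update lease` errors (16:54:46.601536 above and 16:54:56.602340 here) show kubelet's node-lease renewal timing out against the API server at 172.31.21.42:6443 while the control-plane containers are being restarted; their spacing is consistent with back-to-back hits of the `?timeout=10s` client timeout in the request URL. A quick check from the two logged timestamps:

```python
from datetime import datetime

t1 = datetime.strptime("16:54:46.601536", "%H:%M:%S.%f")
t2 = datetime.strptime("16:54:56.602340", "%H:%M:%S.%f")
print((t2 - t1).total_seconds())   # ~10.0s apart, matching the ?timeout=10s in the lease PUT
```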