Dec 16 12:25:59.121888 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Dec 16 12:25:59.121932 kernel: Linux version 6.12.61-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Fri Dec 12 15:20:48 -00 2025
Dec 16 12:25:59.121956 kernel: KASLR disabled due to lack of seed
Dec 16 12:25:59.121973 kernel: efi: EFI v2.7 by EDK II
Dec 16 12:25:59.121989 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7a731a98 MEMRESERVE=0x78551598
Dec 16 12:25:59.122004 kernel: secureboot: Secure boot disabled
Dec 16 12:25:59.122021 kernel: ACPI: Early table checksum verification disabled
Dec 16 12:25:59.122036 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Dec 16 12:25:59.122052 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Dec 16 12:25:59.122067 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Dec 16 12:25:59.122083 kernel: ACPI: DSDT 0x0000000078640000 0013D2 (v02 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Dec 16 12:25:59.122102 kernel: ACPI: FACS 0x0000000078630000 000040
Dec 16 12:25:59.122117 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Dec 16 12:25:59.122133 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Dec 16 12:25:59.122151 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Dec 16 12:25:59.122167 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Dec 16 12:25:59.122187 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Dec 16 12:25:59.122203 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Dec 16 12:25:59.122219 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Dec 16 12:25:59.122235 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Dec 16 12:25:59.122251 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Dec 16 12:25:59.122297 kernel: printk: legacy bootconsole [uart0] enabled
Dec 16 12:25:59.122316 kernel: ACPI: Use ACPI SPCR as default console: Yes
Dec 16 12:25:59.122334 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Dec 16 12:25:59.122350 kernel: NODE_DATA(0) allocated [mem 0x4b584da00-0x4b5854fff]
Dec 16 12:25:59.122366 kernel: Zone ranges:
Dec 16 12:25:59.122382 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Dec 16 12:25:59.122404 kernel: DMA32 empty
Dec 16 12:25:59.122420 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Dec 16 12:25:59.122435 kernel: Device empty
Dec 16 12:25:59.122451 kernel: Movable zone start for each node
Dec 16 12:25:59.122467 kernel: Early memory node ranges
Dec 16 12:25:59.122482 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Dec 16 12:25:59.122498 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Dec 16 12:25:59.122514 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Dec 16 12:25:59.122530 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Dec 16 12:25:59.122547 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Dec 16 12:25:59.122563 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Dec 16 12:25:59.122579 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Dec 16 12:25:59.122600 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Dec 16 12:25:59.122622 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Dec 16 12:25:59.122640 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Dec 16 12:25:59.122657 kernel: cma: Reserved 16 MiB at 0x000000007f000000 on node -1
Dec 16 12:25:59.122674 kernel: psci: probing for conduit method from ACPI.
Dec 16 12:25:59.122695 kernel: psci: PSCIv1.0 detected in firmware.
Dec 16 12:25:59.122712 kernel: psci: Using standard PSCI v0.2 function IDs
Dec 16 12:25:59.122729 kernel: psci: Trusted OS migration not required
Dec 16 12:25:59.122745 kernel: psci: SMC Calling Convention v1.1
Dec 16 12:25:59.122763 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000001)
Dec 16 12:25:59.122780 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Dec 16 12:25:59.122797 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Dec 16 12:25:59.122814 kernel: pcpu-alloc: [0] 0 [0] 1
Dec 16 12:25:59.122831 kernel: Detected PIPT I-cache on CPU0
Dec 16 12:25:59.122848 kernel: CPU features: detected: GIC system register CPU interface
Dec 16 12:25:59.122864 kernel: CPU features: detected: Spectre-v2
Dec 16 12:25:59.122885 kernel: CPU features: detected: Spectre-v3a
Dec 16 12:25:59.122902 kernel: CPU features: detected: Spectre-BHB
Dec 16 12:25:59.122919 kernel: CPU features: detected: ARM erratum 1742098
Dec 16 12:25:59.122936 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Dec 16 12:25:59.122952 kernel: alternatives: applying boot alternatives
Dec 16 12:25:59.122972 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=361f5baddf90aee3bc7ee7e9be879bc0cc94314f224faa1e2791d9b44cd3ec52
Dec 16 12:25:59.122990 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 16 12:25:59.123007 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 16 12:25:59.123024 kernel: Fallback order for Node 0: 0
Dec 16 12:25:59.123041 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1007616
Dec 16 12:25:59.123057 kernel: Policy zone: Normal
Dec 16 12:25:59.123078 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 16 12:25:59.123095 kernel: software IO TLB: area num 2.
Dec 16 12:25:59.123111 kernel: software IO TLB: mapped [mem 0x0000000074551000-0x0000000078551000] (64MB)
Dec 16 12:25:59.123128 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Dec 16 12:25:59.123145 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 16 12:25:59.123163 kernel: rcu: RCU event tracing is enabled.
Dec 16 12:25:59.123181 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Dec 16 12:25:59.123198 kernel: Trampoline variant of Tasks RCU enabled.
Dec 16 12:25:59.123215 kernel: Tracing variant of Tasks RCU enabled.
Dec 16 12:25:59.126362 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 16 12:25:59.126383 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Dec 16 12:25:59.126411 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 16 12:25:59.126430 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 16 12:25:59.126447 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Dec 16 12:25:59.126464 kernel: GICv3: 96 SPIs implemented
Dec 16 12:25:59.126481 kernel: GICv3: 0 Extended SPIs implemented
Dec 16 12:25:59.126498 kernel: Root IRQ handler: gic_handle_irq
Dec 16 12:25:59.126515 kernel: GICv3: GICv3 features: 16 PPIs
Dec 16 12:25:59.126532 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Dec 16 12:25:59.126549 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Dec 16 12:25:59.126566 kernel: ITS [mem 0x10080000-0x1009ffff]
Dec 16 12:25:59.126585 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000f0000 (indirect, esz 8, psz 64K, shr 1)
Dec 16 12:25:59.126605 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @400100000 (flat, esz 8, psz 64K, shr 1)
Dec 16 12:25:59.126627 kernel: GICv3: using LPI property table @0x0000000400110000
Dec 16 12:25:59.126644 kernel: ITS: Using hypervisor restricted LPI range [128]
Dec 16 12:25:59.126661 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000400120000
Dec 16 12:25:59.126678 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 16 12:25:59.126694 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Dec 16 12:25:59.126711 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Dec 16 12:25:59.126729 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Dec 16 12:25:59.126746 kernel: Console: colour dummy device 80x25
Dec 16 12:25:59.126764 kernel: printk: legacy console [tty1] enabled
Dec 16 12:25:59.126782 kernel: ACPI: Core revision 20240827
Dec 16 12:25:59.126800 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Dec 16 12:25:59.126822 kernel: pid_max: default: 32768 minimum: 301
Dec 16 12:25:59.126839 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Dec 16 12:25:59.126857 kernel: landlock: Up and running.
Dec 16 12:25:59.126874 kernel: SELinux: Initializing.
Dec 16 12:25:59.126892 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 16 12:25:59.126996 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 16 12:25:59.128136 kernel: rcu: Hierarchical SRCU implementation.
Dec 16 12:25:59.128172 kernel: rcu: Max phase no-delay instances is 400.
Dec 16 12:25:59.128197 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Dec 16 12:25:59.128215 kernel: Remapping and enabling EFI services.
Dec 16 12:25:59.128233 kernel: smp: Bringing up secondary CPUs ...
Dec 16 12:25:59.128250 kernel: Detected PIPT I-cache on CPU1
Dec 16 12:25:59.128286 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Dec 16 12:25:59.128306 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000400130000
Dec 16 12:25:59.128324 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Dec 16 12:25:59.128341 kernel: smp: Brought up 1 node, 2 CPUs
Dec 16 12:25:59.128357 kernel: SMP: Total of 2 processors activated.
Dec 16 12:25:59.128380 kernel: CPU: All CPU(s) started at EL1
Dec 16 12:25:59.128408 kernel: CPU features: detected: 32-bit EL0 Support
Dec 16 12:25:59.128426 kernel: CPU features: detected: 32-bit EL1 Support
Dec 16 12:25:59.128447 kernel: CPU features: detected: CRC32 instructions
Dec 16 12:25:59.128465 kernel: alternatives: applying system-wide alternatives
Dec 16 12:25:59.128484 kernel: Memory: 3796332K/4030464K available (11200K kernel code, 2456K rwdata, 9084K rodata, 39552K init, 1038K bss, 212788K reserved, 16384K cma-reserved)
Dec 16 12:25:59.128502 kernel: devtmpfs: initialized
Dec 16 12:25:59.128520 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 16 12:25:59.128542 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Dec 16 12:25:59.128560 kernel: 16880 pages in range for non-PLT usage
Dec 16 12:25:59.128578 kernel: 508400 pages in range for PLT usage
Dec 16 12:25:59.128595 kernel: pinctrl core: initialized pinctrl subsystem
Dec 16 12:25:59.128613 kernel: SMBIOS 3.0.0 present.
Dec 16 12:25:59.128631 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Dec 16 12:25:59.128648 kernel: DMI: Memory slots populated: 0/0
Dec 16 12:25:59.128666 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 16 12:25:59.128684 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Dec 16 12:25:59.128705 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Dec 16 12:25:59.128724 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Dec 16 12:25:59.128741 kernel: audit: initializing netlink subsys (disabled)
Dec 16 12:25:59.128759 kernel: audit: type=2000 audit(0.225:1): state=initialized audit_enabled=0 res=1
Dec 16 12:25:59.128777 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 16 12:25:59.128795 kernel: cpuidle: using governor menu
Dec 16 12:25:59.128813 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Dec 16 12:25:59.128830 kernel: ASID allocator initialised with 65536 entries
Dec 16 12:25:59.128848 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 16 12:25:59.128870 kernel: Serial: AMBA PL011 UART driver
Dec 16 12:25:59.128888 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 16 12:25:59.128906 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Dec 16 12:25:59.128924 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Dec 16 12:25:59.128942 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Dec 16 12:25:59.128960 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 16 12:25:59.128978 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Dec 16 12:25:59.128996 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Dec 16 12:25:59.129014 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Dec 16 12:25:59.129036 kernel: ACPI: Added _OSI(Module Device)
Dec 16 12:25:59.129054 kernel: ACPI: Added _OSI(Processor Device)
Dec 16 12:25:59.129072 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 16 12:25:59.129089 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 16 12:25:59.129107 kernel: ACPI: Interpreter enabled
Dec 16 12:25:59.129124 kernel: ACPI: Using GIC for interrupt routing
Dec 16 12:25:59.129142 kernel: ACPI: MCFG table detected, 1 entries
Dec 16 12:25:59.129160 kernel: ACPI: CPU0 has been hot-added
Dec 16 12:25:59.129178 kernel: ACPI: CPU1 has been hot-added
Dec 16 12:25:59.129200 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00])
Dec 16 12:25:59.129525 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 16 12:25:59.129775 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Dec 16 12:25:59.129969 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Dec 16 12:25:59.130152 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x200fffff] reserved by PNP0C02:00
Dec 16 12:25:59.130421 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x200fffff] for [bus 00]
Dec 16 12:25:59.130447 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Dec 16 12:25:59.130474 kernel: acpiphp: Slot [1] registered
Dec 16 12:25:59.130493 kernel: acpiphp: Slot [2] registered
Dec 16 12:25:59.130511 kernel: acpiphp: Slot [3] registered
Dec 16 12:25:59.130529 kernel: acpiphp: Slot [4] registered
Dec 16 12:25:59.130546 kernel: acpiphp: Slot [5] registered
Dec 16 12:25:59.130564 kernel: acpiphp: Slot [6] registered
Dec 16 12:25:59.130581 kernel: acpiphp: Slot [7] registered
Dec 16 12:25:59.130599 kernel: acpiphp: Slot [8] registered
Dec 16 12:25:59.130616 kernel: acpiphp: Slot [9] registered
Dec 16 12:25:59.130634 kernel: acpiphp: Slot [10] registered
Dec 16 12:25:59.130656 kernel: acpiphp: Slot [11] registered
Dec 16 12:25:59.130673 kernel: acpiphp: Slot [12] registered
Dec 16 12:25:59.130691 kernel: acpiphp: Slot [13] registered
Dec 16 12:25:59.130709 kernel: acpiphp: Slot [14] registered
Dec 16 12:25:59.130726 kernel: acpiphp: Slot [15] registered
Dec 16 12:25:59.130744 kernel: acpiphp: Slot [16] registered
Dec 16 12:25:59.130761 kernel: acpiphp: Slot [17] registered
Dec 16 12:25:59.130779 kernel: acpiphp: Slot [18] registered
Dec 16 12:25:59.130796 kernel: acpiphp: Slot [19] registered
Dec 16 12:25:59.130818 kernel: acpiphp: Slot [20] registered
Dec 16 12:25:59.130836 kernel: acpiphp: Slot [21] registered
Dec 16 12:25:59.130854 kernel: acpiphp: Slot [22] registered
Dec 16 12:25:59.130871 kernel: acpiphp: Slot [23] registered
Dec 16 12:25:59.130889 kernel: acpiphp: Slot [24] registered
Dec 16 12:25:59.130906 kernel: acpiphp: Slot [25] registered
Dec 16 12:25:59.130924 kernel: acpiphp: Slot [26] registered
Dec 16 12:25:59.130941 kernel: acpiphp: Slot [27] registered
Dec 16 12:25:59.130959 kernel: acpiphp: Slot [28] registered
Dec 16 12:25:59.130977 kernel: acpiphp: Slot [29] registered
Dec 16 12:25:59.130998 kernel: acpiphp: Slot [30] registered
Dec 16 12:25:59.131016 kernel: acpiphp: Slot [31] registered
Dec 16 12:25:59.131034 kernel: PCI host bridge to bus 0000:00
Dec 16 12:25:59.131218 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Dec 16 12:25:59.131414 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Dec 16 12:25:59.131583 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Dec 16 12:25:59.131795 kernel: pci_bus 0000:00: root bus resource [bus 00]
Dec 16 12:25:59.134191 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000 conventional PCI endpoint
Dec 16 12:25:59.134439 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003 conventional PCI endpoint
Dec 16 12:25:59.134634 kernel: pci 0000:00:01.0: BAR 0 [mem 0x80118000-0x80118fff]
Dec 16 12:25:59.134835 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 PCIe Root Complex Integrated Endpoint
Dec 16 12:25:59.135026 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80114000-0x80117fff]
Dec 16 12:25:59.135214 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Dec 16 12:25:59.135455 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 PCIe Root Complex Integrated Endpoint
Dec 16 12:25:59.135646 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80110000-0x80113fff]
Dec 16 12:25:59.135839 kernel: pci 0000:00:05.0: BAR 2 [mem 0x80000000-0x800fffff pref]
Dec 16 12:25:59.136026 kernel: pci 0000:00:05.0: BAR 4 [mem 0x80100000-0x8010ffff]
Dec 16 12:25:59.136212 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Dec 16 12:25:59.136409 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Dec 16 12:25:59.136578 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Dec 16 12:25:59.136750 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Dec 16 12:25:59.136775 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Dec 16 12:25:59.136793 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Dec 16 12:25:59.136812 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Dec 16 12:25:59.136829 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Dec 16 12:25:59.136847 kernel: iommu: Default domain type: Translated
Dec 16 12:25:59.136865 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Dec 16 12:25:59.136883 kernel: efivars: Registered efivars operations
Dec 16 12:25:59.136900 kernel: vgaarb: loaded
Dec 16 12:25:59.136923 kernel: clocksource: Switched to clocksource arch_sys_counter
Dec 16 12:25:59.136941 kernel: VFS: Disk quotas dquot_6.6.0
Dec 16 12:25:59.136958 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 16 12:25:59.136976 kernel: pnp: PnP ACPI init
Dec 16 12:25:59.137163 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Dec 16 12:25:59.137189 kernel: pnp: PnP ACPI: found 1 devices
Dec 16 12:25:59.137207 kernel: NET: Registered PF_INET protocol family
Dec 16 12:25:59.137225 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 16 12:25:59.137248 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Dec 16 12:25:59.137286 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 16 12:25:59.137307 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 16 12:25:59.137325 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Dec 16 12:25:59.137343 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Dec 16 12:25:59.137361 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 16 12:25:59.137380 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 16 12:25:59.137397 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 16 12:25:59.137415 kernel: PCI: CLS 0 bytes, default 64
Dec 16 12:25:59.137439 kernel: kvm [1]: HYP mode not available
Dec 16 12:25:59.137457 kernel: Initialise system trusted keyrings
Dec 16 12:25:59.137475 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Dec 16 12:25:59.137492 kernel: Key type asymmetric registered
Dec 16 12:25:59.137510 kernel: Asymmetric key parser 'x509' registered
Dec 16 12:25:59.137528 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Dec 16 12:25:59.137546 kernel: io scheduler mq-deadline registered
Dec 16 12:25:59.137563 kernel: io scheduler kyber registered
Dec 16 12:25:59.137581 kernel: io scheduler bfq registered
Dec 16 12:25:59.137841 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Dec 16 12:25:59.137871 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Dec 16 12:25:59.137889 kernel: ACPI: button: Power Button [PWRB]
Dec 16 12:25:59.137908 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Dec 16 12:25:59.137927 kernel: ACPI: button: Sleep Button [SLPB]
Dec 16 12:25:59.137945 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 16 12:25:59.137963 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Dec 16 12:25:59.138159 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Dec 16 12:25:59.138193 kernel: printk: legacy console [ttyS0] disabled
Dec 16 12:25:59.138212 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Dec 16 12:25:59.138230 kernel: printk: legacy console [ttyS0] enabled
Dec 16 12:25:59.138247 kernel: printk: legacy bootconsole [uart0] disabled
Dec 16 12:25:59.138293 kernel: thunder_xcv, ver 1.0
Dec 16 12:25:59.138314 kernel: thunder_bgx, ver 1.0
Dec 16 12:25:59.138332 kernel: nicpf, ver 1.0
Dec 16 12:25:59.138350 kernel: nicvf, ver 1.0
Dec 16 12:25:59.138551 kernel: rtc-efi rtc-efi.0: registered as rtc0
Dec 16 12:25:59.138735 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-12-16T12:25:58 UTC (1765887958)
Dec 16 12:25:59.138759 kernel: hid: raw HID events driver (C) Jiri Kosina
Dec 16 12:25:59.138778 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 (0,80000003) counters available
Dec 16 12:25:59.138796 kernel: watchdog: NMI not fully supported
Dec 16 12:25:59.138814 kernel: NET: Registered PF_INET6 protocol family
Dec 16 12:25:59.138831 kernel: watchdog: Hard watchdog permanently disabled
Dec 16 12:25:59.138849 kernel: Segment Routing with IPv6
Dec 16 12:25:59.138867 kernel: In-situ OAM (IOAM) with IPv6
Dec 16 12:25:59.138884 kernel: NET: Registered PF_PACKET protocol family
Dec 16 12:25:59.138907 kernel: Key type dns_resolver registered
Dec 16 12:25:59.138924 kernel: registered taskstats version 1
Dec 16 12:25:59.138942 kernel: Loading compiled-in X.509 certificates
Dec 16 12:25:59.138960 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.61-flatcar: 92f3a94fb747a7ba7cbcfde1535be91b86f9429a'
Dec 16 12:25:59.138978 kernel: Demotion targets for Node 0: null
Dec 16 12:25:59.138996 kernel: Key type .fscrypt registered
Dec 16 12:25:59.139013 kernel: Key type fscrypt-provisioning registered
Dec 16 12:25:59.139031 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 16 12:25:59.139049 kernel: ima: Allocated hash algorithm: sha1
Dec 16 12:25:59.139071 kernel: ima: No architecture policies found
Dec 16 12:25:59.139089 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Dec 16 12:25:59.139106 kernel: clk: Disabling unused clocks
Dec 16 12:25:59.139124 kernel: PM: genpd: Disabling unused power domains
Dec 16 12:25:59.139142 kernel: Warning: unable to open an initial console.
Dec 16 12:25:59.139160 kernel: Freeing unused kernel memory: 39552K
Dec 16 12:25:59.139178 kernel: Run /init as init process
Dec 16 12:25:59.139195 kernel: with arguments:
Dec 16 12:25:59.139213 kernel: /init
Dec 16 12:25:59.139234 kernel: with environment:
Dec 16 12:25:59.139251 kernel: HOME=/
Dec 16 12:25:59.139290 kernel: TERM=linux
Dec 16 12:25:59.139312 systemd[1]: Successfully made /usr/ read-only.
Dec 16 12:25:59.139336 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Dec 16 12:25:59.139356 systemd[1]: Detected virtualization amazon.
Dec 16 12:25:59.139375 systemd[1]: Detected architecture arm64.
Dec 16 12:25:59.139399 systemd[1]: Running in initrd.
Dec 16 12:25:59.139419 systemd[1]: No hostname configured, using default hostname.
Dec 16 12:25:59.139438 systemd[1]: Hostname set to .
Dec 16 12:25:59.139457 systemd[1]: Initializing machine ID from VM UUID.
Dec 16 12:25:59.139476 systemd[1]: Queued start job for default target initrd.target.
Dec 16 12:25:59.139495 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 16 12:25:59.139515 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 16 12:25:59.139535 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 16 12:25:59.139559 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 16 12:25:59.139579 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 16 12:25:59.139600 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 16 12:25:59.139622 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Dec 16 12:25:59.139641 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Dec 16 12:25:59.139661 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 16 12:25:59.139680 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 16 12:25:59.139704 systemd[1]: Reached target paths.target - Path Units.
Dec 16 12:25:59.139723 systemd[1]: Reached target slices.target - Slice Units.
Dec 16 12:25:59.139742 systemd[1]: Reached target swap.target - Swaps.
Dec 16 12:25:59.139762 systemd[1]: Reached target timers.target - Timer Units.
Dec 16 12:25:59.139781 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 16 12:25:59.139801 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 16 12:25:59.139821 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 16 12:25:59.139840 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Dec 16 12:25:59.139860 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 16 12:25:59.139883 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 16 12:25:59.139903 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 16 12:25:59.139922 systemd[1]: Reached target sockets.target - Socket Units.
Dec 16 12:25:59.139941 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Dec 16 12:25:59.139961 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 16 12:25:59.139980 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Dec 16 12:25:59.140000 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Dec 16 12:25:59.140019 systemd[1]: Starting systemd-fsck-usr.service...
Dec 16 12:25:59.140043 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 16 12:25:59.140063 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 16 12:25:59.140082 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 12:25:59.140102 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Dec 16 12:25:59.140122 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 16 12:25:59.140146 systemd[1]: Finished systemd-fsck-usr.service.
Dec 16 12:25:59.140166 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 16 12:25:59.140218 systemd-journald[259]: Collecting audit messages is disabled.
Dec 16 12:25:59.140277 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 16 12:25:59.140305 kernel: Bridge firewalling registered
Dec 16 12:25:59.140325 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 12:25:59.140345 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 16 12:25:59.140365 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 16 12:25:59.140385 systemd-journald[259]: Journal started
Dec 16 12:25:59.140420 systemd-journald[259]: Runtime Journal (/run/log/journal/ec2f723e98e85b3cb3d91864fa6050ca) is 8M, max 75.3M, 67.3M free.
Dec 16 12:25:59.087730 systemd-modules-load[260]: Inserted module 'overlay'
Dec 16 12:25:59.147084 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 16 12:25:59.125087 systemd-modules-load[260]: Inserted module 'br_netfilter'
Dec 16 12:25:59.151732 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 16 12:25:59.164621 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 16 12:25:59.170700 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 16 12:25:59.187870 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 16 12:25:59.221288 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 16 12:25:59.229896 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 16 12:25:59.237190 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 16 12:25:59.242772 systemd-tmpfiles[277]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Dec 16 12:25:59.244170 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Dec 16 12:25:59.262881 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 16 12:25:59.273507 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 16 12:25:59.296416 dracut-cmdline[298]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=361f5baddf90aee3bc7ee7e9be879bc0cc94314f224faa1e2791d9b44cd3ec52
Dec 16 12:25:59.373525 systemd-resolved[300]: Positive Trust Anchors:
Dec 16 12:25:59.373552 systemd-resolved[300]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 16 12:25:59.373612 systemd-resolved[300]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 16 12:25:59.476304 kernel: SCSI subsystem initialized
Dec 16 12:25:59.483293 kernel: Loading iSCSI transport class v2.0-870.
Dec 16 12:25:59.498474 kernel: iscsi: registered transport (tcp)
Dec 16 12:25:59.520435 kernel: iscsi: registered transport (qla4xxx)
Dec 16 12:25:59.520518 kernel: QLogic iSCSI HBA Driver
Dec 16 12:25:59.554452 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 16 12:25:59.583188 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 16 12:25:59.594835 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 16 12:25:59.661305 kernel: random: crng init done
Dec 16 12:25:59.661657 systemd-resolved[300]: Defaulting to hostname 'linux'.
Dec 16 12:25:59.665309 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 16 12:25:59.670245 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 16 12:25:59.699513 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Dec 16 12:25:59.706433 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Dec 16 12:25:59.803305 kernel: raid6: neonx8 gen() 6503 MB/s
Dec 16 12:25:59.820295 kernel: raid6: neonx4 gen() 6493 MB/s
Dec 16 12:25:59.837296 kernel: raid6: neonx2 gen() 5397 MB/s
Dec 16 12:25:59.854295 kernel: raid6: neonx1 gen() 3930 MB/s
Dec 16 12:25:59.871296 kernel: raid6: int64x8 gen() 3632 MB/s
Dec 16 12:25:59.888295 kernel: raid6: int64x4 gen() 3681 MB/s
Dec 16 12:25:59.905294 kernel: raid6: int64x2 gen() 3568 MB/s
Dec 16 12:25:59.923353 kernel: raid6: int64x1 gen() 2772 MB/s
Dec 16 12:25:59.923391 kernel: raid6: using algorithm neonx8 gen() 6503 MB/s
Dec 16 12:25:59.942366 kernel: raid6: .... xor() 4713 MB/s, rmw enabled
Dec 16 12:25:59.942409 kernel: raid6: using neon recovery algorithm
Dec 16 12:25:59.951089 kernel: xor: measuring software checksum speed
Dec 16 12:25:59.951147 kernel: 8regs : 12609 MB/sec
Dec 16 12:25:59.953576 kernel: 32regs : 12068 MB/sec
Dec 16 12:25:59.953618 kernel: arm64_neon : 9067 MB/sec
Dec 16 12:25:59.953642 kernel: xor: using function: 8regs (12609 MB/sec)
Dec 16 12:26:00.045306 kernel: Btrfs loaded, zoned=no, fsverity=no
Dec 16 12:26:00.056690 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Dec 16 12:26:00.063737 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 16 12:26:00.113660 systemd-udevd[508]: Using default interface naming scheme 'v255'.
Dec 16 12:26:00.124387 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 16 12:26:00.141463 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Dec 16 12:26:00.193175 dracut-pre-trigger[516]: rd.md=0: removing MD RAID activation
Dec 16 12:26:00.238510 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 16 12:26:00.245498 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 16 12:26:00.375794 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 16 12:26:00.382851 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Dec 16 12:26:00.549590 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Dec 16 12:26:00.549664 kernel: nvme nvme0: pci function 0000:00:04.0
Dec 16 12:26:00.549968 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Dec 16 12:26:00.553606 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Dec 16 12:26:00.559666 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 16 12:26:00.565661 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Dec 16 12:26:00.560138 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 12:26:00.574220 kernel: ena 0000:00:05.0: ENA device version: 0.10
Dec 16 12:26:00.577579 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Dec 16 12:26:00.577866 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 16 12:26:00.566168 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 12:26:00.575009 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 12:26:00.589962 kernel: GPT:9289727 != 33554431
Dec 16 12:26:00.590016 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 16 12:26:00.590042 kernel: GPT:9289727 != 33554431
Dec 16 12:26:00.590066 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 16 12:26:00.590091 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Dec 16 12:26:00.584716 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Dec 16 12:26:00.598305 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80110000, mac addr 06:35:bc:b4:0f:c1
Dec 16 12:26:00.601116 (udev-worker)[579]: Network interface NamePolicy= disabled on kernel command line.
Dec 16 12:26:00.637461 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 12:26:00.656990 kernel: nvme nvme0: using unchecked data buffer
Dec 16 12:26:00.811288 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Dec 16 12:26:00.838134 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Dec 16 12:26:00.844439 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Dec 16 12:26:00.885666 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Dec 16 12:26:00.888589 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Dec 16 12:26:00.915891 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Dec 16 12:26:00.916142 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 16 12:26:00.916908 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 16 12:26:00.917740 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 16 12:26:00.925477 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Dec 16 12:26:00.958803 disk-uuid[686]: Primary Header is updated.
Dec 16 12:26:00.958803 disk-uuid[686]: Secondary Entries is updated.
Dec 16 12:26:00.958803 disk-uuid[686]: Secondary Header is updated.
Dec 16 12:26:00.930337 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Dec 16 12:26:00.974315 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Dec 16 12:26:00.987612 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Dec 16 12:26:01.989304 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Dec 16 12:26:01.990720 disk-uuid[688]: The operation has completed successfully.
Dec 16 12:26:02.177391 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 16 12:26:02.177613 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Dec 16 12:26:02.264343 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Dec 16 12:26:02.301615 sh[955]: Success
Dec 16 12:26:02.324100 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 16 12:26:02.324175 kernel: device-mapper: uevent: version 1.0.3
Dec 16 12:26:02.327305 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Dec 16 12:26:02.339456 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Dec 16 12:26:02.431960 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Dec 16 12:26:02.443928 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Dec 16 12:26:02.455334 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Dec 16 12:26:02.489312 kernel: BTRFS: device fsid 6d6d314d-b8a1-4727-8a34-8525e276a248 devid 1 transid 38 /dev/mapper/usr (254:0) scanned by mount (978)
Dec 16 12:26:02.493478 kernel: BTRFS info (device dm-0): first mount of filesystem 6d6d314d-b8a1-4727-8a34-8525e276a248
Dec 16 12:26:02.493533 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Dec 16 12:26:02.603145 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Dec 16 12:26:02.603217 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Dec 16 12:26:02.603253 kernel: BTRFS info (device dm-0): enabling free space tree
Dec 16 12:26:02.625022 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Dec 16 12:26:02.629603 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Dec 16 12:26:02.634695 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Dec 16 12:26:02.640502 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Dec 16 12:26:02.651356 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Dec 16 12:26:02.696353 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1001)
Dec 16 12:26:02.702038 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 4b8ce5a5-a2aa-4c44-bc9b-80e30d06d25f
Dec 16 12:26:02.702121 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Dec 16 12:26:02.719610 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Dec 16 12:26:02.719682 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Dec 16 12:26:02.728344 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 4b8ce5a5-a2aa-4c44-bc9b-80e30d06d25f
Dec 16 12:26:02.730249 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Dec 16 12:26:02.740036 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Dec 16 12:26:02.842214 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 16 12:26:02.850968 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 16 12:26:02.929639 systemd-networkd[1147]: lo: Link UP
Dec 16 12:26:02.929652 systemd-networkd[1147]: lo: Gained carrier
Dec 16 12:26:02.935638 systemd-networkd[1147]: Enumeration completed
Dec 16 12:26:02.935797 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 16 12:26:02.938702 systemd[1]: Reached target network.target - Network.
Dec 16 12:26:02.948169 systemd-networkd[1147]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 16 12:26:02.948176 systemd-networkd[1147]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 16 12:26:02.961705 systemd-networkd[1147]: eth0: Link UP
Dec 16 12:26:02.961724 systemd-networkd[1147]: eth0: Gained carrier
Dec 16 12:26:02.961747 systemd-networkd[1147]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 16 12:26:02.977335 systemd-networkd[1147]: eth0: DHCPv4 address 172.31.21.37/20, gateway 172.31.16.1 acquired from 172.31.16.1
Dec 16 12:26:03.318829 ignition[1064]: Ignition 2.22.0
Dec 16 12:26:03.319379 ignition[1064]: Stage: fetch-offline
Dec 16 12:26:03.320238 ignition[1064]: no configs at "/usr/lib/ignition/base.d"
Dec 16 12:26:03.320283 ignition[1064]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 16 12:26:03.320666 ignition[1064]: Ignition finished successfully
Dec 16 12:26:03.331120 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 16 12:26:03.341417 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Dec 16 12:26:03.386453 ignition[1158]: Ignition 2.22.0
Dec 16 12:26:03.386484 ignition[1158]: Stage: fetch
Dec 16 12:26:03.386984 ignition[1158]: no configs at "/usr/lib/ignition/base.d"
Dec 16 12:26:03.387009 ignition[1158]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 16 12:26:03.387132 ignition[1158]: PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 16 12:26:03.420727 ignition[1158]: PUT result: OK
Dec 16 12:26:03.424485 ignition[1158]: parsed url from cmdline: ""
Dec 16 12:26:03.424528 ignition[1158]: no config URL provided
Dec 16 12:26:03.424544 ignition[1158]: reading system config file "/usr/lib/ignition/user.ign"
Dec 16 12:26:03.424570 ignition[1158]: no config at "/usr/lib/ignition/user.ign"
Dec 16 12:26:03.424640 ignition[1158]: PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 16 12:26:03.429000 ignition[1158]: PUT result: OK
Dec 16 12:26:03.429104 ignition[1158]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Dec 16 12:26:03.431691 ignition[1158]: GET result: OK
Dec 16 12:26:03.431850 ignition[1158]: parsing config with SHA512: 610a6481bae75734188382e1ab41ce8dbc9555443b5e612a39316bcb24e71b3be6c14f9e69c84aa88a70a11ee89716929ca6fa7e0935a131d1f100593c0df4ad
Dec 16 12:26:03.450628 unknown[1158]: fetched base config from "system"
Dec 16 12:26:03.451707 unknown[1158]: fetched base config from "system"
Dec 16 12:26:03.452407 ignition[1158]: fetch: fetch complete
Dec 16 12:26:03.451721 unknown[1158]: fetched user config from "aws"
Dec 16 12:26:03.452419 ignition[1158]: fetch: fetch passed
Dec 16 12:26:03.452507 ignition[1158]: Ignition finished successfully
Dec 16 12:26:03.460498 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Dec 16 12:26:03.469018 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Dec 16 12:26:03.522379 ignition[1165]: Ignition 2.22.0
Dec 16 12:26:03.522411 ignition[1165]: Stage: kargs
Dec 16 12:26:03.522999 ignition[1165]: no configs at "/usr/lib/ignition/base.d"
Dec 16 12:26:03.523765 ignition[1165]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 16 12:26:03.525009 ignition[1165]: PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 16 12:26:03.527929 ignition[1165]: PUT result: OK
Dec 16 12:26:03.542049 ignition[1165]: kargs: kargs passed
Dec 16 12:26:03.542175 ignition[1165]: Ignition finished successfully
Dec 16 12:26:03.548170 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Dec 16 12:26:03.554433 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Dec 16 12:26:03.601307 ignition[1172]: Ignition 2.22.0
Dec 16 12:26:03.601570 ignition[1172]: Stage: disks
Dec 16 12:26:03.602158 ignition[1172]: no configs at "/usr/lib/ignition/base.d"
Dec 16 12:26:03.602181 ignition[1172]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 16 12:26:03.602653 ignition[1172]: PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 16 12:26:03.610173 ignition[1172]: PUT result: OK
Dec 16 12:26:03.617414 ignition[1172]: disks: disks passed
Dec 16 12:26:03.617704 ignition[1172]: Ignition finished successfully
Dec 16 12:26:03.623887 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Dec 16 12:26:03.628757 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Dec 16 12:26:03.633962 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Dec 16 12:26:03.636796 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 16 12:26:03.644134 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 16 12:26:03.646479 systemd[1]: Reached target basic.target - Basic System.
Dec 16 12:26:03.652940 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Dec 16 12:26:03.728397 systemd-fsck[1180]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Dec 16 12:26:03.735081 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Dec 16 12:26:03.742426 systemd[1]: Mounting sysroot.mount - /sysroot...
Dec 16 12:26:03.866396 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 895d7845-d0e8-43ae-a778-7804b473b868 r/w with ordered data mode. Quota mode: none.
Dec 16 12:26:03.867663 systemd[1]: Mounted sysroot.mount - /sysroot.
Dec 16 12:26:03.871736 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Dec 16 12:26:03.876456 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 16 12:26:03.888884 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Dec 16 12:26:03.893510 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Dec 16 12:26:03.896368 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 16 12:26:03.896423 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 16 12:26:03.931315 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1199)
Dec 16 12:26:03.928043 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Dec 16 12:26:03.938073 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 4b8ce5a5-a2aa-4c44-bc9b-80e30d06d25f
Dec 16 12:26:03.938137 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Dec 16 12:26:03.938311 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Dec 16 12:26:03.947005 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Dec 16 12:26:03.947080 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Dec 16 12:26:03.951409 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 16 12:26:04.023439 systemd-networkd[1147]: eth0: Gained IPv6LL
Dec 16 12:26:04.267128 initrd-setup-root[1223]: cut: /sysroot/etc/passwd: No such file or directory
Dec 16 12:26:04.287234 initrd-setup-root[1230]: cut: /sysroot/etc/group: No such file or directory
Dec 16 12:26:04.304287 initrd-setup-root[1237]: cut: /sysroot/etc/shadow: No such file or directory
Dec 16 12:26:04.313456 initrd-setup-root[1244]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 16 12:26:04.585593 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Dec 16 12:26:04.590783 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Dec 16 12:26:04.600989 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Dec 16 12:26:04.624488 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Dec 16 12:26:04.629304 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 4b8ce5a5-a2aa-4c44-bc9b-80e30d06d25f
Dec 16 12:26:04.654077 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Dec 16 12:26:04.685495 ignition[1313]: INFO : Ignition 2.22.0
Dec 16 12:26:04.685495 ignition[1313]: INFO : Stage: mount
Dec 16 12:26:04.689231 ignition[1313]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 16 12:26:04.689231 ignition[1313]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 16 12:26:04.689231 ignition[1313]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 16 12:26:04.698384 ignition[1313]: INFO : PUT result: OK
Dec 16 12:26:04.700225 ignition[1313]: INFO : mount: mount passed
Dec 16 12:26:04.702194 ignition[1313]: INFO : Ignition finished successfully
Dec 16 12:26:04.707450 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Dec 16 12:26:04.716408 systemd[1]: Starting ignition-files.service - Ignition (files)...
Dec 16 12:26:04.870469 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 16 12:26:04.921305 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1323)
Dec 16 12:26:04.926294 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 4b8ce5a5-a2aa-4c44-bc9b-80e30d06d25f
Dec 16 12:26:04.926522 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Dec 16 12:26:04.933423 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Dec 16 12:26:04.933519 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Dec 16 12:26:04.937018 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 16 12:26:04.988943 ignition[1340]: INFO : Ignition 2.22.0
Dec 16 12:26:04.988943 ignition[1340]: INFO : Stage: files
Dec 16 12:26:04.992630 ignition[1340]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 16 12:26:04.992630 ignition[1340]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 16 12:26:04.992630 ignition[1340]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 16 12:26:05.001268 ignition[1340]: INFO : PUT result: OK
Dec 16 12:26:05.006425 ignition[1340]: DEBUG : files: compiled without relabeling support, skipping
Dec 16 12:26:05.009040 ignition[1340]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 16 12:26:05.009040 ignition[1340]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 16 12:26:05.018599 ignition[1340]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 16 12:26:05.022187 ignition[1340]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 16 12:26:05.025739 unknown[1340]: wrote ssh authorized keys file for user: core
Dec 16 12:26:05.028178 ignition[1340]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 16 12:26:05.041578 ignition[1340]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Dec 16 12:26:05.046246 ignition[1340]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Dec 16 12:26:05.116063 ignition[1340]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Dec 16 12:26:05.257001 ignition[1340]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Dec 16 12:26:05.257001 ignition[1340]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Dec 16 12:26:05.257001 ignition[1340]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Dec 16 12:26:05.495877 ignition[1340]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Dec 16 12:26:05.626073 ignition[1340]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Dec 16 12:26:05.626073 ignition[1340]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Dec 16 12:26:05.634567 ignition[1340]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Dec 16 12:26:05.634567 ignition[1340]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Dec 16 12:26:05.634567 ignition[1340]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Dec 16 12:26:05.634567 ignition[1340]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 16 12:26:05.634567 ignition[1340]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 16 12:26:05.634567 ignition[1340]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 16 12:26:05.634567 ignition[1340]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 16 12:26:05.662052 ignition[1340]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 16 12:26:05.662052 ignition[1340]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 16 12:26:05.662052 ignition[1340]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Dec 16 12:26:05.675596 ignition[1340]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Dec 16 12:26:05.675596 ignition[1340]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Dec 16 12:26:05.675596 ignition[1340]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1
Dec 16 12:26:06.078086 ignition[1340]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Dec 16 12:26:06.426625 ignition[1340]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Dec 16 12:26:06.431330 ignition[1340]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Dec 16 12:26:06.431330 ignition[1340]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 16 12:26:06.442323 ignition[1340]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 16 12:26:06.442323 ignition[1340]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Dec 16 12:26:06.442323 ignition[1340]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Dec 16 12:26:06.455790 ignition[1340]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Dec 16 12:26:06.455790 ignition[1340]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 16 12:26:06.455790 ignition[1340]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 16 12:26:06.455790 ignition[1340]: INFO : files: files passed
Dec 16 12:26:06.455790 ignition[1340]: INFO : Ignition finished successfully
Dec 16 12:26:06.453505 systemd[1]: Finished ignition-files.service - Ignition (files).
Dec 16 12:26:06.463413 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Dec 16 12:26:06.476945 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Dec 16 12:26:06.495786 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 16 12:26:06.497346 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Dec 16 12:26:06.537125 initrd-setup-root-after-ignition[1369]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 16 12:26:06.541471 initrd-setup-root-after-ignition[1373]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 16 12:26:06.545170 initrd-setup-root-after-ignition[1369]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Dec 16 12:26:06.551037 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 16 12:26:06.552032 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Dec 16 12:26:06.558109 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Dec 16 12:26:06.657475 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 16 12:26:06.658680 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Dec 16 12:26:06.666218 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Dec 16 12:26:06.669904 systemd[1]: Reached target initrd.target - Initrd Default Target.
Dec 16 12:26:06.674633 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Dec 16 12:26:06.676348 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Dec 16 12:26:06.722416 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 16 12:26:06.728426 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Dec 16 12:26:06.769048 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Dec 16 12:26:06.774363 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 16 12:26:06.779701 systemd[1]: Stopped target timers.target - Timer Units.
Dec 16 12:26:06.782356 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 16 12:26:06.782591 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 16 12:26:06.791242 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Dec 16 12:26:06.794244 systemd[1]: Stopped target basic.target - Basic System.
Dec 16 12:26:06.801003 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Dec 16 12:26:06.806022 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 16 12:26:06.808952 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Dec 16 12:26:06.816350 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Dec 16 12:26:06.819595 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Dec 16 12:26:06.826749 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 16 12:26:06.829826 systemd[1]: Stopped target sysinit.target - System Initialization.
Dec 16 12:26:06.837240 systemd[1]: Stopped target local-fs.target - Local File Systems.
Dec 16 12:26:06.843790 systemd[1]: Stopped target swap.target - Swaps.
Dec 16 12:26:06.847404 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 16 12:26:06.848086 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Dec 16 12:26:06.854732 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Dec 16 12:26:06.855131 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 16 12:26:06.862361 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Dec 16 12:26:06.864576 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 16 12:26:06.867545 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 16 12:26:06.867770 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Dec 16 12:26:06.876184 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 16 12:26:06.876497 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 16 12:26:06.884077 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 16 12:26:06.884304 systemd[1]: Stopped ignition-files.service - Ignition (files).
Dec 16 12:26:06.898198 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Dec 16 12:26:06.905607 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Dec 16 12:26:06.912352 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 16 12:26:06.919310 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 16 12:26:06.927957 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 16 12:26:06.932752 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 16 12:26:06.953928 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 16 12:26:06.954118 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Dec 16 12:26:06.972980 ignition[1393]: INFO : Ignition 2.22.0
Dec 16 12:26:06.972980 ignition[1393]: INFO : Stage: umount
Dec 16 12:26:06.984927 ignition[1393]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 16 12:26:06.984927 ignition[1393]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 16 12:26:06.984927 ignition[1393]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 16 12:26:06.984927 ignition[1393]: INFO : PUT result: OK
Dec 16 12:26:06.983821 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 16 12:26:07.003142 ignition[1393]: INFO : umount: umount passed
Dec 16 12:26:07.003142 ignition[1393]: INFO : Ignition finished successfully
Dec 16 12:26:07.006093 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 16 12:26:07.006317 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Dec 16 12:26:07.007698 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 16 12:26:07.007789 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Dec 16 12:26:07.014512 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 16 12:26:07.014616 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Dec 16 12:26:07.017007 systemd[1]: ignition-fetch.service: Deactivated successfully.
Dec 16 12:26:07.018292 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Dec 16 12:26:07.021400 systemd[1]: Stopped target network.target - Network.
Dec 16 12:26:07.025487 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 16 12:26:07.026311 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 16 12:26:07.030150 systemd[1]: Stopped target paths.target - Path Units.
Dec 16 12:26:07.034368 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 16 12:26:07.039977 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 16 12:26:07.040118 systemd[1]: Stopped target slices.target - Slice Units.
Dec 16 12:26:07.046563 systemd[1]: Stopped target sockets.target - Socket Units.
Dec 16 12:26:07.049372 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 16 12:26:07.049452 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Dec 16 12:26:07.052976 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 16 12:26:07.053046 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 16 12:26:07.055360 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 16 12:26:07.055455 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Dec 16 12:26:07.059752 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Dec 16 12:26:07.059851 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Dec 16 12:26:07.064713 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Dec 16 12:26:07.069945 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Dec 16 12:26:07.083390 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 16 12:26:07.083621 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Dec 16 12:26:07.102931 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Dec 16 12:26:07.110068 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 16 12:26:07.115703 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Dec 16 12:26:07.144299 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Dec 16 12:26:07.145001 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 16 12:26:07.145204 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Dec 16 12:26:07.156745 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Dec 16 12:26:07.161886 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 16 12:26:07.161970 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Dec 16 12:26:07.164938 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 16 12:26:07.165034 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Dec 16 12:26:07.177981 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Dec 16 12:26:07.181764 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 16 12:26:07.181872 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 16 12:26:07.188062 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 16 12:26:07.188158 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Dec 16 12:26:07.195962 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 16 12:26:07.196065 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Dec 16 12:26:07.199203 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 16 12:26:07.199357 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 16 12:26:07.210524 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 16 12:26:07.225286 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 16 12:26:07.225583 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Dec 16 12:26:07.243745 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 16 12:26:07.244248 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 16 12:26:07.252748 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 16 12:26:07.252864 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Dec 16 12:26:07.260444 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 16 12:26:07.260519 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 16 12:26:07.263927 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 16 12:26:07.264015 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Dec 16 12:26:07.270974 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 16 12:26:07.271066 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Dec 16 12:26:07.278079 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 16 12:26:07.278176 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 16 12:26:07.287908 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Dec 16 12:26:07.296915 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Dec 16 12:26:07.297051 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Dec 16 12:26:07.301956 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 16 12:26:07.304179 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 16 12:26:07.310510 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Dec 16 12:26:07.310610 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 16 12:26:07.325530 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 16 12:26:07.325686 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 16 12:26:07.336313 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 16 12:26:07.336422 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 12:26:07.344147 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Dec 16 12:26:07.344255 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully.
Dec 16 12:26:07.344379 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Dec 16 12:26:07.344463 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Dec 16 12:26:07.345244 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 16 12:26:07.345466 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Dec 16 12:26:07.348844 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 16 12:26:07.349004 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Dec 16 12:26:07.355688 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Dec 16 12:26:07.381129 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Dec 16 12:26:07.426774 systemd[1]: Switching root.
Dec 16 12:26:07.484751 systemd-journald[259]: Journal stopped
Dec 16 12:26:09.932534 systemd-journald[259]: Received SIGTERM from PID 1 (systemd).
Dec 16 12:26:09.932670 kernel: SELinux: policy capability network_peer_controls=1
Dec 16 12:26:09.932718 kernel: SELinux: policy capability open_perms=1
Dec 16 12:26:09.932747 kernel: SELinux: policy capability extended_socket_class=1
Dec 16 12:26:09.932776 kernel: SELinux: policy capability always_check_network=0
Dec 16 12:26:09.932805 kernel: SELinux: policy capability cgroup_seclabel=1
Dec 16 12:26:09.932833 kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 16 12:26:09.932862 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Dec 16 12:26:09.932892 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Dec 16 12:26:09.932924 kernel: SELinux: policy capability userspace_initial_context=0
Dec 16 12:26:09.932953 kernel: audit: type=1403 audit(1765887967.987:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 16 12:26:09.932992 systemd[1]: Successfully loaded SELinux policy in 100.626ms.
Dec 16 12:26:09.933034 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 14.761ms.
Dec 16 12:26:09.933067 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Dec 16 12:26:09.933099 systemd[1]: Detected virtualization amazon.
Dec 16 12:26:09.933128 systemd[1]: Detected architecture arm64.
Dec 16 12:26:09.933155 systemd[1]: Detected first boot.
Dec 16 12:26:09.933185 systemd[1]: Initializing machine ID from VM UUID.
Dec 16 12:26:09.933218 zram_generator::config[1438]: No configuration found.
Dec 16 12:26:09.939275 kernel: NET: Registered PF_VSOCK protocol family
Dec 16 12:26:09.939364 systemd[1]: Populated /etc with preset unit settings.
Dec 16 12:26:09.939400 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Dec 16 12:26:09.939431 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 16 12:26:09.939473 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Dec 16 12:26:09.939503 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 16 12:26:09.939533 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Dec 16 12:26:09.939570 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Dec 16 12:26:09.939607 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Dec 16 12:26:09.939634 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Dec 16 12:26:09.939667 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Dec 16 12:26:09.939698 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Dec 16 12:26:09.939726 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Dec 16 12:26:09.939756 systemd[1]: Created slice user.slice - User and Session Slice.
Dec 16 12:26:09.939785 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 16 12:26:09.939815 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 16 12:26:09.939848 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Dec 16 12:26:09.939880 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Dec 16 12:26:09.939910 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Dec 16 12:26:09.939942 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 16 12:26:09.939988 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Dec 16 12:26:09.940021 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 16 12:26:09.940052 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 16 12:26:09.940084 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Dec 16 12:26:09.940112 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Dec 16 12:26:09.940143 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Dec 16 12:26:09.940173 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Dec 16 12:26:09.940201 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 16 12:26:09.940230 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 16 12:26:09.940280 systemd[1]: Reached target slices.target - Slice Units.
Dec 16 12:26:09.940315 systemd[1]: Reached target swap.target - Swaps.
Dec 16 12:26:09.940344 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Dec 16 12:26:09.940372 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Dec 16 12:26:09.940405 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Dec 16 12:26:09.940435 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 16 12:26:09.940468 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 16 12:26:09.940496 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 16 12:26:09.940523 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Dec 16 12:26:09.940551 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Dec 16 12:26:09.940580 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Dec 16 12:26:09.940609 systemd[1]: Mounting media.mount - External Media Directory...
Dec 16 12:26:09.940639 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Dec 16 12:26:09.940672 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Dec 16 12:26:09.940701 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Dec 16 12:26:09.940732 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 16 12:26:09.940764 systemd[1]: Reached target machines.target - Containers.
Dec 16 12:26:09.940793 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Dec 16 12:26:09.940821 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 16 12:26:09.940848 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 16 12:26:09.940876 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Dec 16 12:26:09.940908 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 16 12:26:09.940949 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 16 12:26:09.940982 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 16 12:26:09.941013 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Dec 16 12:26:09.941044 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 16 12:26:09.941073 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 16 12:26:09.941121 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 16 12:26:09.941151 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Dec 16 12:26:09.941185 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Dec 16 12:26:09.941214 systemd[1]: Stopped systemd-fsck-usr.service.
Dec 16 12:26:09.941243 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 16 12:26:09.955341 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 16 12:26:09.955386 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 16 12:26:09.955416 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 16 12:26:09.955451 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Dec 16 12:26:09.955483 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Dec 16 12:26:09.955522 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 16 12:26:09.955556 systemd[1]: verity-setup.service: Deactivated successfully.
Dec 16 12:26:09.955586 systemd[1]: Stopped verity-setup.service.
Dec 16 12:26:09.955619 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Dec 16 12:26:09.955650 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Dec 16 12:26:09.955680 systemd[1]: Mounted media.mount - External Media Directory.
Dec 16 12:26:09.955708 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Dec 16 12:26:09.955736 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Dec 16 12:26:09.955764 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Dec 16 12:26:09.955793 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 16 12:26:09.955821 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 16 12:26:09.955853 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Dec 16 12:26:09.955882 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 16 12:26:09.955910 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 16 12:26:09.955940 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 16 12:26:09.955968 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 16 12:26:09.955997 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 16 12:26:09.956025 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Dec 16 12:26:09.956054 kernel: loop: module loaded
Dec 16 12:26:09.956082 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 16 12:26:09.956116 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 16 12:26:09.956144 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 16 12:26:09.956228 systemd-journald[1521]: Collecting audit messages is disabled.
Dec 16 12:26:09.956307 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 16 12:26:09.956340 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 16 12:26:09.956368 kernel: fuse: init (API version 7.41)
Dec 16 12:26:09.956396 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Dec 16 12:26:09.956425 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Dec 16 12:26:09.956460 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Dec 16 12:26:09.956488 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 16 12:26:09.956515 systemd-journald[1521]: Journal started
Dec 16 12:26:09.956560 systemd-journald[1521]: Runtime Journal (/run/log/journal/ec2f723e98e85b3cb3d91864fa6050ca) is 8M, max 75.3M, 67.3M free.
Dec 16 12:26:09.959293 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Dec 16 12:26:09.295663 systemd[1]: Queued start job for default target multi-user.target.
Dec 16 12:26:09.310062 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Dec 16 12:26:09.310907 systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 16 12:26:09.964229 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 16 12:26:10.005998 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 16 12:26:10.009446 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 16 12:26:10.009491 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 16 12:26:10.013860 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Dec 16 12:26:10.019326 kernel: ACPI: bus type drm_connector registered
Dec 16 12:26:10.020114 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Dec 16 12:26:10.022622 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 16 12:26:10.028719 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Dec 16 12:26:10.040570 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Dec 16 12:26:10.043442 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 16 12:26:10.046681 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Dec 16 12:26:10.049340 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 16 12:26:10.058710 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Dec 16 12:26:10.062894 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 16 12:26:10.065435 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 16 12:26:10.069231 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 16 12:26:10.105611 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Dec 16 12:26:10.121078 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Dec 16 12:26:10.125194 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Dec 16 12:26:10.133704 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Dec 16 12:26:10.149413 systemd-tmpfiles[1542]: ACLs are not supported, ignoring.
Dec 16 12:26:10.149454 systemd-tmpfiles[1542]: ACLs are not supported, ignoring.
Dec 16 12:26:10.157245 kernel: loop0: detected capacity change from 0 to 100632
Dec 16 12:26:10.162403 systemd-journald[1521]: Time spent on flushing to /var/log/journal/ec2f723e98e85b3cb3d91864fa6050ca is 167.279ms for 936 entries.
Dec 16 12:26:10.162403 systemd-journald[1521]: System Journal (/var/log/journal/ec2f723e98e85b3cb3d91864fa6050ca) is 8M, max 195.6M, 187.6M free.
Dec 16 12:26:10.351213 systemd-journald[1521]: Received client request to flush runtime journal.
Dec 16 12:26:10.351377 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Dec 16 12:26:10.351429 kernel: loop1: detected capacity change from 0 to 207008
Dec 16 12:26:10.171467 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 16 12:26:10.180484 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Dec 16 12:26:10.316587 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Dec 16 12:26:10.335530 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Dec 16 12:26:10.342773 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 16 12:26:10.353199 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Dec 16 12:26:10.356623 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Dec 16 12:26:10.371111 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Dec 16 12:26:10.374466 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Dec 16 12:26:10.423288 kernel: loop2: detected capacity change from 0 to 61264
Dec 16 12:26:10.441574 systemd-tmpfiles[1587]: ACLs are not supported, ignoring.
Dec 16 12:26:10.442841 systemd-tmpfiles[1587]: ACLs are not supported, ignoring.
Dec 16 12:26:10.442985 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 16 12:26:10.457860 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 16 12:26:10.546667 kernel: loop3: detected capacity change from 0 to 119840
Dec 16 12:26:10.660300 kernel: loop4: detected capacity change from 0 to 100632
Dec 16 12:26:10.680363 kernel: loop5: detected capacity change from 0 to 207008
Dec 16 12:26:10.716337 kernel: loop6: detected capacity change from 0 to 61264
Dec 16 12:26:10.735358 kernel: loop7: detected capacity change from 0 to 119840
Dec 16 12:26:10.747475 (sd-merge)[1598]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Dec 16 12:26:10.751095 (sd-merge)[1598]: Merged extensions into '/usr'.
Dec 16 12:26:10.762666 systemd[1]: Reload requested from client PID 1574 ('systemd-sysext') (unit systemd-sysext.service)...
Dec 16 12:26:10.762875 systemd[1]: Reloading...
Dec 16 12:26:10.967728 zram_generator::config[1620]: No configuration found.
Dec 16 12:26:11.430192 systemd[1]: Reloading finished in 666 ms.
Dec 16 12:26:11.453255 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Dec 16 12:26:11.457419 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Dec 16 12:26:11.476447 systemd[1]: Starting ensure-sysext.service...
Dec 16 12:26:11.482555 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 16 12:26:11.489675 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 16 12:26:11.533590 systemd[1]: Reload requested from client PID 1676 ('systemctl') (unit ensure-sysext.service)...
Dec 16 12:26:11.533635 systemd[1]: Reloading...
Dec 16 12:26:11.546457 systemd-tmpfiles[1677]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Dec 16 12:26:11.546528 systemd-tmpfiles[1677]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Dec 16 12:26:11.547126 systemd-tmpfiles[1677]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Dec 16 12:26:11.547686 systemd-tmpfiles[1677]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Dec 16 12:26:11.559213 systemd-tmpfiles[1677]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Dec 16 12:26:11.560559 systemd-tmpfiles[1677]: ACLs are not supported, ignoring.
Dec 16 12:26:11.560705 systemd-tmpfiles[1677]: ACLs are not supported, ignoring.
Dec 16 12:26:11.578841 systemd-tmpfiles[1677]: Detected autofs mount point /boot during canonicalization of boot.
Dec 16 12:26:11.580309 systemd-tmpfiles[1677]: Skipping /boot
Dec 16 12:26:11.630008 systemd-tmpfiles[1677]: Detected autofs mount point /boot during canonicalization of boot.
Dec 16 12:26:11.630039 systemd-tmpfiles[1677]: Skipping /boot
Dec 16 12:26:11.670980 systemd-udevd[1678]: Using default interface naming scheme 'v255'.
Dec 16 12:26:11.702304 ldconfig[1567]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Dec 16 12:26:11.735347 zram_generator::config[1708]: No configuration found.
Dec 16 12:26:12.189713 (udev-worker)[1765]: Network interface NamePolicy= disabled on kernel command line.
Dec 16 12:26:12.303710 systemd[1]: Reloading finished in 769 ms.
Dec 16 12:26:12.337153 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 16 12:26:12.341222 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Dec 16 12:26:12.390831 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 16 12:26:12.433833 systemd[1]: Finished ensure-sysext.service.
Dec 16 12:26:12.440751 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Dec 16 12:26:12.451967 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Dec 16 12:26:12.460693 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Dec 16 12:26:12.463690 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 16 12:26:12.466624 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 16 12:26:12.475681 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 16 12:26:12.486642 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 16 12:26:12.515071 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 16 12:26:12.517955 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 16 12:26:12.518050 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 16 12:26:12.522567 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Dec 16 12:26:12.532188 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 16 12:26:12.540654 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 16 12:26:12.543119 systemd[1]: Reached target time-set.target - System Time Set.
Dec 16 12:26:12.550645 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Dec 16 12:26:12.603346 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Dec 16 12:26:12.612407 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 16 12:26:12.612797 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 16 12:26:12.623361 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 16 12:26:12.623757 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 16 12:26:12.666080 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 16 12:26:12.668611 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 16 12:26:12.672160 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 16 12:26:12.673748 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 16 12:26:12.676789 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 16 12:26:12.676906 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 16 12:26:12.703465 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Dec 16 12:26:12.716890 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Dec 16 12:26:12.720541 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Dec 16 12:26:12.728872 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Dec 16 12:26:12.732365 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 16 12:26:12.797699 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Dec 16 12:26:12.834055 augenrules[1928]: No rules
Dec 16 12:26:12.837951 systemd[1]: audit-rules.service: Deactivated successfully.
Dec 16 12:26:12.838442 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Dec 16 12:26:12.861831 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 12:26:13.013964 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Dec 16 12:26:13.017162 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Dec 16 12:26:13.084950 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 12:26:13.090556 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Dec 16 12:26:13.093861 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Dec 16 12:26:13.216035 systemd-networkd[1868]: lo: Link UP Dec 16 12:26:13.216565 systemd-networkd[1868]: lo: Gained carrier Dec 16 12:26:13.219713 systemd-networkd[1868]: Enumeration completed Dec 16 12:26:13.219954 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 16 12:26:13.220703 systemd-networkd[1868]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 16 12:26:13.220711 systemd-networkd[1868]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 16 12:26:13.221078 systemd-resolved[1870]: Positive Trust Anchors: Dec 16 12:26:13.221099 systemd-resolved[1870]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 16 12:26:13.221161 systemd-resolved[1870]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 16 12:26:13.229987 systemd-networkd[1868]: eth0: Link UP Dec 16 12:26:13.230479 systemd-networkd[1868]: eth0: Gained carrier Dec 16 12:26:13.230555 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Dec 16 12:26:13.236317 systemd-networkd[1868]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 16 12:26:13.242579 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 16 12:26:13.246466 systemd-resolved[1870]: Defaulting to hostname 'linux'. 
Dec 16 12:26:13.251164 systemd-networkd[1868]: eth0: DHCPv4 address 172.31.21.37/20, gateway 172.31.16.1 acquired from 172.31.16.1 Dec 16 12:26:13.251675 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 16 12:26:13.257426 systemd[1]: Reached target network.target - Network. Dec 16 12:26:13.259807 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 16 12:26:13.262630 systemd[1]: Reached target sysinit.target - System Initialization. Dec 16 12:26:13.266329 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 16 12:26:13.266878 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 16 12:26:13.267569 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 16 12:26:13.267862 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 16 12:26:13.268062 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 16 12:26:13.268850 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 16 12:26:13.268894 systemd[1]: Reached target paths.target - Path Units. Dec 16 12:26:13.269296 systemd[1]: Reached target timers.target - Timer Units. Dec 16 12:26:13.275075 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 16 12:26:13.288970 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 16 12:26:13.301688 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Dec 16 12:26:13.305278 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Dec 16 12:26:13.308123 systemd[1]: Reached target ssh-access.target - SSH Access Available. 
Dec 16 12:26:13.315482 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 16 12:26:13.318523 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Dec 16 12:26:13.324346 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Dec 16 12:26:13.327654 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 16 12:26:13.331173 systemd[1]: Reached target sockets.target - Socket Units. Dec 16 12:26:13.334168 systemd[1]: Reached target basic.target - Basic System. Dec 16 12:26:13.336887 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 16 12:26:13.336953 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 16 12:26:13.338958 systemd[1]: Starting containerd.service - containerd container runtime... Dec 16 12:26:13.346036 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Dec 16 12:26:13.351845 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 16 12:26:13.358144 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 16 12:26:13.365531 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 16 12:26:13.380254 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 16 12:26:13.384452 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 16 12:26:13.386858 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 16 12:26:13.392720 systemd[1]: Started ntpd.service - Network Time Service. Dec 16 12:26:13.399576 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 16 12:26:13.407674 systemd[1]: Starting setup-oem.service - Setup OEM... 
Dec 16 12:26:13.418219 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 16 12:26:13.437441 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 16 12:26:13.447370 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 16 12:26:13.448463 jq[1963]: false Dec 16 12:26:13.457871 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 16 12:26:13.463960 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 16 12:26:13.466359 systemd[1]: Starting update-engine.service - Update Engine... Dec 16 12:26:13.479654 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 16 12:26:13.488234 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 16 12:26:13.491657 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 16 12:26:13.493363 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 16 12:26:13.498887 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 16 12:26:13.500420 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Dec 16 12:26:13.623730 coreos-metadata[1960]: Dec 16 12:26:13.620 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Dec 16 12:26:13.623730 coreos-metadata[1960]: Dec 16 12:26:13.620 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Dec 16 12:26:13.623730 coreos-metadata[1960]: Dec 16 12:26:13.620 INFO Fetch successful Dec 16 12:26:13.623730 coreos-metadata[1960]: Dec 16 12:26:13.620 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Dec 16 12:26:13.623730 coreos-metadata[1960]: Dec 16 12:26:13.620 INFO Fetch successful Dec 16 12:26:13.623730 coreos-metadata[1960]: Dec 16 12:26:13.620 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Dec 16 12:26:13.623730 coreos-metadata[1960]: Dec 16 12:26:13.620 INFO Fetch successful Dec 16 12:26:13.623730 coreos-metadata[1960]: Dec 16 12:26:13.620 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Dec 16 12:26:13.624906 coreos-metadata[1960]: Dec 16 12:26:13.624 INFO Fetch successful Dec 16 12:26:13.624906 coreos-metadata[1960]: Dec 16 12:26:13.624 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Dec 16 12:26:13.627297 coreos-metadata[1960]: Dec 16 12:26:13.627 INFO Fetch failed with 404: resource not found Dec 16 12:26:13.627297 coreos-metadata[1960]: Dec 16 12:26:13.627 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Dec 16 12:26:13.633292 coreos-metadata[1960]: Dec 16 12:26:13.629 INFO Fetch successful Dec 16 12:26:13.633292 coreos-metadata[1960]: Dec 16 12:26:13.629 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Dec 16 12:26:13.639151 jq[1973]: true Dec 16 12:26:13.642291 tar[1976]: linux-arm64/LICENSE Dec 16 12:26:13.642730 coreos-metadata[1960]: Dec 16 12:26:13.642 INFO Fetch successful Dec 16 12:26:13.642730 coreos-metadata[1960]: Dec 16 12:26:13.642 INFO 
Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Dec 16 12:26:13.642730 coreos-metadata[1960]: Dec 16 12:26:13.642 INFO Fetch successful Dec 16 12:26:13.642730 coreos-metadata[1960]: Dec 16 12:26:13.642 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Dec 16 12:26:13.642730 coreos-metadata[1960]: Dec 16 12:26:13.642 INFO Fetch successful Dec 16 12:26:13.642730 coreos-metadata[1960]: Dec 16 12:26:13.642 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Dec 16 12:26:13.642730 coreos-metadata[1960]: Dec 16 12:26:13.642 INFO Fetch successful Dec 16 12:26:13.655634 tar[1976]: linux-arm64/helm Dec 16 12:26:13.677703 extend-filesystems[1964]: Found /dev/nvme0n1p6 Dec 16 12:26:13.690025 update_engine[1972]: I20251216 12:26:13.677482 1972 main.cc:92] Flatcar Update Engine starting Dec 16 12:26:13.691035 jq[2004]: true Dec 16 12:26:13.694138 systemd[1]: motdgen.service: Deactivated successfully. Dec 16 12:26:13.700483 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 16 12:26:13.713310 (ntainerd)[2002]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Dec 16 12:26:13.715209 dbus-daemon[1961]: [system] SELinux support is enabled Dec 16 12:26:13.720855 ntpd[1966]: 16 Dec 12:26:13 ntpd[1966]: ntpd 4.2.8p18@1.4062-o Fri Dec 12 14:43:59 UTC 2025 (1): Starting Dec 16 12:26:13.720855 ntpd[1966]: 16 Dec 12:26:13 ntpd[1966]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Dec 16 12:26:13.720855 ntpd[1966]: 16 Dec 12:26:13 ntpd[1966]: ---------------------------------------------------- Dec 16 12:26:13.720855 ntpd[1966]: 16 Dec 12:26:13 ntpd[1966]: ntp-4 is maintained by Network Time Foundation, Dec 16 12:26:13.720855 ntpd[1966]: 16 Dec 12:26:13 ntpd[1966]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Dec 16 12:26:13.720855 ntpd[1966]: 16 Dec 12:26:13 ntpd[1966]: corporation. 
Support and training for ntp-4 are Dec 16 12:26:13.720855 ntpd[1966]: 16 Dec 12:26:13 ntpd[1966]: available at https://www.nwtime.org/support Dec 16 12:26:13.720855 ntpd[1966]: 16 Dec 12:26:13 ntpd[1966]: ---------------------------------------------------- Dec 16 12:26:13.717359 ntpd[1966]: ntpd 4.2.8p18@1.4062-o Fri Dec 12 14:43:59 UTC 2025 (1): Starting Dec 16 12:26:13.717458 ntpd[1966]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Dec 16 12:26:13.717476 ntpd[1966]: ---------------------------------------------------- Dec 16 12:26:13.717493 ntpd[1966]: ntp-4 is maintained by Network Time Foundation, Dec 16 12:26:13.717509 ntpd[1966]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Dec 16 12:26:13.717526 ntpd[1966]: corporation. Support and training for ntp-4 are Dec 16 12:26:13.717542 ntpd[1966]: available at https://www.nwtime.org/support Dec 16 12:26:13.726458 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 16 12:26:13.717558 ntpd[1966]: ---------------------------------------------------- Dec 16 12:26:13.736918 extend-filesystems[1964]: Found /dev/nvme0n1p9 Dec 16 12:26:13.737017 systemd[1]: Finished setup-oem.service - Setup OEM. Dec 16 12:26:13.754605 ntpd[1966]: 16 Dec 12:26:13 ntpd[1966]: proto: precision = 0.096 usec (-23) Dec 16 12:26:13.742517 ntpd[1966]: proto: precision = 0.096 usec (-23) Dec 16 12:26:13.742893 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 16 12:26:13.742972 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 16 12:26:13.749799 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). 
Dec 16 12:26:13.749841 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 16 12:26:13.755600 ntpd[1966]: basedate set to 2025-11-30 Dec 16 12:26:13.756939 ntpd[1966]: 16 Dec 12:26:13 ntpd[1966]: basedate set to 2025-11-30 Dec 16 12:26:13.756939 ntpd[1966]: 16 Dec 12:26:13 ntpd[1966]: gps base set to 2025-11-30 (week 2395) Dec 16 12:26:13.756939 ntpd[1966]: 16 Dec 12:26:13 ntpd[1966]: Listen and drop on 0 v6wildcard [::]:123 Dec 16 12:26:13.756939 ntpd[1966]: 16 Dec 12:26:13 ntpd[1966]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Dec 16 12:26:13.756939 ntpd[1966]: 16 Dec 12:26:13 ntpd[1966]: Listen normally on 2 lo 127.0.0.1:123 Dec 16 12:26:13.756939 ntpd[1966]: 16 Dec 12:26:13 ntpd[1966]: Listen normally on 3 eth0 172.31.21.37:123 Dec 16 12:26:13.755637 ntpd[1966]: gps base set to 2025-11-30 (week 2395) Dec 16 12:26:13.755829 ntpd[1966]: Listen and drop on 0 v6wildcard [::]:123 Dec 16 12:26:13.755876 ntpd[1966]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Dec 16 12:26:13.774975 systemd-coredump[2017]: Process 1966 (ntpd) of user 0 terminated abnormally with signal 11/SEGV, processing... 
Dec 16 12:26:13.775785 ntpd[1966]: 16 Dec 12:26:13 ntpd[1966]: Listen normally on 4 lo [::1]:123 Dec 16 12:26:13.775785 ntpd[1966]: 16 Dec 12:26:13 ntpd[1966]: bind(21) AF_INET6 [fe80::435:bcff:feb4:fc1%2]:123 flags 0x811 failed: Cannot assign requested address Dec 16 12:26:13.775785 ntpd[1966]: 16 Dec 12:26:13 ntpd[1966]: unable to create socket on eth0 (5) for [fe80::435:bcff:feb4:fc1%2]:123 Dec 16 12:26:13.775949 extend-filesystems[1964]: Checking size of /dev/nvme0n1p9 Dec 16 12:26:13.756174 ntpd[1966]: Listen normally on 2 lo 127.0.0.1:123 Dec 16 12:26:13.756216 ntpd[1966]: Listen normally on 3 eth0 172.31.21.37:123 Dec 16 12:26:13.759293 ntpd[1966]: Listen normally on 4 lo [::1]:123 Dec 16 12:26:13.759386 ntpd[1966]: bind(21) AF_INET6 [fe80::435:bcff:feb4:fc1%2]:123 flags 0x811 failed: Cannot assign requested address Dec 16 12:26:13.759427 ntpd[1966]: unable to create socket on eth0 (5) for [fe80::435:bcff:feb4:fc1%2]:123 Dec 16 12:26:13.778950 dbus-daemon[1961]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1868 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Dec 16 12:26:13.781914 systemd[1]: Created slice system-systemd\x2dcoredump.slice - Slice /system/systemd-coredump. Dec 16 12:26:13.791885 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Dec 16 12:26:13.801475 systemd[1]: Started systemd-coredump@0-2017-0.service - Process Core Dump (PID 2017/UID 0). Dec 16 12:26:13.813623 update_engine[1972]: I20251216 12:26:13.811013 1972 update_check_scheduler.cc:74] Next update check in 6m32s Dec 16 12:26:13.811462 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Dec 16 12:26:13.815201 systemd[1]: Started update-engine.service - Update Engine. Dec 16 12:26:13.818359 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Dec 16 12:26:13.836064 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 16 12:26:13.872111 extend-filesystems[1964]: Resized partition /dev/nvme0n1p9 Dec 16 12:26:13.888751 extend-filesystems[2036]: resize2fs 1.47.3 (8-Jul-2025) Dec 16 12:26:13.916667 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 3587067 blocks Dec 16 12:26:14.107732 bash[2049]: Updated "/home/core/.ssh/authorized_keys" Dec 16 12:26:14.108767 systemd-logind[1971]: Watching system buttons on /dev/input/event0 (Power Button) Dec 16 12:26:14.108834 systemd-logind[1971]: Watching system buttons on /dev/input/event1 (Sleep Button) Dec 16 12:26:14.114828 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 16 12:26:14.122667 systemd-logind[1971]: New seat seat0. Dec 16 12:26:14.127042 systemd[1]: Starting sshkeys.service... Dec 16 12:26:14.146365 systemd[1]: Started systemd-logind.service - User Login Management. Dec 16 12:26:14.161691 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 3587067 Dec 16 12:26:14.192010 extend-filesystems[2036]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Dec 16 12:26:14.192010 extend-filesystems[2036]: old_desc_blocks = 1, new_desc_blocks = 2 Dec 16 12:26:14.192010 extend-filesystems[2036]: The filesystem on /dev/nvme0n1p9 is now 3587067 (4k) blocks long. Dec 16 12:26:14.226474 extend-filesystems[1964]: Resized filesystem in /dev/nvme0n1p9 Dec 16 12:26:14.204162 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 16 12:26:14.205383 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 16 12:26:14.260433 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Dec 16 12:26:14.265748 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 16 12:26:14.272799 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
Dec 16 12:26:14.576016 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Dec 16 12:26:14.599087 dbus-daemon[1961]: [system] Successfully activated service 'org.freedesktop.hostname1' Dec 16 12:26:14.607475 dbus-daemon[1961]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=2021 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Dec 16 12:26:14.619971 systemd[1]: Starting polkit.service - Authorization Manager... Dec 16 12:26:14.691870 coreos-metadata[2092]: Dec 16 12:26:14.691 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Dec 16 12:26:14.700497 coreos-metadata[2092]: Dec 16 12:26:14.700 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Dec 16 12:26:14.704225 coreos-metadata[2092]: Dec 16 12:26:14.702 INFO Fetch successful Dec 16 12:26:14.704225 coreos-metadata[2092]: Dec 16 12:26:14.703 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Dec 16 12:26:14.707182 coreos-metadata[2092]: Dec 16 12:26:14.706 INFO Fetch successful Dec 16 12:26:14.722046 unknown[2092]: wrote ssh authorized keys file for user: core Dec 16 12:26:14.751512 systemd-coredump[2022]: Process 1966 (ntpd) of user 0 dumped core.
Module libnss_usrfiles.so.2 without build-id.
Module libgcc_s.so.1 without build-id.
Module libc.so.6 without build-id.
Module libcrypto.so.3 without build-id.
Module libm.so.6 without build-id.
Module libcap.so.2 without build-id.
Module ntpd without build-id.
Stack trace of thread 1966:
#0  0x0000aaaac7980b5c n/a (ntpd + 0x60b5c)
#1  0x0000aaaac792fe60 n/a (ntpd + 0xfe60)
#2  0x0000aaaac7930240 n/a (ntpd + 0x10240)
#3  0x0000aaaac792be14 n/a (ntpd + 0xbe14)
#4  0x0000aaaac792d3ec n/a (ntpd + 0xd3ec)
#5  0x0000aaaac7935a38 n/a (ntpd + 0x15a38)
#6  0x0000aaaac792738c n/a (ntpd + 0x738c)
#7  0x0000ffff96e52034 n/a (libc.so.6 + 0x22034)
#8  0x0000ffff96e52118 __libc_start_main (libc.so.6 + 0x22118)
#9  0x0000aaaac79273f0 n/a (ntpd + 0x73f0)
ELF object binary architecture: AARCH64
Dec 16 12:26:14.764620 containerd[2002]: time="2025-12-16T12:26:14Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Dec 16 12:26:14.764603 systemd[1]: systemd-coredump@0-2017-0.service: Deactivated successfully. Dec 16 12:26:14.771985 containerd[2002]: time="2025-12-16T12:26:14.767056752Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Dec 16 12:26:14.776048 systemd[1]: ntpd.service: Main process exited, code=dumped, status=11/SEGV Dec 16 12:26:14.776567 systemd[1]: ntpd.service: Failed with result 'core-dump'.
Dec 16 12:26:14.828856 containerd[2002]: time="2025-12-16T12:26:14.825900685Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="16.116µs" Dec 16 12:26:14.828856 containerd[2002]: time="2025-12-16T12:26:14.826467433Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Dec 16 12:26:14.828856 containerd[2002]: time="2025-12-16T12:26:14.826531849Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Dec 16 12:26:14.828856 containerd[2002]: time="2025-12-16T12:26:14.826837225Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Dec 16 12:26:14.828856 containerd[2002]: time="2025-12-16T12:26:14.826873141Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Dec 16 12:26:14.828856 containerd[2002]: time="2025-12-16T12:26:14.826922629Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 16 12:26:14.828856 containerd[2002]: time="2025-12-16T12:26:14.827031193Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 16 12:26:14.828856 containerd[2002]: time="2025-12-16T12:26:14.827059273Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 16 12:26:14.828856 containerd[2002]: time="2025-12-16T12:26:14.827463469Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 16 12:26:14.828856 containerd[2002]: time="2025-12-16T12:26:14.827495713Z" level=info msg="loading plugin" 
id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 16 12:26:14.828856 containerd[2002]: time="2025-12-16T12:26:14.827522881Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 16 12:26:14.828856 containerd[2002]: time="2025-12-16T12:26:14.827544361Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Dec 16 12:26:14.829487 containerd[2002]: time="2025-12-16T12:26:14.827711221Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Dec 16 12:26:14.829487 containerd[2002]: time="2025-12-16T12:26:14.828067921Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 16 12:26:14.829487 containerd[2002]: time="2025-12-16T12:26:14.828120601Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 16 12:26:14.829487 containerd[2002]: time="2025-12-16T12:26:14.828148213Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Dec 16 12:26:14.835535 containerd[2002]: time="2025-12-16T12:26:14.833672377Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Dec 16 12:26:14.835535 containerd[2002]: time="2025-12-16T12:26:14.834085261Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Dec 16 12:26:14.835535 containerd[2002]: time="2025-12-16T12:26:14.834248425Z" level=info msg="metadata content store policy set" policy=shared Dec 16 12:26:14.854136 containerd[2002]: time="2025-12-16T12:26:14.851969617Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler 
type=io.containerd.gc.v1 Dec 16 12:26:14.854136 containerd[2002]: time="2025-12-16T12:26:14.852084325Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Dec 16 12:26:14.854136 containerd[2002]: time="2025-12-16T12:26:14.852117361Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Dec 16 12:26:14.854136 containerd[2002]: time="2025-12-16T12:26:14.852156409Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Dec 16 12:26:14.854136 containerd[2002]: time="2025-12-16T12:26:14.852186289Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Dec 16 12:26:14.854136 containerd[2002]: time="2025-12-16T12:26:14.852214501Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Dec 16 12:26:14.854136 containerd[2002]: time="2025-12-16T12:26:14.852246901Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Dec 16 12:26:14.854136 containerd[2002]: time="2025-12-16T12:26:14.852296053Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Dec 16 12:26:14.854136 containerd[2002]: time="2025-12-16T12:26:14.852325597Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Dec 16 12:26:14.854136 containerd[2002]: time="2025-12-16T12:26:14.852354973Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Dec 16 12:26:14.854136 containerd[2002]: time="2025-12-16T12:26:14.852379393Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Dec 16 12:26:14.854136 containerd[2002]: time="2025-12-16T12:26:14.852414301Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task 
type=io.containerd.runtime.v2 Dec 16 12:26:14.854136 containerd[2002]: time="2025-12-16T12:26:14.852791689Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Dec 16 12:26:14.854136 containerd[2002]: time="2025-12-16T12:26:14.852852277Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Dec 16 12:26:14.855093 containerd[2002]: time="2025-12-16T12:26:14.852888493Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Dec 16 12:26:14.855093 containerd[2002]: time="2025-12-16T12:26:14.852916957Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Dec 16 12:26:14.855093 containerd[2002]: time="2025-12-16T12:26:14.852943393Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Dec 16 12:26:14.855093 containerd[2002]: time="2025-12-16T12:26:14.852972265Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Dec 16 12:26:14.855093 containerd[2002]: time="2025-12-16T12:26:14.853001197Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Dec 16 12:26:14.855093 containerd[2002]: time="2025-12-16T12:26:14.853028125Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Dec 16 12:26:14.855093 containerd[2002]: time="2025-12-16T12:26:14.853058197Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Dec 16 12:26:14.855093 containerd[2002]: time="2025-12-16T12:26:14.853084837Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Dec 16 12:26:14.855093 containerd[2002]: time="2025-12-16T12:26:14.853110685Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Dec 16 12:26:14.862901 containerd[2002]: 
time="2025-12-16T12:26:14.862708681Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Dec 16 12:26:14.862901 containerd[2002]: time="2025-12-16T12:26:14.862777621Z" level=info msg="Start snapshots syncer" Dec 16 12:26:14.862901 containerd[2002]: time="2025-12-16T12:26:14.862862629Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Dec 16 12:26:14.864601 containerd[2002]: time="2025-12-16T12:26:14.863666725Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\
":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Dec 16 12:26:14.864601 containerd[2002]: time="2025-12-16T12:26:14.863786881Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Dec 16 12:26:14.864857 update-ssh-keys[2150]: Updated "/home/core/.ssh/authorized_keys" Dec 16 12:26:14.865212 containerd[2002]: time="2025-12-16T12:26:14.863910217Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Dec 16 12:26:14.867432 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Dec 16 12:26:14.871549 containerd[2002]: time="2025-12-16T12:26:14.867348601Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Dec 16 12:26:14.871549 containerd[2002]: time="2025-12-16T12:26:14.867427561Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Dec 16 12:26:14.871549 containerd[2002]: time="2025-12-16T12:26:14.867456913Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Dec 16 12:26:14.871549 containerd[2002]: time="2025-12-16T12:26:14.867496081Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Dec 16 12:26:14.871549 containerd[2002]: time="2025-12-16T12:26:14.867527785Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Dec 16 12:26:14.871549 containerd[2002]: time="2025-12-16T12:26:14.867555241Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Dec 16 12:26:14.874359 containerd[2002]: 
time="2025-12-16T12:26:14.873683173Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Dec 16 12:26:14.874974 containerd[2002]: time="2025-12-16T12:26:14.874896205Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Dec 16 12:26:14.875075 containerd[2002]: time="2025-12-16T12:26:14.874978081Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Dec 16 12:26:14.875075 containerd[2002]: time="2025-12-16T12:26:14.875012713Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Dec 16 12:26:14.876051 containerd[2002]: time="2025-12-16T12:26:14.875101309Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 16 12:26:14.876051 containerd[2002]: time="2025-12-16T12:26:14.875137777Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 16 12:26:14.876051 containerd[2002]: time="2025-12-16T12:26:14.875161645Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 16 12:26:14.876051 containerd[2002]: time="2025-12-16T12:26:14.875191801Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 16 12:26:14.876051 containerd[2002]: time="2025-12-16T12:26:14.875217553Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Dec 16 12:26:14.881721 containerd[2002]: time="2025-12-16T12:26:14.875246833Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Dec 16 12:26:14.876364 systemd[1]: Finished sshkeys.service. 
Dec 16 12:26:14.881938 containerd[2002]: time="2025-12-16T12:26:14.881724589Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Dec 16 12:26:14.881938 containerd[2002]: time="2025-12-16T12:26:14.881916001Z" level=info msg="runtime interface created" Dec 16 12:26:14.881938 containerd[2002]: time="2025-12-16T12:26:14.881934181Z" level=info msg="created NRI interface" Dec 16 12:26:14.882067 containerd[2002]: time="2025-12-16T12:26:14.881956369Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Dec 16 12:26:14.882067 containerd[2002]: time="2025-12-16T12:26:14.881987089Z" level=info msg="Connect containerd service" Dec 16 12:26:14.882067 containerd[2002]: time="2025-12-16T12:26:14.882048421Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 16 12:26:14.889802 containerd[2002]: time="2025-12-16T12:26:14.889462933Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 16 12:26:14.895067 systemd[1]: ntpd.service: Scheduled restart job, restart counter is at 1. Dec 16 12:26:14.900785 systemd[1]: Started ntpd.service - Network Time Service. 
Dec 16 12:26:15.026065 ntpd[2175]: ntpd 4.2.8p18@1.4062-o Fri Dec 12 14:43:59 UTC 2025 (1): Starting Dec 16 12:26:15.026182 ntpd[2175]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Dec 16 12:26:15.026796 ntpd[2175]: 16 Dec 12:26:15 ntpd[2175]: ntpd 4.2.8p18@1.4062-o Fri Dec 12 14:43:59 UTC 2025 (1): Starting Dec 16 12:26:15.026796 ntpd[2175]: 16 Dec 12:26:15 ntpd[2175]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Dec 16 12:26:15.026796 ntpd[2175]: 16 Dec 12:26:15 ntpd[2175]: ---------------------------------------------------- Dec 16 12:26:15.026796 ntpd[2175]: 16 Dec 12:26:15 ntpd[2175]: ntp-4 is maintained by Network Time Foundation, Dec 16 12:26:15.026796 ntpd[2175]: 16 Dec 12:26:15 ntpd[2175]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Dec 16 12:26:15.026201 ntpd[2175]: ---------------------------------------------------- Dec 16 12:26:15.026219 ntpd[2175]: ntp-4 is maintained by Network Time Foundation, Dec 16 12:26:15.026236 ntpd[2175]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Dec 16 12:26:15.026252 ntpd[2175]: corporation. Support and training for ntp-4 are Dec 16 12:26:15.029250 ntpd[2175]: 16 Dec 12:26:15 ntpd[2175]: corporation. 
Support and training for ntp-4 are Dec 16 12:26:15.029250 ntpd[2175]: 16 Dec 12:26:15 ntpd[2175]: available at https://www.nwtime.org/support Dec 16 12:26:15.029250 ntpd[2175]: 16 Dec 12:26:15 ntpd[2175]: ---------------------------------------------------- Dec 16 12:26:15.028388 ntpd[2175]: available at https://www.nwtime.org/support Dec 16 12:26:15.028420 ntpd[2175]: ---------------------------------------------------- Dec 16 12:26:15.032746 ntpd[2175]: proto: precision = 0.108 usec (-23) Dec 16 12:26:15.032934 ntpd[2175]: 16 Dec 12:26:15 ntpd[2175]: proto: precision = 0.108 usec (-23) Dec 16 12:26:15.033096 ntpd[2175]: basedate set to 2025-11-30 Dec 16 12:26:15.033200 ntpd[2175]: 16 Dec 12:26:15 ntpd[2175]: basedate set to 2025-11-30 Dec 16 12:26:15.033200 ntpd[2175]: 16 Dec 12:26:15 ntpd[2175]: gps base set to 2025-11-30 (week 2395) Dec 16 12:26:15.033132 ntpd[2175]: gps base set to 2025-11-30 (week 2395) Dec 16 12:26:15.035281 ntpd[2175]: Listen and drop on 0 v6wildcard [::]:123 Dec 16 12:26:15.035367 ntpd[2175]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Dec 16 12:26:15.035486 ntpd[2175]: 16 Dec 12:26:15 ntpd[2175]: Listen and drop on 0 v6wildcard [::]:123 Dec 16 12:26:15.035486 ntpd[2175]: 16 Dec 12:26:15 ntpd[2175]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Dec 16 12:26:15.035646 ntpd[2175]: Listen normally on 2 lo 127.0.0.1:123 Dec 16 12:26:15.035750 ntpd[2175]: 16 Dec 12:26:15 ntpd[2175]: Listen normally on 2 lo 127.0.0.1:123 Dec 16 12:26:15.035750 ntpd[2175]: 16 Dec 12:26:15 ntpd[2175]: Listen normally on 3 eth0 172.31.21.37:123 Dec 16 12:26:15.035708 ntpd[2175]: Listen normally on 3 eth0 172.31.21.37:123 Dec 16 12:26:15.035912 ntpd[2175]: 16 Dec 12:26:15 ntpd[2175]: Listen normally on 4 lo [::1]:123 Dec 16 12:26:15.035912 ntpd[2175]: 16 Dec 12:26:15 ntpd[2175]: bind(21) AF_INET6 [fe80::435:bcff:feb4:fc1%2]:123 flags 0x811 failed: Cannot assign requested address Dec 16 12:26:15.035912 ntpd[2175]: 16 Dec 12:26:15 ntpd[2175]: unable to create socket on 
eth0 (5) for [fe80::435:bcff:feb4:fc1%2]:123 Dec 16 12:26:15.035755 ntpd[2175]: Listen normally on 4 lo [::1]:123 Dec 16 12:26:15.035803 ntpd[2175]: bind(21) AF_INET6 [fe80::435:bcff:feb4:fc1%2]:123 flags 0x811 failed: Cannot assign requested address Dec 16 12:26:15.035841 ntpd[2175]: unable to create socket on eth0 (5) for [fe80::435:bcff:feb4:fc1%2]:123 Dec 16 12:26:15.045452 systemd-coredump[2184]: Process 2175 (ntpd) of user 0 terminated abnormally with signal 11/SEGV, processing... Dec 16 12:26:15.060039 systemd[1]: Started systemd-coredump@1-2184-0.service - Process Core Dump (PID 2184/UID 0). Dec 16 12:26:15.110520 locksmithd[2025]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 16 12:26:15.224071 containerd[2002]: time="2025-12-16T12:26:15.223963955Z" level=info msg="Start subscribing containerd event" Dec 16 12:26:15.224213 containerd[2002]: time="2025-12-16T12:26:15.224084267Z" level=info msg="Start recovering state" Dec 16 12:26:15.224339 containerd[2002]: time="2025-12-16T12:26:15.224226971Z" level=info msg="Start event monitor" Dec 16 12:26:15.224377 systemd-networkd[1868]: eth0: Gained IPv6LL Dec 16 12:26:15.226657 containerd[2002]: time="2025-12-16T12:26:15.226318643Z" level=info msg="Start cni network conf syncer for default" Dec 16 12:26:15.226657 containerd[2002]: time="2025-12-16T12:26:15.226376279Z" level=info msg="Start streaming server" Dec 16 12:26:15.226657 containerd[2002]: time="2025-12-16T12:26:15.226401719Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Dec 16 12:26:15.226657 containerd[2002]: time="2025-12-16T12:26:15.226419347Z" level=info msg="runtime interface starting up..." Dec 16 12:26:15.226657 containerd[2002]: time="2025-12-16T12:26:15.226442171Z" level=info msg="starting plugins..." 
Dec 16 12:26:15.226657 containerd[2002]: time="2025-12-16T12:26:15.226482107Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Dec 16 12:26:15.228588 containerd[2002]: time="2025-12-16T12:26:15.228498383Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 16 12:26:15.228701 containerd[2002]: time="2025-12-16T12:26:15.228628739Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 16 12:26:15.230034 containerd[2002]: time="2025-12-16T12:26:15.228805211Z" level=info msg="containerd successfully booted in 0.467381s" Dec 16 12:26:15.228946 systemd[1]: Started containerd.service - containerd container runtime. Dec 16 12:26:15.238519 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 16 12:26:15.242171 systemd[1]: Reached target network-online.target - Network is Online. Dec 16 12:26:15.249718 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Dec 16 12:26:15.262399 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:26:15.272887 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... 
Dec 16 12:26:15.346038 polkitd[2146]: Started polkitd version 126 Dec 16 12:26:15.376932 polkitd[2146]: Loading rules from directory /etc/polkit-1/rules.d Dec 16 12:26:15.377899 polkitd[2146]: Loading rules from directory /run/polkit-1/rules.d Dec 16 12:26:15.377986 polkitd[2146]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Dec 16 12:26:15.379670 polkitd[2146]: Loading rules from directory /usr/local/share/polkit-1/rules.d Dec 16 12:26:15.380928 polkitd[2146]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Dec 16 12:26:15.381025 polkitd[2146]: Loading rules from directory /usr/share/polkit-1/rules.d Dec 16 12:26:15.392809 polkitd[2146]: Finished loading, compiling and executing 2 rules Dec 16 12:26:15.400605 systemd[1]: Started polkit.service - Authorization Manager. Dec 16 12:26:15.409177 dbus-daemon[1961]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Dec 16 12:26:15.412350 polkitd[2146]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Dec 16 12:26:15.453569 amazon-ssm-agent[2191]: Initializing new seelog logger Dec 16 12:26:15.455124 amazon-ssm-agent[2191]: New Seelog Logger Creation Complete Dec 16 12:26:15.455124 amazon-ssm-agent[2191]: 2025/12/16 12:26:15 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 16 12:26:15.455124 amazon-ssm-agent[2191]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Dec 16 12:26:15.455421 amazon-ssm-agent[2191]: 2025/12/16 12:26:15 processing appconfig overrides Dec 16 12:26:15.455998 amazon-ssm-agent[2191]: 2025/12/16 12:26:15 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 16 12:26:15.456150 amazon-ssm-agent[2191]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Dec 16 12:26:15.456362 amazon-ssm-agent[2191]: 2025/12/16 12:26:15 processing appconfig overrides Dec 16 12:26:15.456690 amazon-ssm-agent[2191]: 2025/12/16 12:26:15 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 16 12:26:15.456774 amazon-ssm-agent[2191]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Dec 16 12:26:15.456968 amazon-ssm-agent[2191]: 2025/12/16 12:26:15 processing appconfig overrides Dec 16 12:26:15.457923 amazon-ssm-agent[2191]: 2025-12-16 12:26:15.4558 INFO Proxy environment variables: Dec 16 12:26:15.464081 amazon-ssm-agent[2191]: 2025/12/16 12:26:15 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 16 12:26:15.464081 amazon-ssm-agent[2191]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Dec 16 12:26:15.464081 amazon-ssm-agent[2191]: 2025/12/16 12:26:15 processing appconfig overrides Dec 16 12:26:15.466413 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 16 12:26:15.498821 systemd-hostnamed[2021]: Hostname set to (transient) Dec 16 12:26:15.499964 systemd-resolved[1870]: System hostname changed to 'ip-172-31-21-37'. Dec 16 12:26:15.561603 amazon-ssm-agent[2191]: 2025-12-16 12:26:15.4559 INFO http_proxy: Dec 16 12:26:15.665233 amazon-ssm-agent[2191]: 2025-12-16 12:26:15.4559 INFO no_proxy: Dec 16 12:26:15.677785 systemd-coredump[2185]: Process 2175 (ntpd) of user 0 dumped core. Module libnss_usrfiles.so.2 without build-id. Module libgcc_s.so.1 without build-id. Module libc.so.6 without build-id. Module libcrypto.so.3 without build-id. Module libm.so.6 without build-id. Module libcap.so.2 without build-id. Module ntpd without build-id. 
Stack trace of thread 2175: #0 0x0000aaaae12a0b5c n/a (ntpd + 0x60b5c) #1 0x0000aaaae124fe60 n/a (ntpd + 0xfe60) #2 0x0000aaaae1250240 n/a (ntpd + 0x10240) #3 0x0000aaaae124be14 n/a (ntpd + 0xbe14) #4 0x0000aaaae124d3ec n/a (ntpd + 0xd3ec) #5 0x0000aaaae1255a38 n/a (ntpd + 0x15a38) #6 0x0000aaaae124738c n/a (ntpd + 0x738c) #7 0x0000ffff91eb2034 n/a (libc.so.6 + 0x22034) #8 0x0000ffff91eb2118 __libc_start_main (libc.so.6 + 0x22118) #9 0x0000aaaae12473f0 n/a (ntpd + 0x73f0) ELF object binary architecture: AARCH64 Dec 16 12:26:15.686123 systemd[1]: ntpd.service: Main process exited, code=dumped, status=11/SEGV Dec 16 12:26:15.686580 systemd[1]: ntpd.service: Failed with result 'core-dump'. Dec 16 12:26:15.694207 systemd[1]: systemd-coredump@1-2184-0.service: Deactivated successfully. Dec 16 12:26:15.767281 amazon-ssm-agent[2191]: 2025-12-16 12:26:15.4559 INFO https_proxy: Dec 16 12:26:15.863590 amazon-ssm-agent[2191]: 2025-12-16 12:26:15.4564 INFO Checking if agent identity type OnPrem can be assumed Dec 16 12:26:15.965343 amazon-ssm-agent[2191]: 2025-12-16 12:26:15.4565 INFO Checking if agent identity type EC2 can be assumed Dec 16 12:26:15.965490 tar[1976]: linux-arm64/README.md Dec 16 12:26:15.972588 systemd[1]: ntpd.service: Scheduled restart job, restart counter is at 2. Dec 16 12:26:15.984863 systemd[1]: Started ntpd.service - Network Time Service. Dec 16 12:26:16.027909 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Dec 16 12:26:16.063224 ntpd[2229]: ntpd 4.2.8p18@1.4062-o Fri Dec 12 14:43:59 UTC 2025 (1): Starting Dec 16 12:26:16.063882 amazon-ssm-agent[2191]: 2025-12-16 12:26:15.6571 INFO Agent will take identity from EC2 Dec 16 12:26:16.068298 ntpd[2229]: 16 Dec 12:26:16 ntpd[2229]: ntpd 4.2.8p18@1.4062-o Fri Dec 12 14:43:59 UTC 2025 (1): Starting Dec 16 12:26:16.068298 ntpd[2229]: 16 Dec 12:26:16 ntpd[2229]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Dec 16 12:26:16.068298 ntpd[2229]: 16 Dec 12:26:16 ntpd[2229]: ---------------------------------------------------- Dec 16 12:26:16.068298 ntpd[2229]: 16 Dec 12:26:16 ntpd[2229]: ntp-4 is maintained by Network Time Foundation, Dec 16 12:26:16.068298 ntpd[2229]: 16 Dec 12:26:16 ntpd[2229]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Dec 16 12:26:16.068298 ntpd[2229]: 16 Dec 12:26:16 ntpd[2229]: corporation. Support and training for ntp-4 are Dec 16 12:26:16.068298 ntpd[2229]: 16 Dec 12:26:16 ntpd[2229]: available at https://www.nwtime.org/support Dec 16 12:26:16.068298 ntpd[2229]: 16 Dec 12:26:16 ntpd[2229]: ---------------------------------------------------- Dec 16 12:26:16.068298 ntpd[2229]: 16 Dec 12:26:16 ntpd[2229]: proto: precision = 0.096 usec (-23) Dec 16 12:26:16.068298 ntpd[2229]: 16 Dec 12:26:16 ntpd[2229]: basedate set to 2025-11-30 Dec 16 12:26:16.068298 ntpd[2229]: 16 Dec 12:26:16 ntpd[2229]: gps base set to 2025-11-30 (week 2395) Dec 16 12:26:16.068298 ntpd[2229]: 16 Dec 12:26:16 ntpd[2229]: Listen and drop on 0 v6wildcard [::]:123 Dec 16 12:26:16.068298 ntpd[2229]: 16 Dec 12:26:16 ntpd[2229]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Dec 16 12:26:16.066412 ntpd[2229]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Dec 16 12:26:16.066432 ntpd[2229]: ---------------------------------------------------- Dec 16 12:26:16.066449 ntpd[2229]: ntp-4 is maintained by Network Time Foundation, Dec 16 12:26:16.066468 ntpd[2229]: Inc. 
(NTF), a non-profit 501(c)(3) public-benefit Dec 16 12:26:16.066484 ntpd[2229]: corporation. Support and training for ntp-4 are Dec 16 12:26:16.066500 ntpd[2229]: available at https://www.nwtime.org/support Dec 16 12:26:16.066516 ntpd[2229]: ---------------------------------------------------- Dec 16 12:26:16.067534 ntpd[2229]: proto: precision = 0.096 usec (-23) Dec 16 12:26:16.067833 ntpd[2229]: basedate set to 2025-11-30 Dec 16 12:26:16.067854 ntpd[2229]: gps base set to 2025-11-30 (week 2395) Dec 16 12:26:16.067966 ntpd[2229]: Listen and drop on 0 v6wildcard [::]:123 Dec 16 12:26:16.068008 ntpd[2229]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Dec 16 12:26:16.072316 ntpd[2229]: Listen normally on 2 lo 127.0.0.1:123 Dec 16 12:26:16.074464 ntpd[2229]: 16 Dec 12:26:16 ntpd[2229]: Listen normally on 2 lo 127.0.0.1:123 Dec 16 12:26:16.074464 ntpd[2229]: 16 Dec 12:26:16 ntpd[2229]: Listen normally on 3 eth0 172.31.21.37:123 Dec 16 12:26:16.074464 ntpd[2229]: 16 Dec 12:26:16 ntpd[2229]: Listen normally on 4 lo [::1]:123 Dec 16 12:26:16.074464 ntpd[2229]: 16 Dec 12:26:16 ntpd[2229]: Listen normally on 5 eth0 [fe80::435:bcff:feb4:fc1%2]:123 Dec 16 12:26:16.074464 ntpd[2229]: 16 Dec 12:26:16 ntpd[2229]: Listening on routing socket on fd #22 for interface updates Dec 16 12:26:16.072387 ntpd[2229]: Listen normally on 3 eth0 172.31.21.37:123 Dec 16 12:26:16.072434 ntpd[2229]: Listen normally on 4 lo [::1]:123 Dec 16 12:26:16.072479 ntpd[2229]: Listen normally on 5 eth0 [fe80::435:bcff:feb4:fc1%2]:123 Dec 16 12:26:16.072531 ntpd[2229]: Listening on routing socket on fd #22 for interface updates Dec 16 12:26:16.090144 ntpd[2229]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 16 12:26:16.091680 ntpd[2229]: 16 Dec 12:26:16 ntpd[2229]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 16 12:26:16.091680 ntpd[2229]: 16 Dec 12:26:16 ntpd[2229]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 16 12:26:16.090198 ntpd[2229]: kernel reports TIME_ERROR: 
0x41: Clock Unsynchronized Dec 16 12:26:16.163173 amazon-ssm-agent[2191]: 2025-12-16 12:26:15.6664 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.3.0.0 Dec 16 12:26:16.262514 amazon-ssm-agent[2191]: 2025-12-16 12:26:15.6665 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Dec 16 12:26:16.362693 amazon-ssm-agent[2191]: 2025-12-16 12:26:15.6665 INFO [amazon-ssm-agent] Starting Core Agent Dec 16 12:26:16.384461 sshd_keygen[2006]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 16 12:26:16.440412 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 16 12:26:16.448834 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 16 12:26:16.456139 systemd[1]: Started sshd@0-172.31.21.37:22-139.178.89.65:47930.service - OpenSSH per-connection server daemon (139.178.89.65:47930). Dec 16 12:26:16.464349 amazon-ssm-agent[2191]: 2025-12-16 12:26:15.6665 INFO [amazon-ssm-agent] Registrar detected. Attempting registration Dec 16 12:26:16.516163 systemd[1]: issuegen.service: Deactivated successfully. Dec 16 12:26:16.517124 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 16 12:26:16.525972 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 16 12:26:16.563239 amazon-ssm-agent[2191]: 2025-12-16 12:26:15.6665 INFO [Registrar] Starting registrar module Dec 16 12:26:16.597004 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 16 12:26:16.607880 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 16 12:26:16.616360 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Dec 16 12:26:16.622229 systemd[1]: Reached target getty.target - Login Prompts. Dec 16 12:26:16.662879 amazon-ssm-agent[2191]: 2025-12-16 12:26:15.6713 INFO [EC2Identity] Checking disk for registration info Dec 16 12:26:16.760060 amazon-ssm-agent[2191]: 2025/12/16 12:26:16 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. 
Dec 16 12:26:16.760305 amazon-ssm-agent[2191]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Dec 16 12:26:16.760989 amazon-ssm-agent[2191]: 2025/12/16 12:26:16 processing appconfig overrides Dec 16 12:26:16.764804 amazon-ssm-agent[2191]: 2025-12-16 12:26:15.6714 INFO [EC2Identity] No registration info found for ec2 instance, attempting registration Dec 16 12:26:16.792768 sshd[2242]: Accepted publickey for core from 139.178.89.65 port 47930 ssh2: RSA SHA256:xUh8ykt9z5cbsZrCvFOBrVTYezXLnfIWheoQb+9aZNE Dec 16 12:26:16.797514 sshd-session[2242]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:26:16.815583 amazon-ssm-agent[2191]: 2025-12-16 12:26:15.6715 INFO [EC2Identity] Generating registration keypair Dec 16 12:26:16.815583 amazon-ssm-agent[2191]: 2025-12-16 12:26:16.7066 INFO [EC2Identity] Checking write access before registering Dec 16 12:26:16.815764 amazon-ssm-agent[2191]: 2025-12-16 12:26:16.7108 INFO [EC2Identity] Registering EC2 instance with Systems Manager Dec 16 12:26:16.815764 amazon-ssm-agent[2191]: 2025-12-16 12:26:16.7587 INFO [EC2Identity] EC2 registration was successful. Dec 16 12:26:16.815764 amazon-ssm-agent[2191]: 2025-12-16 12:26:16.7588 INFO [amazon-ssm-agent] Registration attempted. Resuming core agent startup. Dec 16 12:26:16.815764 amazon-ssm-agent[2191]: 2025-12-16 12:26:16.7595 INFO [CredentialRefresher] credentialRefresher has started Dec 16 12:26:16.815764 amazon-ssm-agent[2191]: 2025-12-16 12:26:16.7595 INFO [CredentialRefresher] Starting credentials refresher loop Dec 16 12:26:16.815764 amazon-ssm-agent[2191]: 2025-12-16 12:26:16.8125 INFO EC2RoleProvider Successfully connected with instance profile role credentials Dec 16 12:26:16.815764 amazon-ssm-agent[2191]: 2025-12-16 12:26:16.8154 INFO [CredentialRefresher] Credentials ready Dec 16 12:26:16.816957 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
Dec 16 12:26:16.823025 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 16 12:26:16.854890 systemd-logind[1971]: New session 1 of user core. Dec 16 12:26:16.864636 amazon-ssm-agent[2191]: 2025-12-16 12:26:16.8157 INFO [CredentialRefresher] Next credential rotation will be in 29.9999466712 minutes Dec 16 12:26:16.877477 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 16 12:26:16.890807 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 16 12:26:16.915814 (systemd)[2255]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 16 12:26:16.923051 systemd-logind[1971]: New session c1 of user core. Dec 16 12:26:17.257878 systemd[2255]: Queued start job for default target default.target. Dec 16 12:26:17.267380 systemd[2255]: Created slice app.slice - User Application Slice. Dec 16 12:26:17.267658 systemd[2255]: Reached target paths.target - Paths. Dec 16 12:26:17.267907 systemd[2255]: Reached target timers.target - Timers. Dec 16 12:26:17.271440 systemd[2255]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 16 12:26:17.302735 systemd[2255]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 16 12:26:17.303049 systemd[2255]: Reached target sockets.target - Sockets. Dec 16 12:26:17.303306 systemd[2255]: Reached target basic.target - Basic System. Dec 16 12:26:17.303556 systemd[2255]: Reached target default.target - Main User Target. Dec 16 12:26:17.303629 systemd[2255]: Startup finished in 362ms. Dec 16 12:26:17.304170 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 16 12:26:17.315078 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 16 12:26:17.321508 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:26:17.330507 systemd[1]: Reached target multi-user.target - Multi-User System. 
Dec 16 12:26:17.333434 systemd[1]: Startup finished in 3.715s (kernel) + 9.268s (initrd) + 9.445s (userspace) = 22.429s. Dec 16 12:26:17.342975 (kubelet)[2267]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 12:26:17.498716 systemd[1]: Started sshd@1-172.31.21.37:22-139.178.89.65:47932.service - OpenSSH per-connection server daemon (139.178.89.65:47932). Dec 16 12:26:17.705200 sshd[2276]: Accepted publickey for core from 139.178.89.65 port 47932 ssh2: RSA SHA256:xUh8ykt9z5cbsZrCvFOBrVTYezXLnfIWheoQb+9aZNE Dec 16 12:26:17.707326 sshd-session[2276]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:26:17.716679 systemd-logind[1971]: New session 2 of user core. Dec 16 12:26:17.725538 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 16 12:26:17.856463 sshd[2283]: Connection closed by 139.178.89.65 port 47932 Dec 16 12:26:17.858699 amazon-ssm-agent[2191]: 2025-12-16 12:26:17.8546 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Dec 16 12:26:17.857240 sshd-session[2276]: pam_unix(sshd:session): session closed for user core Dec 16 12:26:17.868235 systemd[1]: sshd@1-172.31.21.37:22-139.178.89.65:47932.service: Deactivated successfully. Dec 16 12:26:17.874627 systemd[1]: session-2.scope: Deactivated successfully. Dec 16 12:26:17.879318 systemd-logind[1971]: Session 2 logged out. Waiting for processes to exit. Dec 16 12:26:17.898706 systemd[1]: Started sshd@2-172.31.21.37:22-139.178.89.65:47936.service - OpenSSH per-connection server daemon (139.178.89.65:47936). Dec 16 12:26:17.905652 systemd-logind[1971]: Removed session 2. 
Dec 16 12:26:17.960675 amazon-ssm-agent[2191]: 2025-12-16 12:26:17.8602 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2287) started Dec 16 12:26:18.061509 amazon-ssm-agent[2191]: 2025-12-16 12:26:17.8603 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Dec 16 12:26:18.177853 sshd[2292]: Accepted publickey for core from 139.178.89.65 port 47936 ssh2: RSA SHA256:xUh8ykt9z5cbsZrCvFOBrVTYezXLnfIWheoQb+9aZNE Dec 16 12:26:18.185870 sshd-session[2292]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:26:18.197317 systemd-logind[1971]: New session 3 of user core. Dec 16 12:26:18.203568 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 16 12:26:18.326139 sshd[2301]: Connection closed by 139.178.89.65 port 47936 Dec 16 12:26:18.326834 sshd-session[2292]: pam_unix(sshd:session): session closed for user core Dec 16 12:26:18.337962 systemd[1]: sshd@2-172.31.21.37:22-139.178.89.65:47936.service: Deactivated successfully. Dec 16 12:26:18.342771 systemd[1]: session-3.scope: Deactivated successfully. Dec 16 12:26:18.346538 systemd-logind[1971]: Session 3 logged out. Waiting for processes to exit. Dec 16 12:26:18.368515 systemd[1]: Started sshd@3-172.31.21.37:22-139.178.89.65:47944.service - OpenSSH per-connection server daemon (139.178.89.65:47944). Dec 16 12:26:18.390623 systemd-logind[1971]: Removed session 3. 
Dec 16 12:26:18.438678 kubelet[2267]: E1216 12:26:18.438591 2267 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 12:26:18.445475 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 12:26:18.445813 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 12:26:18.448402 systemd[1]: kubelet.service: Consumed 1.517s CPU time, 257.8M memory peak. Dec 16 12:26:18.589706 sshd[2312]: Accepted publickey for core from 139.178.89.65 port 47944 ssh2: RSA SHA256:xUh8ykt9z5cbsZrCvFOBrVTYezXLnfIWheoQb+9aZNE Dec 16 12:26:18.592101 sshd-session[2312]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:26:18.600224 systemd-logind[1971]: New session 4 of user core. Dec 16 12:26:18.609513 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 16 12:26:18.735286 sshd[2316]: Connection closed by 139.178.89.65 port 47944 Dec 16 12:26:18.736110 sshd-session[2312]: pam_unix(sshd:session): session closed for user core Dec 16 12:26:18.742706 systemd[1]: sshd@3-172.31.21.37:22-139.178.89.65:47944.service: Deactivated successfully. Dec 16 12:26:18.746434 systemd[1]: session-4.scope: Deactivated successfully. Dec 16 12:26:18.748384 systemd-logind[1971]: Session 4 logged out. Waiting for processes to exit. Dec 16 12:26:18.751189 systemd-logind[1971]: Removed session 4. Dec 16 12:26:18.771893 systemd[1]: Started sshd@4-172.31.21.37:22-139.178.89.65:47952.service - OpenSSH per-connection server daemon (139.178.89.65:47952). 
Dec 16 12:26:18.966219 sshd[2322]: Accepted publickey for core from 139.178.89.65 port 47952 ssh2: RSA SHA256:xUh8ykt9z5cbsZrCvFOBrVTYezXLnfIWheoQb+9aZNE Dec 16 12:26:18.969011 sshd-session[2322]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:26:18.982363 systemd-logind[1971]: New session 5 of user core. Dec 16 12:26:19.001566 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 16 12:26:19.129177 sudo[2326]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 16 12:26:19.129866 sudo[2326]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 12:26:19.144242 sudo[2326]: pam_unix(sudo:session): session closed for user root Dec 16 12:26:19.169296 sshd[2325]: Connection closed by 139.178.89.65 port 47952 Dec 16 12:26:19.170439 sshd-session[2322]: pam_unix(sshd:session): session closed for user core Dec 16 12:26:19.177409 systemd[1]: sshd@4-172.31.21.37:22-139.178.89.65:47952.service: Deactivated successfully. Dec 16 12:26:19.182055 systemd[1]: session-5.scope: Deactivated successfully. Dec 16 12:26:19.187655 systemd-logind[1971]: Session 5 logged out. Waiting for processes to exit. Dec 16 12:26:19.204658 systemd-logind[1971]: Removed session 5. Dec 16 12:26:19.206722 systemd[1]: Started sshd@5-172.31.21.37:22-139.178.89.65:47966.service - OpenSSH per-connection server daemon (139.178.89.65:47966). Dec 16 12:26:19.402209 sshd[2332]: Accepted publickey for core from 139.178.89.65 port 47966 ssh2: RSA SHA256:xUh8ykt9z5cbsZrCvFOBrVTYezXLnfIWheoQb+9aZNE Dec 16 12:26:19.404589 sshd-session[2332]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:26:19.412361 systemd-logind[1971]: New session 6 of user core. Dec 16 12:26:19.420553 systemd[1]: Started session-6.scope - Session 6 of User core. 
Dec 16 12:26:19.524546 sudo[2337]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 16 12:26:19.525129 sudo[2337]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 12:26:19.535627 sudo[2337]: pam_unix(sudo:session): session closed for user root Dec 16 12:26:19.545041 sudo[2336]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Dec 16 12:26:19.546135 sudo[2336]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 12:26:19.565135 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 16 12:26:19.624814 augenrules[2359]: No rules Dec 16 12:26:19.627459 systemd[1]: audit-rules.service: Deactivated successfully. Dec 16 12:26:19.627942 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 16 12:26:19.629730 sudo[2336]: pam_unix(sudo:session): session closed for user root Dec 16 12:26:19.654217 sshd[2335]: Connection closed by 139.178.89.65 port 47966 Dec 16 12:26:19.655484 sshd-session[2332]: pam_unix(sshd:session): session closed for user core Dec 16 12:26:19.662851 systemd-logind[1971]: Session 6 logged out. Waiting for processes to exit. Dec 16 12:26:19.663391 systemd[1]: sshd@5-172.31.21.37:22-139.178.89.65:47966.service: Deactivated successfully. Dec 16 12:26:19.666654 systemd[1]: session-6.scope: Deactivated successfully. Dec 16 12:26:19.670562 systemd-logind[1971]: Removed session 6. Dec 16 12:26:19.686425 systemd[1]: Started sshd@6-172.31.21.37:22-139.178.89.65:47972.service - OpenSSH per-connection server daemon (139.178.89.65:47972). 
Dec 16 12:26:19.875578 sshd[2368]: Accepted publickey for core from 139.178.89.65 port 47972 ssh2: RSA SHA256:xUh8ykt9z5cbsZrCvFOBrVTYezXLnfIWheoQb+9aZNE
Dec 16 12:26:19.877831 sshd-session[2368]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 12:26:19.887119 systemd-logind[1971]: New session 7 of user core.
Dec 16 12:26:19.893552 systemd[1]: Started session-7.scope - Session 7 of User core.
Dec 16 12:26:19.995161 sudo[2372]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Dec 16 12:26:19.995860 sudo[2372]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 16 12:26:20.804703 systemd[1]: Starting docker.service - Docker Application Container Engine...
Dec 16 12:26:20.834803 (dockerd)[2389]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Dec 16 12:26:21.370831 dockerd[2389]: time="2025-12-16T12:26:21.370316573Z" level=info msg="Starting up"
Dec 16 12:26:21.372231 dockerd[2389]: time="2025-12-16T12:26:21.372191969Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Dec 16 12:26:21.392823 dockerd[2389]: time="2025-12-16T12:26:21.392766737Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Dec 16 12:26:21.513680 systemd[1]: var-lib-docker-metacopy\x2dcheck3256874482-merged.mount: Deactivated successfully.
Dec 16 12:26:21.524723 dockerd[2389]: time="2025-12-16T12:26:21.524375670Z" level=info msg="Loading containers: start."
Dec 16 12:26:21.540303 kernel: Initializing XFRM netlink socket
Dec 16 12:26:21.904215 (udev-worker)[2412]: Network interface NamePolicy= disabled on kernel command line.
Dec 16 12:26:21.978763 systemd-networkd[1868]: docker0: Link UP
Dec 16 12:26:21.991755 dockerd[2389]: time="2025-12-16T12:26:21.991673168Z" level=info msg="Loading containers: done."
Dec 16 12:26:22.027273 dockerd[2389]: time="2025-12-16T12:26:22.027192004Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Dec 16 12:26:22.027480 dockerd[2389]: time="2025-12-16T12:26:22.027339616Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Dec 16 12:26:22.027536 dockerd[2389]: time="2025-12-16T12:26:22.027485308Z" level=info msg="Initializing buildkit"
Dec 16 12:26:22.078694 dockerd[2389]: time="2025-12-16T12:26:22.078636749Z" level=info msg="Completed buildkit initialization"
Dec 16 12:26:22.093840 dockerd[2389]: time="2025-12-16T12:26:22.093758561Z" level=info msg="Daemon has completed initialization"
Dec 16 12:26:22.094218 dockerd[2389]: time="2025-12-16T12:26:22.094168553Z" level=info msg="API listen on /run/docker.sock"
Dec 16 12:26:22.094253 systemd[1]: Started docker.service - Docker Application Container Engine.
Dec 16 12:26:23.156250 containerd[2002]: time="2025-12-16T12:26:23.156186978Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.10\""
Dec 16 12:26:23.892021 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2359412028.mount: Deactivated successfully.
Dec 16 12:26:25.395127 containerd[2002]: time="2025-12-16T12:26:25.395070121Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:26:25.397760 containerd[2002]: time="2025-12-16T12:26:25.397703745Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.10: active requests=0, bytes read=26431959"
Dec 16 12:26:25.398256 containerd[2002]: time="2025-12-16T12:26:25.398166312Z" level=info msg="ImageCreate event name:\"sha256:03aec5fd5841efdd990b8fe285e036fc1386e2f8851378ce2c9dfd1b331897ea\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:26:25.403065 containerd[2002]: time="2025-12-16T12:26:25.402998307Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:af4ee57c047e31a7f58422b94a9ec4c62221d3deebb16755bdeff720df796189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:26:25.405276 containerd[2002]: time="2025-12-16T12:26:25.405215011Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.10\" with image id \"sha256:03aec5fd5841efdd990b8fe285e036fc1386e2f8851378ce2c9dfd1b331897ea\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:af4ee57c047e31a7f58422b94a9ec4c62221d3deebb16755bdeff720df796189\", size \"26428558\" in 2.248964473s"
Dec 16 12:26:25.405444 containerd[2002]: time="2025-12-16T12:26:25.405416435Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.10\" returns image reference \"sha256:03aec5fd5841efdd990b8fe285e036fc1386e2f8851378ce2c9dfd1b331897ea\""
Dec 16 12:26:25.406432 containerd[2002]: time="2025-12-16T12:26:25.406372509Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.10\""
Dec 16 12:26:27.110225 containerd[2002]: time="2025-12-16T12:26:27.109884075Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:26:27.112087 containerd[2002]: time="2025-12-16T12:26:27.111738246Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.10: active requests=0, bytes read=22618955"
Dec 16 12:26:27.113827 containerd[2002]: time="2025-12-16T12:26:27.113767512Z" level=info msg="ImageCreate event name:\"sha256:66490a6490dde2df4a78eba21320da67070ad88461899536880edb5301ec2ba3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:26:27.118706 containerd[2002]: time="2025-12-16T12:26:27.118642092Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:efbd9d1dfcd2940e1c73a1476c880c3c2cdf04cc60722d329b21cd48745c8660\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:26:27.120943 containerd[2002]: time="2025-12-16T12:26:27.120765401Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.10\" with image id \"sha256:66490a6490dde2df4a78eba21320da67070ad88461899536880edb5301ec2ba3\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:efbd9d1dfcd2940e1c73a1476c880c3c2cdf04cc60722d329b21cd48745c8660\", size \"24203439\" in 1.714158234s"
Dec 16 12:26:27.120943 containerd[2002]: time="2025-12-16T12:26:27.120818432Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.10\" returns image reference \"sha256:66490a6490dde2df4a78eba21320da67070ad88461899536880edb5301ec2ba3\""
Dec 16 12:26:27.122731 containerd[2002]: time="2025-12-16T12:26:27.122413153Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.10\""
Dec 16 12:26:28.624295 containerd[2002]: time="2025-12-16T12:26:28.622762379Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:26:28.624787 containerd[2002]: time="2025-12-16T12:26:28.624508748Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.10: active requests=0, bytes read=17618436"
Dec 16 12:26:28.625158 containerd[2002]: time="2025-12-16T12:26:28.625120946Z" level=info msg="ImageCreate event name:\"sha256:fcf368a1abd0b48cff2fd3cca12fcc008aaf52eeab885656f11e7773c6a188a3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:26:28.629984 containerd[2002]: time="2025-12-16T12:26:28.629919780Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:9c58e1adcad5af66d1d9ca5cf9a4c266e4054b8f19f91a8fff1993549e657b10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:26:28.632155 containerd[2002]: time="2025-12-16T12:26:28.632108918Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.10\" with image id \"sha256:fcf368a1abd0b48cff2fd3cca12fcc008aaf52eeab885656f11e7773c6a188a3\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:9c58e1adcad5af66d1d9ca5cf9a4c266e4054b8f19f91a8fff1993549e657b10\", size \"19202938\" in 1.509641991s"
Dec 16 12:26:28.632316 containerd[2002]: time="2025-12-16T12:26:28.632288828Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.10\" returns image reference \"sha256:fcf368a1abd0b48cff2fd3cca12fcc008aaf52eeab885656f11e7773c6a188a3\""
Dec 16 12:26:28.632943 containerd[2002]: time="2025-12-16T12:26:28.632891121Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.10\""
Dec 16 12:26:28.696934 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Dec 16 12:26:28.702613 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 12:26:29.096038 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 12:26:29.115055 (kubelet)[2674]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 16 12:26:29.211513 kubelet[2674]: E1216 12:26:29.211391 2674 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 16 12:26:29.218183 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 16 12:26:29.218538 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 16 12:26:29.219393 systemd[1]: kubelet.service: Consumed 336ms CPU time, 106.5M memory peak.
Dec 16 12:26:30.016905 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1957988309.mount: Deactivated successfully.
Dec 16 12:26:30.606788 containerd[2002]: time="2025-12-16T12:26:30.606727311Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:26:30.608546 containerd[2002]: time="2025-12-16T12:26:30.608494727Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.10: active requests=0, bytes read=27561799"
Dec 16 12:26:30.609786 containerd[2002]: time="2025-12-16T12:26:30.609713120Z" level=info msg="ImageCreate event name:\"sha256:8b57c1f8bd2ddfa793889457b41e87132f192046e262b32ab0514f32d28be47d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:26:30.614039 containerd[2002]: time="2025-12-16T12:26:30.613966437Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e3dda1c7b384f9eb5b2fa1c27493b23b80e6204b9fa2ee8791b2de078f468cbf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:26:30.615857 containerd[2002]: time="2025-12-16T12:26:30.615782201Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.10\" with image id \"sha256:8b57c1f8bd2ddfa793889457b41e87132f192046e262b32ab0514f32d28be47d\", repo tag \"registry.k8s.io/kube-proxy:v1.32.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:e3dda1c7b384f9eb5b2fa1c27493b23b80e6204b9fa2ee8791b2de078f468cbf\", size \"27560818\" in 1.982631666s"
Dec 16 12:26:30.615857 containerd[2002]: time="2025-12-16T12:26:30.615846001Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.10\" returns image reference \"sha256:8b57c1f8bd2ddfa793889457b41e87132f192046e262b32ab0514f32d28be47d\""
Dec 16 12:26:30.617042 containerd[2002]: time="2025-12-16T12:26:30.616694357Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Dec 16 12:26:31.167598 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1035514440.mount: Deactivated successfully.
Dec 16 12:26:32.406763 containerd[2002]: time="2025-12-16T12:26:32.406683844Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:26:32.408682 containerd[2002]: time="2025-12-16T12:26:32.408609175Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951622"
Dec 16 12:26:32.411350 containerd[2002]: time="2025-12-16T12:26:32.411254505Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:26:32.417764 containerd[2002]: time="2025-12-16T12:26:32.416838423Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:26:32.418964 containerd[2002]: time="2025-12-16T12:26:32.418902255Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.802154183s"
Dec 16 12:26:32.419097 containerd[2002]: time="2025-12-16T12:26:32.418960880Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\""
Dec 16 12:26:32.420836 containerd[2002]: time="2025-12-16T12:26:32.420718366Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Dec 16 12:26:32.984927 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4215035870.mount: Deactivated successfully.
Dec 16 12:26:32.998302 containerd[2002]: time="2025-12-16T12:26:32.997525570Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 16 12:26:33.000380 containerd[2002]: time="2025-12-16T12:26:33.000342910Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703"
Dec 16 12:26:33.003194 containerd[2002]: time="2025-12-16T12:26:33.003153070Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 16 12:26:33.007625 containerd[2002]: time="2025-12-16T12:26:33.007573283Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 16 12:26:33.008958 containerd[2002]: time="2025-12-16T12:26:33.008898433Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 588.122631ms"
Dec 16 12:26:33.009096 containerd[2002]: time="2025-12-16T12:26:33.008955270Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Dec 16 12:26:33.009962 containerd[2002]: time="2025-12-16T12:26:33.009753813Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Dec 16 12:26:33.604998 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1274585912.mount: Deactivated successfully.
Dec 16 12:26:35.908866 containerd[2002]: time="2025-12-16T12:26:35.908782647Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:26:35.911230 containerd[2002]: time="2025-12-16T12:26:35.911169176Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67943165"
Dec 16 12:26:35.913116 containerd[2002]: time="2025-12-16T12:26:35.912234204Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:26:35.919090 containerd[2002]: time="2025-12-16T12:26:35.919025218Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:26:35.922072 containerd[2002]: time="2025-12-16T12:26:35.921997892Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 2.912180699s"
Dec 16 12:26:35.922072 containerd[2002]: time="2025-12-16T12:26:35.922061980Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\""
Dec 16 12:26:39.469176 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Dec 16 12:26:39.474068 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 12:26:39.824537 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 12:26:39.837787 (kubelet)[2828]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 16 12:26:39.930573 kubelet[2828]: E1216 12:26:39.930453 2828 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 16 12:26:39.935635 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 16 12:26:39.935945 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 16 12:26:39.936801 systemd[1]: kubelet.service: Consumed 303ms CPU time, 105.6M memory peak.
Dec 16 12:26:45.534956 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 16 12:26:46.024989 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 12:26:46.025606 systemd[1]: kubelet.service: Consumed 303ms CPU time, 105.6M memory peak.
Dec 16 12:26:46.029559 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 12:26:46.080771 systemd[1]: Reload requested from client PID 2845 ('systemctl') (unit session-7.scope)...
Dec 16 12:26:46.080801 systemd[1]: Reloading...
Dec 16 12:26:46.326312 zram_generator::config[2892]: No configuration found.
Dec 16 12:26:46.798780 systemd[1]: Reloading finished in 717 ms.
Dec 16 12:26:46.896243 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Dec 16 12:26:46.896500 systemd[1]: kubelet.service: Failed with result 'signal'.
Dec 16 12:26:46.897076 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 12:26:46.897174 systemd[1]: kubelet.service: Consumed 230ms CPU time, 95M memory peak.
Dec 16 12:26:46.901785 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 12:26:47.237364 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 12:26:47.253093 (kubelet)[2953]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Dec 16 12:26:47.327311 kubelet[2953]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 16 12:26:47.327311 kubelet[2953]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Dec 16 12:26:47.327311 kubelet[2953]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 16 12:26:47.327311 kubelet[2953]: I1216 12:26:47.326949 2953 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 16 12:26:49.466920 kubelet[2953]: I1216 12:26:49.466862 2953 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Dec 16 12:26:49.468337 kubelet[2953]: I1216 12:26:49.467526 2953 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 16 12:26:49.468337 kubelet[2953]: I1216 12:26:49.468006 2953 server.go:954] "Client rotation is on, will bootstrap in background"
Dec 16 12:26:49.517643 kubelet[2953]: E1216 12:26:49.517475 2953 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.21.37:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.21.37:6443: connect: connection refused" logger="UnhandledError"
Dec 16 12:26:49.521066 kubelet[2953]: I1216 12:26:49.520979 2953 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 16 12:26:49.534708 kubelet[2953]: I1216 12:26:49.534665 2953 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Dec 16 12:26:49.541978 kubelet[2953]: I1216 12:26:49.540394 2953 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Dec 16 12:26:49.541978 kubelet[2953]: I1216 12:26:49.540850 2953 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 16 12:26:49.541978 kubelet[2953]: I1216 12:26:49.540894 2953 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-21-37","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Dec 16 12:26:49.541978 kubelet[2953]: I1216 12:26:49.541335 2953 topology_manager.go:138] "Creating topology manager with none policy"
Dec 16 12:26:49.542523 kubelet[2953]: I1216 12:26:49.541355 2953 container_manager_linux.go:304] "Creating device plugin manager"
Dec 16 12:26:49.542523 kubelet[2953]: I1216 12:26:49.541688 2953 state_mem.go:36] "Initialized new in-memory state store"
Dec 16 12:26:49.547609 kubelet[2953]: I1216 12:26:49.547573 2953 kubelet.go:446] "Attempting to sync node with API server"
Dec 16 12:26:49.547770 kubelet[2953]: I1216 12:26:49.547751 2953 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 16 12:26:49.549745 kubelet[2953]: I1216 12:26:49.549713 2953 kubelet.go:352] "Adding apiserver pod source"
Dec 16 12:26:49.549902 kubelet[2953]: I1216 12:26:49.549883 2953 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 16 12:26:49.554736 kubelet[2953]: W1216 12:26:49.554657 2953 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.21.37:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-21-37&limit=500&resourceVersion=0": dial tcp 172.31.21.37:6443: connect: connection refused
Dec 16 12:26:49.554989 kubelet[2953]: E1216 12:26:49.554958 2953 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.21.37:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-21-37&limit=500&resourceVersion=0\": dial tcp 172.31.21.37:6443: connect: connection refused" logger="UnhandledError"
Dec 16 12:26:49.555669 kubelet[2953]: W1216 12:26:49.555604 2953 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.21.37:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.21.37:6443: connect: connection refused
Dec 16 12:26:49.555927 kubelet[2953]: E1216 12:26:49.555895 2953 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.21.37:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.21.37:6443: connect: connection refused" logger="UnhandledError"
Dec 16 12:26:49.556191 kubelet[2953]: I1216 12:26:49.556166 2953 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Dec 16 12:26:49.557363 kubelet[2953]: I1216 12:26:49.557329 2953 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 16 12:26:49.557744 kubelet[2953]: W1216 12:26:49.557720 2953 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Dec 16 12:26:49.561225 kubelet[2953]: I1216 12:26:49.561177 2953 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Dec 16 12:26:49.561824 kubelet[2953]: I1216 12:26:49.561348 2953 server.go:1287] "Started kubelet"
Dec 16 12:26:49.566181 kubelet[2953]: I1216 12:26:49.565972 2953 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Dec 16 12:26:49.568170 kubelet[2953]: I1216 12:26:49.567588 2953 server.go:479] "Adding debug handlers to kubelet server"
Dec 16 12:26:49.570502 kubelet[2953]: I1216 12:26:49.570408 2953 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 16 12:26:49.571098 kubelet[2953]: I1216 12:26:49.571069 2953 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 16 12:26:49.571928 kubelet[2953]: E1216 12:26:49.571485 2953 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.21.37:6443/api/v1/namespaces/default/events\": dial tcp 172.31.21.37:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-21-37.1881b1c9e3062827 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-21-37,UID:ip-172-31-21-37,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-21-37,},FirstTimestamp:2025-12-16 12:26:49.561319463 +0000 UTC m=+2.301421885,LastTimestamp:2025-12-16 12:26:49.561319463 +0000 UTC m=+2.301421885,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-21-37,}"
Dec 16 12:26:49.573317 kubelet[2953]: I1216 12:26:49.572850 2953 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 16 12:26:49.573317 kubelet[2953]: I1216 12:26:49.573215 2953 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Dec 16 12:26:49.582925 kubelet[2953]: E1216 12:26:49.582863 2953 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-21-37\" not found"
Dec 16 12:26:49.583120 kubelet[2953]: I1216 12:26:49.583100 2953 volume_manager.go:297] "Starting Kubelet Volume Manager"
Dec 16 12:26:49.583619 kubelet[2953]: I1216 12:26:49.583592 2953 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Dec 16 12:26:49.583843 kubelet[2953]: I1216 12:26:49.583823 2953 reconciler.go:26] "Reconciler: start to sync state"
Dec 16 12:26:49.584755 kubelet[2953]: W1216 12:26:49.584665 2953 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.21.37:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.21.37:6443: connect: connection refused
Dec 16 12:26:49.585053 kubelet[2953]: E1216 12:26:49.584887 2953 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.21.37:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.21.37:6443: connect: connection refused" logger="UnhandledError"
Dec 16 12:26:49.585558 kubelet[2953]: E1216 12:26:49.585498 2953 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Dec 16 12:26:49.586339 kubelet[2953]: I1216 12:26:49.586306 2953 factory.go:221] Registration of the systemd container factory successfully
Dec 16 12:26:49.586597 kubelet[2953]: I1216 12:26:49.586568 2953 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Dec 16 12:26:49.588175 kubelet[2953]: E1216 12:26:49.588127 2953 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.21.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-37?timeout=10s\": dial tcp 172.31.21.37:6443: connect: connection refused" interval="200ms"
Dec 16 12:26:49.589338 kubelet[2953]: I1216 12:26:49.588928 2953 factory.go:221] Registration of the containerd container factory successfully
Dec 16 12:26:49.613813 kubelet[2953]: I1216 12:26:49.613703 2953 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Dec 16 12:26:49.615997 kubelet[2953]: I1216 12:26:49.615930 2953 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Dec 16 12:26:49.615997 kubelet[2953]: I1216 12:26:49.615978 2953 status_manager.go:227] "Starting to sync pod status with apiserver"
Dec 16 12:26:49.616179 kubelet[2953]: I1216 12:26:49.616014 2953 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Dec 16 12:26:49.616179 kubelet[2953]: I1216 12:26:49.616028 2953 kubelet.go:2382] "Starting kubelet main sync loop" Dec 16 12:26:49.616179 kubelet[2953]: E1216 12:26:49.616103 2953 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 16 12:26:49.628303 kubelet[2953]: W1216 12:26:49.628093 2953 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.21.37:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.21.37:6443: connect: connection refused Dec 16 12:26:49.628303 kubelet[2953]: E1216 12:26:49.628172 2953 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.21.37:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.21.37:6443: connect: connection refused" logger="UnhandledError" Dec 16 12:26:49.644549 kubelet[2953]: I1216 12:26:49.644508 2953 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 16 12:26:49.644549 kubelet[2953]: I1216 12:26:49.644546 2953 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 16 12:26:49.644770 kubelet[2953]: I1216 12:26:49.644580 2953 state_mem.go:36] "Initialized new in-memory state store" Dec 16 12:26:49.649559 kubelet[2953]: I1216 12:26:49.649505 2953 policy_none.go:49] "None policy: Start" Dec 16 12:26:49.649559 kubelet[2953]: I1216 12:26:49.649549 2953 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 16 12:26:49.649699 kubelet[2953]: I1216 12:26:49.649574 2953 state_mem.go:35] "Initializing new in-memory state store" Dec 16 12:26:49.662694 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Dec 16 12:26:49.678488 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 16 12:26:49.683624 kubelet[2953]: E1216 12:26:49.683570 2953 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-21-37\" not found" Dec 16 12:26:49.687555 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Dec 16 12:26:49.700040 kubelet[2953]: I1216 12:26:49.699872 2953 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 16 12:26:49.700174 kubelet[2953]: I1216 12:26:49.700163 2953 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 16 12:26:49.700231 kubelet[2953]: I1216 12:26:49.700183 2953 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 16 12:26:49.701608 kubelet[2953]: I1216 12:26:49.701550 2953 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 16 12:26:49.705255 kubelet[2953]: E1216 12:26:49.704960 2953 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Dec 16 12:26:49.705255 kubelet[2953]: E1216 12:26:49.705040 2953 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-21-37\" not found" Dec 16 12:26:49.734653 systemd[1]: Created slice kubepods-burstable-pod22d2cd9613782fdd70e2b21ba58e259b.slice - libcontainer container kubepods-burstable-pod22d2cd9613782fdd70e2b21ba58e259b.slice. 
Dec 16 12:26:49.747960 kubelet[2953]: E1216 12:26:49.747888 2953 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-37\" not found" node="ip-172-31-21-37" Dec 16 12:26:49.754546 systemd[1]: Created slice kubepods-burstable-pod91c5db9b1b90ef299e78e57f70b182bc.slice - libcontainer container kubepods-burstable-pod91c5db9b1b90ef299e78e57f70b182bc.slice. Dec 16 12:26:49.760313 kubelet[2953]: E1216 12:26:49.759539 2953 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-37\" not found" node="ip-172-31-21-37" Dec 16 12:26:49.765647 systemd[1]: Created slice kubepods-burstable-pod58cd32a4eb7cc1690ee27930fc742a87.slice - libcontainer container kubepods-burstable-pod58cd32a4eb7cc1690ee27930fc742a87.slice. Dec 16 12:26:49.769582 kubelet[2953]: E1216 12:26:49.769550 2953 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-37\" not found" node="ip-172-31-21-37" Dec 16 12:26:49.789226 kubelet[2953]: E1216 12:26:49.789163 2953 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.21.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-37?timeout=10s\": dial tcp 172.31.21.37:6443: connect: connection refused" interval="400ms" Dec 16 12:26:49.803305 kubelet[2953]: I1216 12:26:49.803176 2953 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-21-37" Dec 16 12:26:49.804245 kubelet[2953]: E1216 12:26:49.804197 2953 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.21.37:6443/api/v1/nodes\": dial tcp 172.31.21.37:6443: connect: connection refused" node="ip-172-31-21-37" Dec 16 12:26:49.886106 kubelet[2953]: I1216 12:26:49.885704 2953 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/58cd32a4eb7cc1690ee27930fc742a87-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-21-37\" (UID: \"58cd32a4eb7cc1690ee27930fc742a87\") " pod="kube-system/kube-controller-manager-ip-172-31-21-37" Dec 16 12:26:49.886106 kubelet[2953]: I1216 12:26:49.885767 2953 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/91c5db9b1b90ef299e78e57f70b182bc-kubeconfig\") pod \"kube-scheduler-ip-172-31-21-37\" (UID: \"91c5db9b1b90ef299e78e57f70b182bc\") " pod="kube-system/kube-scheduler-ip-172-31-21-37" Dec 16 12:26:49.886106 kubelet[2953]: I1216 12:26:49.885808 2953 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/22d2cd9613782fdd70e2b21ba58e259b-ca-certs\") pod \"kube-apiserver-ip-172-31-21-37\" (UID: \"22d2cd9613782fdd70e2b21ba58e259b\") " pod="kube-system/kube-apiserver-ip-172-31-21-37" Dec 16 12:26:49.886106 kubelet[2953]: I1216 12:26:49.885846 2953 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/22d2cd9613782fdd70e2b21ba58e259b-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-21-37\" (UID: \"22d2cd9613782fdd70e2b21ba58e259b\") " pod="kube-system/kube-apiserver-ip-172-31-21-37" Dec 16 12:26:49.886106 kubelet[2953]: I1216 12:26:49.885882 2953 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/58cd32a4eb7cc1690ee27930fc742a87-ca-certs\") pod \"kube-controller-manager-ip-172-31-21-37\" (UID: \"58cd32a4eb7cc1690ee27930fc742a87\") " pod="kube-system/kube-controller-manager-ip-172-31-21-37" Dec 16 12:26:49.886481 kubelet[2953]: I1216 12:26:49.885917 2953 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/58cd32a4eb7cc1690ee27930fc742a87-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-21-37\" (UID: \"58cd32a4eb7cc1690ee27930fc742a87\") " pod="kube-system/kube-controller-manager-ip-172-31-21-37" Dec 16 12:26:49.886481 kubelet[2953]: I1216 12:26:49.885955 2953 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/22d2cd9613782fdd70e2b21ba58e259b-k8s-certs\") pod \"kube-apiserver-ip-172-31-21-37\" (UID: \"22d2cd9613782fdd70e2b21ba58e259b\") " pod="kube-system/kube-apiserver-ip-172-31-21-37" Dec 16 12:26:49.886481 kubelet[2953]: I1216 12:26:49.886013 2953 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/58cd32a4eb7cc1690ee27930fc742a87-k8s-certs\") pod \"kube-controller-manager-ip-172-31-21-37\" (UID: \"58cd32a4eb7cc1690ee27930fc742a87\") " pod="kube-system/kube-controller-manager-ip-172-31-21-37" Dec 16 12:26:49.886481 kubelet[2953]: I1216 12:26:49.886052 2953 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/58cd32a4eb7cc1690ee27930fc742a87-kubeconfig\") pod \"kube-controller-manager-ip-172-31-21-37\" (UID: \"58cd32a4eb7cc1690ee27930fc742a87\") " pod="kube-system/kube-controller-manager-ip-172-31-21-37" Dec 16 12:26:50.007537 kubelet[2953]: I1216 12:26:50.007411 2953 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-21-37" Dec 16 12:26:50.007960 kubelet[2953]: E1216 12:26:50.007897 2953 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.21.37:6443/api/v1/nodes\": dial tcp 172.31.21.37:6443: connect: connection refused" node="ip-172-31-21-37" Dec 16 12:26:50.049400 
containerd[2002]: time="2025-12-16T12:26:50.049320157Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-21-37,Uid:22d2cd9613782fdd70e2b21ba58e259b,Namespace:kube-system,Attempt:0,}" Dec 16 12:26:50.062224 containerd[2002]: time="2025-12-16T12:26:50.061931199Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-21-37,Uid:91c5db9b1b90ef299e78e57f70b182bc,Namespace:kube-system,Attempt:0,}" Dec 16 12:26:50.071624 containerd[2002]: time="2025-12-16T12:26:50.071532158Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-21-37,Uid:58cd32a4eb7cc1690ee27930fc742a87,Namespace:kube-system,Attempt:0,}" Dec 16 12:26:50.100288 containerd[2002]: time="2025-12-16T12:26:50.099513581Z" level=info msg="connecting to shim 105e4f0096971f6a53ad178910184ea1048e558c8b7b2d94dadc6ff35ac3f116" address="unix:///run/containerd/s/09bb0e7dcfc0662d82a16bad1d18a4e2be6e8a855fafcdd53a3da3d0bf11ae26" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:26:50.166813 containerd[2002]: time="2025-12-16T12:26:50.166759212Z" level=info msg="connecting to shim 76247fe4d867b53b4e26a48d5c51832f9571e6453d0fac5238ef8957401358e4" address="unix:///run/containerd/s/a980d067614b00757700226061aeb061546ab248cc43ee2eab33608d7698f97a" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:26:50.167503 containerd[2002]: time="2025-12-16T12:26:50.166784292Z" level=info msg="connecting to shim 9452a11ea11dafbd28999b93c325f9977cc1d2b162134fca0a43958852f5576d" address="unix:///run/containerd/s/ad0c0a56ce30c24424b1496f58951a180ff5d6e73db43c9d695a59cfabb5adb8" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:26:50.181610 systemd[1]: Started cri-containerd-105e4f0096971f6a53ad178910184ea1048e558c8b7b2d94dadc6ff35ac3f116.scope - libcontainer container 105e4f0096971f6a53ad178910184ea1048e558c8b7b2d94dadc6ff35ac3f116. 
Dec 16 12:26:50.190252 kubelet[2953]: E1216 12:26:50.190202 2953 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.21.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-37?timeout=10s\": dial tcp 172.31.21.37:6443: connect: connection refused" interval="800ms" Dec 16 12:26:50.251600 systemd[1]: Started cri-containerd-76247fe4d867b53b4e26a48d5c51832f9571e6453d0fac5238ef8957401358e4.scope - libcontainer container 76247fe4d867b53b4e26a48d5c51832f9571e6453d0fac5238ef8957401358e4. Dec 16 12:26:50.278605 systemd[1]: Started cri-containerd-9452a11ea11dafbd28999b93c325f9977cc1d2b162134fca0a43958852f5576d.scope - libcontainer container 9452a11ea11dafbd28999b93c325f9977cc1d2b162134fca0a43958852f5576d. Dec 16 12:26:50.341480 containerd[2002]: time="2025-12-16T12:26:50.341416618Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-21-37,Uid:22d2cd9613782fdd70e2b21ba58e259b,Namespace:kube-system,Attempt:0,} returns sandbox id \"105e4f0096971f6a53ad178910184ea1048e558c8b7b2d94dadc6ff35ac3f116\"" Dec 16 12:26:50.352091 containerd[2002]: time="2025-12-16T12:26:50.351939577Z" level=info msg="CreateContainer within sandbox \"105e4f0096971f6a53ad178910184ea1048e558c8b7b2d94dadc6ff35ac3f116\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 16 12:26:50.386600 containerd[2002]: time="2025-12-16T12:26:50.386539573Z" level=info msg="Container cb68a9a27f83655c5174a135adde90588f7b328af7d73b9345d9e2ff8e7750d3: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:26:50.394466 containerd[2002]: time="2025-12-16T12:26:50.394310216Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-21-37,Uid:58cd32a4eb7cc1690ee27930fc742a87,Namespace:kube-system,Attempt:0,} returns sandbox id \"76247fe4d867b53b4e26a48d5c51832f9571e6453d0fac5238ef8957401358e4\"" Dec 16 12:26:50.400123 containerd[2002]: time="2025-12-16T12:26:50.400059469Z" 
level=info msg="CreateContainer within sandbox \"76247fe4d867b53b4e26a48d5c51832f9571e6453d0fac5238ef8957401358e4\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 16 12:26:50.408112 containerd[2002]: time="2025-12-16T12:26:50.408030301Z" level=info msg="CreateContainer within sandbox \"105e4f0096971f6a53ad178910184ea1048e558c8b7b2d94dadc6ff35ac3f116\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"cb68a9a27f83655c5174a135adde90588f7b328af7d73b9345d9e2ff8e7750d3\"" Dec 16 12:26:50.409774 containerd[2002]: time="2025-12-16T12:26:50.409670380Z" level=info msg="StartContainer for \"cb68a9a27f83655c5174a135adde90588f7b328af7d73b9345d9e2ff8e7750d3\"" Dec 16 12:26:50.412717 containerd[2002]: time="2025-12-16T12:26:50.412639873Z" level=info msg="connecting to shim cb68a9a27f83655c5174a135adde90588f7b328af7d73b9345d9e2ff8e7750d3" address="unix:///run/containerd/s/09bb0e7dcfc0662d82a16bad1d18a4e2be6e8a855fafcdd53a3da3d0bf11ae26" protocol=ttrpc version=3 Dec 16 12:26:50.416562 kubelet[2953]: I1216 12:26:50.416253 2953 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-21-37" Dec 16 12:26:50.417809 kubelet[2953]: E1216 12:26:50.417697 2953 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.21.37:6443/api/v1/nodes\": dial tcp 172.31.21.37:6443: connect: connection refused" node="ip-172-31-21-37" Dec 16 12:26:50.424827 containerd[2002]: time="2025-12-16T12:26:50.424751429Z" level=info msg="Container 45270984168d05cbb7deff63bdf599407ba5525b3ad06832aef39e4f4a99a887: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:26:50.442643 containerd[2002]: time="2025-12-16T12:26:50.442581396Z" level=info msg="CreateContainer within sandbox \"76247fe4d867b53b4e26a48d5c51832f9571e6453d0fac5238ef8957401358e4\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"45270984168d05cbb7deff63bdf599407ba5525b3ad06832aef39e4f4a99a887\"" 
Dec 16 12:26:50.448310 containerd[2002]: time="2025-12-16T12:26:50.447539346Z" level=info msg="StartContainer for \"45270984168d05cbb7deff63bdf599407ba5525b3ad06832aef39e4f4a99a887\"" Dec 16 12:26:50.453562 containerd[2002]: time="2025-12-16T12:26:50.453326886Z" level=info msg="connecting to shim 45270984168d05cbb7deff63bdf599407ba5525b3ad06832aef39e4f4a99a887" address="unix:///run/containerd/s/a980d067614b00757700226061aeb061546ab248cc43ee2eab33608d7698f97a" protocol=ttrpc version=3 Dec 16 12:26:50.458892 containerd[2002]: time="2025-12-16T12:26:50.458842887Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-21-37,Uid:91c5db9b1b90ef299e78e57f70b182bc,Namespace:kube-system,Attempt:0,} returns sandbox id \"9452a11ea11dafbd28999b93c325f9977cc1d2b162134fca0a43958852f5576d\"" Dec 16 12:26:50.470970 systemd[1]: Started cri-containerd-cb68a9a27f83655c5174a135adde90588f7b328af7d73b9345d9e2ff8e7750d3.scope - libcontainer container cb68a9a27f83655c5174a135adde90588f7b328af7d73b9345d9e2ff8e7750d3. Dec 16 12:26:50.471863 containerd[2002]: time="2025-12-16T12:26:50.471817879Z" level=info msg="CreateContainer within sandbox \"9452a11ea11dafbd28999b93c325f9977cc1d2b162134fca0a43958852f5576d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 16 12:26:50.521404 containerd[2002]: time="2025-12-16T12:26:50.521184330Z" level=info msg="Container fbd36fd60dccef25a57e876fba68fceb94e141b42be5a8e1f0c63c3e95f6dc00: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:26:50.525656 systemd[1]: Started cri-containerd-45270984168d05cbb7deff63bdf599407ba5525b3ad06832aef39e4f4a99a887.scope - libcontainer container 45270984168d05cbb7deff63bdf599407ba5525b3ad06832aef39e4f4a99a887. 
Dec 16 12:26:50.539206 kubelet[2953]: W1216 12:26:50.538789 2953 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.21.37:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.21.37:6443: connect: connection refused Dec 16 12:26:50.540689 kubelet[2953]: E1216 12:26:50.538889 2953 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.21.37:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.21.37:6443: connect: connection refused" logger="UnhandledError" Dec 16 12:26:50.543023 containerd[2002]: time="2025-12-16T12:26:50.542923030Z" level=info msg="CreateContainer within sandbox \"9452a11ea11dafbd28999b93c325f9977cc1d2b162134fca0a43958852f5576d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"fbd36fd60dccef25a57e876fba68fceb94e141b42be5a8e1f0c63c3e95f6dc00\"" Dec 16 12:26:50.546330 containerd[2002]: time="2025-12-16T12:26:50.544673817Z" level=info msg="StartContainer for \"fbd36fd60dccef25a57e876fba68fceb94e141b42be5a8e1f0c63c3e95f6dc00\"" Dec 16 12:26:50.547897 containerd[2002]: time="2025-12-16T12:26:50.547822271Z" level=info msg="connecting to shim fbd36fd60dccef25a57e876fba68fceb94e141b42be5a8e1f0c63c3e95f6dc00" address="unix:///run/containerd/s/ad0c0a56ce30c24424b1496f58951a180ff5d6e73db43c9d695a59cfabb5adb8" protocol=ttrpc version=3 Dec 16 12:26:50.615561 systemd[1]: Started cri-containerd-fbd36fd60dccef25a57e876fba68fceb94e141b42be5a8e1f0c63c3e95f6dc00.scope - libcontainer container fbd36fd60dccef25a57e876fba68fceb94e141b42be5a8e1f0c63c3e95f6dc00. 
Dec 16 12:26:50.656107 containerd[2002]: time="2025-12-16T12:26:50.655949472Z" level=info msg="StartContainer for \"cb68a9a27f83655c5174a135adde90588f7b328af7d73b9345d9e2ff8e7750d3\" returns successfully" Dec 16 12:26:50.695480 kubelet[2953]: E1216 12:26:50.695252 2953 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-37\" not found" node="ip-172-31-21-37" Dec 16 12:26:50.709478 containerd[2002]: time="2025-12-16T12:26:50.709398564Z" level=info msg="StartContainer for \"45270984168d05cbb7deff63bdf599407ba5525b3ad06832aef39e4f4a99a887\" returns successfully" Dec 16 12:26:50.737974 kubelet[2953]: W1216 12:26:50.737890 2953 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.21.37:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.21.37:6443: connect: connection refused Dec 16 12:26:50.738153 kubelet[2953]: E1216 12:26:50.737988 2953 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.21.37:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.21.37:6443: connect: connection refused" logger="UnhandledError" Dec 16 12:26:50.832283 containerd[2002]: time="2025-12-16T12:26:50.831978816Z" level=info msg="StartContainer for \"fbd36fd60dccef25a57e876fba68fceb94e141b42be5a8e1f0c63c3e95f6dc00\" returns successfully" Dec 16 12:26:51.240374 kubelet[2953]: I1216 12:26:51.240303 2953 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-21-37" Dec 16 12:26:51.702059 kubelet[2953]: E1216 12:26:51.702023 2953 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-37\" not found" node="ip-172-31-21-37" Dec 16 12:26:51.711520 kubelet[2953]: E1216 
12:26:51.711220 2953 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-37\" not found" node="ip-172-31-21-37" Dec 16 12:26:51.712572 kubelet[2953]: E1216 12:26:51.712516 2953 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-37\" not found" node="ip-172-31-21-37" Dec 16 12:26:52.712877 kubelet[2953]: E1216 12:26:52.712009 2953 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-37\" not found" node="ip-172-31-21-37" Dec 16 12:26:52.712877 kubelet[2953]: E1216 12:26:52.712079 2953 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-37\" not found" node="ip-172-31-21-37" Dec 16 12:26:53.713125 kubelet[2953]: E1216 12:26:53.713048 2953 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-37\" not found" node="ip-172-31-21-37" Dec 16 12:26:53.717009 kubelet[2953]: E1216 12:26:53.716864 2953 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-37\" not found" node="ip-172-31-21-37" Dec 16 12:26:54.445711 kubelet[2953]: E1216 12:26:54.445649 2953 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-21-37\" not found" node="ip-172-31-21-37" Dec 16 12:26:54.502153 kubelet[2953]: I1216 12:26:54.501802 2953 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-21-37" Dec 16 12:26:54.554339 kubelet[2953]: I1216 12:26:54.554302 2953 apiserver.go:52] "Watching apiserver" Dec 16 12:26:54.565583 kubelet[2953]: E1216 12:26:54.565382 2953 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" 
event="&Event{ObjectMeta:{ip-172-31-21-37.1881b1c9e3062827 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-21-37,UID:ip-172-31-21-37,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-21-37,},FirstTimestamp:2025-12-16 12:26:49.561319463 +0000 UTC m=+2.301421885,LastTimestamp:2025-12-16 12:26:49.561319463 +0000 UTC m=+2.301421885,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-21-37,}" Dec 16 12:26:54.584953 kubelet[2953]: I1216 12:26:54.584899 2953 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 16 12:26:54.588316 kubelet[2953]: I1216 12:26:54.586911 2953 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-21-37" Dec 16 12:26:54.726650 kubelet[2953]: E1216 12:26:54.725749 2953 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-21-37\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-21-37" Dec 16 12:26:54.726650 kubelet[2953]: I1216 12:26:54.725800 2953 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-21-37" Dec 16 12:26:54.735292 kubelet[2953]: E1216 12:26:54.735154 2953 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-21-37\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-21-37" Dec 16 12:26:54.735292 kubelet[2953]: I1216 12:26:54.735206 2953 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-21-37" Dec 16 12:26:54.743649 kubelet[2953]: E1216 12:26:54.743583 2953 kubelet.go:3196] "Failed 
creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-21-37\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-21-37" Dec 16 12:26:56.076293 kubelet[2953]: I1216 12:26:56.076232 2953 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-21-37" Dec 16 12:26:56.635295 kubelet[2953]: I1216 12:26:56.634983 2953 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-21-37" Dec 16 12:26:56.735094 systemd[1]: Reload requested from client PID 3224 ('systemctl') (unit session-7.scope)... Dec 16 12:26:56.735329 systemd[1]: Reloading... Dec 16 12:26:57.048463 zram_generator::config[3272]: No configuration found. Dec 16 12:26:57.554844 systemd[1]: Reloading finished in 818 ms. Dec 16 12:26:57.607907 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:26:57.624754 systemd[1]: kubelet.service: Deactivated successfully. Dec 16 12:26:57.626377 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:26:57.626489 systemd[1]: kubelet.service: Consumed 3.077s CPU time, 129.1M memory peak. Dec 16 12:26:57.630511 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:26:58.008134 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:26:58.022945 (kubelet)[3329]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 16 12:26:58.122286 kubelet[3329]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 12:26:58.122286 kubelet[3329]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. 
Image garbage collector will get sandbox image information from CRI. Dec 16 12:26:58.122286 kubelet[3329]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 12:26:58.123313 kubelet[3329]: I1216 12:26:58.122967 3329 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 16 12:26:58.137742 kubelet[3329]: I1216 12:26:58.137675 3329 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Dec 16 12:26:58.138150 kubelet[3329]: I1216 12:26:58.137887 3329 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 16 12:26:58.138652 kubelet[3329]: I1216 12:26:58.138594 3329 server.go:954] "Client rotation is on, will bootstrap in background" Dec 16 12:26:58.144350 kubelet[3329]: I1216 12:26:58.143872 3329 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 16 12:26:58.154550 kubelet[3329]: I1216 12:26:58.154496 3329 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 16 12:26:58.164082 sudo[3343]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Dec 16 12:26:58.165940 kubelet[3329]: I1216 12:26:58.164761 3329 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 16 12:26:58.164869 sudo[3343]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Dec 16 12:26:58.172686 kubelet[3329]: I1216 12:26:58.172635 3329 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /"
Dec 16 12:26:58.173909 kubelet[3329]: I1216 12:26:58.173386 3329 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 16 12:26:58.173909 kubelet[3329]: I1216 12:26:58.173442 3329 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-21-37","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Dec 16 12:26:58.173909 kubelet[3329]: I1216 12:26:58.173750 3329 topology_manager.go:138] "Creating topology manager with none policy"
Dec 16 12:26:58.173909 kubelet[3329]: I1216 12:26:58.173770 3329 container_manager_linux.go:304] "Creating device plugin manager"
Dec 16 12:26:58.174297 kubelet[3329]: I1216 12:26:58.173842 3329 state_mem.go:36] "Initialized new in-memory state store"
Dec 16 12:26:58.175325 kubelet[3329]: I1216 12:26:58.174629 3329 kubelet.go:446] "Attempting to sync node with API server"
Dec 16 12:26:58.176290 kubelet[3329]: I1216 12:26:58.176232 3329 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 16 12:26:58.176513 kubelet[3329]: I1216 12:26:58.176494 3329 kubelet.go:352] "Adding apiserver pod source"
Dec 16 12:26:58.178354 kubelet[3329]: I1216 12:26:58.178306 3329 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 16 12:26:58.180282 kubelet[3329]: I1216 12:26:58.180228 3329 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Dec 16 12:26:58.181223 kubelet[3329]: I1216 12:26:58.181197 3329 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 16 12:26:58.184090 kubelet[3329]: I1216 12:26:58.184056 3329 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Dec 16 12:26:58.184329 kubelet[3329]: I1216 12:26:58.184311 3329 server.go:1287] "Started kubelet"
Dec 16 12:26:58.195566 kubelet[3329]: I1216 12:26:58.195529 3329 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 16 12:26:58.200353 kubelet[3329]: I1216 12:26:58.200287 3329 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Dec 16 12:26:58.201750 kubelet[3329]: I1216 12:26:58.201473 3329 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 16 12:26:58.224472 kubelet[3329]: I1216 12:26:58.224392 3329 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Dec 16 12:26:58.241166 kubelet[3329]: I1216 12:26:58.239833 3329 volume_manager.go:297] "Starting Kubelet Volume Manager"
Dec 16 12:26:58.252970 kubelet[3329]: E1216 12:26:58.252927 3329 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-21-37\" not found"
Dec 16 12:26:58.262168 kubelet[3329]: I1216 12:26:58.261415 3329 server.go:479] "Adding debug handlers to kubelet server"
Dec 16 12:26:58.263758 kubelet[3329]: I1216 12:26:58.263708 3329 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Dec 16 12:26:58.267872 kubelet[3329]: I1216 12:26:58.243316 3329 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 16 12:26:58.268426 kubelet[3329]: I1216 12:26:58.268401 3329 reconciler.go:26] "Reconciler: start to sync state"
Dec 16 12:26:58.295619 kubelet[3329]: I1216 12:26:58.293958 3329 factory.go:221] Registration of the systemd container factory successfully
Dec 16 12:26:58.295619 kubelet[3329]: I1216 12:26:58.295536 3329 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Dec 16 12:26:58.301180 kubelet[3329]: I1216 12:26:58.301144 3329 factory.go:221] Registration of the containerd container factory successfully
Dec 16 12:26:58.307374 kubelet[3329]: E1216 12:26:58.307337 3329 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Dec 16 12:26:58.328924 kubelet[3329]: I1216 12:26:58.328873 3329 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Dec 16 12:26:58.331082 kubelet[3329]: I1216 12:26:58.331041 3329 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Dec 16 12:26:58.331335 kubelet[3329]: I1216 12:26:58.331315 3329 status_manager.go:227] "Starting to sync pod status with apiserver"
Dec 16 12:26:58.331458 kubelet[3329]: I1216 12:26:58.331439 3329 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Dec 16 12:26:58.331568 kubelet[3329]: I1216 12:26:58.331535 3329 kubelet.go:2382] "Starting kubelet main sync loop"
Dec 16 12:26:58.331754 kubelet[3329]: E1216 12:26:58.331724 3329 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Dec 16 12:26:58.432006 kubelet[3329]: E1216 12:26:58.431954 3329 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Dec 16 12:26:58.439286 kubelet[3329]: I1216 12:26:58.439232 3329 cpu_manager.go:221] "Starting CPU manager" policy="none"
Dec 16 12:26:58.439664 kubelet[3329]: I1216 12:26:58.439617 3329 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Dec 16 12:26:58.439834 kubelet[3329]: I1216 12:26:58.439813 3329 state_mem.go:36] "Initialized new in-memory state store"
Dec 16 12:26:58.440412 kubelet[3329]: I1216 12:26:58.440378 3329 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Dec 16 12:26:58.440617 kubelet[3329]: I1216 12:26:58.440571 3329 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Dec 16 12:26:58.440741 kubelet[3329]: I1216 12:26:58.440724 3329 policy_none.go:49] "None policy: Start"
Dec 16 12:26:58.440867 kubelet[3329]: I1216 12:26:58.440849 3329 memory_manager.go:186] "Starting memorymanager" policy="None"
Dec 16 12:26:58.441034 kubelet[3329]: I1216 12:26:58.440997 3329 state_mem.go:35] "Initializing new in-memory state store"
Dec 16 12:26:58.441500 kubelet[3329]: I1216 12:26:58.441474 3329 state_mem.go:75] "Updated machine memory state"
Dec 16 12:26:58.458307 kubelet[3329]: I1216 12:26:58.456731 3329 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Dec 16 12:26:58.458307 kubelet[3329]: I1216 12:26:58.457003 3329 eviction_manager.go:189] "Eviction manager: starting control loop"
Dec 16 12:26:58.458307 kubelet[3329]: I1216 12:26:58.457025 3329 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Dec 16 12:26:58.458307 kubelet[3329]: I1216 12:26:58.457934 3329 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 16 12:26:58.464741 kubelet[3329]: E1216 12:26:58.464688 3329 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Dec 16 12:26:58.589803 kubelet[3329]: I1216 12:26:58.589672 3329 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-21-37"
Dec 16 12:26:58.604971 kubelet[3329]: I1216 12:26:58.604917 3329 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-21-37"
Dec 16 12:26:58.605097 kubelet[3329]: I1216 12:26:58.605039 3329 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-21-37"
Dec 16 12:26:58.633357 kubelet[3329]: I1216 12:26:58.632954 3329 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-21-37"
Dec 16 12:26:58.633357 kubelet[3329]: I1216 12:26:58.633088 3329 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-21-37"
Dec 16 12:26:58.633357 kubelet[3329]: I1216 12:26:58.632954 3329 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-21-37"
Dec 16 12:26:58.655577 kubelet[3329]: E1216 12:26:58.655520 3329 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-21-37\" already exists" pod="kube-system/kube-apiserver-ip-172-31-21-37"
Dec 16 12:26:58.659841 kubelet[3329]: E1216 12:26:58.659784 3329 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-21-37\" already exists" pod="kube-system/kube-scheduler-ip-172-31-21-37"
Dec 16 12:26:58.672688 kubelet[3329]: I1216 12:26:58.672630 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/58cd32a4eb7cc1690ee27930fc742a87-kubeconfig\") pod \"kube-controller-manager-ip-172-31-21-37\" (UID: \"58cd32a4eb7cc1690ee27930fc742a87\") " pod="kube-system/kube-controller-manager-ip-172-31-21-37"
Dec 16 12:26:58.672818 kubelet[3329]: I1216 12:26:58.672698 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/58cd32a4eb7cc1690ee27930fc742a87-ca-certs\") pod \"kube-controller-manager-ip-172-31-21-37\" (UID: \"58cd32a4eb7cc1690ee27930fc742a87\") " pod="kube-system/kube-controller-manager-ip-172-31-21-37"
Dec 16 12:26:58.672818 kubelet[3329]: I1216 12:26:58.672741 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/58cd32a4eb7cc1690ee27930fc742a87-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-21-37\" (UID: \"58cd32a4eb7cc1690ee27930fc742a87\") " pod="kube-system/kube-controller-manager-ip-172-31-21-37"
Dec 16 12:26:58.672818 kubelet[3329]: I1216 12:26:58.672781 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/58cd32a4eb7cc1690ee27930fc742a87-k8s-certs\") pod \"kube-controller-manager-ip-172-31-21-37\" (UID: \"58cd32a4eb7cc1690ee27930fc742a87\") " pod="kube-system/kube-controller-manager-ip-172-31-21-37"
Dec 16 12:26:58.673001 kubelet[3329]: I1216 12:26:58.672819 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/58cd32a4eb7cc1690ee27930fc742a87-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-21-37\" (UID: \"58cd32a4eb7cc1690ee27930fc742a87\") " pod="kube-system/kube-controller-manager-ip-172-31-21-37"
Dec 16 12:26:58.673001 kubelet[3329]: I1216 12:26:58.672856 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/91c5db9b1b90ef299e78e57f70b182bc-kubeconfig\") pod \"kube-scheduler-ip-172-31-21-37\" (UID: \"91c5db9b1b90ef299e78e57f70b182bc\") " pod="kube-system/kube-scheduler-ip-172-31-21-37"
Dec 16 12:26:58.673001 kubelet[3329]: I1216 12:26:58.672889 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/22d2cd9613782fdd70e2b21ba58e259b-ca-certs\") pod \"kube-apiserver-ip-172-31-21-37\" (UID: \"22d2cd9613782fdd70e2b21ba58e259b\") " pod="kube-system/kube-apiserver-ip-172-31-21-37"
Dec 16 12:26:58.673001 kubelet[3329]: I1216 12:26:58.672926 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/22d2cd9613782fdd70e2b21ba58e259b-k8s-certs\") pod \"kube-apiserver-ip-172-31-21-37\" (UID: \"22d2cd9613782fdd70e2b21ba58e259b\") " pod="kube-system/kube-apiserver-ip-172-31-21-37"
Dec 16 12:26:58.673001 kubelet[3329]: I1216 12:26:58.672961 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/22d2cd9613782fdd70e2b21ba58e259b-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-21-37\" (UID: \"22d2cd9613782fdd70e2b21ba58e259b\") " pod="kube-system/kube-apiserver-ip-172-31-21-37"
Dec 16 12:26:58.906324 update_engine[1972]: I20251216 12:26:58.904443 1972 update_attempter.cc:509] Updating boot flags...
Dec 16 12:26:58.979681 sudo[3343]: pam_unix(sudo:session): session closed for user root
Dec 16 12:26:59.180229 kubelet[3329]: I1216 12:26:59.179887 3329 apiserver.go:52] "Watching apiserver"
Dec 16 12:26:59.268679 kubelet[3329]: I1216 12:26:59.268583 3329 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Dec 16 12:26:59.414452 kubelet[3329]: I1216 12:26:59.408157 3329 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-21-37"
Dec 16 12:26:59.452390 kubelet[3329]: E1216 12:26:59.452016 3329 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-21-37\" already exists" pod="kube-system/kube-apiserver-ip-172-31-21-37"
Dec 16 12:26:59.489695 kubelet[3329]: I1216 12:26:59.489605 3329 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-21-37" podStartSLOduration=3.489579842 podStartE2EDuration="3.489579842s" podCreationTimestamp="2025-12-16 12:26:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:26:59.443007727 +0000 UTC m=+1.410088575" watchObservedRunningTime="2025-12-16 12:26:59.489579842 +0000 UTC m=+1.456660678"
Dec 16 12:26:59.521682 kubelet[3329]: I1216 12:26:59.518158 3329 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-21-37" podStartSLOduration=3.5181315619999998 podStartE2EDuration="3.518131562s" podCreationTimestamp="2025-12-16 12:26:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:26:59.490314885 +0000 UTC m=+1.457395745" watchObservedRunningTime="2025-12-16 12:26:59.518131562 +0000 UTC m=+1.485212399"
Dec 16 12:26:59.565631 kubelet[3329]: I1216 12:26:59.564284 3329 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-21-37" podStartSLOduration=1.564238121 podStartE2EDuration="1.564238121s" podCreationTimestamp="2025-12-16 12:26:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:26:59.521224068 +0000 UTC m=+1.488304904" watchObservedRunningTime="2025-12-16 12:26:59.564238121 +0000 UTC m=+1.531318957"
Dec 16 12:27:03.000219 sudo[2372]: pam_unix(sudo:session): session closed for user root
Dec 16 12:27:03.023521 sshd[2371]: Connection closed by 139.178.89.65 port 47972
Dec 16 12:27:03.023379 sshd-session[2368]: pam_unix(sshd:session): session closed for user core
Dec 16 12:27:03.031521 systemd[1]: sshd@6-172.31.21.37:22-139.178.89.65:47972.service: Deactivated successfully.
Dec 16 12:27:03.040038 systemd[1]: session-7.scope: Deactivated successfully.
Dec 16 12:27:03.041057 systemd[1]: session-7.scope: Consumed 14.235s CPU time, 262.5M memory peak.
Dec 16 12:27:03.046130 systemd-logind[1971]: Session 7 logged out. Waiting for processes to exit.
Dec 16 12:27:03.049892 systemd-logind[1971]: Removed session 7.
Dec 16 12:27:03.594450 kubelet[3329]: I1216 12:27:03.594366 3329 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Dec 16 12:27:03.596228 containerd[2002]: time="2025-12-16T12:27:03.595416879Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Dec 16 12:27:03.596780 kubelet[3329]: I1216 12:27:03.595966 3329 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Dec 16 12:27:04.208886 kubelet[3329]: I1216 12:27:04.208646 3329 status_manager.go:890] "Failed to get status for pod" podUID="11d2dca4-29c6-4dd2-84eb-be29a9ea6b63" pod="kube-system/kube-proxy-n6dlx" err="pods \"kube-proxy-n6dlx\" is forbidden: User \"system:node:ip-172-31-21-37\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-21-37' and this object"
Dec 16 12:27:04.208886 kubelet[3329]: W1216 12:27:04.208688 3329 reflector.go:569] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-21-37" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-21-37' and this object
Dec 16 12:27:04.208886 kubelet[3329]: E1216 12:27:04.208766 3329 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ip-172-31-21-37\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-21-37' and this object" logger="UnhandledError"
Dec 16 12:27:04.212976 systemd[1]: Created slice kubepods-besteffort-pod11d2dca4_29c6_4dd2_84eb_be29a9ea6b63.slice - libcontainer container kubepods-besteffort-pod11d2dca4_29c6_4dd2_84eb_be29a9ea6b63.slice.
Dec 16 12:27:04.214356 kubelet[3329]: W1216 12:27:04.214302 3329 reflector.go:569] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ip-172-31-21-37" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-21-37' and this object
Dec 16 12:27:04.214536 kubelet[3329]: E1216 12:27:04.214405 3329 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:ip-172-31-21-37\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-21-37' and this object" logger="UnhandledError"
Dec 16 12:27:04.218389 kubelet[3329]: I1216 12:27:04.217831 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/11d2dca4-29c6-4dd2-84eb-be29a9ea6b63-kube-proxy\") pod \"kube-proxy-n6dlx\" (UID: \"11d2dca4-29c6-4dd2-84eb-be29a9ea6b63\") " pod="kube-system/kube-proxy-n6dlx"
Dec 16 12:27:04.218389 kubelet[3329]: I1216 12:27:04.217897 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5qbp\" (UniqueName: \"kubernetes.io/projected/11d2dca4-29c6-4dd2-84eb-be29a9ea6b63-kube-api-access-b5qbp\") pod \"kube-proxy-n6dlx\" (UID: \"11d2dca4-29c6-4dd2-84eb-be29a9ea6b63\") " pod="kube-system/kube-proxy-n6dlx"
Dec 16 12:27:04.218389 kubelet[3329]: I1216 12:27:04.217946 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/11d2dca4-29c6-4dd2-84eb-be29a9ea6b63-xtables-lock\") pod \"kube-proxy-n6dlx\" (UID: \"11d2dca4-29c6-4dd2-84eb-be29a9ea6b63\") " pod="kube-system/kube-proxy-n6dlx"
Dec 16 12:27:04.218389 kubelet[3329]: I1216 12:27:04.217981 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/11d2dca4-29c6-4dd2-84eb-be29a9ea6b63-lib-modules\") pod \"kube-proxy-n6dlx\" (UID: \"11d2dca4-29c6-4dd2-84eb-be29a9ea6b63\") " pod="kube-system/kube-proxy-n6dlx"
Dec 16 12:27:04.259487 systemd[1]: Created slice kubepods-burstable-pod9c5f0d5a_5dce_43bc_a6fe_ec3e3c012a15.slice - libcontainer container kubepods-burstable-pod9c5f0d5a_5dce_43bc_a6fe_ec3e3c012a15.slice.
Dec 16 12:27:04.318770 kubelet[3329]: I1216 12:27:04.318700 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9c5f0d5a-5dce-43bc-a6fe-ec3e3c012a15-clustermesh-secrets\") pod \"cilium-b4cmk\" (UID: \"9c5f0d5a-5dce-43bc-a6fe-ec3e3c012a15\") " pod="kube-system/cilium-b4cmk"
Dec 16 12:27:04.318770 kubelet[3329]: I1216 12:27:04.318771 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9c5f0d5a-5dce-43bc-a6fe-ec3e3c012a15-hubble-tls\") pod \"cilium-b4cmk\" (UID: \"9c5f0d5a-5dce-43bc-a6fe-ec3e3c012a15\") " pod="kube-system/cilium-b4cmk"
Dec 16 12:27:04.318998 kubelet[3329]: I1216 12:27:04.318815 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9c5f0d5a-5dce-43bc-a6fe-ec3e3c012a15-host-proc-sys-net\") pod \"cilium-b4cmk\" (UID: \"9c5f0d5a-5dce-43bc-a6fe-ec3e3c012a15\") " pod="kube-system/cilium-b4cmk"
Dec 16 12:27:04.318998 kubelet[3329]: I1216 12:27:04.318883 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9c5f0d5a-5dce-43bc-a6fe-ec3e3c012a15-hostproc\") pod \"cilium-b4cmk\" (UID: \"9c5f0d5a-5dce-43bc-a6fe-ec3e3c012a15\") " pod="kube-system/cilium-b4cmk"
Dec 16 12:27:04.318998 kubelet[3329]: I1216 12:27:04.318921 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9c5f0d5a-5dce-43bc-a6fe-ec3e3c012a15-lib-modules\") pod \"cilium-b4cmk\" (UID: \"9c5f0d5a-5dce-43bc-a6fe-ec3e3c012a15\") " pod="kube-system/cilium-b4cmk"
Dec 16 12:27:04.318998 kubelet[3329]: I1216 12:27:04.318964 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dx2pm\" (UniqueName: \"kubernetes.io/projected/9c5f0d5a-5dce-43bc-a6fe-ec3e3c012a15-kube-api-access-dx2pm\") pod \"cilium-b4cmk\" (UID: \"9c5f0d5a-5dce-43bc-a6fe-ec3e3c012a15\") " pod="kube-system/cilium-b4cmk"
Dec 16 12:27:04.319184 kubelet[3329]: I1216 12:27:04.319001 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9c5f0d5a-5dce-43bc-a6fe-ec3e3c012a15-etc-cni-netd\") pod \"cilium-b4cmk\" (UID: \"9c5f0d5a-5dce-43bc-a6fe-ec3e3c012a15\") " pod="kube-system/cilium-b4cmk"
Dec 16 12:27:04.319184 kubelet[3329]: I1216 12:27:04.319038 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9c5f0d5a-5dce-43bc-a6fe-ec3e3c012a15-cni-path\") pod \"cilium-b4cmk\" (UID: \"9c5f0d5a-5dce-43bc-a6fe-ec3e3c012a15\") " pod="kube-system/cilium-b4cmk"
Dec 16 12:27:04.319184 kubelet[3329]: I1216 12:27:04.319131 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9c5f0d5a-5dce-43bc-a6fe-ec3e3c012a15-cilium-cgroup\") pod \"cilium-b4cmk\" (UID: \"9c5f0d5a-5dce-43bc-a6fe-ec3e3c012a15\") " pod="kube-system/cilium-b4cmk"
Dec 16 12:27:04.319376 kubelet[3329]: I1216 12:27:04.319187 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9c5f0d5a-5dce-43bc-a6fe-ec3e3c012a15-cilium-run\") pod \"cilium-b4cmk\" (UID: \"9c5f0d5a-5dce-43bc-a6fe-ec3e3c012a15\") " pod="kube-system/cilium-b4cmk"
Dec 16 12:27:04.319376 kubelet[3329]: I1216 12:27:04.319223 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9c5f0d5a-5dce-43bc-a6fe-ec3e3c012a15-cilium-config-path\") pod \"cilium-b4cmk\" (UID: \"9c5f0d5a-5dce-43bc-a6fe-ec3e3c012a15\") " pod="kube-system/cilium-b4cmk"
Dec 16 12:27:04.320570 kubelet[3329]: I1216 12:27:04.320487 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9c5f0d5a-5dce-43bc-a6fe-ec3e3c012a15-bpf-maps\") pod \"cilium-b4cmk\" (UID: \"9c5f0d5a-5dce-43bc-a6fe-ec3e3c012a15\") " pod="kube-system/cilium-b4cmk"
Dec 16 12:27:04.322341 kubelet[3329]: I1216 12:27:04.320700 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9c5f0d5a-5dce-43bc-a6fe-ec3e3c012a15-xtables-lock\") pod \"cilium-b4cmk\" (UID: \"9c5f0d5a-5dce-43bc-a6fe-ec3e3c012a15\") " pod="kube-system/cilium-b4cmk"
Dec 16 12:27:04.322341 kubelet[3329]: I1216 12:27:04.321164 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9c5f0d5a-5dce-43bc-a6fe-ec3e3c012a15-host-proc-sys-kernel\") pod \"cilium-b4cmk\" (UID: \"9c5f0d5a-5dce-43bc-a6fe-ec3e3c012a15\") " pod="kube-system/cilium-b4cmk"
Dec 16 12:27:04.583944 systemd[1]: Created slice kubepods-besteffort-pod2b09daed_aa23_4573_b09a_03aa103e9afe.slice - libcontainer container kubepods-besteffort-pod2b09daed_aa23_4573_b09a_03aa103e9afe.slice.
Dec 16 12:27:04.625425 kubelet[3329]: I1216 12:27:04.625226 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2b09daed-aa23-4573-b09a-03aa103e9afe-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-xt57r\" (UID: \"2b09daed-aa23-4573-b09a-03aa103e9afe\") " pod="kube-system/cilium-operator-6c4d7847fc-xt57r"
Dec 16 12:27:04.625425 kubelet[3329]: I1216 12:27:04.625325 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ct7h\" (UniqueName: \"kubernetes.io/projected/2b09daed-aa23-4573-b09a-03aa103e9afe-kube-api-access-7ct7h\") pod \"cilium-operator-6c4d7847fc-xt57r\" (UID: \"2b09daed-aa23-4573-b09a-03aa103e9afe\") " pod="kube-system/cilium-operator-6c4d7847fc-xt57r"
Dec 16 12:27:05.429188 containerd[2002]: time="2025-12-16T12:27:05.429115170Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-n6dlx,Uid:11d2dca4-29c6-4dd2-84eb-be29a9ea6b63,Namespace:kube-system,Attempt:0,}"
Dec 16 12:27:05.470985 containerd[2002]: time="2025-12-16T12:27:05.468980614Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-b4cmk,Uid:9c5f0d5a-5dce-43bc-a6fe-ec3e3c012a15,Namespace:kube-system,Attempt:0,}"
Dec 16 12:27:05.495415 containerd[2002]: time="2025-12-16T12:27:05.495342308Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-xt57r,Uid:2b09daed-aa23-4573-b09a-03aa103e9afe,Namespace:kube-system,Attempt:0,}"
Dec 16 12:27:05.496656 containerd[2002]: time="2025-12-16T12:27:05.496603166Z" level=info msg="connecting to shim 2379f29ed50955f1cc8113b6c928588a78ecf85f19a82a6625da284db198012b" address="unix:///run/containerd/s/8cd463e5ab2f798ae7cf2bcd457054c7446bcf05ab9d7a61487c3259b098aa6d" namespace=k8s.io protocol=ttrpc version=3
Dec 16 12:27:05.579604 containerd[2002]: time="2025-12-16T12:27:05.579545525Z" level=info msg="connecting to shim de8757e5b3e6cb732c739be970b8512617cad57213f2cf882b8af0c4de105fa2" address="unix:///run/containerd/s/8750d9713e2d5cc1e7fc667d6640a6d5f7918fca63de98fe2e6d4c82aae94476" namespace=k8s.io protocol=ttrpc version=3
Dec 16 12:27:05.579627 systemd[1]: Started cri-containerd-2379f29ed50955f1cc8113b6c928588a78ecf85f19a82a6625da284db198012b.scope - libcontainer container 2379f29ed50955f1cc8113b6c928588a78ecf85f19a82a6625da284db198012b.
Dec 16 12:27:05.589762 containerd[2002]: time="2025-12-16T12:27:05.589656990Z" level=info msg="connecting to shim 38d4591418001ad9fb8edd2fbdb87075f05c7e92153f139003519a48744610ce" address="unix:///run/containerd/s/d62bff0babd57e113f5014d5590c3720f8b54b3447ad0e9758f0017f9769a7f5" namespace=k8s.io protocol=ttrpc version=3
Dec 16 12:27:05.681625 systemd[1]: Started cri-containerd-38d4591418001ad9fb8edd2fbdb87075f05c7e92153f139003519a48744610ce.scope - libcontainer container 38d4591418001ad9fb8edd2fbdb87075f05c7e92153f139003519a48744610ce.
Dec 16 12:27:05.702621 systemd[1]: Started cri-containerd-de8757e5b3e6cb732c739be970b8512617cad57213f2cf882b8af0c4de105fa2.scope - libcontainer container de8757e5b3e6cb732c739be970b8512617cad57213f2cf882b8af0c4de105fa2.
Dec 16 12:27:05.714969 containerd[2002]: time="2025-12-16T12:27:05.714896260Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-n6dlx,Uid:11d2dca4-29c6-4dd2-84eb-be29a9ea6b63,Namespace:kube-system,Attempt:0,} returns sandbox id \"2379f29ed50955f1cc8113b6c928588a78ecf85f19a82a6625da284db198012b\""
Dec 16 12:27:05.734598 containerd[2002]: time="2025-12-16T12:27:05.733677246Z" level=info msg="CreateContainer within sandbox \"2379f29ed50955f1cc8113b6c928588a78ecf85f19a82a6625da284db198012b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Dec 16 12:27:05.782891 containerd[2002]: time="2025-12-16T12:27:05.782810060Z" level=info msg="Container c948e67262423a585cb3a4fb361cde64f2d2bded53ee0961e990f4f0faacdd8f: CDI devices from CRI Config.CDIDevices: []"
Dec 16 12:27:05.801815 containerd[2002]: time="2025-12-16T12:27:05.801747580Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-b4cmk,Uid:9c5f0d5a-5dce-43bc-a6fe-ec3e3c012a15,Namespace:kube-system,Attempt:0,} returns sandbox id \"38d4591418001ad9fb8edd2fbdb87075f05c7e92153f139003519a48744610ce\""
Dec 16 12:27:05.806802 containerd[2002]: time="2025-12-16T12:27:05.806211531Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Dec 16 12:27:05.807380 containerd[2002]: time="2025-12-16T12:27:05.807331931Z" level=info msg="CreateContainer within sandbox \"2379f29ed50955f1cc8113b6c928588a78ecf85f19a82a6625da284db198012b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c948e67262423a585cb3a4fb361cde64f2d2bded53ee0961e990f4f0faacdd8f\""
Dec 16 12:27:05.810172 containerd[2002]: time="2025-12-16T12:27:05.810123938Z" level=info msg="StartContainer for \"c948e67262423a585cb3a4fb361cde64f2d2bded53ee0961e990f4f0faacdd8f\""
Dec 16 12:27:05.814544 containerd[2002]: time="2025-12-16T12:27:05.814486523Z" level=info msg="connecting to shim c948e67262423a585cb3a4fb361cde64f2d2bded53ee0961e990f4f0faacdd8f" address="unix:///run/containerd/s/8cd463e5ab2f798ae7cf2bcd457054c7446bcf05ab9d7a61487c3259b098aa6d" protocol=ttrpc version=3
Dec 16 12:27:05.844503 containerd[2002]: time="2025-12-16T12:27:05.844448132Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-xt57r,Uid:2b09daed-aa23-4573-b09a-03aa103e9afe,Namespace:kube-system,Attempt:0,} returns sandbox id \"de8757e5b3e6cb732c739be970b8512617cad57213f2cf882b8af0c4de105fa2\""
Dec 16 12:27:05.875572 systemd[1]: Started cri-containerd-c948e67262423a585cb3a4fb361cde64f2d2bded53ee0961e990f4f0faacdd8f.scope - libcontainer container c948e67262423a585cb3a4fb361cde64f2d2bded53ee0961e990f4f0faacdd8f.
Dec 16 12:27:06.014139 containerd[2002]: time="2025-12-16T12:27:06.013975206Z" level=info msg="StartContainer for \"c948e67262423a585cb3a4fb361cde64f2d2bded53ee0961e990f4f0faacdd8f\" returns successfully"
Dec 16 12:27:06.513786 kubelet[3329]: I1216 12:27:06.512508 3329 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-n6dlx" podStartSLOduration=2.512481667 podStartE2EDuration="2.512481667s" podCreationTimestamp="2025-12-16 12:27:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:27:06.512442347 +0000 UTC m=+8.479523279" watchObservedRunningTime="2025-12-16 12:27:06.512481667 +0000 UTC m=+8.479562515"
Dec 16 12:27:12.644985 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2615659371.mount: Deactivated successfully.
Dec 16 12:27:15.174304 containerd[2002]: time="2025-12-16T12:27:15.173754763Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:27:15.177408 containerd[2002]: time="2025-12-16T12:27:15.177351509Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710"
Dec 16 12:27:15.179554 containerd[2002]: time="2025-12-16T12:27:15.179483246Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:27:15.184374 containerd[2002]: time="2025-12-16T12:27:15.184218412Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 9.377594788s"
Dec 16 12:27:15.184374 containerd[2002]: time="2025-12-16T12:27:15.184315709Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
Dec 16 12:27:15.187314 containerd[2002]: time="2025-12-16T12:27:15.187045417Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Dec 16 12:27:15.189147 containerd[2002]: time="2025-12-16T12:27:15.189075728Z" level=info msg="CreateContainer within sandbox \"38d4591418001ad9fb8edd2fbdb87075f05c7e92153f139003519a48744610ce\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Dec 16 12:27:15.210555 containerd[2002]: time="2025-12-16T12:27:15.208556363Z" level=info msg="Container ce7e896281fb2b74f050bc0a4a49b1675aa9d3af5e5bbb86642e6a6de4491658: CDI devices from CRI Config.CDIDevices: []"
Dec 16 12:27:15.227187 containerd[2002]: time="2025-12-16T12:27:15.227107975Z" level=info msg="CreateContainer within sandbox \"38d4591418001ad9fb8edd2fbdb87075f05c7e92153f139003519a48744610ce\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"ce7e896281fb2b74f050bc0a4a49b1675aa9d3af5e5bbb86642e6a6de4491658\""
Dec 16 12:27:15.228573 containerd[2002]: time="2025-12-16T12:27:15.228406148Z" level=info msg="StartContainer for \"ce7e896281fb2b74f050bc0a4a49b1675aa9d3af5e5bbb86642e6a6de4491658\""
Dec 16 12:27:15.231289 containerd[2002]: time="2025-12-16T12:27:15.230975828Z" level=info msg="connecting to shim ce7e896281fb2b74f050bc0a4a49b1675aa9d3af5e5bbb86642e6a6de4491658" address="unix:///run/containerd/s/d62bff0babd57e113f5014d5590c3720f8b54b3447ad0e9758f0017f9769a7f5" protocol=ttrpc version=3
Dec 16 12:27:15.273598 systemd[1]: Started cri-containerd-ce7e896281fb2b74f050bc0a4a49b1675aa9d3af5e5bbb86642e6a6de4491658.scope - libcontainer container ce7e896281fb2b74f050bc0a4a49b1675aa9d3af5e5bbb86642e6a6de4491658.
Dec 16 12:27:15.354855 containerd[2002]: time="2025-12-16T12:27:15.354772732Z" level=info msg="StartContainer for \"ce7e896281fb2b74f050bc0a4a49b1675aa9d3af5e5bbb86642e6a6de4491658\" returns successfully"
Dec 16 12:27:15.370410 systemd[1]: cri-containerd-ce7e896281fb2b74f050bc0a4a49b1675aa9d3af5e5bbb86642e6a6de4491658.scope: Deactivated successfully.
Dec 16 12:27:15.375503 containerd[2002]: time="2025-12-16T12:27:15.375435478Z" level=info msg="received container exit event container_id:\"ce7e896281fb2b74f050bc0a4a49b1675aa9d3af5e5bbb86642e6a6de4491658\" id:\"ce7e896281fb2b74f050bc0a4a49b1675aa9d3af5e5bbb86642e6a6de4491658\" pid:4016 exited_at:{seconds:1765888035 nanos:374749011}"
Dec 16 12:27:16.204757 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ce7e896281fb2b74f050bc0a4a49b1675aa9d3af5e5bbb86642e6a6de4491658-rootfs.mount: Deactivated successfully.
Dec 16 12:27:17.380429 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount487251415.mount: Deactivated successfully.
Dec 16 12:27:17.553295 containerd[2002]: time="2025-12-16T12:27:17.553187782Z" level=info msg="CreateContainer within sandbox \"38d4591418001ad9fb8edd2fbdb87075f05c7e92153f139003519a48744610ce\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Dec 16 12:27:17.591109 containerd[2002]: time="2025-12-16T12:27:17.583600168Z" level=info msg="Container 7c84dd3b83d4cb7517a52e0672680f81d275dea128b5c92e37eeb2cde3b4379e: CDI devices from CRI Config.CDIDevices: []"
Dec 16 12:27:17.589431 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2964107460.mount: Deactivated successfully.
Dec 16 12:27:17.609479 containerd[2002]: time="2025-12-16T12:27:17.609398901Z" level=info msg="CreateContainer within sandbox \"38d4591418001ad9fb8edd2fbdb87075f05c7e92153f139003519a48744610ce\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"7c84dd3b83d4cb7517a52e0672680f81d275dea128b5c92e37eeb2cde3b4379e\""
Dec 16 12:27:17.611873 containerd[2002]: time="2025-12-16T12:27:17.610930015Z" level=info msg="StartContainer for \"7c84dd3b83d4cb7517a52e0672680f81d275dea128b5c92e37eeb2cde3b4379e\""
Dec 16 12:27:17.615355 containerd[2002]: time="2025-12-16T12:27:17.615174448Z" level=info msg="connecting to shim 7c84dd3b83d4cb7517a52e0672680f81d275dea128b5c92e37eeb2cde3b4379e" address="unix:///run/containerd/s/d62bff0babd57e113f5014d5590c3720f8b54b3447ad0e9758f0017f9769a7f5" protocol=ttrpc version=3
Dec 16 12:27:17.680066 systemd[1]: Started cri-containerd-7c84dd3b83d4cb7517a52e0672680f81d275dea128b5c92e37eeb2cde3b4379e.scope - libcontainer container 7c84dd3b83d4cb7517a52e0672680f81d275dea128b5c92e37eeb2cde3b4379e.
Dec 16 12:27:17.778347 containerd[2002]: time="2025-12-16T12:27:17.778136821Z" level=info msg="StartContainer for \"7c84dd3b83d4cb7517a52e0672680f81d275dea128b5c92e37eeb2cde3b4379e\" returns successfully"
Dec 16 12:27:17.811876 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 16 12:27:17.813196 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Dec 16 12:27:17.814439 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Dec 16 12:27:17.819561 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 16 12:27:17.823828 systemd[1]: cri-containerd-7c84dd3b83d4cb7517a52e0672680f81d275dea128b5c92e37eeb2cde3b4379e.scope: Deactivated successfully.
Dec 16 12:27:17.827230 containerd[2002]: time="2025-12-16T12:27:17.827148976Z" level=info msg="received container exit event container_id:\"7c84dd3b83d4cb7517a52e0672680f81d275dea128b5c92e37eeb2cde3b4379e\" id:\"7c84dd3b83d4cb7517a52e0672680f81d275dea128b5c92e37eeb2cde3b4379e\" pid:4076 exited_at:{seconds:1765888037 nanos:826755983}" Dec 16 12:27:17.875407 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 16 12:27:18.311731 containerd[2002]: time="2025-12-16T12:27:18.311663547Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:27:18.315057 containerd[2002]: time="2025-12-16T12:27:18.314545263Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Dec 16 12:27:18.317527 containerd[2002]: time="2025-12-16T12:27:18.317441531Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:27:18.321643 containerd[2002]: time="2025-12-16T12:27:18.321571343Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 3.133948941s" Dec 16 12:27:18.321643 containerd[2002]: time="2025-12-16T12:27:18.321633666Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference 
\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Dec 16 12:27:18.326032 containerd[2002]: time="2025-12-16T12:27:18.325964890Z" level=info msg="CreateContainer within sandbox \"de8757e5b3e6cb732c739be970b8512617cad57213f2cf882b8af0c4de105fa2\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Dec 16 12:27:18.347787 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7c84dd3b83d4cb7517a52e0672680f81d275dea128b5c92e37eeb2cde3b4379e-rootfs.mount: Deactivated successfully. Dec 16 12:27:18.349823 containerd[2002]: time="2025-12-16T12:27:18.348905883Z" level=info msg="Container 797828a533c44a31613183fed666e9bfe0316b198e0473ff30e6e29c267ab227: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:27:18.370487 containerd[2002]: time="2025-12-16T12:27:18.370436291Z" level=info msg="CreateContainer within sandbox \"de8757e5b3e6cb732c739be970b8512617cad57213f2cf882b8af0c4de105fa2\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"797828a533c44a31613183fed666e9bfe0316b198e0473ff30e6e29c267ab227\"" Dec 16 12:27:18.373463 containerd[2002]: time="2025-12-16T12:27:18.373410274Z" level=info msg="StartContainer for \"797828a533c44a31613183fed666e9bfe0316b198e0473ff30e6e29c267ab227\"" Dec 16 12:27:18.375707 containerd[2002]: time="2025-12-16T12:27:18.375594129Z" level=info msg="connecting to shim 797828a533c44a31613183fed666e9bfe0316b198e0473ff30e6e29c267ab227" address="unix:///run/containerd/s/8750d9713e2d5cc1e7fc667d6640a6d5f7918fca63de98fe2e6d4c82aae94476" protocol=ttrpc version=3 Dec 16 12:27:18.415592 systemd[1]: Started cri-containerd-797828a533c44a31613183fed666e9bfe0316b198e0473ff30e6e29c267ab227.scope - libcontainer container 797828a533c44a31613183fed666e9bfe0316b198e0473ff30e6e29c267ab227. 
Dec 16 12:27:18.495962 containerd[2002]: time="2025-12-16T12:27:18.495902978Z" level=info msg="StartContainer for \"797828a533c44a31613183fed666e9bfe0316b198e0473ff30e6e29c267ab227\" returns successfully" Dec 16 12:27:18.567342 containerd[2002]: time="2025-12-16T12:27:18.566923859Z" level=info msg="CreateContainer within sandbox \"38d4591418001ad9fb8edd2fbdb87075f05c7e92153f139003519a48744610ce\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 16 12:27:18.617665 containerd[2002]: time="2025-12-16T12:27:18.617594835Z" level=info msg="Container 9afb2dc031a7a9dbd722844f09030a0330feea76060712a7d94556ed49de6717: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:27:18.656943 containerd[2002]: time="2025-12-16T12:27:18.656867445Z" level=info msg="CreateContainer within sandbox \"38d4591418001ad9fb8edd2fbdb87075f05c7e92153f139003519a48744610ce\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"9afb2dc031a7a9dbd722844f09030a0330feea76060712a7d94556ed49de6717\"" Dec 16 12:27:18.659044 kubelet[3329]: I1216 12:27:18.658389 3329 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-xt57r" podStartSLOduration=2.181460829 podStartE2EDuration="14.657228658s" podCreationTimestamp="2025-12-16 12:27:04 +0000 UTC" firstStartedPulling="2025-12-16 12:27:05.847858665 +0000 UTC m=+7.814939489" lastFinishedPulling="2025-12-16 12:27:18.323626482 +0000 UTC m=+20.290707318" observedRunningTime="2025-12-16 12:27:18.581338131 +0000 UTC m=+20.548419003" watchObservedRunningTime="2025-12-16 12:27:18.657228658 +0000 UTC m=+20.624309638" Dec 16 12:27:18.664952 containerd[2002]: time="2025-12-16T12:27:18.664655533Z" level=info msg="StartContainer for \"9afb2dc031a7a9dbd722844f09030a0330feea76060712a7d94556ed49de6717\"" Dec 16 12:27:18.678368 containerd[2002]: time="2025-12-16T12:27:18.676733113Z" level=info msg="connecting to shim 
9afb2dc031a7a9dbd722844f09030a0330feea76060712a7d94556ed49de6717" address="unix:///run/containerd/s/d62bff0babd57e113f5014d5590c3720f8b54b3447ad0e9758f0017f9769a7f5" protocol=ttrpc version=3 Dec 16 12:27:18.724685 systemd[1]: Started cri-containerd-9afb2dc031a7a9dbd722844f09030a0330feea76060712a7d94556ed49de6717.scope - libcontainer container 9afb2dc031a7a9dbd722844f09030a0330feea76060712a7d94556ed49de6717. Dec 16 12:27:18.878428 containerd[2002]: time="2025-12-16T12:27:18.876892092Z" level=info msg="StartContainer for \"9afb2dc031a7a9dbd722844f09030a0330feea76060712a7d94556ed49de6717\" returns successfully" Dec 16 12:27:18.892627 systemd[1]: cri-containerd-9afb2dc031a7a9dbd722844f09030a0330feea76060712a7d94556ed49de6717.scope: Deactivated successfully. Dec 16 12:27:18.904327 containerd[2002]: time="2025-12-16T12:27:18.903614243Z" level=info msg="received container exit event container_id:\"9afb2dc031a7a9dbd722844f09030a0330feea76060712a7d94556ed49de6717\" id:\"9afb2dc031a7a9dbd722844f09030a0330feea76060712a7d94556ed49de6717\" pid:4164 exited_at:{seconds:1765888038 nanos:902367984}" Dec 16 12:27:19.597846 containerd[2002]: time="2025-12-16T12:27:19.597775643Z" level=info msg="CreateContainer within sandbox \"38d4591418001ad9fb8edd2fbdb87075f05c7e92153f139003519a48744610ce\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 16 12:27:19.624490 containerd[2002]: time="2025-12-16T12:27:19.623522702Z" level=info msg="Container f49163aa2659ca5a948fba8da2e0b8d4419ab5c5f67e1cd90966f4b5536f21ea: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:27:19.653915 containerd[2002]: time="2025-12-16T12:27:19.653837396Z" level=info msg="CreateContainer within sandbox \"38d4591418001ad9fb8edd2fbdb87075f05c7e92153f139003519a48744610ce\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f49163aa2659ca5a948fba8da2e0b8d4419ab5c5f67e1cd90966f4b5536f21ea\"" Dec 16 12:27:19.656221 containerd[2002]: 
time="2025-12-16T12:27:19.656162130Z" level=info msg="StartContainer for \"f49163aa2659ca5a948fba8da2e0b8d4419ab5c5f67e1cd90966f4b5536f21ea\"" Dec 16 12:27:19.661228 containerd[2002]: time="2025-12-16T12:27:19.660895856Z" level=info msg="connecting to shim f49163aa2659ca5a948fba8da2e0b8d4419ab5c5f67e1cd90966f4b5536f21ea" address="unix:///run/containerd/s/d62bff0babd57e113f5014d5590c3720f8b54b3447ad0e9758f0017f9769a7f5" protocol=ttrpc version=3 Dec 16 12:27:19.737028 systemd[1]: Started cri-containerd-f49163aa2659ca5a948fba8da2e0b8d4419ab5c5f67e1cd90966f4b5536f21ea.scope - libcontainer container f49163aa2659ca5a948fba8da2e0b8d4419ab5c5f67e1cd90966f4b5536f21ea. Dec 16 12:27:19.877720 containerd[2002]: time="2025-12-16T12:27:19.877451573Z" level=info msg="StartContainer for \"f49163aa2659ca5a948fba8da2e0b8d4419ab5c5f67e1cd90966f4b5536f21ea\" returns successfully" Dec 16 12:27:19.878994 systemd[1]: cri-containerd-f49163aa2659ca5a948fba8da2e0b8d4419ab5c5f67e1cd90966f4b5536f21ea.scope: Deactivated successfully. Dec 16 12:27:19.887222 containerd[2002]: time="2025-12-16T12:27:19.887153069Z" level=info msg="received container exit event container_id:\"f49163aa2659ca5a948fba8da2e0b8d4419ab5c5f67e1cd90966f4b5536f21ea\" id:\"f49163aa2659ca5a948fba8da2e0b8d4419ab5c5f67e1cd90966f4b5536f21ea\" pid:4205 exited_at:{seconds:1765888039 nanos:886746966}" Dec 16 12:27:19.960100 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f49163aa2659ca5a948fba8da2e0b8d4419ab5c5f67e1cd90966f4b5536f21ea-rootfs.mount: Deactivated successfully. 
Dec 16 12:27:20.605681 containerd[2002]: time="2025-12-16T12:27:20.605431762Z" level=info msg="CreateContainer within sandbox \"38d4591418001ad9fb8edd2fbdb87075f05c7e92153f139003519a48744610ce\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 16 12:27:20.637320 containerd[2002]: time="2025-12-16T12:27:20.634420177Z" level=info msg="Container 4aec4ea9b93d142a38ba731edb8ed8f39f5ca534cb51004a2fbe74aa086a823d: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:27:20.655011 containerd[2002]: time="2025-12-16T12:27:20.654835264Z" level=info msg="CreateContainer within sandbox \"38d4591418001ad9fb8edd2fbdb87075f05c7e92153f139003519a48744610ce\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"4aec4ea9b93d142a38ba731edb8ed8f39f5ca534cb51004a2fbe74aa086a823d\"" Dec 16 12:27:20.657838 containerd[2002]: time="2025-12-16T12:27:20.657290515Z" level=info msg="StartContainer for \"4aec4ea9b93d142a38ba731edb8ed8f39f5ca534cb51004a2fbe74aa086a823d\"" Dec 16 12:27:20.659996 containerd[2002]: time="2025-12-16T12:27:20.659931667Z" level=info msg="connecting to shim 4aec4ea9b93d142a38ba731edb8ed8f39f5ca534cb51004a2fbe74aa086a823d" address="unix:///run/containerd/s/d62bff0babd57e113f5014d5590c3720f8b54b3447ad0e9758f0017f9769a7f5" protocol=ttrpc version=3 Dec 16 12:27:20.700881 systemd[1]: Started cri-containerd-4aec4ea9b93d142a38ba731edb8ed8f39f5ca534cb51004a2fbe74aa086a823d.scope - libcontainer container 4aec4ea9b93d142a38ba731edb8ed8f39f5ca534cb51004a2fbe74aa086a823d. 
Dec 16 12:27:20.883561 containerd[2002]: time="2025-12-16T12:27:20.880752456Z" level=info msg="StartContainer for \"4aec4ea9b93d142a38ba731edb8ed8f39f5ca534cb51004a2fbe74aa086a823d\" returns successfully" Dec 16 12:27:21.038865 kubelet[3329]: I1216 12:27:21.037358 3329 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Dec 16 12:27:21.128587 systemd[1]: Created slice kubepods-burstable-podacfb16b4_19eb_4ee3_a206_1ccac7b8ffa6.slice - libcontainer container kubepods-burstable-podacfb16b4_19eb_4ee3_a206_1ccac7b8ffa6.slice. Dec 16 12:27:21.147240 systemd[1]: Created slice kubepods-burstable-pod74be2a1c_60e5_4184_bee6_a62d84ba0b0e.slice - libcontainer container kubepods-burstable-pod74be2a1c_60e5_4184_bee6_a62d84ba0b0e.slice. Dec 16 12:27:21.177412 kubelet[3329]: I1216 12:27:21.177339 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/74be2a1c-60e5-4184-bee6-a62d84ba0b0e-config-volume\") pod \"coredns-668d6bf9bc-h4rbl\" (UID: \"74be2a1c-60e5-4184-bee6-a62d84ba0b0e\") " pod="kube-system/coredns-668d6bf9bc-h4rbl" Dec 16 12:27:21.177549 kubelet[3329]: I1216 12:27:21.177445 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6cb6\" (UniqueName: \"kubernetes.io/projected/acfb16b4-19eb-4ee3-a206-1ccac7b8ffa6-kube-api-access-s6cb6\") pod \"coredns-668d6bf9bc-4fc6k\" (UID: \"acfb16b4-19eb-4ee3-a206-1ccac7b8ffa6\") " pod="kube-system/coredns-668d6bf9bc-4fc6k" Dec 16 12:27:21.179294 kubelet[3329]: I1216 12:27:21.177621 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-676kx\" (UniqueName: \"kubernetes.io/projected/74be2a1c-60e5-4184-bee6-a62d84ba0b0e-kube-api-access-676kx\") pod \"coredns-668d6bf9bc-h4rbl\" (UID: \"74be2a1c-60e5-4184-bee6-a62d84ba0b0e\") " pod="kube-system/coredns-668d6bf9bc-h4rbl" Dec 16 
12:27:21.179477 kubelet[3329]: I1216 12:27:21.179359 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/acfb16b4-19eb-4ee3-a206-1ccac7b8ffa6-config-volume\") pod \"coredns-668d6bf9bc-4fc6k\" (UID: \"acfb16b4-19eb-4ee3-a206-1ccac7b8ffa6\") " pod="kube-system/coredns-668d6bf9bc-4fc6k" Dec 16 12:27:21.441694 containerd[2002]: time="2025-12-16T12:27:21.441529140Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4fc6k,Uid:acfb16b4-19eb-4ee3-a206-1ccac7b8ffa6,Namespace:kube-system,Attempt:0,}" Dec 16 12:27:21.464861 containerd[2002]: time="2025-12-16T12:27:21.464771964Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-h4rbl,Uid:74be2a1c-60e5-4184-bee6-a62d84ba0b0e,Namespace:kube-system,Attempt:0,}" Dec 16 12:27:24.091909 (udev-worker)[4335]: Network interface NamePolicy= disabled on kernel command line. Dec 16 12:27:24.095940 (udev-worker)[4333]: Network interface NamePolicy= disabled on kernel command line. 
Dec 16 12:27:24.097125 systemd-networkd[1868]: cilium_host: Link UP Dec 16 12:27:24.098426 systemd-networkd[1868]: cilium_net: Link UP Dec 16 12:27:24.099581 systemd-networkd[1868]: cilium_host: Gained carrier Dec 16 12:27:24.100631 systemd-networkd[1868]: cilium_net: Gained carrier Dec 16 12:27:24.280696 systemd-networkd[1868]: cilium_vxlan: Link UP Dec 16 12:27:24.280715 systemd-networkd[1868]: cilium_vxlan: Gained carrier Dec 16 12:27:24.583640 systemd-networkd[1868]: cilium_host: Gained IPv6LL Dec 16 12:27:24.791634 systemd-networkd[1868]: cilium_net: Gained IPv6LL Dec 16 12:27:24.860316 kernel: NET: Registered PF_ALG protocol family Dec 16 12:27:25.367522 systemd-networkd[1868]: cilium_vxlan: Gained IPv6LL Dec 16 12:27:26.249695 systemd-networkd[1868]: lxc_health: Link UP Dec 16 12:27:26.267537 systemd-networkd[1868]: lxc_health: Gained carrier Dec 16 12:27:26.551301 kernel: eth0: renamed from tmpa311f Dec 16 12:27:26.552392 systemd-networkd[1868]: lxc02e4abdd3735: Link UP Dec 16 12:27:26.559451 systemd-networkd[1868]: lxc02e4abdd3735: Gained carrier Dec 16 12:27:26.576430 kernel: eth0: renamed from tmp27059 Dec 16 12:27:26.577706 (udev-worker)[4375]: Network interface NamePolicy= disabled on kernel command line. 
Dec 16 12:27:26.582219 systemd-networkd[1868]: lxcc1484e399769: Link UP Dec 16 12:27:26.588101 systemd-networkd[1868]: lxcc1484e399769: Gained carrier Dec 16 12:27:27.513194 kubelet[3329]: I1216 12:27:27.512940 3329 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-b4cmk" podStartSLOduration=14.13076888 podStartE2EDuration="23.512891442s" podCreationTimestamp="2025-12-16 12:27:04 +0000 UTC" firstStartedPulling="2025-12-16 12:27:05.803860276 +0000 UTC m=+7.770941112" lastFinishedPulling="2025-12-16 12:27:15.18598285 +0000 UTC m=+17.153063674" observedRunningTime="2025-12-16 12:27:21.665323178 +0000 UTC m=+23.632404050" watchObservedRunningTime="2025-12-16 12:27:27.512891442 +0000 UTC m=+29.479972290" Dec 16 12:27:27.864472 systemd-networkd[1868]: lxc_health: Gained IPv6LL Dec 16 12:27:27.927623 systemd-networkd[1868]: lxc02e4abdd3735: Gained IPv6LL Dec 16 12:27:28.119677 systemd-networkd[1868]: lxcc1484e399769: Gained IPv6LL Dec 16 12:27:31.067125 ntpd[2229]: Listen normally on 6 cilium_host 192.168.0.19:123 Dec 16 12:27:31.067912 ntpd[2229]: 16 Dec 12:27:31 ntpd[2229]: Listen normally on 6 cilium_host 192.168.0.19:123 Dec 16 12:27:31.067912 ntpd[2229]: 16 Dec 12:27:31 ntpd[2229]: Listen normally on 7 cilium_net [fe80::44e9:dbff:fedd:b6fe%4]:123 Dec 16 12:27:31.067912 ntpd[2229]: 16 Dec 12:27:31 ntpd[2229]: Listen normally on 8 cilium_host [fe80::345b:60ff:fe66:8329%5]:123 Dec 16 12:27:31.067912 ntpd[2229]: 16 Dec 12:27:31 ntpd[2229]: Listen normally on 9 cilium_vxlan [fe80::dcb6:53ff:fe62:9712%6]:123 Dec 16 12:27:31.067912 ntpd[2229]: 16 Dec 12:27:31 ntpd[2229]: Listen normally on 10 lxc_health [fe80::34eb:50ff:fef7:6c98%8]:123 Dec 16 12:27:31.067221 ntpd[2229]: Listen normally on 7 cilium_net [fe80::44e9:dbff:fedd:b6fe%4]:123 Dec 16 12:27:31.068208 ntpd[2229]: 16 Dec 12:27:31 ntpd[2229]: Listen normally on 11 lxc02e4abdd3735 [fe80::1099:f4ff:fece:2747%10]:123 Dec 16 12:27:31.068208 ntpd[2229]: 16 Dec 12:27:31 ntpd[2229]: 
Listen normally on 12 lxcc1484e399769 [fe80::4069:6ff:fe5b:7d2d%12]:123 Dec 16 12:27:31.067786 ntpd[2229]: Listen normally on 8 cilium_host [fe80::345b:60ff:fe66:8329%5]:123 Dec 16 12:27:31.067851 ntpd[2229]: Listen normally on 9 cilium_vxlan [fe80::dcb6:53ff:fe62:9712%6]:123 Dec 16 12:27:31.067899 ntpd[2229]: Listen normally on 10 lxc_health [fe80::34eb:50ff:fef7:6c98%8]:123 Dec 16 12:27:31.067944 ntpd[2229]: Listen normally on 11 lxc02e4abdd3735 [fe80::1099:f4ff:fece:2747%10]:123 Dec 16 12:27:31.067990 ntpd[2229]: Listen normally on 12 lxcc1484e399769 [fe80::4069:6ff:fe5b:7d2d%12]:123 Dec 16 12:27:34.576074 kubelet[3329]: I1216 12:27:34.576008 3329 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 16 12:27:35.078984 containerd[2002]: time="2025-12-16T12:27:35.078818132Z" level=info msg="connecting to shim 2705965b62e3c0869432d32f4bdcb233e633f15169f9ca099fc3380254b8e21d" address="unix:///run/containerd/s/37a9e440f564b314682cf368b11b3e90081d1275334dfd30a51057f41a99ec7b" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:27:35.130932 systemd[1]: Started cri-containerd-2705965b62e3c0869432d32f4bdcb233e633f15169f9ca099fc3380254b8e21d.scope - libcontainer container 2705965b62e3c0869432d32f4bdcb233e633f15169f9ca099fc3380254b8e21d. 
Dec 16 12:27:35.218375 containerd[2002]: time="2025-12-16T12:27:35.217689710Z" level=info msg="connecting to shim a311f3352a02b7284e9d9f3af0aa7defcdee5e2ed539ec4537157a04402f88f6" address="unix:///run/containerd/s/0f26906feb9e8d4476d4828842423ebe42ad83a9fc2c4baad4b466fef22120e7" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:27:35.289493 containerd[2002]: time="2025-12-16T12:27:35.289438683Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-h4rbl,Uid:74be2a1c-60e5-4184-bee6-a62d84ba0b0e,Namespace:kube-system,Attempt:0,} returns sandbox id \"2705965b62e3c0869432d32f4bdcb233e633f15169f9ca099fc3380254b8e21d\"" Dec 16 12:27:35.306480 containerd[2002]: time="2025-12-16T12:27:35.306348065Z" level=info msg="CreateContainer within sandbox \"2705965b62e3c0869432d32f4bdcb233e633f15169f9ca099fc3380254b8e21d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 12:27:35.317908 systemd[1]: Started cri-containerd-a311f3352a02b7284e9d9f3af0aa7defcdee5e2ed539ec4537157a04402f88f6.scope - libcontainer container a311f3352a02b7284e9d9f3af0aa7defcdee5e2ed539ec4537157a04402f88f6. 
Dec 16 12:27:35.340182 containerd[2002]: time="2025-12-16T12:27:35.338620253Z" level=info msg="Container 014c93f7c88d6f89a7e03c462e596508af00292524bc507110d19424a47ecdb1: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:27:35.357414 containerd[2002]: time="2025-12-16T12:27:35.357360995Z" level=info msg="CreateContainer within sandbox \"2705965b62e3c0869432d32f4bdcb233e633f15169f9ca099fc3380254b8e21d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"014c93f7c88d6f89a7e03c462e596508af00292524bc507110d19424a47ecdb1\"" Dec 16 12:27:35.358506 containerd[2002]: time="2025-12-16T12:27:35.358434296Z" level=info msg="StartContainer for \"014c93f7c88d6f89a7e03c462e596508af00292524bc507110d19424a47ecdb1\"" Dec 16 12:27:35.362470 containerd[2002]: time="2025-12-16T12:27:35.362199593Z" level=info msg="connecting to shim 014c93f7c88d6f89a7e03c462e596508af00292524bc507110d19424a47ecdb1" address="unix:///run/containerd/s/37a9e440f564b314682cf368b11b3e90081d1275334dfd30a51057f41a99ec7b" protocol=ttrpc version=3 Dec 16 12:27:35.416582 systemd[1]: Started cri-containerd-014c93f7c88d6f89a7e03c462e596508af00292524bc507110d19424a47ecdb1.scope - libcontainer container 014c93f7c88d6f89a7e03c462e596508af00292524bc507110d19424a47ecdb1. 
Dec 16 12:27:35.498479 containerd[2002]: time="2025-12-16T12:27:35.498415180Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4fc6k,Uid:acfb16b4-19eb-4ee3-a206-1ccac7b8ffa6,Namespace:kube-system,Attempt:0,} returns sandbox id \"a311f3352a02b7284e9d9f3af0aa7defcdee5e2ed539ec4537157a04402f88f6\"" Dec 16 12:27:35.514974 containerd[2002]: time="2025-12-16T12:27:35.514539298Z" level=info msg="CreateContainer within sandbox \"a311f3352a02b7284e9d9f3af0aa7defcdee5e2ed539ec4537157a04402f88f6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 12:27:35.545423 containerd[2002]: time="2025-12-16T12:27:35.543828320Z" level=info msg="Container 876c4998377f2a371450c9c3875eceb810b921ee07cd4eaa069bb9776e6a7e4f: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:27:35.553880 containerd[2002]: time="2025-12-16T12:27:35.553745756Z" level=info msg="StartContainer for \"014c93f7c88d6f89a7e03c462e596508af00292524bc507110d19424a47ecdb1\" returns successfully" Dec 16 12:27:35.562984 containerd[2002]: time="2025-12-16T12:27:35.562833986Z" level=info msg="CreateContainer within sandbox \"a311f3352a02b7284e9d9f3af0aa7defcdee5e2ed539ec4537157a04402f88f6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"876c4998377f2a371450c9c3875eceb810b921ee07cd4eaa069bb9776e6a7e4f\"" Dec 16 12:27:35.565801 containerd[2002]: time="2025-12-16T12:27:35.564668983Z" level=info msg="StartContainer for \"876c4998377f2a371450c9c3875eceb810b921ee07cd4eaa069bb9776e6a7e4f\"" Dec 16 12:27:35.568897 containerd[2002]: time="2025-12-16T12:27:35.568016040Z" level=info msg="connecting to shim 876c4998377f2a371450c9c3875eceb810b921ee07cd4eaa069bb9776e6a7e4f" address="unix:///run/containerd/s/0f26906feb9e8d4476d4828842423ebe42ad83a9fc2c4baad4b466fef22120e7" protocol=ttrpc version=3 Dec 16 12:27:35.607060 systemd[1]: Started cri-containerd-876c4998377f2a371450c9c3875eceb810b921ee07cd4eaa069bb9776e6a7e4f.scope - libcontainer container 
876c4998377f2a371450c9c3875eceb810b921ee07cd4eaa069bb9776e6a7e4f. Dec 16 12:27:35.718836 containerd[2002]: time="2025-12-16T12:27:35.718775791Z" level=info msg="StartContainer for \"876c4998377f2a371450c9c3875eceb810b921ee07cd4eaa069bb9776e6a7e4f\" returns successfully" Dec 16 12:27:36.705504 kubelet[3329]: I1216 12:27:36.704825 3329 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-h4rbl" podStartSLOduration=32.704798382999996 podStartE2EDuration="32.704798383s" podCreationTimestamp="2025-12-16 12:27:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:27:35.734730949 +0000 UTC m=+37.701811809" watchObservedRunningTime="2025-12-16 12:27:36.704798383 +0000 UTC m=+38.671879231" Dec 16 12:27:36.708096 kubelet[3329]: I1216 12:27:36.707199 3329 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-4fc6k" podStartSLOduration=32.707177624 podStartE2EDuration="32.707177624s" podCreationTimestamp="2025-12-16 12:27:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:27:36.702077487 +0000 UTC m=+38.669158419" watchObservedRunningTime="2025-12-16 12:27:36.707177624 +0000 UTC m=+38.674258520" Dec 16 12:27:44.839352 systemd[1]: Started sshd@7-172.31.21.37:22-139.178.89.65:44482.service - OpenSSH per-connection server daemon (139.178.89.65:44482). Dec 16 12:27:45.046706 sshd[4912]: Accepted publickey for core from 139.178.89.65 port 44482 ssh2: RSA SHA256:xUh8ykt9z5cbsZrCvFOBrVTYezXLnfIWheoQb+9aZNE Dec 16 12:27:45.048525 sshd-session[4912]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:27:45.056056 systemd-logind[1971]: New session 8 of user core. Dec 16 12:27:45.069561 systemd[1]: Started session-8.scope - Session 8 of User core. 
Dec 16 12:27:45.385305 sshd[4915]: Connection closed by 139.178.89.65 port 44482 Dec 16 12:27:45.385320 sshd-session[4912]: pam_unix(sshd:session): session closed for user core Dec 16 12:27:45.392388 systemd[1]: sshd@7-172.31.21.37:22-139.178.89.65:44482.service: Deactivated successfully. Dec 16 12:27:45.397881 systemd[1]: session-8.scope: Deactivated successfully. Dec 16 12:27:45.400066 systemd-logind[1971]: Session 8 logged out. Waiting for processes to exit. Dec 16 12:27:45.405779 systemd-logind[1971]: Removed session 8. Dec 16 12:27:50.432714 systemd[1]: Started sshd@8-172.31.21.37:22-139.178.89.65:51812.service - OpenSSH per-connection server daemon (139.178.89.65:51812). Dec 16 12:27:50.627400 sshd[4930]: Accepted publickey for core from 139.178.89.65 port 51812 ssh2: RSA SHA256:xUh8ykt9z5cbsZrCvFOBrVTYezXLnfIWheoQb+9aZNE Dec 16 12:27:50.629885 sshd-session[4930]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:27:50.638372 systemd-logind[1971]: New session 9 of user core. Dec 16 12:27:50.646609 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 16 12:27:50.888428 sshd[4933]: Connection closed by 139.178.89.65 port 51812 Dec 16 12:27:50.888938 sshd-session[4930]: pam_unix(sshd:session): session closed for user core Dec 16 12:27:50.897107 systemd[1]: sshd@8-172.31.21.37:22-139.178.89.65:51812.service: Deactivated successfully. Dec 16 12:27:50.901502 systemd[1]: session-9.scope: Deactivated successfully. Dec 16 12:27:50.903889 systemd-logind[1971]: Session 9 logged out. Waiting for processes to exit. Dec 16 12:27:50.906918 systemd-logind[1971]: Removed session 9. Dec 16 12:27:55.932389 systemd[1]: Started sshd@9-172.31.21.37:22-139.178.89.65:51816.service - OpenSSH per-connection server daemon (139.178.89.65:51816). 
Dec 16 12:27:56.129623 sshd[4946]: Accepted publickey for core from 139.178.89.65 port 51816 ssh2: RSA SHA256:xUh8ykt9z5cbsZrCvFOBrVTYezXLnfIWheoQb+9aZNE Dec 16 12:27:56.132629 sshd-session[4946]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:27:56.140595 systemd-logind[1971]: New session 10 of user core. Dec 16 12:27:56.147520 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 16 12:27:56.400495 sshd[4949]: Connection closed by 139.178.89.65 port 51816 Dec 16 12:27:56.400899 sshd-session[4946]: pam_unix(sshd:session): session closed for user core Dec 16 12:27:56.409306 systemd[1]: sshd@9-172.31.21.37:22-139.178.89.65:51816.service: Deactivated successfully. Dec 16 12:27:56.413760 systemd[1]: session-10.scope: Deactivated successfully. Dec 16 12:27:56.415457 systemd-logind[1971]: Session 10 logged out. Waiting for processes to exit. Dec 16 12:27:56.419413 systemd-logind[1971]: Removed session 10. Dec 16 12:28:01.441894 systemd[1]: Started sshd@10-172.31.21.37:22-139.178.89.65:36640.service - OpenSSH per-connection server daemon (139.178.89.65:36640). Dec 16 12:28:01.651893 sshd[4964]: Accepted publickey for core from 139.178.89.65 port 36640 ssh2: RSA SHA256:xUh8ykt9z5cbsZrCvFOBrVTYezXLnfIWheoQb+9aZNE Dec 16 12:28:01.654200 sshd-session[4964]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:28:01.666760 systemd-logind[1971]: New session 11 of user core. Dec 16 12:28:01.675569 systemd[1]: Started session-11.scope - Session 11 of User core. Dec 16 12:28:01.946698 sshd[4967]: Connection closed by 139.178.89.65 port 36640 Dec 16 12:28:01.947171 sshd-session[4964]: pam_unix(sshd:session): session closed for user core Dec 16 12:28:01.957752 systemd-logind[1971]: Session 11 logged out. Waiting for processes to exit. Dec 16 12:28:01.959434 systemd[1]: sshd@10-172.31.21.37:22-139.178.89.65:36640.service: Deactivated successfully. 
Dec 16 12:28:01.966757 systemd[1]: session-11.scope: Deactivated successfully. Dec 16 12:28:01.988155 systemd-logind[1971]: Removed session 11. Dec 16 12:28:01.992756 systemd[1]: Started sshd@11-172.31.21.37:22-139.178.89.65:36646.service - OpenSSH per-connection server daemon (139.178.89.65:36646). Dec 16 12:28:02.191186 sshd[4980]: Accepted publickey for core from 139.178.89.65 port 36646 ssh2: RSA SHA256:xUh8ykt9z5cbsZrCvFOBrVTYezXLnfIWheoQb+9aZNE Dec 16 12:28:02.193465 sshd-session[4980]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:28:02.201460 systemd-logind[1971]: New session 12 of user core. Dec 16 12:28:02.211816 systemd[1]: Started session-12.scope - Session 12 of User core. Dec 16 12:28:02.556257 sshd[4983]: Connection closed by 139.178.89.65 port 36646 Dec 16 12:28:02.555004 sshd-session[4980]: pam_unix(sshd:session): session closed for user core Dec 16 12:28:02.564750 systemd[1]: sshd@11-172.31.21.37:22-139.178.89.65:36646.service: Deactivated successfully. Dec 16 12:28:02.571371 systemd[1]: session-12.scope: Deactivated successfully. Dec 16 12:28:02.577143 systemd-logind[1971]: Session 12 logged out. Waiting for processes to exit. Dec 16 12:28:02.602729 systemd[1]: Started sshd@12-172.31.21.37:22-139.178.89.65:36656.service - OpenSSH per-connection server daemon (139.178.89.65:36656). Dec 16 12:28:02.608934 systemd-logind[1971]: Removed session 12. Dec 16 12:28:02.803090 sshd[4993]: Accepted publickey for core from 139.178.89.65 port 36656 ssh2: RSA SHA256:xUh8ykt9z5cbsZrCvFOBrVTYezXLnfIWheoQb+9aZNE Dec 16 12:28:02.805485 sshd-session[4993]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:28:02.815367 systemd-logind[1971]: New session 13 of user core. Dec 16 12:28:02.820802 systemd[1]: Started session-13.scope - Session 13 of User core. 
Dec 16 12:28:03.066627 sshd[4996]: Connection closed by 139.178.89.65 port 36656 Dec 16 12:28:03.067652 sshd-session[4993]: pam_unix(sshd:session): session closed for user core Dec 16 12:28:03.075011 systemd[1]: sshd@12-172.31.21.37:22-139.178.89.65:36656.service: Deactivated successfully. Dec 16 12:28:03.080152 systemd[1]: session-13.scope: Deactivated successfully. Dec 16 12:28:03.082611 systemd-logind[1971]: Session 13 logged out. Waiting for processes to exit. Dec 16 12:28:03.085890 systemd-logind[1971]: Removed session 13. Dec 16 12:28:08.112418 systemd[1]: Started sshd@13-172.31.21.37:22-139.178.89.65:36666.service - OpenSSH per-connection server daemon (139.178.89.65:36666). Dec 16 12:28:08.319373 sshd[5010]: Accepted publickey for core from 139.178.89.65 port 36666 ssh2: RSA SHA256:xUh8ykt9z5cbsZrCvFOBrVTYezXLnfIWheoQb+9aZNE Dec 16 12:28:08.321793 sshd-session[5010]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:28:08.334680 systemd-logind[1971]: New session 14 of user core. Dec 16 12:28:08.340991 systemd[1]: Started session-14.scope - Session 14 of User core. Dec 16 12:28:08.599487 sshd[5013]: Connection closed by 139.178.89.65 port 36666 Dec 16 12:28:08.600397 sshd-session[5010]: pam_unix(sshd:session): session closed for user core Dec 16 12:28:08.609591 systemd[1]: sshd@13-172.31.21.37:22-139.178.89.65:36666.service: Deactivated successfully. Dec 16 12:28:08.614565 systemd[1]: session-14.scope: Deactivated successfully. Dec 16 12:28:08.617850 systemd-logind[1971]: Session 14 logged out. Waiting for processes to exit. Dec 16 12:28:08.621187 systemd-logind[1971]: Removed session 14. Dec 16 12:28:13.641911 systemd[1]: Started sshd@14-172.31.21.37:22-139.178.89.65:57958.service - OpenSSH per-connection server daemon (139.178.89.65:57958). 
Dec 16 12:28:13.853145 sshd[5024]: Accepted publickey for core from 139.178.89.65 port 57958 ssh2: RSA SHA256:xUh8ykt9z5cbsZrCvFOBrVTYezXLnfIWheoQb+9aZNE Dec 16 12:28:13.855515 sshd-session[5024]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:28:13.864386 systemd-logind[1971]: New session 15 of user core. Dec 16 12:28:13.873540 systemd[1]: Started session-15.scope - Session 15 of User core. Dec 16 12:28:14.123646 sshd[5027]: Connection closed by 139.178.89.65 port 57958 Dec 16 12:28:14.124546 sshd-session[5024]: pam_unix(sshd:session): session closed for user core Dec 16 12:28:14.132131 systemd[1]: sshd@14-172.31.21.37:22-139.178.89.65:57958.service: Deactivated successfully. Dec 16 12:28:14.136317 systemd[1]: session-15.scope: Deactivated successfully. Dec 16 12:28:14.138456 systemd-logind[1971]: Session 15 logged out. Waiting for processes to exit. Dec 16 12:28:14.142007 systemd-logind[1971]: Removed session 15. Dec 16 12:28:19.163965 systemd[1]: Started sshd@15-172.31.21.37:22-139.178.89.65:57964.service - OpenSSH per-connection server daemon (139.178.89.65:57964). Dec 16 12:28:19.359362 sshd[5038]: Accepted publickey for core from 139.178.89.65 port 57964 ssh2: RSA SHA256:xUh8ykt9z5cbsZrCvFOBrVTYezXLnfIWheoQb+9aZNE Dec 16 12:28:19.361631 sshd-session[5038]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:28:19.371407 systemd-logind[1971]: New session 16 of user core. Dec 16 12:28:19.376565 systemd[1]: Started session-16.scope - Session 16 of User core. Dec 16 12:28:19.620358 sshd[5041]: Connection closed by 139.178.89.65 port 57964 Dec 16 12:28:19.621448 sshd-session[5038]: pam_unix(sshd:session): session closed for user core Dec 16 12:28:19.627532 systemd[1]: sshd@15-172.31.21.37:22-139.178.89.65:57964.service: Deactivated successfully. Dec 16 12:28:19.632111 systemd[1]: session-16.scope: Deactivated successfully. Dec 16 12:28:19.635644 systemd-logind[1971]: Session 16 logged out. 
Waiting for processes to exit. Dec 16 12:28:19.640777 systemd-logind[1971]: Removed session 16. Dec 16 12:28:24.659813 systemd[1]: Started sshd@16-172.31.21.37:22-139.178.89.65:54138.service - OpenSSH per-connection server daemon (139.178.89.65:54138). Dec 16 12:28:24.852357 sshd[5055]: Accepted publickey for core from 139.178.89.65 port 54138 ssh2: RSA SHA256:xUh8ykt9z5cbsZrCvFOBrVTYezXLnfIWheoQb+9aZNE Dec 16 12:28:24.854058 sshd-session[5055]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:28:24.862741 systemd-logind[1971]: New session 17 of user core. Dec 16 12:28:24.873149 systemd[1]: Started session-17.scope - Session 17 of User core. Dec 16 12:28:25.118945 sshd[5058]: Connection closed by 139.178.89.65 port 54138 Dec 16 12:28:25.120029 sshd-session[5055]: pam_unix(sshd:session): session closed for user core Dec 16 12:28:25.127089 systemd[1]: sshd@16-172.31.21.37:22-139.178.89.65:54138.service: Deactivated successfully. Dec 16 12:28:25.135847 systemd[1]: session-17.scope: Deactivated successfully. Dec 16 12:28:25.138244 systemd-logind[1971]: Session 17 logged out. Waiting for processes to exit. Dec 16 12:28:25.159351 systemd[1]: Started sshd@17-172.31.21.37:22-139.178.89.65:54140.service - OpenSSH per-connection server daemon (139.178.89.65:54140). Dec 16 12:28:25.162622 systemd-logind[1971]: Removed session 17. Dec 16 12:28:25.356665 sshd[5070]: Accepted publickey for core from 139.178.89.65 port 54140 ssh2: RSA SHA256:xUh8ykt9z5cbsZrCvFOBrVTYezXLnfIWheoQb+9aZNE Dec 16 12:28:25.358993 sshd-session[5070]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:28:25.366957 systemd-logind[1971]: New session 18 of user core. Dec 16 12:28:25.376516 systemd[1]: Started session-18.scope - Session 18 of User core. 
Dec 16 12:28:25.700300 sshd[5073]: Connection closed by 139.178.89.65 port 54140 Dec 16 12:28:25.700146 sshd-session[5070]: pam_unix(sshd:session): session closed for user core Dec 16 12:28:25.707134 systemd-logind[1971]: Session 18 logged out. Waiting for processes to exit. Dec 16 12:28:25.708079 systemd[1]: sshd@17-172.31.21.37:22-139.178.89.65:54140.service: Deactivated successfully. Dec 16 12:28:25.712923 systemd[1]: session-18.scope: Deactivated successfully. Dec 16 12:28:25.718746 systemd-logind[1971]: Removed session 18. Dec 16 12:28:25.736423 systemd[1]: Started sshd@18-172.31.21.37:22-139.178.89.65:54154.service - OpenSSH per-connection server daemon (139.178.89.65:54154). Dec 16 12:28:25.930154 sshd[5082]: Accepted publickey for core from 139.178.89.65 port 54154 ssh2: RSA SHA256:xUh8ykt9z5cbsZrCvFOBrVTYezXLnfIWheoQb+9aZNE Dec 16 12:28:25.932534 sshd-session[5082]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:28:25.941591 systemd-logind[1971]: New session 19 of user core. Dec 16 12:28:25.947528 systemd[1]: Started session-19.scope - Session 19 of User core. Dec 16 12:28:27.009324 sshd[5085]: Connection closed by 139.178.89.65 port 54154 Dec 16 12:28:27.010439 sshd-session[5082]: pam_unix(sshd:session): session closed for user core Dec 16 12:28:27.022962 systemd[1]: sshd@18-172.31.21.37:22-139.178.89.65:54154.service: Deactivated successfully. Dec 16 12:28:27.033736 systemd[1]: session-19.scope: Deactivated successfully. Dec 16 12:28:27.038364 systemd-logind[1971]: Session 19 logged out. Waiting for processes to exit. Dec 16 12:28:27.062023 systemd[1]: Started sshd@19-172.31.21.37:22-139.178.89.65:54168.service - OpenSSH per-connection server daemon (139.178.89.65:54168). Dec 16 12:28:27.066454 systemd-logind[1971]: Removed session 19. 
Dec 16 12:28:27.261528 sshd[5102]: Accepted publickey for core from 139.178.89.65 port 54168 ssh2: RSA SHA256:xUh8ykt9z5cbsZrCvFOBrVTYezXLnfIWheoQb+9aZNE Dec 16 12:28:27.264902 sshd-session[5102]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:28:27.274413 systemd-logind[1971]: New session 20 of user core. Dec 16 12:28:27.288537 systemd[1]: Started session-20.scope - Session 20 of User core. Dec 16 12:28:27.787572 sshd[5105]: Connection closed by 139.178.89.65 port 54168 Dec 16 12:28:27.789360 sshd-session[5102]: pam_unix(sshd:session): session closed for user core Dec 16 12:28:27.799655 systemd[1]: sshd@19-172.31.21.37:22-139.178.89.65:54168.service: Deactivated successfully. Dec 16 12:28:27.804835 systemd[1]: session-20.scope: Deactivated successfully. Dec 16 12:28:27.806684 systemd-logind[1971]: Session 20 logged out. Waiting for processes to exit. Dec 16 12:28:27.809793 systemd-logind[1971]: Removed session 20. Dec 16 12:28:27.826334 systemd[1]: Started sshd@20-172.31.21.37:22-139.178.89.65:54184.service - OpenSSH per-connection server daemon (139.178.89.65:54184). Dec 16 12:28:28.026736 sshd[5114]: Accepted publickey for core from 139.178.89.65 port 54184 ssh2: RSA SHA256:xUh8ykt9z5cbsZrCvFOBrVTYezXLnfIWheoQb+9aZNE Dec 16 12:28:28.029015 sshd-session[5114]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:28:28.039375 systemd-logind[1971]: New session 21 of user core. Dec 16 12:28:28.047503 systemd[1]: Started session-21.scope - Session 21 of User core. Dec 16 12:28:28.291542 sshd[5117]: Connection closed by 139.178.89.65 port 54184 Dec 16 12:28:28.292924 sshd-session[5114]: pam_unix(sshd:session): session closed for user core Dec 16 12:28:28.301132 systemd[1]: sshd@20-172.31.21.37:22-139.178.89.65:54184.service: Deactivated successfully. Dec 16 12:28:28.305793 systemd[1]: session-21.scope: Deactivated successfully. Dec 16 12:28:28.308001 systemd-logind[1971]: Session 21 logged out. 
Waiting for processes to exit. Dec 16 12:28:28.311232 systemd-logind[1971]: Removed session 21. Dec 16 12:28:33.332110 systemd[1]: Started sshd@21-172.31.21.37:22-139.178.89.65:36224.service - OpenSSH per-connection server daemon (139.178.89.65:36224). Dec 16 12:28:33.532675 sshd[5129]: Accepted publickey for core from 139.178.89.65 port 36224 ssh2: RSA SHA256:xUh8ykt9z5cbsZrCvFOBrVTYezXLnfIWheoQb+9aZNE Dec 16 12:28:33.535711 sshd-session[5129]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:28:33.544855 systemd-logind[1971]: New session 22 of user core. Dec 16 12:28:33.551549 systemd[1]: Started session-22.scope - Session 22 of User core. Dec 16 12:28:33.830302 sshd[5132]: Connection closed by 139.178.89.65 port 36224 Dec 16 12:28:33.829559 sshd-session[5129]: pam_unix(sshd:session): session closed for user core Dec 16 12:28:33.836851 systemd[1]: sshd@21-172.31.21.37:22-139.178.89.65:36224.service: Deactivated successfully. Dec 16 12:28:33.842328 systemd[1]: session-22.scope: Deactivated successfully. Dec 16 12:28:33.844472 systemd-logind[1971]: Session 22 logged out. Waiting for processes to exit. Dec 16 12:28:33.847577 systemd-logind[1971]: Removed session 22. Dec 16 12:28:38.868688 systemd[1]: Started sshd@22-172.31.21.37:22-139.178.89.65:36234.service - OpenSSH per-connection server daemon (139.178.89.65:36234). Dec 16 12:28:39.075395 sshd[5149]: Accepted publickey for core from 139.178.89.65 port 36234 ssh2: RSA SHA256:xUh8ykt9z5cbsZrCvFOBrVTYezXLnfIWheoQb+9aZNE Dec 16 12:28:39.077762 sshd-session[5149]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:28:39.086361 systemd-logind[1971]: New session 23 of user core. Dec 16 12:28:39.095525 systemd[1]: Started session-23.scope - Session 23 of User core. 
Dec 16 12:28:39.328827 sshd[5153]: Connection closed by 139.178.89.65 port 36234 Dec 16 12:28:39.330561 sshd-session[5149]: pam_unix(sshd:session): session closed for user core Dec 16 12:28:39.337769 systemd[1]: sshd@22-172.31.21.37:22-139.178.89.65:36234.service: Deactivated successfully. Dec 16 12:28:39.341867 systemd[1]: session-23.scope: Deactivated successfully. Dec 16 12:28:39.346871 systemd-logind[1971]: Session 23 logged out. Waiting for processes to exit. Dec 16 12:28:39.348749 systemd-logind[1971]: Removed session 23. Dec 16 12:28:44.372706 systemd[1]: Started sshd@23-172.31.21.37:22-139.178.89.65:35122.service - OpenSSH per-connection server daemon (139.178.89.65:35122). Dec 16 12:28:44.585001 sshd[5165]: Accepted publickey for core from 139.178.89.65 port 35122 ssh2: RSA SHA256:xUh8ykt9z5cbsZrCvFOBrVTYezXLnfIWheoQb+9aZNE Dec 16 12:28:44.587402 sshd-session[5165]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:28:44.595371 systemd-logind[1971]: New session 24 of user core. Dec 16 12:28:44.604511 systemd[1]: Started session-24.scope - Session 24 of User core. Dec 16 12:28:44.846448 sshd[5168]: Connection closed by 139.178.89.65 port 35122 Dec 16 12:28:44.847382 sshd-session[5165]: pam_unix(sshd:session): session closed for user core Dec 16 12:28:44.854525 systemd[1]: sshd@23-172.31.21.37:22-139.178.89.65:35122.service: Deactivated successfully. Dec 16 12:28:44.863088 systemd[1]: session-24.scope: Deactivated successfully. Dec 16 12:28:44.871363 systemd-logind[1971]: Session 24 logged out. Waiting for processes to exit. Dec 16 12:28:44.875027 systemd-logind[1971]: Removed session 24. Dec 16 12:28:49.888814 systemd[1]: Started sshd@24-172.31.21.37:22-139.178.89.65:35128.service - OpenSSH per-connection server daemon (139.178.89.65:35128). 
Dec 16 12:28:50.086747 sshd[5180]: Accepted publickey for core from 139.178.89.65 port 35128 ssh2: RSA SHA256:xUh8ykt9z5cbsZrCvFOBrVTYezXLnfIWheoQb+9aZNE Dec 16 12:28:50.089087 sshd-session[5180]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:28:50.097283 systemd-logind[1971]: New session 25 of user core. Dec 16 12:28:50.110572 systemd[1]: Started session-25.scope - Session 25 of User core. Dec 16 12:28:50.351653 sshd[5183]: Connection closed by 139.178.89.65 port 35128 Dec 16 12:28:50.352632 sshd-session[5180]: pam_unix(sshd:session): session closed for user core Dec 16 12:28:50.360138 systemd[1]: sshd@24-172.31.21.37:22-139.178.89.65:35128.service: Deactivated successfully. Dec 16 12:28:50.363451 systemd[1]: session-25.scope: Deactivated successfully. Dec 16 12:28:50.367385 systemd-logind[1971]: Session 25 logged out. Waiting for processes to exit. Dec 16 12:28:50.370091 systemd-logind[1971]: Removed session 25. Dec 16 12:28:50.392729 systemd[1]: Started sshd@25-172.31.21.37:22-139.178.89.65:48012.service - OpenSSH per-connection server daemon (139.178.89.65:48012). Dec 16 12:28:50.595159 sshd[5194]: Accepted publickey for core from 139.178.89.65 port 48012 ssh2: RSA SHA256:xUh8ykt9z5cbsZrCvFOBrVTYezXLnfIWheoQb+9aZNE Dec 16 12:28:50.597520 sshd-session[5194]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:28:50.605903 systemd-logind[1971]: New session 26 of user core. Dec 16 12:28:50.614560 systemd[1]: Started session-26.scope - Session 26 of User core. 
Dec 16 12:28:52.461706 containerd[2002]: time="2025-12-16T12:28:52.461633357Z" level=info msg="StopContainer for \"797828a533c44a31613183fed666e9bfe0316b198e0473ff30e6e29c267ab227\" with timeout 30 (s)" Dec 16 12:28:52.465532 containerd[2002]: time="2025-12-16T12:28:52.465222071Z" level=info msg="Stop container \"797828a533c44a31613183fed666e9bfe0316b198e0473ff30e6e29c267ab227\" with signal terminated" Dec 16 12:28:52.500595 systemd[1]: cri-containerd-797828a533c44a31613183fed666e9bfe0316b198e0473ff30e6e29c267ab227.scope: Deactivated successfully. Dec 16 12:28:52.507506 containerd[2002]: time="2025-12-16T12:28:52.507147947Z" level=info msg="received container exit event container_id:\"797828a533c44a31613183fed666e9bfe0316b198e0473ff30e6e29c267ab227\" id:\"797828a533c44a31613183fed666e9bfe0316b198e0473ff30e6e29c267ab227\" pid:4131 exited_at:{seconds:1765888132 nanos:506691371}" Dec 16 12:28:52.519158 containerd[2002]: time="2025-12-16T12:28:52.519054322Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 16 12:28:52.536491 containerd[2002]: time="2025-12-16T12:28:52.536404433Z" level=info msg="StopContainer for \"4aec4ea9b93d142a38ba731edb8ed8f39f5ca534cb51004a2fbe74aa086a823d\" with timeout 2 (s)" Dec 16 12:28:52.537740 containerd[2002]: time="2025-12-16T12:28:52.537686637Z" level=info msg="Stop container \"4aec4ea9b93d142a38ba731edb8ed8f39f5ca534cb51004a2fbe74aa086a823d\" with signal terminated" Dec 16 12:28:52.557146 systemd-networkd[1868]: lxc_health: Link DOWN Dec 16 12:28:52.557159 systemd-networkd[1868]: lxc_health: Lost carrier Dec 16 12:28:52.595245 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-797828a533c44a31613183fed666e9bfe0316b198e0473ff30e6e29c267ab227-rootfs.mount: Deactivated successfully. 
Dec 16 12:28:52.603336 containerd[2002]: time="2025-12-16T12:28:52.599818624Z" level=info msg="received container exit event container_id:\"4aec4ea9b93d142a38ba731edb8ed8f39f5ca534cb51004a2fbe74aa086a823d\" id:\"4aec4ea9b93d142a38ba731edb8ed8f39f5ca534cb51004a2fbe74aa086a823d\" pid:4240 exited_at:{seconds:1765888132 nanos:598986404}" Dec 16 12:28:52.602003 systemd[1]: cri-containerd-4aec4ea9b93d142a38ba731edb8ed8f39f5ca534cb51004a2fbe74aa086a823d.scope: Deactivated successfully. Dec 16 12:28:52.602619 systemd[1]: cri-containerd-4aec4ea9b93d142a38ba731edb8ed8f39f5ca534cb51004a2fbe74aa086a823d.scope: Consumed 14.576s CPU time, 127M memory peak, 128K read from disk, 12.9M written to disk. Dec 16 12:28:52.634471 containerd[2002]: time="2025-12-16T12:28:52.634405893Z" level=info msg="StopContainer for \"797828a533c44a31613183fed666e9bfe0316b198e0473ff30e6e29c267ab227\" returns successfully" Dec 16 12:28:52.636734 containerd[2002]: time="2025-12-16T12:28:52.636041651Z" level=info msg="StopPodSandbox for \"de8757e5b3e6cb732c739be970b8512617cad57213f2cf882b8af0c4de105fa2\"" Dec 16 12:28:52.636734 containerd[2002]: time="2025-12-16T12:28:52.636209435Z" level=info msg="Container to stop \"797828a533c44a31613183fed666e9bfe0316b198e0473ff30e6e29c267ab227\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 16 12:28:52.659421 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4aec4ea9b93d142a38ba731edb8ed8f39f5ca534cb51004a2fbe74aa086a823d-rootfs.mount: Deactivated successfully. Dec 16 12:28:52.660842 systemd[1]: cri-containerd-de8757e5b3e6cb732c739be970b8512617cad57213f2cf882b8af0c4de105fa2.scope: Deactivated successfully. 
Dec 16 12:28:52.672880 containerd[2002]: time="2025-12-16T12:28:52.672733584Z" level=info msg="received sandbox exit event container_id:\"de8757e5b3e6cb732c739be970b8512617cad57213f2cf882b8af0c4de105fa2\" id:\"de8757e5b3e6cb732c739be970b8512617cad57213f2cf882b8af0c4de105fa2\" exit_status:137 exited_at:{seconds:1765888132 nanos:670403291}" monitor_name=podsandbox Dec 16 12:28:52.684308 containerd[2002]: time="2025-12-16T12:28:52.684209267Z" level=info msg="StopContainer for \"4aec4ea9b93d142a38ba731edb8ed8f39f5ca534cb51004a2fbe74aa086a823d\" returns successfully" Dec 16 12:28:52.685238 containerd[2002]: time="2025-12-16T12:28:52.685089823Z" level=info msg="StopPodSandbox for \"38d4591418001ad9fb8edd2fbdb87075f05c7e92153f139003519a48744610ce\"" Dec 16 12:28:52.685855 containerd[2002]: time="2025-12-16T12:28:52.685730127Z" level=info msg="Container to stop \"ce7e896281fb2b74f050bc0a4a49b1675aa9d3af5e5bbb86642e6a6de4491658\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 16 12:28:52.686044 containerd[2002]: time="2025-12-16T12:28:52.685981353Z" level=info msg="Container to stop \"7c84dd3b83d4cb7517a52e0672680f81d275dea128b5c92e37eeb2cde3b4379e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 16 12:28:52.686798 containerd[2002]: time="2025-12-16T12:28:52.686242291Z" level=info msg="Container to stop \"9afb2dc031a7a9dbd722844f09030a0330feea76060712a7d94556ed49de6717\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 16 12:28:52.686798 containerd[2002]: time="2025-12-16T12:28:52.686399714Z" level=info msg="Container to stop \"f49163aa2659ca5a948fba8da2e0b8d4419ab5c5f67e1cd90966f4b5536f21ea\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 16 12:28:52.686798 containerd[2002]: time="2025-12-16T12:28:52.686423498Z" level=info msg="Container to stop \"4aec4ea9b93d142a38ba731edb8ed8f39f5ca534cb51004a2fbe74aa086a823d\" must be in running or unknown state, current 
state \"CONTAINER_EXITED\"" Dec 16 12:28:52.707942 systemd[1]: cri-containerd-38d4591418001ad9fb8edd2fbdb87075f05c7e92153f139003519a48744610ce.scope: Deactivated successfully. Dec 16 12:28:52.715404 containerd[2002]: time="2025-12-16T12:28:52.713809172Z" level=info msg="received sandbox exit event container_id:\"38d4591418001ad9fb8edd2fbdb87075f05c7e92153f139003519a48744610ce\" id:\"38d4591418001ad9fb8edd2fbdb87075f05c7e92153f139003519a48744610ce\" exit_status:137 exited_at:{seconds:1765888132 nanos:712876570}" monitor_name=podsandbox Dec 16 12:28:52.738650 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-de8757e5b3e6cb732c739be970b8512617cad57213f2cf882b8af0c4de105fa2-rootfs.mount: Deactivated successfully. Dec 16 12:28:52.745831 containerd[2002]: time="2025-12-16T12:28:52.745473160Z" level=info msg="shim disconnected" id=de8757e5b3e6cb732c739be970b8512617cad57213f2cf882b8af0c4de105fa2 namespace=k8s.io Dec 16 12:28:52.745831 containerd[2002]: time="2025-12-16T12:28:52.745525398Z" level=warning msg="cleaning up after shim disconnected" id=de8757e5b3e6cb732c739be970b8512617cad57213f2cf882b8af0c4de105fa2 namespace=k8s.io Dec 16 12:28:52.745831 containerd[2002]: time="2025-12-16T12:28:52.745578177Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 16 12:28:52.780372 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-38d4591418001ad9fb8edd2fbdb87075f05c7e92153f139003519a48744610ce-rootfs.mount: Deactivated successfully. 
Dec 16 12:28:52.786691 containerd[2002]: time="2025-12-16T12:28:52.786621360Z" level=info msg="shim disconnected" id=38d4591418001ad9fb8edd2fbdb87075f05c7e92153f139003519a48744610ce namespace=k8s.io Dec 16 12:28:52.786897 containerd[2002]: time="2025-12-16T12:28:52.786681270Z" level=warning msg="cleaning up after shim disconnected" id=38d4591418001ad9fb8edd2fbdb87075f05c7e92153f139003519a48744610ce namespace=k8s.io Dec 16 12:28:52.786897 containerd[2002]: time="2025-12-16T12:28:52.786748300Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 16 12:28:52.795925 containerd[2002]: time="2025-12-16T12:28:52.795347165Z" level=info msg="received sandbox container exit event sandbox_id:\"de8757e5b3e6cb732c739be970b8512617cad57213f2cf882b8af0c4de105fa2\" exit_status:137 exited_at:{seconds:1765888132 nanos:670403291}" monitor_name=criService Dec 16 12:28:52.798195 containerd[2002]: time="2025-12-16T12:28:52.798097500Z" level=info msg="TearDown network for sandbox \"de8757e5b3e6cb732c739be970b8512617cad57213f2cf882b8af0c4de105fa2\" successfully" Dec 16 12:28:52.798195 containerd[2002]: time="2025-12-16T12:28:52.798184927Z" level=info msg="StopPodSandbox for \"de8757e5b3e6cb732c739be970b8512617cad57213f2cf882b8af0c4de105fa2\" returns successfully" Dec 16 12:28:52.802948 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-de8757e5b3e6cb732c739be970b8512617cad57213f2cf882b8af0c4de105fa2-shm.mount: Deactivated successfully. 
Dec 16 12:28:52.829984 containerd[2002]: time="2025-12-16T12:28:52.829706164Z" level=info msg="received sandbox container exit event sandbox_id:\"38d4591418001ad9fb8edd2fbdb87075f05c7e92153f139003519a48744610ce\" exit_status:137 exited_at:{seconds:1765888132 nanos:712876570}" monitor_name=criService Dec 16 12:28:52.831718 containerd[2002]: time="2025-12-16T12:28:52.831496139Z" level=info msg="TearDown network for sandbox \"38d4591418001ad9fb8edd2fbdb87075f05c7e92153f139003519a48744610ce\" successfully" Dec 16 12:28:52.831718 containerd[2002]: time="2025-12-16T12:28:52.831547885Z" level=info msg="StopPodSandbox for \"38d4591418001ad9fb8edd2fbdb87075f05c7e92153f139003519a48744610ce\" returns successfully" Dec 16 12:28:52.900174 kubelet[3329]: I1216 12:28:52.899619 3329 scope.go:117] "RemoveContainer" containerID="797828a533c44a31613183fed666e9bfe0316b198e0473ff30e6e29c267ab227" Dec 16 12:28:52.907654 containerd[2002]: time="2025-12-16T12:28:52.907599580Z" level=info msg="RemoveContainer for \"797828a533c44a31613183fed666e9bfe0316b198e0473ff30e6e29c267ab227\"" Dec 16 12:28:52.916918 containerd[2002]: time="2025-12-16T12:28:52.916825951Z" level=info msg="RemoveContainer for \"797828a533c44a31613183fed666e9bfe0316b198e0473ff30e6e29c267ab227\" returns successfully" Dec 16 12:28:52.918406 kubelet[3329]: I1216 12:28:52.917237 3329 scope.go:117] "RemoveContainer" containerID="797828a533c44a31613183fed666e9bfe0316b198e0473ff30e6e29c267ab227" Dec 16 12:28:52.919297 containerd[2002]: time="2025-12-16T12:28:52.919194795Z" level=error msg="ContainerStatus for \"797828a533c44a31613183fed666e9bfe0316b198e0473ff30e6e29c267ab227\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"797828a533c44a31613183fed666e9bfe0316b198e0473ff30e6e29c267ab227\": not found" Dec 16 12:28:52.919639 kubelet[3329]: E1216 12:28:52.919591 3329 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when 
try to find container \"797828a533c44a31613183fed666e9bfe0316b198e0473ff30e6e29c267ab227\": not found" containerID="797828a533c44a31613183fed666e9bfe0316b198e0473ff30e6e29c267ab227" Dec 16 12:28:52.921000 kubelet[3329]: I1216 12:28:52.919786 3329 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"797828a533c44a31613183fed666e9bfe0316b198e0473ff30e6e29c267ab227"} err="failed to get container status \"797828a533c44a31613183fed666e9bfe0316b198e0473ff30e6e29c267ab227\": rpc error: code = NotFound desc = an error occurred when try to find container \"797828a533c44a31613183fed666e9bfe0316b198e0473ff30e6e29c267ab227\": not found" Dec 16 12:28:52.921173 kubelet[3329]: I1216 12:28:52.921006 3329 scope.go:117] "RemoveContainer" containerID="4aec4ea9b93d142a38ba731edb8ed8f39f5ca534cb51004a2fbe74aa086a823d" Dec 16 12:28:52.926285 containerd[2002]: time="2025-12-16T12:28:52.926142391Z" level=info msg="RemoveContainer for \"4aec4ea9b93d142a38ba731edb8ed8f39f5ca534cb51004a2fbe74aa086a823d\"" Dec 16 12:28:52.933288 kubelet[3329]: I1216 12:28:52.933034 3329 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9c5f0d5a-5dce-43bc-a6fe-ec3e3c012a15-clustermesh-secrets\") pod \"9c5f0d5a-5dce-43bc-a6fe-ec3e3c012a15\" (UID: \"9c5f0d5a-5dce-43bc-a6fe-ec3e3c012a15\") " Dec 16 12:28:52.933288 kubelet[3329]: I1216 12:28:52.933104 3329 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9c5f0d5a-5dce-43bc-a6fe-ec3e3c012a15-hubble-tls\") pod \"9c5f0d5a-5dce-43bc-a6fe-ec3e3c012a15\" (UID: \"9c5f0d5a-5dce-43bc-a6fe-ec3e3c012a15\") " Dec 16 12:28:52.933288 kubelet[3329]: I1216 12:28:52.933145 3329 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/9c5f0d5a-5dce-43bc-a6fe-ec3e3c012a15-cilium-config-path\") pod \"9c5f0d5a-5dce-43bc-a6fe-ec3e3c012a15\" (UID: \"9c5f0d5a-5dce-43bc-a6fe-ec3e3c012a15\") " Dec 16 12:28:52.933288 kubelet[3329]: I1216 12:28:52.933181 3329 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9c5f0d5a-5dce-43bc-a6fe-ec3e3c012a15-lib-modules\") pod \"9c5f0d5a-5dce-43bc-a6fe-ec3e3c012a15\" (UID: \"9c5f0d5a-5dce-43bc-a6fe-ec3e3c012a15\") " Dec 16 12:28:52.933288 kubelet[3329]: I1216 12:28:52.933211 3329 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9c5f0d5a-5dce-43bc-a6fe-ec3e3c012a15-cilium-run\") pod \"9c5f0d5a-5dce-43bc-a6fe-ec3e3c012a15\" (UID: \"9c5f0d5a-5dce-43bc-a6fe-ec3e3c012a15\") " Dec 16 12:28:52.933838 kubelet[3329]: I1216 12:28:52.933254 3329 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9c5f0d5a-5dce-43bc-a6fe-ec3e3c012a15-host-proc-sys-kernel\") pod \"9c5f0d5a-5dce-43bc-a6fe-ec3e3c012a15\" (UID: \"9c5f0d5a-5dce-43bc-a6fe-ec3e3c012a15\") " Dec 16 12:28:52.933838 kubelet[3329]: I1216 12:28:52.933723 3329 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9c5f0d5a-5dce-43bc-a6fe-ec3e3c012a15-bpf-maps\") pod \"9c5f0d5a-5dce-43bc-a6fe-ec3e3c012a15\" (UID: \"9c5f0d5a-5dce-43bc-a6fe-ec3e3c012a15\") " Dec 16 12:28:52.933838 kubelet[3329]: I1216 12:28:52.933790 3329 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9c5f0d5a-5dce-43bc-a6fe-ec3e3c012a15-host-proc-sys-net\") pod \"9c5f0d5a-5dce-43bc-a6fe-ec3e3c012a15\" (UID: \"9c5f0d5a-5dce-43bc-a6fe-ec3e3c012a15\") " Dec 16 12:28:52.934157 kubelet[3329]: I1216 12:28:52.934038 3329 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9c5f0d5a-5dce-43bc-a6fe-ec3e3c012a15-cilium-cgroup\") pod \"9c5f0d5a-5dce-43bc-a6fe-ec3e3c012a15\" (UID: \"9c5f0d5a-5dce-43bc-a6fe-ec3e3c012a15\") " Dec 16 12:28:52.934384 kubelet[3329]: I1216 12:28:52.934087 3329 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2b09daed-aa23-4573-b09a-03aa103e9afe-cilium-config-path\") pod \"2b09daed-aa23-4573-b09a-03aa103e9afe\" (UID: \"2b09daed-aa23-4573-b09a-03aa103e9afe\") " Dec 16 12:28:52.934934 kubelet[3329]: I1216 12:28:52.934891 3329 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dx2pm\" (UniqueName: \"kubernetes.io/projected/9c5f0d5a-5dce-43bc-a6fe-ec3e3c012a15-kube-api-access-dx2pm\") pod \"9c5f0d5a-5dce-43bc-a6fe-ec3e3c012a15\" (UID: \"9c5f0d5a-5dce-43bc-a6fe-ec3e3c012a15\") " Dec 16 12:28:52.935092 kubelet[3329]: I1216 12:28:52.935067 3329 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7ct7h\" (UniqueName: \"kubernetes.io/projected/2b09daed-aa23-4573-b09a-03aa103e9afe-kube-api-access-7ct7h\") pod \"2b09daed-aa23-4573-b09a-03aa103e9afe\" (UID: \"2b09daed-aa23-4573-b09a-03aa103e9afe\") " Dec 16 12:28:52.939981 containerd[2002]: time="2025-12-16T12:28:52.939378322Z" level=info msg="RemoveContainer for \"4aec4ea9b93d142a38ba731edb8ed8f39f5ca534cb51004a2fbe74aa086a823d\" returns successfully" Dec 16 12:28:52.940694 kubelet[3329]: I1216 12:28:52.940663 3329 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9c5f0d5a-5dce-43bc-a6fe-ec3e3c012a15-cni-path\") pod \"9c5f0d5a-5dce-43bc-a6fe-ec3e3c012a15\" (UID: \"9c5f0d5a-5dce-43bc-a6fe-ec3e3c012a15\") " Dec 16 12:28:52.944747 kubelet[3329]: I1216 12:28:52.944684 3329 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9c5f0d5a-5dce-43bc-a6fe-ec3e3c012a15-xtables-lock\") pod \"9c5f0d5a-5dce-43bc-a6fe-ec3e3c012a15\" (UID: \"9c5f0d5a-5dce-43bc-a6fe-ec3e3c012a15\") " Dec 16 12:28:52.946061 kubelet[3329]: I1216 12:28:52.946007 3329 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9c5f0d5a-5dce-43bc-a6fe-ec3e3c012a15-hostproc\") pod \"9c5f0d5a-5dce-43bc-a6fe-ec3e3c012a15\" (UID: \"9c5f0d5a-5dce-43bc-a6fe-ec3e3c012a15\") " Dec 16 12:28:52.946206 kubelet[3329]: I1216 12:28:52.946076 3329 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9c5f0d5a-5dce-43bc-a6fe-ec3e3c012a15-etc-cni-netd\") pod \"9c5f0d5a-5dce-43bc-a6fe-ec3e3c012a15\" (UID: \"9c5f0d5a-5dce-43bc-a6fe-ec3e3c012a15\") " Dec 16 12:28:52.946478 kubelet[3329]: I1216 12:28:52.946434 3329 scope.go:117] "RemoveContainer" containerID="f49163aa2659ca5a948fba8da2e0b8d4419ab5c5f67e1cd90966f4b5536f21ea" Dec 16 12:28:52.948689 kubelet[3329]: I1216 12:28:52.940469 3329 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9c5f0d5a-5dce-43bc-a6fe-ec3e3c012a15-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "9c5f0d5a-5dce-43bc-a6fe-ec3e3c012a15" (UID: "9c5f0d5a-5dce-43bc-a6fe-ec3e3c012a15"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 16 12:28:52.949285 kubelet[3329]: I1216 12:28:52.940508 3329 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9c5f0d5a-5dce-43bc-a6fe-ec3e3c012a15-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "9c5f0d5a-5dce-43bc-a6fe-ec3e3c012a15" (UID: "9c5f0d5a-5dce-43bc-a6fe-ec3e3c012a15"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 16 12:28:52.949285 kubelet[3329]: I1216 12:28:52.940537 3329 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9c5f0d5a-5dce-43bc-a6fe-ec3e3c012a15-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "9c5f0d5a-5dce-43bc-a6fe-ec3e3c012a15" (UID: "9c5f0d5a-5dce-43bc-a6fe-ec3e3c012a15"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 16 12:28:52.949285 kubelet[3329]: I1216 12:28:52.940561 3329 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9c5f0d5a-5dce-43bc-a6fe-ec3e3c012a15-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "9c5f0d5a-5dce-43bc-a6fe-ec3e3c012a15" (UID: "9c5f0d5a-5dce-43bc-a6fe-ec3e3c012a15"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 16 12:28:52.949285 kubelet[3329]: I1216 12:28:52.940583 3329 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9c5f0d5a-5dce-43bc-a6fe-ec3e3c012a15-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "9c5f0d5a-5dce-43bc-a6fe-ec3e3c012a15" (UID: "9c5f0d5a-5dce-43bc-a6fe-ec3e3c012a15"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 16 12:28:52.949285 kubelet[3329]: I1216 12:28:52.940607 3329 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9c5f0d5a-5dce-43bc-a6fe-ec3e3c012a15-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "9c5f0d5a-5dce-43bc-a6fe-ec3e3c012a15" (UID: "9c5f0d5a-5dce-43bc-a6fe-ec3e3c012a15"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 16 12:28:52.949642 kubelet[3329]: I1216 12:28:52.941369 3329 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9c5f0d5a-5dce-43bc-a6fe-ec3e3c012a15-cni-path" (OuterVolumeSpecName: "cni-path") pod "9c5f0d5a-5dce-43bc-a6fe-ec3e3c012a15" (UID: "9c5f0d5a-5dce-43bc-a6fe-ec3e3c012a15"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 16 12:28:52.950332 kubelet[3329]: I1216 12:28:52.946180 3329 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9c5f0d5a-5dce-43bc-a6fe-ec3e3c012a15-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "9c5f0d5a-5dce-43bc-a6fe-ec3e3c012a15" (UID: "9c5f0d5a-5dce-43bc-a6fe-ec3e3c012a15"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 16 12:28:52.951172 kubelet[3329]: I1216 12:28:52.946208 3329 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9c5f0d5a-5dce-43bc-a6fe-ec3e3c012a15-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "9c5f0d5a-5dce-43bc-a6fe-ec3e3c012a15" (UID: "9c5f0d5a-5dce-43bc-a6fe-ec3e3c012a15"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 16 12:28:52.951172 kubelet[3329]: I1216 12:28:52.946230 3329 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9c5f0d5a-5dce-43bc-a6fe-ec3e3c012a15-hostproc" (OuterVolumeSpecName: "hostproc") pod "9c5f0d5a-5dce-43bc-a6fe-ec3e3c012a15" (UID: "9c5f0d5a-5dce-43bc-a6fe-ec3e3c012a15"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 16 12:28:52.951172 kubelet[3329]: I1216 12:28:52.950856 3329 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c5f0d5a-5dce-43bc-a6fe-ec3e3c012a15-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "9c5f0d5a-5dce-43bc-a6fe-ec3e3c012a15" (UID: "9c5f0d5a-5dce-43bc-a6fe-ec3e3c012a15"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 16 12:28:52.953465 kubelet[3329]: I1216 12:28:52.953416 3329 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c5f0d5a-5dce-43bc-a6fe-ec3e3c012a15-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "9c5f0d5a-5dce-43bc-a6fe-ec3e3c012a15" (UID: "9c5f0d5a-5dce-43bc-a6fe-ec3e3c012a15"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 16 12:28:52.954153 kubelet[3329]: I1216 12:28:52.954109 3329 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b09daed-aa23-4573-b09a-03aa103e9afe-kube-api-access-7ct7h" (OuterVolumeSpecName: "kube-api-access-7ct7h") pod "2b09daed-aa23-4573-b09a-03aa103e9afe" (UID: "2b09daed-aa23-4573-b09a-03aa103e9afe"). InnerVolumeSpecName "kube-api-access-7ct7h". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 16 12:28:52.954845 containerd[2002]: time="2025-12-16T12:28:52.954232745Z" level=info msg="RemoveContainer for \"f49163aa2659ca5a948fba8da2e0b8d4419ab5c5f67e1cd90966f4b5536f21ea\"" Dec 16 12:28:52.957153 kubelet[3329]: I1216 12:28:52.957099 3329 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b09daed-aa23-4573-b09a-03aa103e9afe-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "2b09daed-aa23-4573-b09a-03aa103e9afe" (UID: "2b09daed-aa23-4573-b09a-03aa103e9afe"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 16 12:28:52.957420 kubelet[3329]: I1216 12:28:52.957140 3329 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c5f0d5a-5dce-43bc-a6fe-ec3e3c012a15-kube-api-access-dx2pm" (OuterVolumeSpecName: "kube-api-access-dx2pm") pod "9c5f0d5a-5dce-43bc-a6fe-ec3e3c012a15" (UID: "9c5f0d5a-5dce-43bc-a6fe-ec3e3c012a15"). InnerVolumeSpecName "kube-api-access-dx2pm". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 16 12:28:52.958391 kubelet[3329]: I1216 12:28:52.958348 3329 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c5f0d5a-5dce-43bc-a6fe-ec3e3c012a15-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "9c5f0d5a-5dce-43bc-a6fe-ec3e3c012a15" (UID: "9c5f0d5a-5dce-43bc-a6fe-ec3e3c012a15"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 16 12:28:52.977574 containerd[2002]: time="2025-12-16T12:28:52.977387073Z" level=info msg="RemoveContainer for \"f49163aa2659ca5a948fba8da2e0b8d4419ab5c5f67e1cd90966f4b5536f21ea\" returns successfully" Dec 16 12:28:52.978552 kubelet[3329]: I1216 12:28:52.978302 3329 scope.go:117] "RemoveContainer" containerID="9afb2dc031a7a9dbd722844f09030a0330feea76060712a7d94556ed49de6717" Dec 16 12:28:52.984927 containerd[2002]: time="2025-12-16T12:28:52.984879069Z" level=info msg="RemoveContainer for \"9afb2dc031a7a9dbd722844f09030a0330feea76060712a7d94556ed49de6717\"" Dec 16 12:28:52.993098 containerd[2002]: time="2025-12-16T12:28:52.992950307Z" level=info msg="RemoveContainer for \"9afb2dc031a7a9dbd722844f09030a0330feea76060712a7d94556ed49de6717\" returns successfully" Dec 16 12:28:52.993585 kubelet[3329]: I1216 12:28:52.993555 3329 scope.go:117] "RemoveContainer" containerID="7c84dd3b83d4cb7517a52e0672680f81d275dea128b5c92e37eeb2cde3b4379e" Dec 16 12:28:52.996470 containerd[2002]: time="2025-12-16T12:28:52.996382822Z" 
level=info msg="RemoveContainer for \"7c84dd3b83d4cb7517a52e0672680f81d275dea128b5c92e37eeb2cde3b4379e\"" Dec 16 12:28:53.003600 containerd[2002]: time="2025-12-16T12:28:53.003534544Z" level=info msg="RemoveContainer for \"7c84dd3b83d4cb7517a52e0672680f81d275dea128b5c92e37eeb2cde3b4379e\" returns successfully" Dec 16 12:28:53.004142 kubelet[3329]: I1216 12:28:53.003994 3329 scope.go:117] "RemoveContainer" containerID="ce7e896281fb2b74f050bc0a4a49b1675aa9d3af5e5bbb86642e6a6de4491658" Dec 16 12:28:53.007175 containerd[2002]: time="2025-12-16T12:28:53.007107662Z" level=info msg="RemoveContainer for \"ce7e896281fb2b74f050bc0a4a49b1675aa9d3af5e5bbb86642e6a6de4491658\"" Dec 16 12:28:53.013798 containerd[2002]: time="2025-12-16T12:28:53.013715488Z" level=info msg="RemoveContainer for \"ce7e896281fb2b74f050bc0a4a49b1675aa9d3af5e5bbb86642e6a6de4491658\" returns successfully" Dec 16 12:28:53.014254 kubelet[3329]: I1216 12:28:53.014036 3329 scope.go:117] "RemoveContainer" containerID="4aec4ea9b93d142a38ba731edb8ed8f39f5ca534cb51004a2fbe74aa086a823d" Dec 16 12:28:53.014690 containerd[2002]: time="2025-12-16T12:28:53.014637969Z" level=error msg="ContainerStatus for \"4aec4ea9b93d142a38ba731edb8ed8f39f5ca534cb51004a2fbe74aa086a823d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4aec4ea9b93d142a38ba731edb8ed8f39f5ca534cb51004a2fbe74aa086a823d\": not found" Dec 16 12:28:53.015182 kubelet[3329]: E1216 12:28:53.015125 3329 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4aec4ea9b93d142a38ba731edb8ed8f39f5ca534cb51004a2fbe74aa086a823d\": not found" containerID="4aec4ea9b93d142a38ba731edb8ed8f39f5ca534cb51004a2fbe74aa086a823d" Dec 16 12:28:53.015496 kubelet[3329]: I1216 12:28:53.015180 3329 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"4aec4ea9b93d142a38ba731edb8ed8f39f5ca534cb51004a2fbe74aa086a823d"} err="failed to get container status \"4aec4ea9b93d142a38ba731edb8ed8f39f5ca534cb51004a2fbe74aa086a823d\": rpc error: code = NotFound desc = an error occurred when try to find container \"4aec4ea9b93d142a38ba731edb8ed8f39f5ca534cb51004a2fbe74aa086a823d\": not found" Dec 16 12:28:53.015496 kubelet[3329]: I1216 12:28:53.015217 3329 scope.go:117] "RemoveContainer" containerID="f49163aa2659ca5a948fba8da2e0b8d4419ab5c5f67e1cd90966f4b5536f21ea" Dec 16 12:28:53.015930 containerd[2002]: time="2025-12-16T12:28:53.015828592Z" level=error msg="ContainerStatus for \"f49163aa2659ca5a948fba8da2e0b8d4419ab5c5f67e1cd90966f4b5536f21ea\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f49163aa2659ca5a948fba8da2e0b8d4419ab5c5f67e1cd90966f4b5536f21ea\": not found" Dec 16 12:28:53.016243 kubelet[3329]: E1216 12:28:53.016188 3329 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f49163aa2659ca5a948fba8da2e0b8d4419ab5c5f67e1cd90966f4b5536f21ea\": not found" containerID="f49163aa2659ca5a948fba8da2e0b8d4419ab5c5f67e1cd90966f4b5536f21ea" Dec 16 12:28:53.016362 kubelet[3329]: I1216 12:28:53.016241 3329 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f49163aa2659ca5a948fba8da2e0b8d4419ab5c5f67e1cd90966f4b5536f21ea"} err="failed to get container status \"f49163aa2659ca5a948fba8da2e0b8d4419ab5c5f67e1cd90966f4b5536f21ea\": rpc error: code = NotFound desc = an error occurred when try to find container \"f49163aa2659ca5a948fba8da2e0b8d4419ab5c5f67e1cd90966f4b5536f21ea\": not found" Dec 16 12:28:53.016362 kubelet[3329]: I1216 12:28:53.016299 3329 scope.go:117] "RemoveContainer" containerID="9afb2dc031a7a9dbd722844f09030a0330feea76060712a7d94556ed49de6717" Dec 16 12:28:53.016907 containerd[2002]: 
time="2025-12-16T12:28:53.016627171Z" level=error msg="ContainerStatus for \"9afb2dc031a7a9dbd722844f09030a0330feea76060712a7d94556ed49de6717\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9afb2dc031a7a9dbd722844f09030a0330feea76060712a7d94556ed49de6717\": not found" Dec 16 12:28:53.017146 kubelet[3329]: E1216 12:28:53.017113 3329 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9afb2dc031a7a9dbd722844f09030a0330feea76060712a7d94556ed49de6717\": not found" containerID="9afb2dc031a7a9dbd722844f09030a0330feea76060712a7d94556ed49de6717" Dec 16 12:28:53.017353 kubelet[3329]: I1216 12:28:53.017315 3329 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9afb2dc031a7a9dbd722844f09030a0330feea76060712a7d94556ed49de6717"} err="failed to get container status \"9afb2dc031a7a9dbd722844f09030a0330feea76060712a7d94556ed49de6717\": rpc error: code = NotFound desc = an error occurred when try to find container \"9afb2dc031a7a9dbd722844f09030a0330feea76060712a7d94556ed49de6717\": not found" Dec 16 12:28:53.017496 kubelet[3329]: I1216 12:28:53.017472 3329 scope.go:117] "RemoveContainer" containerID="7c84dd3b83d4cb7517a52e0672680f81d275dea128b5c92e37eeb2cde3b4379e" Dec 16 12:28:53.018174 containerd[2002]: time="2025-12-16T12:28:53.018041670Z" level=error msg="ContainerStatus for \"7c84dd3b83d4cb7517a52e0672680f81d275dea128b5c92e37eeb2cde3b4379e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7c84dd3b83d4cb7517a52e0672680f81d275dea128b5c92e37eeb2cde3b4379e\": not found" Dec 16 12:28:53.018374 kubelet[3329]: E1216 12:28:53.018325 3329 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7c84dd3b83d4cb7517a52e0672680f81d275dea128b5c92e37eeb2cde3b4379e\": not 
found" containerID="7c84dd3b83d4cb7517a52e0672680f81d275dea128b5c92e37eeb2cde3b4379e" Dec 16 12:28:53.018474 kubelet[3329]: I1216 12:28:53.018382 3329 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7c84dd3b83d4cb7517a52e0672680f81d275dea128b5c92e37eeb2cde3b4379e"} err="failed to get container status \"7c84dd3b83d4cb7517a52e0672680f81d275dea128b5c92e37eeb2cde3b4379e\": rpc error: code = NotFound desc = an error occurred when try to find container \"7c84dd3b83d4cb7517a52e0672680f81d275dea128b5c92e37eeb2cde3b4379e\": not found" Dec 16 12:28:53.018474 kubelet[3329]: I1216 12:28:53.018416 3329 scope.go:117] "RemoveContainer" containerID="ce7e896281fb2b74f050bc0a4a49b1675aa9d3af5e5bbb86642e6a6de4491658" Dec 16 12:28:53.019169 containerd[2002]: time="2025-12-16T12:28:53.019039501Z" level=error msg="ContainerStatus for \"ce7e896281fb2b74f050bc0a4a49b1675aa9d3af5e5bbb86642e6a6de4491658\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ce7e896281fb2b74f050bc0a4a49b1675aa9d3af5e5bbb86642e6a6de4491658\": not found" Dec 16 12:28:53.019344 kubelet[3329]: E1216 12:28:53.019300 3329 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ce7e896281fb2b74f050bc0a4a49b1675aa9d3af5e5bbb86642e6a6de4491658\": not found" containerID="ce7e896281fb2b74f050bc0a4a49b1675aa9d3af5e5bbb86642e6a6de4491658" Dec 16 12:28:53.019411 kubelet[3329]: I1216 12:28:53.019353 3329 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ce7e896281fb2b74f050bc0a4a49b1675aa9d3af5e5bbb86642e6a6de4491658"} err="failed to get container status \"ce7e896281fb2b74f050bc0a4a49b1675aa9d3af5e5bbb86642e6a6de4491658\": rpc error: code = NotFound desc = an error occurred when try to find container \"ce7e896281fb2b74f050bc0a4a49b1675aa9d3af5e5bbb86642e6a6de4491658\": not found" Dec 16 
12:28:53.046816 kubelet[3329]: I1216 12:28:53.046716 3329 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7ct7h\" (UniqueName: \"kubernetes.io/projected/2b09daed-aa23-4573-b09a-03aa103e9afe-kube-api-access-7ct7h\") on node \"ip-172-31-21-37\" DevicePath \"\"" Dec 16 12:28:53.046816 kubelet[3329]: I1216 12:28:53.046797 3329 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dx2pm\" (UniqueName: \"kubernetes.io/projected/9c5f0d5a-5dce-43bc-a6fe-ec3e3c012a15-kube-api-access-dx2pm\") on node \"ip-172-31-21-37\" DevicePath \"\"" Dec 16 12:28:53.047067 kubelet[3329]: I1216 12:28:53.046871 3329 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9c5f0d5a-5dce-43bc-a6fe-ec3e3c012a15-cni-path\") on node \"ip-172-31-21-37\" DevicePath \"\"" Dec 16 12:28:53.047067 kubelet[3329]: I1216 12:28:53.046900 3329 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9c5f0d5a-5dce-43bc-a6fe-ec3e3c012a15-xtables-lock\") on node \"ip-172-31-21-37\" DevicePath \"\"" Dec 16 12:28:53.047067 kubelet[3329]: I1216 12:28:53.046951 3329 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9c5f0d5a-5dce-43bc-a6fe-ec3e3c012a15-hostproc\") on node \"ip-172-31-21-37\" DevicePath \"\"" Dec 16 12:28:53.047067 kubelet[3329]: I1216 12:28:53.046973 3329 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9c5f0d5a-5dce-43bc-a6fe-ec3e3c012a15-etc-cni-netd\") on node \"ip-172-31-21-37\" DevicePath \"\"" Dec 16 12:28:53.047320 kubelet[3329]: I1216 12:28:53.046993 3329 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9c5f0d5a-5dce-43bc-a6fe-ec3e3c012a15-cilium-config-path\") on node \"ip-172-31-21-37\" DevicePath \"\"" Dec 16 12:28:53.047376 kubelet[3329]: I1216 12:28:53.047340 
3329 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9c5f0d5a-5dce-43bc-a6fe-ec3e3c012a15-clustermesh-secrets\") on node \"ip-172-31-21-37\" DevicePath \"\"" Dec 16 12:28:53.047426 kubelet[3329]: I1216 12:28:53.047364 3329 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9c5f0d5a-5dce-43bc-a6fe-ec3e3c012a15-hubble-tls\") on node \"ip-172-31-21-37\" DevicePath \"\"" Dec 16 12:28:53.047426 kubelet[3329]: I1216 12:28:53.047419 3329 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9c5f0d5a-5dce-43bc-a6fe-ec3e3c012a15-lib-modules\") on node \"ip-172-31-21-37\" DevicePath \"\"" Dec 16 12:28:53.047528 kubelet[3329]: I1216 12:28:53.047444 3329 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9c5f0d5a-5dce-43bc-a6fe-ec3e3c012a15-cilium-run\") on node \"ip-172-31-21-37\" DevicePath \"\"" Dec 16 12:28:53.047528 kubelet[3329]: I1216 12:28:53.047491 3329 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9c5f0d5a-5dce-43bc-a6fe-ec3e3c012a15-host-proc-sys-kernel\") on node \"ip-172-31-21-37\" DevicePath \"\"" Dec 16 12:28:53.047528 kubelet[3329]: I1216 12:28:53.047514 3329 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9c5f0d5a-5dce-43bc-a6fe-ec3e3c012a15-bpf-maps\") on node \"ip-172-31-21-37\" DevicePath \"\"" Dec 16 12:28:53.047676 kubelet[3329]: I1216 12:28:53.047535 3329 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9c5f0d5a-5dce-43bc-a6fe-ec3e3c012a15-cilium-cgroup\") on node \"ip-172-31-21-37\" DevicePath \"\"" Dec 16 12:28:53.048206 kubelet[3329]: I1216 12:28:53.048126 3329 reconciler_common.go:299] "Volume detached for volume 
\"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2b09daed-aa23-4573-b09a-03aa103e9afe-cilium-config-path\") on node \"ip-172-31-21-37\" DevicePath \"\"" Dec 16 12:28:53.048206 kubelet[3329]: I1216 12:28:53.048168 3329 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9c5f0d5a-5dce-43bc-a6fe-ec3e3c012a15-host-proc-sys-net\") on node \"ip-172-31-21-37\" DevicePath \"\"" Dec 16 12:28:53.209254 systemd[1]: Removed slice kubepods-besteffort-pod2b09daed_aa23_4573_b09a_03aa103e9afe.slice - libcontainer container kubepods-besteffort-pod2b09daed_aa23_4573_b09a_03aa103e9afe.slice. Dec 16 12:28:53.237472 systemd[1]: Removed slice kubepods-burstable-pod9c5f0d5a_5dce_43bc_a6fe_ec3e3c012a15.slice - libcontainer container kubepods-burstable-pod9c5f0d5a_5dce_43bc_a6fe_ec3e3c012a15.slice. Dec 16 12:28:53.237707 systemd[1]: kubepods-burstable-pod9c5f0d5a_5dce_43bc_a6fe_ec3e3c012a15.slice: Consumed 14.785s CPU time, 127.5M memory peak, 128K read from disk, 12.9M written to disk. Dec 16 12:28:53.511233 kubelet[3329]: E1216 12:28:53.511023 3329 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 16 12:28:53.593306 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-38d4591418001ad9fb8edd2fbdb87075f05c7e92153f139003519a48744610ce-shm.mount: Deactivated successfully. Dec 16 12:28:53.593487 systemd[1]: var-lib-kubelet-pods-9c5f0d5a\x2d5dce\x2d43bc\x2da6fe\x2dec3e3c012a15-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddx2pm.mount: Deactivated successfully. Dec 16 12:28:53.593627 systemd[1]: var-lib-kubelet-pods-2b09daed\x2daa23\x2d4573\x2db09a\x2d03aa103e9afe-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d7ct7h.mount: Deactivated successfully. 
Dec 16 12:28:53.593762 systemd[1]: var-lib-kubelet-pods-9c5f0d5a\x2d5dce\x2d43bc\x2da6fe\x2dec3e3c012a15-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Dec 16 12:28:53.593885 systemd[1]: var-lib-kubelet-pods-9c5f0d5a\x2d5dce\x2d43bc\x2da6fe\x2dec3e3c012a15-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 16 12:28:54.337045 kubelet[3329]: I1216 12:28:54.336996 3329 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b09daed-aa23-4573-b09a-03aa103e9afe" path="/var/lib/kubelet/pods/2b09daed-aa23-4573-b09a-03aa103e9afe/volumes" Dec 16 12:28:54.338113 kubelet[3329]: I1216 12:28:54.338072 3329 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c5f0d5a-5dce-43bc-a6fe-ec3e3c012a15" path="/var/lib/kubelet/pods/9c5f0d5a-5dce-43bc-a6fe-ec3e3c012a15/volumes" Dec 16 12:28:54.390933 sshd[5197]: Connection closed by 139.178.89.65 port 48012 Dec 16 12:28:54.391918 sshd-session[5194]: pam_unix(sshd:session): session closed for user core Dec 16 12:28:54.399435 systemd[1]: sshd@25-172.31.21.37:22-139.178.89.65:48012.service: Deactivated successfully. Dec 16 12:28:54.403455 systemd[1]: session-26.scope: Deactivated successfully. Dec 16 12:28:54.405332 systemd[1]: session-26.scope: Consumed 1.094s CPU time, 21.6M memory peak. Dec 16 12:28:54.406763 systemd-logind[1971]: Session 26 logged out. Waiting for processes to exit. Dec 16 12:28:54.410676 systemd-logind[1971]: Removed session 26. Dec 16 12:28:54.431096 systemd[1]: Started sshd@26-172.31.21.37:22-139.178.89.65:48028.service - OpenSSH per-connection server daemon (139.178.89.65:48028). 
Dec 16 12:28:54.630500 sshd[5345]: Accepted publickey for core from 139.178.89.65 port 48028 ssh2: RSA SHA256:xUh8ykt9z5cbsZrCvFOBrVTYezXLnfIWheoQb+9aZNE Dec 16 12:28:54.632923 sshd-session[5345]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:28:54.641334 systemd-logind[1971]: New session 27 of user core. Dec 16 12:28:54.649547 systemd[1]: Started session-27.scope - Session 27 of User core. Dec 16 12:28:55.067072 ntpd[2229]: Deleting 10 lxc_health, [fe80::34eb:50ff:fef7:6c98%8]:123, stats: received=0, sent=0, dropped=0, active_time=84 secs Dec 16 12:28:55.067772 ntpd[2229]: 16 Dec 12:28:55 ntpd[2229]: Deleting 10 lxc_health, [fe80::34eb:50ff:fef7:6c98%8]:123, stats: received=0, sent=0, dropped=0, active_time=84 secs Dec 16 12:28:56.058364 sshd[5348]: Connection closed by 139.178.89.65 port 48028 Dec 16 12:28:56.059229 sshd-session[5345]: pam_unix(sshd:session): session closed for user core Dec 16 12:28:56.071155 systemd-logind[1971]: Session 27 logged out. Waiting for processes to exit. Dec 16 12:28:56.073845 systemd[1]: sshd@26-172.31.21.37:22-139.178.89.65:48028.service: Deactivated successfully. Dec 16 12:28:56.080376 systemd[1]: session-27.scope: Deactivated successfully. Dec 16 12:28:56.081221 systemd[1]: session-27.scope: Consumed 1.187s CPU time, 21.5M memory peak. Dec 16 12:28:56.103326 systemd-logind[1971]: Removed session 27. Dec 16 12:28:56.107301 systemd[1]: Started sshd@27-172.31.21.37:22-139.178.89.65:48040.service - OpenSSH per-connection server daemon (139.178.89.65:48040). 
Dec 16 12:28:56.157770 kubelet[3329]: I1216 12:28:56.157643 3329 memory_manager.go:355] "RemoveStaleState removing state" podUID="9c5f0d5a-5dce-43bc-a6fe-ec3e3c012a15" containerName="cilium-agent" Dec 16 12:28:56.157770 kubelet[3329]: I1216 12:28:56.157711 3329 memory_manager.go:355] "RemoveStaleState removing state" podUID="2b09daed-aa23-4573-b09a-03aa103e9afe" containerName="cilium-operator" Dec 16 12:28:56.185518 systemd[1]: Created slice kubepods-burstable-pod80ceff86_26a7_4175_9136_c15d8a778653.slice - libcontainer container kubepods-burstable-pod80ceff86_26a7_4175_9136_c15d8a778653.slice. Dec 16 12:28:56.203516 kubelet[3329]: I1216 12:28:56.203443 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/80ceff86-26a7-4175-9136-c15d8a778653-xtables-lock\") pod \"cilium-lj9vj\" (UID: \"80ceff86-26a7-4175-9136-c15d8a778653\") " pod="kube-system/cilium-lj9vj" Dec 16 12:28:56.203678 kubelet[3329]: I1216 12:28:56.203520 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/80ceff86-26a7-4175-9136-c15d8a778653-hostproc\") pod \"cilium-lj9vj\" (UID: \"80ceff86-26a7-4175-9136-c15d8a778653\") " pod="kube-system/cilium-lj9vj" Dec 16 12:28:56.203678 kubelet[3329]: I1216 12:28:56.203563 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/80ceff86-26a7-4175-9136-c15d8a778653-lib-modules\") pod \"cilium-lj9vj\" (UID: \"80ceff86-26a7-4175-9136-c15d8a778653\") " pod="kube-system/cilium-lj9vj" Dec 16 12:28:56.203678 kubelet[3329]: I1216 12:28:56.203601 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/80ceff86-26a7-4175-9136-c15d8a778653-etc-cni-netd\") pod \"cilium-lj9vj\" 
(UID: \"80ceff86-26a7-4175-9136-c15d8a778653\") " pod="kube-system/cilium-lj9vj" Dec 16 12:28:56.203678 kubelet[3329]: I1216 12:28:56.203636 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/80ceff86-26a7-4175-9136-c15d8a778653-cilium-config-path\") pod \"cilium-lj9vj\" (UID: \"80ceff86-26a7-4175-9136-c15d8a778653\") " pod="kube-system/cilium-lj9vj" Dec 16 12:28:56.203678 kubelet[3329]: I1216 12:28:56.203671 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/80ceff86-26a7-4175-9136-c15d8a778653-hubble-tls\") pod \"cilium-lj9vj\" (UID: \"80ceff86-26a7-4175-9136-c15d8a778653\") " pod="kube-system/cilium-lj9vj" Dec 16 12:28:56.203915 kubelet[3329]: I1216 12:28:56.203710 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/80ceff86-26a7-4175-9136-c15d8a778653-cilium-run\") pod \"cilium-lj9vj\" (UID: \"80ceff86-26a7-4175-9136-c15d8a778653\") " pod="kube-system/cilium-lj9vj" Dec 16 12:28:56.203915 kubelet[3329]: I1216 12:28:56.203744 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/80ceff86-26a7-4175-9136-c15d8a778653-cilium-cgroup\") pod \"cilium-lj9vj\" (UID: \"80ceff86-26a7-4175-9136-c15d8a778653\") " pod="kube-system/cilium-lj9vj" Dec 16 12:28:56.203915 kubelet[3329]: I1216 12:28:56.203780 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/80ceff86-26a7-4175-9136-c15d8a778653-cilium-ipsec-secrets\") pod \"cilium-lj9vj\" (UID: \"80ceff86-26a7-4175-9136-c15d8a778653\") " pod="kube-system/cilium-lj9vj" Dec 16 12:28:56.203915 kubelet[3329]: I1216 
12:28:56.203818 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/80ceff86-26a7-4175-9136-c15d8a778653-host-proc-sys-net\") pod \"cilium-lj9vj\" (UID: \"80ceff86-26a7-4175-9136-c15d8a778653\") " pod="kube-system/cilium-lj9vj"
Dec 16 12:28:56.203915 kubelet[3329]: I1216 12:28:56.203854 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/80ceff86-26a7-4175-9136-c15d8a778653-host-proc-sys-kernel\") pod \"cilium-lj9vj\" (UID: \"80ceff86-26a7-4175-9136-c15d8a778653\") " pod="kube-system/cilium-lj9vj"
Dec 16 12:28:56.204146 kubelet[3329]: I1216 12:28:56.203892 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/80ceff86-26a7-4175-9136-c15d8a778653-clustermesh-secrets\") pod \"cilium-lj9vj\" (UID: \"80ceff86-26a7-4175-9136-c15d8a778653\") " pod="kube-system/cilium-lj9vj"
Dec 16 12:28:56.204146 kubelet[3329]: I1216 12:28:56.203928 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qt7x2\" (UniqueName: \"kubernetes.io/projected/80ceff86-26a7-4175-9136-c15d8a778653-kube-api-access-qt7x2\") pod \"cilium-lj9vj\" (UID: \"80ceff86-26a7-4175-9136-c15d8a778653\") " pod="kube-system/cilium-lj9vj"
Dec 16 12:28:56.204146 kubelet[3329]: I1216 12:28:56.203972 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/80ceff86-26a7-4175-9136-c15d8a778653-bpf-maps\") pod \"cilium-lj9vj\" (UID: \"80ceff86-26a7-4175-9136-c15d8a778653\") " pod="kube-system/cilium-lj9vj"
Dec 16 12:28:56.204146 kubelet[3329]: I1216 12:28:56.204014 3329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/80ceff86-26a7-4175-9136-c15d8a778653-cni-path\") pod \"cilium-lj9vj\" (UID: \"80ceff86-26a7-4175-9136-c15d8a778653\") " pod="kube-system/cilium-lj9vj"
Dec 16 12:28:56.347096 sshd[5359]: Accepted publickey for core from 139.178.89.65 port 48040 ssh2: RSA SHA256:xUh8ykt9z5cbsZrCvFOBrVTYezXLnfIWheoQb+9aZNE
Dec 16 12:28:56.355634 sshd-session[5359]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 12:28:56.384379 systemd-logind[1971]: New session 28 of user core.
Dec 16 12:28:56.390829 systemd[1]: Started session-28.scope - Session 28 of User core.
Dec 16 12:28:56.503574 containerd[2002]: time="2025-12-16T12:28:56.503495162Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lj9vj,Uid:80ceff86-26a7-4175-9136-c15d8a778653,Namespace:kube-system,Attempt:0,}"
Dec 16 12:28:56.518921 sshd[5366]: Connection closed by 139.178.89.65 port 48040
Dec 16 12:28:56.523825 sshd-session[5359]: pam_unix(sshd:session): session closed for user core
Dec 16 12:28:56.539832 systemd[1]: sshd@27-172.31.21.37:22-139.178.89.65:48040.service: Deactivated successfully.
Dec 16 12:28:56.546861 systemd[1]: session-28.scope: Deactivated successfully.
Dec 16 12:28:56.552637 systemd-logind[1971]: Session 28 logged out. Waiting for processes to exit.
Dec 16 12:28:56.554910 containerd[2002]: time="2025-12-16T12:28:56.554616483Z" level=info msg="connecting to shim 63fbefc0c56f4b451d456773d2c7e88d4dfbe8389923f9fa293e08e1c867ab49" address="unix:///run/containerd/s/6fc4b6977a9e1bf7fcea6d71b56d674c8155f2ae5a8bb18f7702754263069ff0" namespace=k8s.io protocol=ttrpc version=3
Dec 16 12:28:56.575765 systemd[1]: Started sshd@28-172.31.21.37:22-139.178.89.65:48056.service - OpenSSH per-connection server daemon (139.178.89.65:48056).
Dec 16 12:28:56.578759 systemd-logind[1971]: Removed session 28.
Dec 16 12:28:56.607565 systemd[1]: Started cri-containerd-63fbefc0c56f4b451d456773d2c7e88d4dfbe8389923f9fa293e08e1c867ab49.scope - libcontainer container 63fbefc0c56f4b451d456773d2c7e88d4dfbe8389923f9fa293e08e1c867ab49.
Dec 16 12:28:56.671940 containerd[2002]: time="2025-12-16T12:28:56.671877164Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lj9vj,Uid:80ceff86-26a7-4175-9136-c15d8a778653,Namespace:kube-system,Attempt:0,} returns sandbox id \"63fbefc0c56f4b451d456773d2c7e88d4dfbe8389923f9fa293e08e1c867ab49\""
Dec 16 12:28:56.677505 containerd[2002]: time="2025-12-16T12:28:56.677444167Z" level=info msg="CreateContainer within sandbox \"63fbefc0c56f4b451d456773d2c7e88d4dfbe8389923f9fa293e08e1c867ab49\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Dec 16 12:28:56.695257 containerd[2002]: time="2025-12-16T12:28:56.695184088Z" level=info msg="Container f4a52d616141faacb089ca4c2ade363f8292eb9680c63125f66fb7a52ebf36d7: CDI devices from CRI Config.CDIDevices: []"
Dec 16 12:28:56.708072 containerd[2002]: time="2025-12-16T12:28:56.707991729Z" level=info msg="CreateContainer within sandbox \"63fbefc0c56f4b451d456773d2c7e88d4dfbe8389923f9fa293e08e1c867ab49\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f4a52d616141faacb089ca4c2ade363f8292eb9680c63125f66fb7a52ebf36d7\""
Dec 16 12:28:56.709202 containerd[2002]: time="2025-12-16T12:28:56.709137329Z" level=info msg="StartContainer for \"f4a52d616141faacb089ca4c2ade363f8292eb9680c63125f66fb7a52ebf36d7\""
Dec 16 12:28:56.711413 containerd[2002]: time="2025-12-16T12:28:56.711332627Z" level=info msg="connecting to shim f4a52d616141faacb089ca4c2ade363f8292eb9680c63125f66fb7a52ebf36d7" address="unix:///run/containerd/s/6fc4b6977a9e1bf7fcea6d71b56d674c8155f2ae5a8bb18f7702754263069ff0" protocol=ttrpc version=3
Dec 16 12:28:56.744589 systemd[1]: Started cri-containerd-f4a52d616141faacb089ca4c2ade363f8292eb9680c63125f66fb7a52ebf36d7.scope - libcontainer container f4a52d616141faacb089ca4c2ade363f8292eb9680c63125f66fb7a52ebf36d7.
Dec 16 12:28:56.797073 sshd[5399]: Accepted publickey for core from 139.178.89.65 port 48056 ssh2: RSA SHA256:xUh8ykt9z5cbsZrCvFOBrVTYezXLnfIWheoQb+9aZNE
Dec 16 12:28:56.803439 sshd-session[5399]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 12:28:56.812566 containerd[2002]: time="2025-12-16T12:28:56.812437194Z" level=info msg="StartContainer for \"f4a52d616141faacb089ca4c2ade363f8292eb9680c63125f66fb7a52ebf36d7\" returns successfully"
Dec 16 12:28:56.821375 systemd-logind[1971]: New session 29 of user core.
Dec 16 12:28:56.828848 systemd[1]: Started session-29.scope - Session 29 of User core.
Dec 16 12:28:56.834468 systemd[1]: cri-containerd-f4a52d616141faacb089ca4c2ade363f8292eb9680c63125f66fb7a52ebf36d7.scope: Deactivated successfully.
Dec 16 12:28:56.842753 containerd[2002]: time="2025-12-16T12:28:56.842525407Z" level=info msg="received container exit event container_id:\"f4a52d616141faacb089ca4c2ade363f8292eb9680c63125f66fb7a52ebf36d7\" id:\"f4a52d616141faacb089ca4c2ade363f8292eb9680c63125f66fb7a52ebf36d7\" pid:5436 exited_at:{seconds:1765888136 nanos:841791985}"
Dec 16 12:28:56.948449 containerd[2002]: time="2025-12-16T12:28:56.948391710Z" level=info msg="CreateContainer within sandbox \"63fbefc0c56f4b451d456773d2c7e88d4dfbe8389923f9fa293e08e1c867ab49\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Dec 16 12:28:56.977490 containerd[2002]: time="2025-12-16T12:28:56.977418905Z" level=info msg="Container 66621c3d5f3c6adcf8e635993950f197e17738fce201f49f9b3f08a9df299825: CDI devices from CRI Config.CDIDevices: []"
Dec 16 12:28:56.993127 containerd[2002]: time="2025-12-16T12:28:56.992252150Z" level=info msg="CreateContainer within sandbox \"63fbefc0c56f4b451d456773d2c7e88d4dfbe8389923f9fa293e08e1c867ab49\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"66621c3d5f3c6adcf8e635993950f197e17738fce201f49f9b3f08a9df299825\""
Dec 16 12:28:56.995872 containerd[2002]: time="2025-12-16T12:28:56.995745560Z" level=info msg="StartContainer for \"66621c3d5f3c6adcf8e635993950f197e17738fce201f49f9b3f08a9df299825\""
Dec 16 12:28:57.002165 containerd[2002]: time="2025-12-16T12:28:57.001188372Z" level=info msg="connecting to shim 66621c3d5f3c6adcf8e635993950f197e17738fce201f49f9b3f08a9df299825" address="unix:///run/containerd/s/6fc4b6977a9e1bf7fcea6d71b56d674c8155f2ae5a8bb18f7702754263069ff0" protocol=ttrpc version=3
Dec 16 12:28:57.062661 systemd[1]: Started cri-containerd-66621c3d5f3c6adcf8e635993950f197e17738fce201f49f9b3f08a9df299825.scope - libcontainer container 66621c3d5f3c6adcf8e635993950f197e17738fce201f49f9b3f08a9df299825.
Dec 16 12:28:57.189911 containerd[2002]: time="2025-12-16T12:28:57.189787146Z" level=info msg="StartContainer for \"66621c3d5f3c6adcf8e635993950f197e17738fce201f49f9b3f08a9df299825\" returns successfully"
Dec 16 12:28:57.204488 systemd[1]: cri-containerd-66621c3d5f3c6adcf8e635993950f197e17738fce201f49f9b3f08a9df299825.scope: Deactivated successfully.
Dec 16 12:28:57.209711 containerd[2002]: time="2025-12-16T12:28:57.209475053Z" level=info msg="received container exit event container_id:\"66621c3d5f3c6adcf8e635993950f197e17738fce201f49f9b3f08a9df299825\" id:\"66621c3d5f3c6adcf8e635993950f197e17738fce201f49f9b3f08a9df299825\" pid:5488 exited_at:{seconds:1765888137 nanos:208179449}"
Dec 16 12:28:57.955842 containerd[2002]: time="2025-12-16T12:28:57.955633911Z" level=info msg="CreateContainer within sandbox \"63fbefc0c56f4b451d456773d2c7e88d4dfbe8389923f9fa293e08e1c867ab49\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Dec 16 12:28:57.979872 containerd[2002]: time="2025-12-16T12:28:57.978470836Z" level=info msg="Container 935889a73f2fb83c14db8beed4e73b5509d59f2690f87444e33fa88e06bf9360: CDI devices from CRI Config.CDIDevices: []"
Dec 16 12:28:58.003278 containerd[2002]: time="2025-12-16T12:28:58.003172869Z" level=info msg="CreateContainer within sandbox \"63fbefc0c56f4b451d456773d2c7e88d4dfbe8389923f9fa293e08e1c867ab49\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"935889a73f2fb83c14db8beed4e73b5509d59f2690f87444e33fa88e06bf9360\""
Dec 16 12:28:58.005505 containerd[2002]: time="2025-12-16T12:28:58.005440982Z" level=info msg="StartContainer for \"935889a73f2fb83c14db8beed4e73b5509d59f2690f87444e33fa88e06bf9360\""
Dec 16 12:28:58.009455 containerd[2002]: time="2025-12-16T12:28:58.009390164Z" level=info msg="connecting to shim 935889a73f2fb83c14db8beed4e73b5509d59f2690f87444e33fa88e06bf9360" address="unix:///run/containerd/s/6fc4b6977a9e1bf7fcea6d71b56d674c8155f2ae5a8bb18f7702754263069ff0" protocol=ttrpc version=3
Dec 16 12:28:58.044583 systemd[1]: Started cri-containerd-935889a73f2fb83c14db8beed4e73b5509d59f2690f87444e33fa88e06bf9360.scope - libcontainer container 935889a73f2fb83c14db8beed4e73b5509d59f2690f87444e33fa88e06bf9360.
Dec 16 12:28:58.172257 systemd[1]: cri-containerd-935889a73f2fb83c14db8beed4e73b5509d59f2690f87444e33fa88e06bf9360.scope: Deactivated successfully.
Dec 16 12:28:58.182913 containerd[2002]: time="2025-12-16T12:28:58.182749974Z" level=info msg="StartContainer for \"935889a73f2fb83c14db8beed4e73b5509d59f2690f87444e33fa88e06bf9360\" returns successfully"
Dec 16 12:28:58.183084 containerd[2002]: time="2025-12-16T12:28:58.183037001Z" level=info msg="received container exit event container_id:\"935889a73f2fb83c14db8beed4e73b5509d59f2690f87444e33fa88e06bf9360\" id:\"935889a73f2fb83c14db8beed4e73b5509d59f2690f87444e33fa88e06bf9360\" pid:5531 exited_at:{seconds:1765888138 nanos:182710798}"
Dec 16 12:28:58.228686 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-935889a73f2fb83c14db8beed4e73b5509d59f2690f87444e33fa88e06bf9360-rootfs.mount: Deactivated successfully.
Dec 16 12:28:58.304761 containerd[2002]: time="2025-12-16T12:28:58.304674735Z" level=info msg="StopPodSandbox for \"de8757e5b3e6cb732c739be970b8512617cad57213f2cf882b8af0c4de105fa2\""
Dec 16 12:28:58.305015 containerd[2002]: time="2025-12-16T12:28:58.304952577Z" level=info msg="TearDown network for sandbox \"de8757e5b3e6cb732c739be970b8512617cad57213f2cf882b8af0c4de105fa2\" successfully"
Dec 16 12:28:58.305083 containerd[2002]: time="2025-12-16T12:28:58.305014732Z" level=info msg="StopPodSandbox for \"de8757e5b3e6cb732c739be970b8512617cad57213f2cf882b8af0c4de105fa2\" returns successfully"
Dec 16 12:28:58.305710 containerd[2002]: time="2025-12-16T12:28:58.305671545Z" level=info msg="RemovePodSandbox for \"de8757e5b3e6cb732c739be970b8512617cad57213f2cf882b8af0c4de105fa2\""
Dec 16 12:28:58.307300 containerd[2002]: time="2025-12-16T12:28:58.305975729Z" level=info msg="Forcibly stopping sandbox \"de8757e5b3e6cb732c739be970b8512617cad57213f2cf882b8af0c4de105fa2\""
Dec 16 12:28:58.307300 containerd[2002]: time="2025-12-16T12:28:58.306127232Z" level=info msg="TearDown network for sandbox \"de8757e5b3e6cb732c739be970b8512617cad57213f2cf882b8af0c4de105fa2\" successfully"
Dec 16 12:28:58.308191 containerd[2002]: time="2025-12-16T12:28:58.308148034Z" level=info msg="Ensure that sandbox de8757e5b3e6cb732c739be970b8512617cad57213f2cf882b8af0c4de105fa2 in task-service has been cleanup successfully"
Dec 16 12:28:58.314646 containerd[2002]: time="2025-12-16T12:28:58.314595737Z" level=info msg="RemovePodSandbox \"de8757e5b3e6cb732c739be970b8512617cad57213f2cf882b8af0c4de105fa2\" returns successfully"
Dec 16 12:28:58.315961 containerd[2002]: time="2025-12-16T12:28:58.315902133Z" level=info msg="StopPodSandbox for \"38d4591418001ad9fb8edd2fbdb87075f05c7e92153f139003519a48744610ce\""
Dec 16 12:28:58.316142 containerd[2002]: time="2025-12-16T12:28:58.316093341Z" level=info msg="TearDown network for sandbox \"38d4591418001ad9fb8edd2fbdb87075f05c7e92153f139003519a48744610ce\" successfully"
Dec 16 12:28:58.316210 containerd[2002]: time="2025-12-16T12:28:58.316138568Z" level=info msg="StopPodSandbox for \"38d4591418001ad9fb8edd2fbdb87075f05c7e92153f139003519a48744610ce\" returns successfully"
Dec 16 12:28:58.317173 containerd[2002]: time="2025-12-16T12:28:58.317088626Z" level=info msg="RemovePodSandbox for \"38d4591418001ad9fb8edd2fbdb87075f05c7e92153f139003519a48744610ce\""
Dec 16 12:28:58.317336 containerd[2002]: time="2025-12-16T12:28:58.317178227Z" level=info msg="Forcibly stopping sandbox \"38d4591418001ad9fb8edd2fbdb87075f05c7e92153f139003519a48744610ce\""
Dec 16 12:28:58.319285 containerd[2002]: time="2025-12-16T12:28:58.317436452Z" level=info msg="TearDown network for sandbox \"38d4591418001ad9fb8edd2fbdb87075f05c7e92153f139003519a48744610ce\" successfully"
Dec 16 12:28:58.320549 containerd[2002]: time="2025-12-16T12:28:58.320496229Z" level=info msg="Ensure that sandbox 38d4591418001ad9fb8edd2fbdb87075f05c7e92153f139003519a48744610ce in task-service has been cleanup successfully"
Dec 16 12:28:58.327845 containerd[2002]: time="2025-12-16T12:28:58.327771769Z" level=info msg="RemovePodSandbox \"38d4591418001ad9fb8edd2fbdb87075f05c7e92153f139003519a48744610ce\" returns successfully"
Dec 16 12:28:58.512895 kubelet[3329]: E1216 12:28:58.512738 3329 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Dec 16 12:28:58.967891 containerd[2002]: time="2025-12-16T12:28:58.967830063Z" level=info msg="CreateContainer within sandbox \"63fbefc0c56f4b451d456773d2c7e88d4dfbe8389923f9fa293e08e1c867ab49\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Dec 16 12:28:58.988629 containerd[2002]: time="2025-12-16T12:28:58.988558458Z" level=info msg="Container 74523f0d2c4c4ff1c26076239879426da97ebe5f6c1a1f1082ad91ca2f2f99a3: CDI devices from CRI Config.CDIDevices: []"
Dec 16 12:28:59.015628 containerd[2002]: time="2025-12-16T12:28:59.015412002Z" level=info msg="CreateContainer within sandbox \"63fbefc0c56f4b451d456773d2c7e88d4dfbe8389923f9fa293e08e1c867ab49\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"74523f0d2c4c4ff1c26076239879426da97ebe5f6c1a1f1082ad91ca2f2f99a3\""
Dec 16 12:28:59.017776 containerd[2002]: time="2025-12-16T12:28:59.017697597Z" level=info msg="StartContainer for \"74523f0d2c4c4ff1c26076239879426da97ebe5f6c1a1f1082ad91ca2f2f99a3\""
Dec 16 12:28:59.022543 containerd[2002]: time="2025-12-16T12:28:59.022461866Z" level=info msg="connecting to shim 74523f0d2c4c4ff1c26076239879426da97ebe5f6c1a1f1082ad91ca2f2f99a3" address="unix:///run/containerd/s/6fc4b6977a9e1bf7fcea6d71b56d674c8155f2ae5a8bb18f7702754263069ff0" protocol=ttrpc version=3
Dec 16 12:28:59.083638 systemd[1]: Started cri-containerd-74523f0d2c4c4ff1c26076239879426da97ebe5f6c1a1f1082ad91ca2f2f99a3.scope - libcontainer container 74523f0d2c4c4ff1c26076239879426da97ebe5f6c1a1f1082ad91ca2f2f99a3.
Dec 16 12:28:59.166798 systemd[1]: cri-containerd-74523f0d2c4c4ff1c26076239879426da97ebe5f6c1a1f1082ad91ca2f2f99a3.scope: Deactivated successfully.
Dec 16 12:28:59.178013 containerd[2002]: time="2025-12-16T12:28:59.177821860Z" level=info msg="received container exit event container_id:\"74523f0d2c4c4ff1c26076239879426da97ebe5f6c1a1f1082ad91ca2f2f99a3\" id:\"74523f0d2c4c4ff1c26076239879426da97ebe5f6c1a1f1082ad91ca2f2f99a3\" pid:5572 exited_at:{seconds:1765888139 nanos:175177911}"
Dec 16 12:28:59.188115 containerd[2002]: time="2025-12-16T12:28:59.186595725Z" level=info msg="StartContainer for \"74523f0d2c4c4ff1c26076239879426da97ebe5f6c1a1f1082ad91ca2f2f99a3\" returns successfully"
Dec 16 12:28:59.239803 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-74523f0d2c4c4ff1c26076239879426da97ebe5f6c1a1f1082ad91ca2f2f99a3-rootfs.mount: Deactivated successfully.
Dec 16 12:28:59.979288 containerd[2002]: time="2025-12-16T12:28:59.979209135Z" level=info msg="CreateContainer within sandbox \"63fbefc0c56f4b451d456773d2c7e88d4dfbe8389923f9fa293e08e1c867ab49\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 16 12:29:00.004048 containerd[2002]: time="2025-12-16T12:29:00.002593149Z" level=info msg="Container df2c1ece407de75620ce8c376fd4117bfc6945cd7cb3a1ce995c0e96a3d83b97: CDI devices from CRI Config.CDIDevices: []"
Dec 16 12:29:00.024758 containerd[2002]: time="2025-12-16T12:29:00.024683529Z" level=info msg="CreateContainer within sandbox \"63fbefc0c56f4b451d456773d2c7e88d4dfbe8389923f9fa293e08e1c867ab49\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"df2c1ece407de75620ce8c376fd4117bfc6945cd7cb3a1ce995c0e96a3d83b97\""
Dec 16 12:29:00.027305 containerd[2002]: time="2025-12-16T12:29:00.025944735Z" level=info msg="StartContainer for \"df2c1ece407de75620ce8c376fd4117bfc6945cd7cb3a1ce995c0e96a3d83b97\""
Dec 16 12:29:00.028045 containerd[2002]: time="2025-12-16T12:29:00.027995924Z" level=info msg="connecting to shim df2c1ece407de75620ce8c376fd4117bfc6945cd7cb3a1ce995c0e96a3d83b97" address="unix:///run/containerd/s/6fc4b6977a9e1bf7fcea6d71b56d674c8155f2ae5a8bb18f7702754263069ff0" protocol=ttrpc version=3
Dec 16 12:29:00.070708 systemd[1]: Started cri-containerd-df2c1ece407de75620ce8c376fd4117bfc6945cd7cb3a1ce995c0e96a3d83b97.scope - libcontainer container df2c1ece407de75620ce8c376fd4117bfc6945cd7cb3a1ce995c0e96a3d83b97.
Dec 16 12:29:00.164654 containerd[2002]: time="2025-12-16T12:29:00.164596940Z" level=info msg="StartContainer for \"df2c1ece407de75620ce8c376fd4117bfc6945cd7cb3a1ce995c0e96a3d83b97\" returns successfully"
Dec 16 12:29:00.299719 kubelet[3329]: I1216 12:29:00.299549 3329 setters.go:602] "Node became not ready" node="ip-172-31-21-37" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T12:29:00Z","lastTransitionTime":"2025-12-16T12:29:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Dec 16 12:29:01.214350 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Dec 16 12:29:05.576422 systemd-networkd[1868]: lxc_health: Link UP
Dec 16 12:29:05.591208 (udev-worker)[6149]: Network interface NamePolicy= disabled on kernel command line.
Dec 16 12:29:05.596460 systemd-networkd[1868]: lxc_health: Gained carrier
Dec 16 12:29:06.553532 kubelet[3329]: I1216 12:29:06.552481 3329 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-lj9vj" podStartSLOduration=10.552460492 podStartE2EDuration="10.552460492s" podCreationTimestamp="2025-12-16 12:28:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:29:01.035659704 +0000 UTC m=+123.002740576" watchObservedRunningTime="2025-12-16 12:29:06.552460492 +0000 UTC m=+128.519541340"
Dec 16 12:29:07.127846 systemd-networkd[1868]: lxc_health: Gained IPv6LL
Dec 16 12:29:08.486615 kubelet[3329]: E1216 12:29:08.486413 3329 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:59276->127.0.0.1:43673: write tcp 127.0.0.1:59276->127.0.0.1:43673: write: broken pipe
Dec 16 12:29:10.067099 ntpd[2229]: Listen normally on 13 lxc_health [fe80::6cd9:c2ff:fe85:f65b%14]:123
Dec 16 12:29:10.068471 ntpd[2229]: 16 Dec 12:29:10 ntpd[2229]: Listen normally on 13 lxc_health [fe80::6cd9:c2ff:fe85:f65b%14]:123
Dec 16 12:29:10.730517 kubelet[3329]: E1216 12:29:10.730459 3329 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:59288->127.0.0.1:43673: write tcp 127.0.0.1:59288->127.0.0.1:43673: write: broken pipe
Dec 16 12:29:13.041947 sshd[5456]: Connection closed by 139.178.89.65 port 48056
Dec 16 12:29:13.042581 sshd-session[5399]: pam_unix(sshd:session): session closed for user core
Dec 16 12:29:13.052082 systemd-logind[1971]: Session 29 logged out. Waiting for processes to exit.
Dec 16 12:29:13.052899 systemd[1]: sshd@28-172.31.21.37:22-139.178.89.65:48056.service: Deactivated successfully.
Dec 16 12:29:13.062104 systemd[1]: session-29.scope: Deactivated successfully.
Dec 16 12:29:13.071659 systemd-logind[1971]: Removed session 29.
Dec 16 12:29:27.322443 systemd[1]: cri-containerd-45270984168d05cbb7deff63bdf599407ba5525b3ad06832aef39e4f4a99a887.scope: Deactivated successfully.
Dec 16 12:29:27.324225 systemd[1]: cri-containerd-45270984168d05cbb7deff63bdf599407ba5525b3ad06832aef39e4f4a99a887.scope: Consumed 5.966s CPU time, 54.7M memory peak.
Dec 16 12:29:27.332454 containerd[2002]: time="2025-12-16T12:29:27.332389140Z" level=info msg="received container exit event container_id:\"45270984168d05cbb7deff63bdf599407ba5525b3ad06832aef39e4f4a99a887\" id:\"45270984168d05cbb7deff63bdf599407ba5525b3ad06832aef39e4f4a99a887\" pid:3159 exit_status:1 exited_at:{seconds:1765888167 nanos:331771996}"
Dec 16 12:29:27.378121 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-45270984168d05cbb7deff63bdf599407ba5525b3ad06832aef39e4f4a99a887-rootfs.mount: Deactivated successfully.
Dec 16 12:29:28.069301 kubelet[3329]: I1216 12:29:28.069156 3329 scope.go:117] "RemoveContainer" containerID="45270984168d05cbb7deff63bdf599407ba5525b3ad06832aef39e4f4a99a887"
Dec 16 12:29:28.073980 containerd[2002]: time="2025-12-16T12:29:28.073317885Z" level=info msg="CreateContainer within sandbox \"76247fe4d867b53b4e26a48d5c51832f9571e6453d0fac5238ef8957401358e4\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Dec 16 12:29:28.090224 containerd[2002]: time="2025-12-16T12:29:28.090151786Z" level=info msg="Container f18e21715df22e6378d847251d2b5d3245ca00ce55363012197805f204cc0a25: CDI devices from CRI Config.CDIDevices: []"
Dec 16 12:29:28.107928 containerd[2002]: time="2025-12-16T12:29:28.107840178Z" level=info msg="CreateContainer within sandbox \"76247fe4d867b53b4e26a48d5c51832f9571e6453d0fac5238ef8957401358e4\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"f18e21715df22e6378d847251d2b5d3245ca00ce55363012197805f204cc0a25\""
Dec 16 12:29:28.109458 containerd[2002]: time="2025-12-16T12:29:28.108922867Z" level=info msg="StartContainer for \"f18e21715df22e6378d847251d2b5d3245ca00ce55363012197805f204cc0a25\""
Dec 16 12:29:28.111690 containerd[2002]: time="2025-12-16T12:29:28.111640509Z" level=info msg="connecting to shim f18e21715df22e6378d847251d2b5d3245ca00ce55363012197805f204cc0a25" address="unix:///run/containerd/s/a980d067614b00757700226061aeb061546ab248cc43ee2eab33608d7698f97a" protocol=ttrpc version=3
Dec 16 12:29:28.153583 systemd[1]: Started cri-containerd-f18e21715df22e6378d847251d2b5d3245ca00ce55363012197805f204cc0a25.scope - libcontainer container f18e21715df22e6378d847251d2b5d3245ca00ce55363012197805f204cc0a25.
Dec 16 12:29:28.248050 containerd[2002]: time="2025-12-16T12:29:28.247968256Z" level=info msg="StartContainer for \"f18e21715df22e6378d847251d2b5d3245ca00ce55363012197805f204cc0a25\" returns successfully"
Dec 16 12:29:30.565384 kubelet[3329]: E1216 12:29:30.564746 3329 controller.go:195] "Failed to update lease" err="Put \"https://172.31.21.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-37?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Dec 16 12:29:32.642544 systemd[1]: cri-containerd-fbd36fd60dccef25a57e876fba68fceb94e141b42be5a8e1f0c63c3e95f6dc00.scope: Deactivated successfully.
Dec 16 12:29:32.643072 systemd[1]: cri-containerd-fbd36fd60dccef25a57e876fba68fceb94e141b42be5a8e1f0c63c3e95f6dc00.scope: Consumed 4.742s CPU time, 20.2M memory peak.
Dec 16 12:29:32.648560 containerd[2002]: time="2025-12-16T12:29:32.648367895Z" level=info msg="received container exit event container_id:\"fbd36fd60dccef25a57e876fba68fceb94e141b42be5a8e1f0c63c3e95f6dc00\" id:\"fbd36fd60dccef25a57e876fba68fceb94e141b42be5a8e1f0c63c3e95f6dc00\" pid:3188 exit_status:1 exited_at:{seconds:1765888172 nanos:647662722}"
Dec 16 12:29:32.693952 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fbd36fd60dccef25a57e876fba68fceb94e141b42be5a8e1f0c63c3e95f6dc00-rootfs.mount: Deactivated successfully.
Dec 16 12:29:33.093389 kubelet[3329]: I1216 12:29:33.092607 3329 scope.go:117] "RemoveContainer" containerID="fbd36fd60dccef25a57e876fba68fceb94e141b42be5a8e1f0c63c3e95f6dc00"
Dec 16 12:29:33.095897 containerd[2002]: time="2025-12-16T12:29:33.095811936Z" level=info msg="CreateContainer within sandbox \"9452a11ea11dafbd28999b93c325f9977cc1d2b162134fca0a43958852f5576d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Dec 16 12:29:33.116411 containerd[2002]: time="2025-12-16T12:29:33.113782553Z" level=info msg="Container 4005a87a4e38df10cc538a2e118f08167061b5628d9c00ad8ab0ff1c4d490b9f: CDI devices from CRI Config.CDIDevices: []"
Dec 16 12:29:33.123030 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2804233597.mount: Deactivated successfully.
Dec 16 12:29:33.134673 containerd[2002]: time="2025-12-16T12:29:33.134610262Z" level=info msg="CreateContainer within sandbox \"9452a11ea11dafbd28999b93c325f9977cc1d2b162134fca0a43958852f5576d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"4005a87a4e38df10cc538a2e118f08167061b5628d9c00ad8ab0ff1c4d490b9f\""
Dec 16 12:29:33.135630 containerd[2002]: time="2025-12-16T12:29:33.135570874Z" level=info msg="StartContainer for \"4005a87a4e38df10cc538a2e118f08167061b5628d9c00ad8ab0ff1c4d490b9f\""
Dec 16 12:29:33.138523 containerd[2002]: time="2025-12-16T12:29:33.138469434Z" level=info msg="connecting to shim 4005a87a4e38df10cc538a2e118f08167061b5628d9c00ad8ab0ff1c4d490b9f" address="unix:///run/containerd/s/ad0c0a56ce30c24424b1496f58951a180ff5d6e73db43c9d695a59cfabb5adb8" protocol=ttrpc version=3
Dec 16 12:29:33.182910 systemd[1]: Started cri-containerd-4005a87a4e38df10cc538a2e118f08167061b5628d9c00ad8ab0ff1c4d490b9f.scope - libcontainer container 4005a87a4e38df10cc538a2e118f08167061b5628d9c00ad8ab0ff1c4d490b9f.
Dec 16 12:29:33.269033 containerd[2002]: time="2025-12-16T12:29:33.268979674Z" level=info msg="StartContainer for \"4005a87a4e38df10cc538a2e118f08167061b5628d9c00ad8ab0ff1c4d490b9f\" returns successfully"
Dec 16 12:29:40.565322 kubelet[3329]: E1216 12:29:40.565211 3329 controller.go:195] "Failed to update lease" err="Put \"https://172.31.21.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-37?timeout=10s\": context deadline exceeded"