Jan 23 23:55:49.233073 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083] Jan 23 23:55:49.233122 kernel: Linux version 6.6.119-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Jan 23 22:26:47 -00 2026 Jan 23 23:55:49.233147 kernel: KASLR disabled due to lack of seed Jan 23 23:55:49.233164 kernel: efi: EFI v2.7 by EDK II Jan 23 23:55:49.233179 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7b001a98 MEMRESERVE=0x7852ee18 Jan 23 23:55:49.233195 kernel: ACPI: Early table checksum verification disabled Jan 23 23:55:49.233213 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON) Jan 23 23:55:49.233228 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013) Jan 23 23:55:49.233245 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001) Jan 23 23:55:49.233260 kernel: ACPI: DSDT 0x0000000078640000 0013D2 (v02 AMAZON AMZNDSDT 00000001 AMZN 00000001) Jan 23 23:55:49.233281 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001) Jan 23 23:55:49.233296 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001) Jan 23 23:55:49.233312 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001) Jan 23 23:55:49.233328 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001) Jan 23 23:55:49.233346 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Jan 23 23:55:49.233367 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001) Jan 23 23:55:49.233384 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001) Jan 23 23:55:49.233401 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200 Jan 23 23:55:49.233417 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200') Jan 23 23:55:49.233433 kernel: printk: bootconsole [uart0] enabled Jan 23 23:55:49.233450 kernel: NUMA: Failed to initialise from firmware Jan 23 23:55:49.233466 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff] Jan 23 23:55:49.233483 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff] Jan 23 23:55:49.233499 kernel: Zone ranges: Jan 23 23:55:49.233516 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] Jan 23 23:55:49.233532 kernel: DMA32 empty Jan 23 23:55:49.233552 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff] Jan 23 23:55:49.233569 kernel: Movable zone start for each node Jan 23 23:55:49.233585 kernel: Early memory node ranges Jan 23 23:55:49.233601 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff] Jan 23 23:55:49.233618 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff] Jan 23 23:55:49.233634 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff] Jan 23 23:55:49.233651 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff] Jan 23 23:55:49.233667 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff] Jan 23 23:55:49.233683 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff] Jan 23 23:55:49.233700 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff] Jan 23 23:55:49.233716 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff] Jan 23 23:55:49.233732 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff] Jan 23 23:55:49.233753 kernel: On 
node 0, zone Normal: 8192 pages in unavailable ranges Jan 23 23:55:49.233771 kernel: psci: probing for conduit method from ACPI. Jan 23 23:55:49.233796 kernel: psci: PSCIv1.0 detected in firmware. Jan 23 23:55:49.233814 kernel: psci: Using standard PSCI v0.2 function IDs Jan 23 23:55:49.233835 kernel: psci: Trusted OS migration not required Jan 23 23:55:49.235972 kernel: psci: SMC Calling Convention v1.1 Jan 23 23:55:49.235993 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000001) Jan 23 23:55:49.236012 kernel: percpu: Embedded 30 pages/cpu s85672 r8192 d29016 u122880 Jan 23 23:55:49.236030 kernel: pcpu-alloc: s85672 r8192 d29016 u122880 alloc=30*4096 Jan 23 23:55:49.236048 kernel: pcpu-alloc: [0] 0 [0] 1 Jan 23 23:55:49.236066 kernel: Detected PIPT I-cache on CPU0 Jan 23 23:55:49.236083 kernel: CPU features: detected: GIC system register CPU interface Jan 23 23:55:49.236103 kernel: CPU features: detected: Spectre-v2 Jan 23 23:55:49.236120 kernel: CPU features: detected: Spectre-v3a Jan 23 23:55:49.236138 kernel: CPU features: detected: Spectre-BHB Jan 23 23:55:49.236155 kernel: CPU features: detected: ARM erratum 1742098 Jan 23 23:55:49.236179 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923 Jan 23 23:55:49.236198 kernel: alternatives: applying boot alternatives Jan 23 23:55:49.236219 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=a01f25d0714a86cf8b897276230b4ac71c04b1d69bd03a1f6d2ef96f59ef0f09 Jan 23 23:55:49.236238 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 23 23:55:49.236256 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 23 23:55:49.236275 kernel: Fallback order for Node 0: 0 Jan 23 23:55:49.236293 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872 Jan 23 23:55:49.236313 kernel: Policy zone: Normal Jan 23 23:55:49.236330 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 23 23:55:49.236348 kernel: software IO TLB: area num 2. Jan 23 23:55:49.236366 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB) Jan 23 23:55:49.236392 kernel: Memory: 3820096K/4030464K available (10304K kernel code, 2180K rwdata, 8112K rodata, 39424K init, 897K bss, 210368K reserved, 0K cma-reserved) Jan 23 23:55:49.236411 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 23 23:55:49.236429 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 23 23:55:49.236449 kernel: rcu: RCU event tracing is enabled. Jan 23 23:55:49.236468 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 23 23:55:49.236487 kernel: Trampoline variant of Tasks RCU enabled. Jan 23 23:55:49.236507 kernel: Tracing variant of Tasks RCU enabled. Jan 23 23:55:49.236525 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Jan 23 23:55:49.236544 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 23 23:55:49.236563 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jan 23 23:55:49.236581 kernel: GICv3: 96 SPIs implemented Jan 23 23:55:49.236605 kernel: GICv3: 0 Extended SPIs implemented Jan 23 23:55:49.236623 kernel: Root IRQ handler: gic_handle_irq Jan 23 23:55:49.236642 kernel: GICv3: GICv3 features: 16 PPIs Jan 23 23:55:49.236660 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000 Jan 23 23:55:49.236678 kernel: ITS [mem 0x10080000-0x1009ffff] Jan 23 23:55:49.236696 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1) Jan 23 23:55:49.236718 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1) Jan 23 23:55:49.236737 kernel: GICv3: using LPI property table @0x00000004000d0000 Jan 23 23:55:49.236755 kernel: ITS: Using hypervisor restricted LPI range [128] Jan 23 23:55:49.236773 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000 Jan 23 23:55:49.236791 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 23 23:55:49.236809 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt). Jan 23 23:55:49.236832 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns Jan 23 23:55:49.237004 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns Jan 23 23:55:49.237024 kernel: Console: colour dummy device 80x25 Jan 23 23:55:49.237042 kernel: printk: console [tty1] enabled Jan 23 23:55:49.237061 kernel: ACPI: Core revision 20230628 Jan 23 23:55:49.237080 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333) Jan 23 23:55:49.237099 kernel: pid_max: default: 32768 minimum: 301 Jan 23 23:55:49.237118 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 23 23:55:49.237136 kernel: landlock: Up and running. Jan 23 23:55:49.237164 kernel: SELinux: Initializing. Jan 23 23:55:49.237184 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 23 23:55:49.237202 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 23 23:55:49.237220 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 23 23:55:49.237239 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 23 23:55:49.237256 kernel: rcu: Hierarchical SRCU implementation. Jan 23 23:55:49.237275 kernel: rcu: Max phase no-delay instances is 400. Jan 23 23:55:49.237292 kernel: Platform MSI: ITS@0x10080000 domain created Jan 23 23:55:49.237310 kernel: PCI/MSI: ITS@0x10080000 domain created Jan 23 23:55:49.237332 kernel: Remapping and enabling EFI services. Jan 23 23:55:49.237350 kernel: smp: Bringing up secondary CPUs ... Jan 23 23:55:49.237368 kernel: Detected PIPT I-cache on CPU1 Jan 23 23:55:49.237385 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000 Jan 23 23:55:49.237403 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000 Jan 23 23:55:49.237421 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083] Jan 23 23:55:49.237438 kernel: smp: Brought up 1 node, 2 CPUs Jan 23 23:55:49.237456 kernel: SMP: Total of 2 processors activated. 
Jan 23 23:55:49.237473 kernel: CPU features: detected: 32-bit EL0 Support Jan 23 23:55:49.237495 kernel: CPU features: detected: 32-bit EL1 Support Jan 23 23:55:49.237514 kernel: CPU features: detected: CRC32 instructions Jan 23 23:55:49.237532 kernel: CPU: All CPU(s) started at EL1 Jan 23 23:55:49.237561 kernel: alternatives: applying system-wide alternatives Jan 23 23:55:49.237583 kernel: devtmpfs: initialized Jan 23 23:55:49.237602 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 23 23:55:49.237620 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 23 23:55:49.237638 kernel: pinctrl core: initialized pinctrl subsystem Jan 23 23:55:49.237657 kernel: SMBIOS 3.0.0 present. Jan 23 23:55:49.237680 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018 Jan 23 23:55:49.237698 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 23 23:55:49.237717 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jan 23 23:55:49.237735 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jan 23 23:55:49.237754 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jan 23 23:55:49.237772 kernel: audit: initializing netlink subsys (disabled) Jan 23 23:55:49.237791 kernel: audit: type=2000 audit(0.288:1): state=initialized audit_enabled=0 res=1 Jan 23 23:55:49.237809 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 23 23:55:49.237831 kernel: cpuidle: using governor menu Jan 23 23:55:49.240546 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Jan 23 23:55:49.240566 kernel: ASID allocator initialised with 65536 entries Jan 23 23:55:49.240585 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 23 23:55:49.240604 kernel: Serial: AMBA PL011 UART driver Jan 23 23:55:49.240622 kernel: Modules: 17488 pages in range for non-PLT usage Jan 23 23:55:49.240641 kernel: Modules: 509008 pages in range for PLT usage Jan 23 23:55:49.240659 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 23 23:55:49.240678 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jan 23 23:55:49.240704 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jan 23 23:55:49.240724 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jan 23 23:55:49.240742 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 23 23:55:49.240760 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jan 23 23:55:49.240779 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Jan 23 23:55:49.240797 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jan 23 23:55:49.240815 kernel: ACPI: Added _OSI(Module Device) Jan 23 23:55:49.240833 kernel: ACPI: Added _OSI(Processor Device) Jan 23 23:55:49.240874 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 23 23:55:49.240900 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 23 23:55:49.240920 kernel: ACPI: Interpreter enabled Jan 23 23:55:49.240938 kernel: ACPI: Using GIC for interrupt routing Jan 23 23:55:49.240957 kernel: ACPI: MCFG table detected, 1 entries Jan 23 23:55:49.240975 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00]) Jan 23 23:55:49.241279 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 23 23:55:49.241489 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Jan 23 23:55:49.241686 
kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Jan 23 23:55:49.241951 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x200fffff] reserved by PNP0C02:00 Jan 23 23:55:49.242157 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x200fffff] for [bus 00] Jan 23 23:55:49.242183 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window] Jan 23 23:55:49.242203 kernel: acpiphp: Slot [1] registered Jan 23 23:55:49.242221 kernel: acpiphp: Slot [2] registered Jan 23 23:55:49.242240 kernel: acpiphp: Slot [3] registered Jan 23 23:55:49.242258 kernel: acpiphp: Slot [4] registered Jan 23 23:55:49.242277 kernel: acpiphp: Slot [5] registered Jan 23 23:55:49.242302 kernel: acpiphp: Slot [6] registered Jan 23 23:55:49.242320 kernel: acpiphp: Slot [7] registered Jan 23 23:55:49.242339 kernel: acpiphp: Slot [8] registered Jan 23 23:55:49.242357 kernel: acpiphp: Slot [9] registered Jan 23 23:55:49.242375 kernel: acpiphp: Slot [10] registered Jan 23 23:55:49.242393 kernel: acpiphp: Slot [11] registered Jan 23 23:55:49.242411 kernel: acpiphp: Slot [12] registered Jan 23 23:55:49.242430 kernel: acpiphp: Slot [13] registered Jan 23 23:55:49.242448 kernel: acpiphp: Slot [14] registered Jan 23 23:55:49.242466 kernel: acpiphp: Slot [15] registered Jan 23 23:55:49.242489 kernel: acpiphp: Slot [16] registered Jan 23 23:55:49.242507 kernel: acpiphp: Slot [17] registered Jan 23 23:55:49.242525 kernel: acpiphp: Slot [18] registered Jan 23 23:55:49.242543 kernel: acpiphp: Slot [19] registered Jan 23 23:55:49.242562 kernel: acpiphp: Slot [20] registered Jan 23 23:55:49.242580 kernel: acpiphp: Slot [21] registered Jan 23 23:55:49.242599 kernel: acpiphp: Slot [22] registered Jan 23 23:55:49.242617 kernel: acpiphp: Slot [23] registered Jan 23 23:55:49.242635 kernel: acpiphp: Slot [24] registered Jan 23 23:55:49.242657 kernel: acpiphp: Slot [25] registered Jan 23 23:55:49.242676 kernel: acpiphp: Slot [26] registered Jan 23 23:55:49.242695 kernel: acpiphp: Slot [27] registered Jan 23 23:55:49.242713 kernel: acpiphp: Slot [28] registered Jan 23 23:55:49.242733 kernel: acpiphp: Slot [29] registered Jan 23 23:55:49.242752 kernel: acpiphp: Slot [30] registered Jan 23 23:55:49.242771 kernel: acpiphp: Slot [31] registered Jan 23 23:55:49.242790 kernel: PCI host bridge to bus 0000:00 Jan 23 23:55:49.243066 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window] Jan 23 23:55:49.243294 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Jan 23 23:55:49.243489 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window] Jan 23 23:55:49.243672 kernel: pci_bus 0000:00: root bus resource [bus 00] Jan 23 23:55:49.246083 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000 Jan 23 23:55:49.246343 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003 Jan 23 23:55:49.246553 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff] Jan 23 23:55:49.246784 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 Jan 23 23:55:49.248970 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff] Jan 23 23:55:49.249184 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold Jan 23 23:55:49.249402 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 Jan 23 23:55:49.249605 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff] Jan 23 23:55:49.249805 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref] Jan 23 23:55:49.251864 kernel: pci 0000:00:05.0: reg 0x20: [mem 
0x80100000-0x8010ffff] Jan 23 23:55:49.252126 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold Jan 23 23:55:49.252328 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window] Jan 23 23:55:49.252515 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Jan 23 23:55:49.252700 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window] Jan 23 23:55:49.252726 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Jan 23 23:55:49.252746 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Jan 23 23:55:49.252765 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Jan 23 23:55:49.252784 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Jan 23 23:55:49.252809 kernel: iommu: Default domain type: Translated Jan 23 23:55:49.252828 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jan 23 23:55:49.252871 kernel: efivars: Registered efivars operations Jan 23 23:55:49.252892 kernel: vgaarb: loaded Jan 23 23:55:49.252911 kernel: clocksource: Switched to clocksource arch_sys_counter Jan 23 23:55:49.252930 kernel: VFS: Disk quotas dquot_6.6.0 Jan 23 23:55:49.252949 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 23 23:55:49.252967 kernel: pnp: PnP ACPI init Jan 23 23:55:49.253183 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved Jan 23 23:55:49.253217 kernel: pnp: PnP ACPI: found 1 devices Jan 23 23:55:49.253236 kernel: NET: Registered PF_INET protocol family Jan 23 23:55:49.253255 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 23 23:55:49.253274 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 23 23:55:49.253293 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 23 23:55:49.253311 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 23 23:55:49.253330 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 23 23:55:49.253348 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 23 23:55:49.253371 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 23 23:55:49.253390 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 23 23:55:49.253409 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 23 23:55:49.253427 kernel: PCI: CLS 0 bytes, default 64 Jan 23 23:55:49.253446 kernel: kvm [1]: HYP mode not available Jan 23 23:55:49.253464 kernel: Initialise system trusted keyrings Jan 23 23:55:49.253482 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 23 23:55:49.253501 kernel: Key type asymmetric registered Jan 23 23:55:49.253519 kernel: Asymmetric key parser 'x509' registered Jan 23 23:55:49.253542 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jan 23 23:55:49.253560 kernel: io scheduler mq-deadline registered Jan 23 23:55:49.253579 kernel: io scheduler kyber registered Jan 23 23:55:49.253597 kernel: io scheduler bfq registered Jan 23 23:55:49.253815 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered Jan 23 23:55:49.254485 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Jan 23 23:55:49.254511 kernel: ACPI: button: Power Button [PWRB] Jan 23 23:55:49.254530 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1 Jan 23 23:55:49.254549 kernel: ACPI: button: Sleep Button [SLPB] Jan 23 
23:55:49.254575 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 23 23:55:49.254594 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Jan 23 23:55:49.254835 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012) Jan 23 23:55:49.254988 kernel: printk: console [ttyS0] disabled Jan 23 23:55:49.255008 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A Jan 23 23:55:49.255027 kernel: printk: console [ttyS0] enabled Jan 23 23:55:49.255046 kernel: printk: bootconsole [uart0] disabled Jan 23 23:55:49.255065 kernel: thunder_xcv, ver 1.0 Jan 23 23:55:49.255083 kernel: thunder_bgx, ver 1.0 Jan 23 23:55:49.255109 kernel: nicpf, ver 1.0 Jan 23 23:55:49.255128 kernel: nicvf, ver 1.0 Jan 23 23:55:49.255909 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jan 23 23:55:49.256151 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-01-23T23:55:48 UTC (1769212548) Jan 23 23:55:49.256177 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 23 23:55:49.256197 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available Jan 23 23:55:49.256216 kernel: watchdog: Delayed init of the lockup detector failed: -19 Jan 23 23:55:49.256234 kernel: watchdog: Hard watchdog permanently disabled Jan 23 23:55:49.256261 kernel: NET: Registered PF_INET6 protocol family Jan 23 23:55:49.256279 kernel: Segment Routing with IPv6 Jan 23 23:55:49.256298 kernel: In-situ OAM (IOAM) with IPv6 Jan 23 23:55:49.256316 kernel: NET: Registered PF_PACKET protocol family Jan 23 23:55:49.256334 kernel: Key type dns_resolver registered Jan 23 23:55:49.256352 kernel: registered taskstats version 1 Jan 23 23:55:49.256371 kernel: Loading compiled-in X.509 certificates Jan 23 23:55:49.256389 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: e1080b1efd8e2d5332b6814128fba42796535445' Jan 23 23:55:49.256408 kernel: Key type .fscrypt registered Jan 23 23:55:49.256431 kernel: Key type fscrypt-provisioning registered Jan 23 23:55:49.256449 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 23 23:55:49.256467 kernel: ima: Allocated hash algorithm: sha1 Jan 23 23:55:49.256486 kernel: ima: No architecture policies found Jan 23 23:55:49.256504 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jan 23 23:55:49.256523 kernel: clk: Disabling unused clocks Jan 23 23:55:49.256541 kernel: Freeing unused kernel memory: 39424K Jan 23 23:55:49.256559 kernel: Run /init as init process Jan 23 23:55:49.256578 kernel: with arguments: Jan 23 23:55:49.256600 kernel: /init Jan 23 23:55:49.256618 kernel: with environment: Jan 23 23:55:49.256636 kernel: HOME=/ Jan 23 23:55:49.256656 kernel: TERM=linux Jan 23 23:55:49.256679 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 23 23:55:49.256704 systemd[1]: Detected virtualization amazon. Jan 23 23:55:49.256725 systemd[1]: Detected architecture arm64. Jan 23 23:55:49.256745 systemd[1]: Running in initrd. Jan 23 23:55:49.256769 systemd[1]: No hostname configured, using default hostname. Jan 23 23:55:49.256789 systemd[1]: Hostname set to . Jan 23 23:55:49.256809 systemd[1]: Initializing machine ID from VM UUID. Jan 23 23:55:49.256829 systemd[1]: Queued start job for default target initrd.target. 
Jan 23 23:55:49.257895 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 23:55:49.257922 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 23:55:49.257945 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 23 23:55:49.257966 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 23 23:55:49.257998 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 23 23:55:49.258019 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 23 23:55:49.258043 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 23 23:55:49.258064 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 23 23:55:49.258085 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 23:55:49.258105 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 23 23:55:49.258130 systemd[1]: Reached target paths.target - Path Units. Jan 23 23:55:49.258151 systemd[1]: Reached target slices.target - Slice Units. Jan 23 23:55:49.258171 systemd[1]: Reached target swap.target - Swaps. Jan 23 23:55:49.258191 systemd[1]: Reached target timers.target - Timer Units. Jan 23 23:55:49.258211 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 23 23:55:49.258232 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 23 23:55:49.258252 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 23 23:55:49.258272 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 23 23:55:49.258292 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 23 23:55:49.258317 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 23 23:55:49.258337 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 23:55:49.258358 systemd[1]: Reached target sockets.target - Socket Units. Jan 23 23:55:49.258378 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 23 23:55:49.258398 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 23 23:55:49.258418 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 23 23:55:49.258439 systemd[1]: Starting systemd-fsck-usr.service... Jan 23 23:55:49.258459 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 23 23:55:49.258479 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 23 23:55:49.258504 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 23:55:49.258525 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 23 23:55:49.258545 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 23:55:49.258613 systemd-journald[251]: Collecting audit messages is disabled. Jan 23 23:55:49.258663 systemd[1]: Finished systemd-fsck-usr.service. Jan 23 23:55:49.258685 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 23 23:55:49.258706 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. 
Update your scripts to load br_netfilter if you need this. Jan 23 23:55:49.258725 kernel: Bridge firewalling registered Jan 23 23:55:49.258749 systemd-journald[251]: Journal started Jan 23 23:55:49.258787 systemd-journald[251]: Runtime Journal (/run/log/journal/ec2712ea076583730e8630628c196e5a) is 8.0M, max 75.3M, 67.3M free. Jan 23 23:55:49.220210 systemd-modules-load[252]: Inserted module 'overlay' Jan 23 23:55:49.267891 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 23 23:55:49.258922 systemd-modules-load[252]: Inserted module 'br_netfilter' Jan 23 23:55:49.274980 systemd[1]: Started systemd-journald.service - Journal Service. Jan 23 23:55:49.280350 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 23 23:55:49.295156 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 23 23:55:49.308168 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 23 23:55:49.319123 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 23 23:55:49.324111 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 23:55:49.345216 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 23 23:55:49.352277 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 23 23:55:49.354727 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 23:55:49.378305 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 23:55:49.396263 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 23 23:55:49.407301 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 23 23:55:49.425208 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 23 23:55:49.453296 dracut-cmdline[289]: dracut-dracut-053 Jan 23 23:55:49.460530 dracut-cmdline[289]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=a01f25d0714a86cf8b897276230b4ac71c04b1d69bd03a1f6d2ef96f59ef0f09 Jan 23 23:55:49.490306 systemd-resolved[283]: Positive Trust Anchors: Jan 23 23:55:49.490856 systemd-resolved[283]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 23 23:55:49.490925 systemd-resolved[283]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 23 23:55:49.632878 kernel: SCSI subsystem initialized Jan 23 23:55:49.640876 kernel: Loading iSCSI transport class v2.0-870. 
Jan 23 23:55:49.653944 kernel: iscsi: registered transport (tcp) Jan 23 23:55:49.676187 kernel: iscsi: registered transport (qla4xxx) Jan 23 23:55:49.676272 kernel: QLogic iSCSI HBA Driver Jan 23 23:55:49.734871 kernel: random: crng init done Jan 23 23:55:49.735306 systemd-resolved[283]: Defaulting to hostname 'linux'. Jan 23 23:55:49.741335 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 23 23:55:49.751443 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 23 23:55:49.762975 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 23 23:55:49.775108 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 23 23:55:49.810800 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 23 23:55:49.810892 kernel: device-mapper: uevent: version 1.0.3 Jan 23 23:55:49.813050 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 23 23:55:49.878900 kernel: raid6: neonx8 gen() 6591 MB/s Jan 23 23:55:49.895874 kernel: raid6: neonx4 gen() 6424 MB/s Jan 23 23:55:49.912872 kernel: raid6: neonx2 gen() 5362 MB/s Jan 23 23:55:49.929872 kernel: raid6: neonx1 gen() 3921 MB/s Jan 23 23:55:49.946871 kernel: raid6: int64x8 gen() 3777 MB/s Jan 23 23:55:49.963872 kernel: raid6: int64x4 gen() 3675 MB/s Jan 23 23:55:49.980872 kernel: raid6: int64x2 gen() 3553 MB/s Jan 23 23:55:49.998922 kernel: raid6: int64x1 gen() 2756 MB/s Jan 23 23:55:49.998973 kernel: raid6: using algorithm neonx8 gen() 6591 MB/s Jan 23 23:55:50.017877 kernel: raid6: .... xor() 4891 MB/s, rmw enabled Jan 23 23:55:50.017920 kernel: raid6: using neon recovery algorithm Jan 23 23:55:50.025877 kernel: xor: measuring software checksum speed Jan 23 23:55:50.028145 kernel: 8regs : 10268 MB/sec Jan 23 23:55:50.028178 kernel: 32regs : 11908 MB/sec Jan 23 23:55:50.029458 kernel: arm64_neon : 9294 MB/sec Jan 23 23:55:50.029490 kernel: xor: using function: 32regs (11908 MB/sec) Jan 23 23:55:50.114894 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 23 23:55:50.133469 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 23 23:55:50.144177 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 23:55:50.187783 systemd-udevd[470]: Using default interface naming scheme 'v255'. Jan 23 23:55:50.198387 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 23:55:50.211541 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 23 23:55:50.252814 dracut-pre-trigger[474]: rd.md=0: removing MD RAID activation Jan 23 23:55:50.309783 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 23 23:55:50.322308 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 23 23:55:50.436335 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 23:55:50.450628 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 23 23:55:50.486136 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 23 23:55:50.491964 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 23 23:55:50.503905 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 23:55:50.507144 systemd[1]: Reached target remote-fs.target - Remote File Systems. 
Jan 23 23:55:50.527148 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 23 23:55:50.578140 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 23 23:55:50.646193 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Jan 23 23:55:50.646259 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012) Jan 23 23:55:50.653645 kernel: ena 0000:00:05.0: ENA device version: 0.10 Jan 23 23:55:50.654046 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Jan 23 23:55:50.655329 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 23 23:55:50.658171 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 23 23:55:50.665232 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 23 23:55:50.668007 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 23:55:50.668420 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 23:55:50.688977 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 23:55:50.703861 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80110000, mac addr 06:64:63:ff:ef:b3 Jan 23 23:55:50.705398 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 23:55:50.709059 (udev-worker)[520]: Network interface NamePolicy= disabled on kernel command line. Jan 23 23:55:50.736873 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Jan 23 23:55:50.736936 kernel: nvme nvme0: pci function 0000:00:04.0 Jan 23 23:55:50.738149 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 23:55:50.749922 kernel: nvme nvme0: 2/0/0 default/read/poll queues Jan 23 23:55:50.755297 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 23 23:55:50.765596 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 23 23:55:50.765658 kernel: GPT:9289727 != 33554431 Jan 23 23:55:50.765684 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 23 23:55:50.768849 kernel: GPT:9289727 != 33554431 Jan 23 23:55:50.768902 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 23 23:55:50.768938 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 23 23:55:50.794197 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 23 23:55:50.893882 kernel: BTRFS: device fsid 6d31cc5b-4da2-4320-9991-d4bd2fc0f7fe devid 1 transid 34 /dev/nvme0n1p3 scanned by (udev-worker) (527) Jan 23 23:55:50.902929 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/nvme0n1p6 scanned by (udev-worker) (542) Jan 23 23:55:50.973251 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Jan 23 23:55:51.008418 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Jan 23 23:55:51.036054 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Jan 23 23:55:51.038869 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Jan 23 23:55:51.060066 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jan 23 23:55:51.077072 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... 
Jan 23 23:55:51.099900 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 23 23:55:51.102902 disk-uuid[664]: Primary Header is updated. Jan 23 23:55:51.102902 disk-uuid[664]: Secondary Entries is updated. Jan 23 23:55:51.102902 disk-uuid[664]: Secondary Header is updated. Jan 23 23:55:51.136048 kernel: GPT:disk_guids don't match. Jan 23 23:55:51.136110 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 23 23:55:51.138281 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 23 23:55:52.149909 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 23 23:55:52.149982 disk-uuid[665]: The operation has completed successfully. Jan 23 23:55:52.340018 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 23 23:55:52.342555 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 23 23:55:52.391164 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 23 23:55:52.411072 sh[1008]: Success Jan 23 23:55:52.438378 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jan 23 23:55:52.568398 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 23 23:55:52.576052 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 23 23:55:52.584359 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 23 23:55:52.635407 kernel: BTRFS info (device dm-0): first mount of filesystem 6d31cc5b-4da2-4320-9991-d4bd2fc0f7fe Jan 23 23:55:52.635470 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jan 23 23:55:52.637440 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 23 23:55:52.638871 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 23 23:55:52.640076 kernel: BTRFS info (device dm-0): using free space tree Jan 23 23:55:52.751941 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 23 23:55:52.769693 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 23 23:55:52.773345 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 23 23:55:52.787279 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 23 23:55:52.794215 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 23 23:55:52.831563 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 821118c6-9a33-48e6-be55-657b51b768c7 Jan 23 23:55:52.831633 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jan 23 23:55:52.833441 kernel: BTRFS info (device nvme0n1p6): using free space tree Jan 23 23:55:52.848895 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 23 23:55:52.868176 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 23 23:55:52.872874 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 821118c6-9a33-48e6-be55-657b51b768c7 Jan 23 23:55:52.885572 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 23 23:55:52.898538 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 23 23:55:52.997790 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 23 23:55:53.010169 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Jan 23 23:55:53.077718 systemd-networkd[1200]: lo: Link UP Jan 23 23:55:53.077739 systemd-networkd[1200]: lo: Gained carrier Jan 23 23:55:53.081518 systemd-networkd[1200]: Enumeration completed Jan 23 23:55:53.081660 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 23 23:55:53.082670 systemd-networkd[1200]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 23:55:53.082677 systemd-networkd[1200]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 23 23:55:53.086689 systemd[1]: Reached target network.target - Network. Jan 23 23:55:53.103149 systemd-networkd[1200]: eth0: Link UP Jan 23 23:55:53.103156 systemd-networkd[1200]: eth0: Gained carrier Jan 23 23:55:53.103173 systemd-networkd[1200]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 23:55:53.128929 systemd-networkd[1200]: eth0: DHCPv4 address 172.31.16.109/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jan 23 23:55:53.384823 ignition[1121]: Ignition 2.19.0 Jan 23 23:55:53.384876 ignition[1121]: Stage: fetch-offline Jan 23 23:55:53.389076 ignition[1121]: no configs at "/usr/lib/ignition/base.d" Jan 23 23:55:53.389116 ignition[1121]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 23 23:55:53.391563 ignition[1121]: Ignition finished successfully Jan 23 23:55:53.397779 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 23 23:55:53.409335 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 23 23:55:53.434990 ignition[1215]: Ignition 2.19.0 Jan 23 23:55:53.435019 ignition[1215]: Stage: fetch Jan 23 23:55:53.436947 ignition[1215]: no configs at "/usr/lib/ignition/base.d" Jan 23 23:55:53.437012 ignition[1215]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 23 23:55:53.438266 ignition[1215]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 23 23:55:53.452137 ignition[1215]: PUT result: OK Jan 23 23:55:53.455040 ignition[1215]: parsed url from cmdline: "" Jan 23 23:55:53.455070 ignition[1215]: no config URL provided Jan 23 23:55:53.455085 ignition[1215]: reading system config file "/usr/lib/ignition/user.ign" Jan 23 23:55:53.455112 ignition[1215]: no config at "/usr/lib/ignition/user.ign" Jan 23 23:55:53.455143 ignition[1215]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 23 23:55:53.457285 ignition[1215]: PUT result: OK Jan 23 23:55:53.457363 ignition[1215]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Jan 23 23:55:53.460072 ignition[1215]: GET result: OK Jan 23 23:55:53.476052 unknown[1215]: fetched base config from "system" Jan 23 23:55:53.460229 ignition[1215]: parsing config with SHA512: a4c2869d632c51cc74058f4a71fde216fa33f16fba3faed8737d08b402611c58e02d40bd14c817f4b12d0c99f4d64fc7d9581f573a9467462add05c5df85a58a Jan 23 23:55:53.476069 unknown[1215]: fetched base config from "system" Jan 23 23:55:53.476934 ignition[1215]: fetch: fetch complete Jan 23 23:55:53.476083 unknown[1215]: fetched user config from "aws" Jan 23 23:55:53.476946 ignition[1215]: fetch: fetch passed Jan 23 23:55:53.477032 ignition[1215]: Ignition finished successfully Jan 23 23:55:53.492911 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 23 23:55:53.507125 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Jan 23 23:55:53.532745 ignition[1221]: Ignition 2.19.0 Jan 23 23:55:53.532771 ignition[1221]: Stage: kargs Jan 23 23:55:53.534612 ignition[1221]: no configs at "/usr/lib/ignition/base.d" Jan 23 23:55:53.534638 ignition[1221]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 23 23:55:53.535751 ignition[1221]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 23 23:55:53.536785 ignition[1221]: PUT result: OK Jan 23 23:55:53.547911 ignition[1221]: kargs: kargs passed Jan 23 23:55:53.548011 ignition[1221]: Ignition finished successfully Jan 23 23:55:53.554901 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 23 23:55:53.570154 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 23 23:55:53.596790 ignition[1227]: Ignition 2.19.0 Jan 23 23:55:53.596812 ignition[1227]: Stage: disks Jan 23 23:55:53.597444 ignition[1227]: no configs at "/usr/lib/ignition/base.d" Jan 23 23:55:53.597468 ignition[1227]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 23 23:55:53.597631 ignition[1227]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 23 23:55:53.600233 ignition[1227]: PUT result: OK Jan 23 23:55:53.611779 ignition[1227]: disks: disks passed Jan 23 23:55:53.612076 ignition[1227]: Ignition finished successfully Jan 23 23:55:53.617650 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 23 23:55:53.623294 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 23 23:55:53.625906 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 23 23:55:53.628929 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 23 23:55:53.631269 systemd[1]: Reached target sysinit.target - System Initialization. Jan 23 23:55:53.633586 systemd[1]: Reached target basic.target - Basic System. Jan 23 23:55:53.652179 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 23 23:55:53.709938 systemd-fsck[1235]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 23 23:55:53.719442 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 23 23:55:53.730178 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 23 23:55:53.833879 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 4f5f6971-6639-4171-835a-63d34aadb0e5 r/w with ordered data mode. Quota mode: none. Jan 23 23:55:53.834732 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 23 23:55:53.839085 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 23 23:55:53.863997 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 23 23:55:53.875109 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 23 23:55:53.877690 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 23 23:55:53.877770 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 23 23:55:53.877820 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 23 23:55:53.902883 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/nvme0n1p6 scanned by mount (1254) Jan 23 23:55:53.904766 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. 
Jan 23 23:55:53.913618 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 821118c6-9a33-48e6-be55-657b51b768c7 Jan 23 23:55:53.913677 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jan 23 23:55:53.913704 kernel: BTRFS info (device nvme0n1p6): using free space tree Jan 23 23:55:53.928261 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 23 23:55:53.928128 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 23 23:55:53.940542 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 23 23:55:54.192440 initrd-setup-root[1278]: cut: /sysroot/etc/passwd: No such file or directory Jan 23 23:55:54.214338 initrd-setup-root[1285]: cut: /sysroot/etc/group: No such file or directory Jan 23 23:55:54.224346 initrd-setup-root[1292]: cut: /sysroot/etc/shadow: No such file or directory Jan 23 23:55:54.232905 initrd-setup-root[1299]: cut: /sysroot/etc/gshadow: No such file or directory Jan 23 23:55:54.529617 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 23 23:55:54.541201 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 23 23:55:54.548539 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 23 23:55:54.566466 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 23 23:55:54.570950 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 821118c6-9a33-48e6-be55-657b51b768c7 Jan 23 23:55:54.617386 ignition[1367]: INFO : Ignition 2.19.0 Jan 23 23:55:54.617386 ignition[1367]: INFO : Stage: mount Jan 23 23:55:54.617386 ignition[1367]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 23:55:54.617386 ignition[1367]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 23 23:55:54.617386 ignition[1367]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 23 23:55:54.629544 ignition[1367]: INFO : PUT result: OK Jan 23 23:55:54.633437 ignition[1367]: INFO : mount: mount passed Jan 23 23:55:54.635325 ignition[1367]: INFO : Ignition finished successfully Jan 23 23:55:54.640256 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 23 23:55:54.644920 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 23 23:55:54.655264 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 23 23:55:54.844187 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 23 23:55:54.880880 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 scanned by mount (1378) Jan 23 23:55:54.885163 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 821118c6-9a33-48e6-be55-657b51b768c7 Jan 23 23:55:54.885205 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jan 23 23:55:54.885231 kernel: BTRFS info (device nvme0n1p6): using free space tree Jan 23 23:55:54.892885 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 23 23:55:54.896712 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 23 23:55:54.933375 ignition[1395]: INFO : Ignition 2.19.0 Jan 23 23:55:54.933375 ignition[1395]: INFO : Stage: files Jan 23 23:55:54.937355 ignition[1395]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 23:55:54.937355 ignition[1395]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 23 23:55:54.942317 ignition[1395]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 23 23:55:54.945880 ignition[1395]: INFO : PUT result: OK Jan 23 23:55:54.951023 ignition[1395]: DEBUG : files: compiled without relabeling support, skipping Jan 23 23:55:54.955698 ignition[1395]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 23 23:55:54.955698 ignition[1395]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 23 23:55:54.987083 ignition[1395]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 23 23:55:54.990336 ignition[1395]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 23 23:55:54.993858 unknown[1395]: wrote ssh authorized keys file for user: core Jan 23 23:55:54.996364 ignition[1395]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 23 23:55:55.002431 ignition[1395]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 23 23:55:55.006406 ignition[1395]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 23 23:55:55.006406 ignition[1395]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Jan 23 23:55:55.014564 ignition[1395]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 Jan 23 23:55:55.114151 ignition[1395]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 23 23:55:55.168986 systemd-networkd[1200]: eth0: Gained IPv6LL Jan 23 23:55:55.253906 ignition[1395]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Jan 23 23:55:55.258857 ignition[1395]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 23 23:55:55.258857 ignition[1395]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Jan 23 23:55:55.337909 ignition[1395]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Jan 23 23:55:55.469903 ignition[1395]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 23 23:55:55.469903 ignition[1395]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh" Jan 23 23:55:55.478047 ignition[1395]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh" Jan 23 23:55:55.478047 ignition[1395]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 23 23:55:55.478047 ignition[1395]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 23 23:55:55.478047 ignition[1395]: INFO : files: createFilesystemsFiles: createFiles: 
op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 23 23:55:55.478047 ignition[1395]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 23 23:55:55.478047 ignition[1395]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 23 23:55:55.478047 ignition[1395]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 23 23:55:55.478047 ignition[1395]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 23 23:55:55.478047 ignition[1395]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 23 23:55:55.478047 ignition[1395]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jan 23 23:55:55.478047 ignition[1395]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jan 23 23:55:55.478047 ignition[1395]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jan 23 23:55:55.478047 ignition[1395]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1 Jan 23 23:55:55.756993 ignition[1395]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK Jan 23 23:55:56.109714 ignition[1395]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jan 23 23:55:56.109714 ignition[1395]: INFO : files: op(d): [started] processing unit "containerd.service" Jan 23 23:55:56.118151 ignition[1395]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 23 23:55:56.118151 ignition[1395]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 23 23:55:56.118151 ignition[1395]: INFO : files: op(d): [finished] processing unit "containerd.service" Jan 23 23:55:56.118151 ignition[1395]: INFO : files: op(f): [started] processing unit "prepare-helm.service" Jan 23 23:55:56.118151 ignition[1395]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 23 23:55:56.118151 ignition[1395]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 23 23:55:56.118151 ignition[1395]: INFO : files: op(f): [finished] processing unit "prepare-helm.service" Jan 23 23:55:56.118151 ignition[1395]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jan 23 23:55:56.118151 ignition[1395]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jan 23 23:55:56.118151 ignition[1395]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 23 23:55:56.118151 
ignition[1395]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 23 23:55:56.118151 ignition[1395]: INFO : files: files passed Jan 23 23:55:56.118151 ignition[1395]: INFO : Ignition finished successfully Jan 23 23:55:56.128251 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 23 23:55:56.157257 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 23 23:55:56.171739 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 23 23:55:56.192266 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 23 23:55:56.192468 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 23 23:55:56.212559 initrd-setup-root-after-ignition[1424]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 23 23:55:56.212559 initrd-setup-root-after-ignition[1424]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 23 23:55:56.231224 initrd-setup-root-after-ignition[1428]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 23 23:55:56.223905 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 23 23:55:56.228633 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 23 23:55:56.244200 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 23 23:55:56.300067 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 23 23:55:56.300582 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 23 23:55:56.305604 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 23 23:55:56.308077 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 23 23:55:56.310500 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 23 23:55:56.312243 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 23 23:55:56.363935 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 23 23:55:56.376321 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 23 23:55:56.402929 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 23 23:55:56.403313 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 23:55:56.411597 systemd[1]: Stopped target timers.target - Timer Units. Jan 23 23:55:56.415747 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 23 23:55:56.416009 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 23 23:55:56.421447 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 23 23:55:56.429994 systemd[1]: Stopped target basic.target - Basic System. Jan 23 23:55:56.432428 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 23 23:55:56.439404 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 23 23:55:56.442455 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 23 23:55:56.450118 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 23 23:55:56.452612 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. 
Jan 23 23:55:56.456119 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 23 23:55:56.463315 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 23 23:55:56.465753 systemd[1]: Stopped target swap.target - Swaps. Jan 23 23:55:56.473901 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 23 23:55:56.474141 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 23 23:55:56.477108 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 23 23:55:56.486994 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 23:55:56.492351 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 23 23:55:56.495897 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 23:55:56.496273 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 23 23:55:56.496519 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 23 23:55:56.509068 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 23 23:55:56.509308 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 23 23:55:56.512366 systemd[1]: ignition-files.service: Deactivated successfully. Jan 23 23:55:56.512568 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 23 23:55:56.529310 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 23 23:55:56.531650 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 23 23:55:56.532026 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 23:55:56.545424 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 23 23:55:56.554160 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 23 23:55:56.555630 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 23:55:56.570437 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 23 23:55:56.571865 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 23 23:55:56.576946 ignition[1448]: INFO : Ignition 2.19.0 Jan 23 23:55:56.576946 ignition[1448]: INFO : Stage: umount Jan 23 23:55:56.585021 ignition[1448]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 23:55:56.585021 ignition[1448]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 23 23:55:56.585021 ignition[1448]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 23 23:55:56.594283 ignition[1448]: INFO : PUT result: OK Jan 23 23:55:56.602490 ignition[1448]: INFO : umount: umount passed Jan 23 23:55:56.604509 ignition[1448]: INFO : Ignition finished successfully Jan 23 23:55:56.607057 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 23 23:55:56.607314 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 23 23:55:56.617036 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 23 23:55:56.623359 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 23 23:55:56.628772 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 23 23:55:56.631349 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 23 23:55:56.636122 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 23 23:55:56.636231 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). 
Jan 23 23:55:56.638720 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 23 23:55:56.638904 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 23 23:55:56.641774 systemd[1]: Stopped target network.target - Network. Jan 23 23:55:56.645972 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 23 23:55:56.646085 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 23 23:55:56.651273 systemd[1]: Stopped target paths.target - Path Units. Jan 23 23:55:56.651365 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 23 23:55:56.655597 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 23:55:56.660951 systemd[1]: Stopped target slices.target - Slice Units. Jan 23 23:55:56.663059 systemd[1]: Stopped target sockets.target - Socket Units. Jan 23 23:55:56.665358 systemd[1]: iscsid.socket: Deactivated successfully. Jan 23 23:55:56.665444 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 23 23:55:56.667809 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 23 23:55:56.667909 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 23 23:55:56.670327 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 23 23:55:56.670419 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 23 23:55:56.673521 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 23 23:55:56.673627 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 23 23:55:56.681039 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 23 23:55:56.692186 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 23 23:55:56.693127 systemd-networkd[1200]: eth0: DHCPv6 lease lost Jan 23 23:55:56.699435 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 23 23:55:56.700445 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 23 23:55:56.700656 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 23 23:55:56.704600 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 23 23:55:56.704803 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 23 23:55:56.711260 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 23 23:55:56.711372 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 23 23:55:56.714992 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 23 23:55:56.715097 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 23 23:55:56.725317 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 23 23:55:56.729197 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 23 23:55:56.729306 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 23 23:55:56.734323 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 23:55:56.739269 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 23 23:55:56.739493 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 23 23:55:56.757508 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 23 23:55:56.757655 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 23 23:55:56.760318 systemd[1]: systemd-modules-load.service: Deactivated successfully. 
Jan 23 23:55:56.760408 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 23 23:55:56.762978 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 23 23:55:56.763061 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 23:55:56.767222 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 23 23:55:56.767498 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 23:55:56.821399 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 23 23:55:56.821545 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 23 23:55:56.850924 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 23 23:55:56.851007 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 23:55:56.853527 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 23 23:55:56.853623 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 23 23:55:56.856691 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 23 23:55:56.856788 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 23 23:55:56.872522 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 23 23:55:56.872629 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 23 23:55:56.884300 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 23 23:55:56.889620 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 23 23:55:56.891448 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 23:55:56.897967 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 23:55:56.898074 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 23:55:56.901950 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 23 23:55:56.902396 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 23 23:55:56.933476 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 23 23:55:56.935213 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 23 23:55:56.943445 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 23 23:55:56.956588 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 23 23:55:56.976749 systemd[1]: Switching root. Jan 23 23:55:57.010090 systemd-journald[251]: Journal stopped Jan 23 23:55:59.128737 systemd-journald[251]: Received SIGTERM from PID 1 (systemd). Jan 23 23:55:59.138998 kernel: SELinux: policy capability network_peer_controls=1 Jan 23 23:55:59.139063 kernel: SELinux: policy capability open_perms=1 Jan 23 23:55:59.139094 kernel: SELinux: policy capability extended_socket_class=1 Jan 23 23:55:59.139125 kernel: SELinux: policy capability always_check_network=0 Jan 23 23:55:59.139156 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 23 23:55:59.139187 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 23 23:55:59.139244 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 23 23:55:59.139287 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 23 23:55:59.139318 kernel: audit: type=1403 audit(1769212557.559:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 23 23:55:59.139360 systemd[1]: Successfully loaded SELinux policy in 52.804ms. 
Jan 23 23:55:59.139413 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 23.027ms. Jan 23 23:55:59.139486 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 23 23:55:59.139536 systemd[1]: Detected virtualization amazon. Jan 23 23:55:59.139570 systemd[1]: Detected architecture arm64. Jan 23 23:55:59.139604 systemd[1]: Detected first boot. Jan 23 23:55:59.139637 systemd[1]: Initializing machine ID from VM UUID. Jan 23 23:55:59.139669 zram_generator::config[1512]: No configuration found. Jan 23 23:55:59.139711 systemd[1]: Populated /etc with preset unit settings. Jan 23 23:55:59.139743 systemd[1]: Queued start job for default target multi-user.target. Jan 23 23:55:59.139777 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jan 23 23:55:59.139812 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 23 23:55:59.142386 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 23 23:55:59.142450 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 23 23:55:59.142484 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 23 23:55:59.142517 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 23 23:55:59.142550 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 23 23:55:59.142591 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 23 23:55:59.142623 systemd[1]: Created slice user.slice - User and Session Slice. Jan 23 23:55:59.142655 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 23:55:59.142688 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 23:55:59.142718 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 23 23:55:59.142750 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 23 23:55:59.142780 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 23 23:55:59.142812 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 23 23:55:59.145106 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 23 23:55:59.145160 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 23:55:59.145193 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 23 23:55:59.145226 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 23:55:59.145258 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 23 23:55:59.145291 systemd[1]: Reached target slices.target - Slice Units. Jan 23 23:55:59.145323 systemd[1]: Reached target swap.target - Swaps. Jan 23 23:55:59.145355 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 23 23:55:59.145392 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. 
Jan 23 23:55:59.145427 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 23 23:55:59.145459 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 23 23:55:59.145489 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 23 23:55:59.145518 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 23 23:55:59.145551 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 23:55:59.145581 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 23 23:55:59.145612 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 23 23:55:59.145644 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 23 23:55:59.145674 systemd[1]: Mounting media.mount - External Media Directory... Jan 23 23:55:59.145709 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 23 23:55:59.145739 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 23 23:55:59.145768 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 23 23:55:59.145799 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 23 23:55:59.145831 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 23:55:59.154191 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 23 23:55:59.154229 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 23 23:55:59.154261 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 23 23:55:59.154300 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 23 23:55:59.154331 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 23 23:55:59.154361 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 23 23:55:59.154393 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 23 23:55:59.154424 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 23 23:55:59.154454 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Jan 23 23:55:59.154488 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Jan 23 23:55:59.154519 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 23 23:55:59.154552 kernel: fuse: init (API version 7.39) Jan 23 23:55:59.154582 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 23 23:55:59.154614 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 23 23:55:59.154643 kernel: loop: module loaded Jan 23 23:55:59.154675 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 23 23:55:59.154705 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 23 23:55:59.154736 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 23 23:55:59.154766 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 23 23:55:59.154799 systemd[1]: Mounted media.mount - External Media Directory. 
Jan 23 23:55:59.154950 systemd-journald[1619]: Collecting audit messages is disabled. Jan 23 23:55:59.155009 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 23 23:55:59.155043 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 23 23:55:59.155073 kernel: ACPI: bus type drm_connector registered Jan 23 23:55:59.155125 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 23 23:55:59.155161 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 23 23:55:59.155211 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 23:55:59.155247 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 23 23:55:59.155283 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 23 23:55:59.155314 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 23 23:55:59.155342 systemd-journald[1619]: Journal started Jan 23 23:55:59.155393 systemd-journald[1619]: Runtime Journal (/run/log/journal/ec2712ea076583730e8630628c196e5a) is 8.0M, max 75.3M, 67.3M free. Jan 23 23:55:59.158063 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 23 23:55:59.163547 systemd[1]: Started systemd-journald.service - Journal Service. Jan 23 23:55:59.169288 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 23 23:55:59.169631 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 23 23:55:59.178014 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 23 23:55:59.178379 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 23 23:55:59.182065 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 23 23:55:59.182417 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 23 23:55:59.186142 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 23 23:55:59.186736 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 23 23:55:59.192465 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 23 23:55:59.196956 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 23 23:55:59.201875 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 23 23:55:59.231984 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 23 23:55:59.244212 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 23 23:55:59.261142 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 23 23:55:59.264126 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 23 23:55:59.275154 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 23 23:55:59.293233 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 23 23:55:59.297097 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 23 23:55:59.308116 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 23 23:55:59.311997 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Jan 23 23:55:59.319136 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 23 23:55:59.340132 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 23 23:55:59.351694 systemd-journald[1619]: Time spent on flushing to /var/log/journal/ec2712ea076583730e8630628c196e5a is 76.315ms for 890 entries. Jan 23 23:55:59.351694 systemd-journald[1619]: System Journal (/var/log/journal/ec2712ea076583730e8630628c196e5a) is 8.0M, max 195.6M, 187.6M free. Jan 23 23:55:59.453069 systemd-journald[1619]: Received client request to flush runtime journal. Jan 23 23:55:59.352120 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 23 23:55:59.362602 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 23 23:55:59.387295 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 23 23:55:59.393397 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 23 23:55:59.408899 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 23:55:59.427087 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 23 23:55:59.458596 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 23 23:55:59.484158 udevadm[1670]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 23 23:55:59.486461 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 23 23:55:59.503618 systemd-tmpfiles[1660]: ACLs are not supported, ignoring. Jan 23 23:55:59.503921 systemd-tmpfiles[1660]: ACLs are not supported, ignoring. Jan 23 23:55:59.513740 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 23 23:55:59.523317 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 23 23:55:59.589947 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 23 23:55:59.598181 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 23 23:55:59.638441 systemd-tmpfiles[1683]: ACLs are not supported, ignoring. Jan 23 23:55:59.638482 systemd-tmpfiles[1683]: ACLs are not supported, ignoring. Jan 23 23:55:59.649087 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 23:56:00.297592 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 23 23:56:00.309366 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 23:56:00.362782 systemd-udevd[1689]: Using default interface naming scheme 'v255'. Jan 23 23:56:00.411044 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 23:56:00.430121 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 23 23:56:00.460473 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 23 23:56:00.525591 (udev-worker)[1703]: Network interface NamePolicy= disabled on kernel command line. Jan 23 23:56:00.526232 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Jan 23 23:56:00.612879 systemd[1]: Started systemd-userdbd.service - User Database Manager. 
Jan 23 23:56:00.807233 systemd-networkd[1692]: lo: Link UP Jan 23 23:56:00.807253 systemd-networkd[1692]: lo: Gained carrier Jan 23 23:56:00.810722 systemd-networkd[1692]: Enumeration completed Jan 23 23:56:00.811029 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 23 23:56:00.823495 systemd-networkd[1692]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 23:56:00.823521 systemd-networkd[1692]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 23 23:56:00.829159 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 23 23:56:00.835633 systemd-networkd[1692]: eth0: Link UP Jan 23 23:56:00.837098 systemd-networkd[1692]: eth0: Gained carrier Jan 23 23:56:00.837146 systemd-networkd[1692]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 23:56:00.847948 systemd-networkd[1692]: eth0: DHCPv4 address 172.31.16.109/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jan 23 23:56:00.861132 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 34 scanned by (udev-worker) (1703) Jan 23 23:56:00.882418 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 23:56:01.069264 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 23 23:56:01.088910 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 23:56:01.130490 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jan 23 23:56:01.144106 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 23 23:56:01.163898 lvm[1818]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 23 23:56:01.203564 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 23 23:56:01.210425 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 23 23:56:01.221154 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 23 23:56:01.241873 lvm[1821]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 23 23:56:01.280613 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 23 23:56:01.287932 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 23 23:56:01.291379 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 23 23:56:01.291437 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 23 23:56:01.294017 systemd[1]: Reached target machines.target - Containers. Jan 23 23:56:01.298117 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 23 23:56:01.307242 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 23 23:56:01.319163 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 23 23:56:01.327336 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 23:56:01.341067 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... 
Jan 23 23:56:01.356137 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 23 23:56:01.364287 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 23 23:56:01.378204 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 23 23:56:01.403260 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 23 23:56:01.408302 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 23 23:56:01.422893 kernel: loop0: detected capacity change from 0 to 52536 Jan 23 23:56:01.427053 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 23 23:56:01.516910 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 23 23:56:01.546946 kernel: loop1: detected capacity change from 0 to 114328 Jan 23 23:56:01.615872 kernel: loop2: detected capacity change from 0 to 114432 Jan 23 23:56:01.668901 kernel: loop3: detected capacity change from 0 to 207008 Jan 23 23:56:01.788903 kernel: loop4: detected capacity change from 0 to 52536 Jan 23 23:56:01.814863 kernel: loop5: detected capacity change from 0 to 114328 Jan 23 23:56:01.841872 kernel: loop6: detected capacity change from 0 to 114432 Jan 23 23:56:01.865972 kernel: loop7: detected capacity change from 0 to 207008 Jan 23 23:56:01.899048 (sd-merge)[1844]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Jan 23 23:56:01.900107 (sd-merge)[1844]: Merged extensions into '/usr'. Jan 23 23:56:01.906932 systemd[1]: Reloading requested from client PID 1829 ('systemd-sysext') (unit systemd-sysext.service)... Jan 23 23:56:01.907126 systemd[1]: Reloading... Jan 23 23:56:02.056076 zram_generator::config[1875]: No configuration found. Jan 23 23:56:02.140539 ldconfig[1825]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 23 23:56:02.272980 systemd-networkd[1692]: eth0: Gained IPv6LL Jan 23 23:56:02.324229 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 23 23:56:02.478015 systemd[1]: Reloading finished in 569 ms. Jan 23 23:56:02.504264 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 23 23:56:02.508075 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 23 23:56:02.511733 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 23 23:56:02.538287 systemd[1]: Starting ensure-sysext.service... Jan 23 23:56:02.547201 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 23 23:56:02.556312 systemd[1]: Reloading requested from client PID 1933 ('systemctl') (unit ensure-sysext.service)... Jan 23 23:56:02.556505 systemd[1]: Reloading... Jan 23 23:56:02.614875 systemd-tmpfiles[1934]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 23 23:56:02.615564 systemd-tmpfiles[1934]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 23 23:56:02.618183 systemd-tmpfiles[1934]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 23 23:56:02.620680 systemd-tmpfiles[1934]: ACLs are not supported, ignoring. 
Jan 23 23:56:02.621033 systemd-tmpfiles[1934]: ACLs are not supported, ignoring. Jan 23 23:56:02.629819 systemd-tmpfiles[1934]: Detected autofs mount point /boot during canonicalization of boot. Jan 23 23:56:02.630454 systemd-tmpfiles[1934]: Skipping /boot Jan 23 23:56:02.651997 systemd-tmpfiles[1934]: Detected autofs mount point /boot during canonicalization of boot. Jan 23 23:56:02.652220 systemd-tmpfiles[1934]: Skipping /boot Jan 23 23:56:02.723894 zram_generator::config[1961]: No configuration found. Jan 23 23:56:02.962372 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 23 23:56:03.116056 systemd[1]: Reloading finished in 558 ms. Jan 23 23:56:03.149960 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 23:56:03.164222 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 23 23:56:03.180153 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 23 23:56:03.191156 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 23 23:56:03.204796 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 23 23:56:03.220199 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 23 23:56:03.248873 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 23:56:03.255039 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 23 23:56:03.274048 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 23 23:56:03.291310 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 23 23:56:03.293951 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 23:56:03.311940 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 23:56:03.312462 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 23:56:03.331083 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 23 23:56:03.331480 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 23 23:56:03.360146 systemd[1]: Finished ensure-sysext.service. Jan 23 23:56:03.367275 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 23 23:56:03.371892 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 23 23:56:03.376769 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 23 23:56:03.377165 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 23 23:56:03.384405 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 23 23:56:03.384757 augenrules[2053]: No rules Jan 23 23:56:03.385672 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 23 23:56:03.394322 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 23 23:56:03.415449 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Jan 23 23:56:03.426669 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 23 23:56:03.439378 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 23:56:03.439456 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 23 23:56:03.439559 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 23 23:56:03.439630 systemd[1]: Reached target time-set.target - System Time Set. Jan 23 23:56:03.467121 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 23 23:56:03.471915 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 23 23:56:03.474198 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 23 23:56:03.494246 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 23 23:56:03.499479 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 23 23:56:03.522398 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 23 23:56:03.528514 systemd-resolved[2026]: Positive Trust Anchors: Jan 23 23:56:03.528551 systemd-resolved[2026]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 23 23:56:03.528616 systemd-resolved[2026]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 23 23:56:03.544997 systemd-resolved[2026]: Defaulting to hostname 'linux'. Jan 23 23:56:03.548346 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 23 23:56:03.551119 systemd[1]: Reached target network.target - Network. Jan 23 23:56:03.553216 systemd[1]: Reached target network-online.target - Network is Online. Jan 23 23:56:03.555685 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 23 23:56:03.558438 systemd[1]: Reached target sysinit.target - System Initialization. Jan 23 23:56:03.561035 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 23 23:56:03.563914 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 23 23:56:03.567080 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 23 23:56:03.569689 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 23 23:56:03.572560 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 23 23:56:03.575422 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 23 23:56:03.575469 systemd[1]: Reached target paths.target - Path Units. 
Jan 23 23:56:03.577551 systemd[1]: Reached target timers.target - Timer Units. Jan 23 23:56:03.581189 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 23 23:56:03.586571 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 23 23:56:03.591900 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 23 23:56:03.595952 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 23 23:56:03.598499 systemd[1]: Reached target sockets.target - Socket Units. Jan 23 23:56:03.600762 systemd[1]: Reached target basic.target - Basic System. Jan 23 23:56:03.603280 systemd[1]: System is tainted: cgroupsv1 Jan 23 23:56:03.603360 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 23 23:56:03.603412 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 23 23:56:03.612152 systemd[1]: Starting containerd.service - containerd container runtime... Jan 23 23:56:03.620139 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 23 23:56:03.637125 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 23 23:56:03.645000 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 23 23:56:03.651149 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 23 23:56:03.653629 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 23 23:56:03.660046 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 23:56:03.671463 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 23 23:56:03.707240 systemd[1]: Started ntpd.service - Network Time Service. Jan 23 23:56:03.722199 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 23 23:56:03.735040 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 23 23:56:03.744696 systemd[1]: Starting setup-oem.service - Setup OEM... Jan 23 23:56:03.751417 jq[2081]: false Jan 23 23:56:03.781022 dbus-daemon[2080]: [system] SELinux support is enabled Jan 23 23:56:03.786014 dbus-daemon[2080]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1692 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 23 23:56:03.788106 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
Jan 23 23:56:03.809983 extend-filesystems[2082]: Found loop4 Jan 23 23:56:03.809983 extend-filesystems[2082]: Found loop5 Jan 23 23:56:03.809983 extend-filesystems[2082]: Found loop6 Jan 23 23:56:03.809983 extend-filesystems[2082]: Found loop7 Jan 23 23:56:03.846811 extend-filesystems[2082]: Found nvme0n1 Jan 23 23:56:03.846811 extend-filesystems[2082]: Found nvme0n1p1 Jan 23 23:56:03.846811 extend-filesystems[2082]: Found nvme0n1p2 Jan 23 23:56:03.846811 extend-filesystems[2082]: Found nvme0n1p3 Jan 23 23:56:03.846811 extend-filesystems[2082]: Found usr Jan 23 23:56:03.846811 extend-filesystems[2082]: Found nvme0n1p4 Jan 23 23:56:03.846811 extend-filesystems[2082]: Found nvme0n1p6 Jan 23 23:56:03.846811 extend-filesystems[2082]: Found nvme0n1p7 Jan 23 23:56:03.846811 extend-filesystems[2082]: Found nvme0n1p9 Jan 23 23:56:03.846811 extend-filesystems[2082]: Checking size of /dev/nvme0n1p9 Jan 23 23:56:03.810466 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 23 23:56:03.835622 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 23 23:56:03.874523 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 23 23:56:03.897731 systemd[1]: Starting update-engine.service - Update Engine... Jan 23 23:56:03.911044 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 23 23:56:03.917461 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 23 23:56:03.936394 extend-filesystems[2082]: Resized partition /dev/nvme0n1p9 Jan 23 23:56:03.938709 extend-filesystems[2117]: resize2fs 1.47.1 (20-May-2024) Jan 23 23:56:03.948117 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 23 23:56:03.948622 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 23 23:56:03.970869 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 3587067 blocks Jan 23 23:56:03.970973 ntpd[2087]: 23 Jan 23:56:03 ntpd[2087]: ntpd 4.2.8p17@1.4004-o Fri Jan 23 21:53:23 UTC 2026 (1): Starting Jan 23 23:56:03.970973 ntpd[2087]: 23 Jan 23:56:03 ntpd[2087]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 23 23:56:03.970973 ntpd[2087]: 23 Jan 23:56:03 ntpd[2087]: ---------------------------------------------------- Jan 23 23:56:03.970973 ntpd[2087]: 23 Jan 23:56:03 ntpd[2087]: ntp-4 is maintained by Network Time Foundation, Jan 23 23:56:03.970973 ntpd[2087]: 23 Jan 23:56:03 ntpd[2087]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 23 23:56:03.970973 ntpd[2087]: 23 Jan 23:56:03 ntpd[2087]: corporation. Support and training for ntp-4 are Jan 23 23:56:03.970973 ntpd[2087]: 23 Jan 23:56:03 ntpd[2087]: available at https://www.nwtime.org/support Jan 23 23:56:03.970973 ntpd[2087]: 23 Jan 23:56:03 ntpd[2087]: ---------------------------------------------------- Jan 23 23:56:03.968201 ntpd[2087]: ntpd 4.2.8p17@1.4004-o Fri Jan 23 21:53:23 UTC 2026 (1): Starting Jan 23 23:56:03.978526 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 23 23:56:03.968249 ntpd[2087]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 23 23:56:03.979054 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 23 23:56:03.968269 ntpd[2087]: ---------------------------------------------------- Jan 23 23:56:03.968289 ntpd[2087]: ntp-4 is maintained by Network Time Foundation, Jan 23 23:56:03.968308 ntpd[2087]: Inc. 
(NTF), a non-profit 501(c)(3) public-benefit Jan 23 23:56:03.968327 ntpd[2087]: corporation. Support and training for ntp-4 are Jan 23 23:56:03.968346 ntpd[2087]: available at https://www.nwtime.org/support Jan 23 23:56:03.968364 ntpd[2087]: ---------------------------------------------------- Jan 23 23:56:04.004984 jq[2115]: true Jan 23 23:56:04.022287 systemd[1]: motdgen.service: Deactivated successfully. Jan 23 23:56:04.024659 ntpd[2087]: proto: precision = 0.096 usec (-23) Jan 23 23:56:04.024818 ntpd[2087]: 23 Jan 23:56:04 ntpd[2087]: proto: precision = 0.096 usec (-23) Jan 23 23:56:04.028912 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 23 23:56:04.032426 ntpd[2087]: basedate set to 2026-01-11 Jan 23 23:56:04.032497 ntpd[2087]: gps base set to 2026-01-11 (week 2401) Jan 23 23:56:04.032636 ntpd[2087]: 23 Jan 23:56:04 ntpd[2087]: basedate set to 2026-01-11 Jan 23 23:56:04.036866 ntpd[2087]: 23 Jan 23:56:04 ntpd[2087]: gps base set to 2026-01-11 (week 2401) Jan 23 23:56:04.069619 ntpd[2087]: Listen and drop on 0 v6wildcard [::]:123 Jan 23 23:56:04.075246 ntpd[2087]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 23 23:56:04.079113 ntpd[2087]: 23 Jan 23:56:04 ntpd[2087]: Listen and drop on 0 v6wildcard [::]:123 Jan 23 23:56:04.079113 ntpd[2087]: 23 Jan 23:56:04 ntpd[2087]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 23 23:56:04.075517 ntpd[2087]: Listen normally on 2 lo 127.0.0.1:123 Jan 23 23:56:04.096110 ntpd[2087]: 23 Jan 23:56:04 ntpd[2087]: Listen normally on 2 lo 127.0.0.1:123 Jan 23 23:56:04.096110 ntpd[2087]: 23 Jan 23:56:04 ntpd[2087]: Listen normally on 3 eth0 172.31.16.109:123 Jan 23 23:56:04.096110 ntpd[2087]: 23 Jan 23:56:04 ntpd[2087]: Listen normally on 4 lo [::1]:123 Jan 23 23:56:04.096110 ntpd[2087]: 23 Jan 23:56:04 ntpd[2087]: Listen normally on 5 eth0 [fe80::464:63ff:feff:efb3%2]:123 Jan 23 23:56:04.096110 ntpd[2087]: 23 Jan 23:56:04 ntpd[2087]: Listening on routing socket on fd #22 for interface updates Jan 23 23:56:04.088447 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 23 23:56:04.084070 ntpd[2087]: Listen normally on 3 eth0 172.31.16.109:123 Jan 23 23:56:04.084146 ntpd[2087]: Listen normally on 4 lo [::1]:123 Jan 23 23:56:04.084225 ntpd[2087]: Listen normally on 5 eth0 [fe80::464:63ff:feff:efb3%2]:123 Jan 23 23:56:04.084296 ntpd[2087]: Listening on routing socket on fd #22 for interface updates Jan 23 23:56:04.097421 dbus-daemon[2080]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 23 23:56:04.108774 (ntainerd)[2135]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 23 23:56:04.131166 tar[2121]: linux-arm64/LICENSE Jan 23 23:56:04.131166 tar[2121]: linux-arm64/helm Jan 23 23:56:04.126594 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 23 23:56:04.126641 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 23 23:56:04.141900 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jan 23 23:56:04.154130 jq[2133]: true Jan 23 23:56:04.144319 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). 
Jan 23 23:56:04.144361 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 23 23:56:04.159039 ntpd[2087]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 23 23:56:04.168813 ntpd[2087]: 23 Jan 23:56:04 ntpd[2087]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 23 23:56:04.168813 ntpd[2087]: 23 Jan 23:56:04 ntpd[2087]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 23 23:56:04.159105 ntpd[2087]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 23 23:56:04.204253 coreos-metadata[2078]: Jan 23 23:56:04.189 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 23 23:56:04.211710 coreos-metadata[2078]: Jan 23 23:56:04.210 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Jan 23 23:56:04.225382 coreos-metadata[2078]: Jan 23 23:56:04.218 INFO Fetch successful Jan 23 23:56:04.225382 coreos-metadata[2078]: Jan 23 23:56:04.219 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Jan 23 23:56:04.228947 coreos-metadata[2078]: Jan 23 23:56:04.227 INFO Fetch successful Jan 23 23:56:04.228947 coreos-metadata[2078]: Jan 23 23:56:04.227 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Jan 23 23:56:04.240312 coreos-metadata[2078]: Jan 23 23:56:04.233 INFO Fetch successful Jan 23 23:56:04.240312 coreos-metadata[2078]: Jan 23 23:56:04.233 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Jan 23 23:56:04.234948 systemd[1]: Finished setup-oem.service - Setup OEM. Jan 23 23:56:04.252247 coreos-metadata[2078]: Jan 23 23:56:04.252 INFO Fetch successful Jan 23 23:56:04.254272 coreos-metadata[2078]: Jan 23 23:56:04.252 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Jan 23 23:56:04.260475 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Jan 23 23:56:04.274014 coreos-metadata[2078]: Jan 23 23:56:04.264 INFO Fetch failed with 404: resource not found Jan 23 23:56:04.274014 coreos-metadata[2078]: Jan 23 23:56:04.264 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Jan 23 23:56:04.274229 coreos-metadata[2078]: Jan 23 23:56:04.273 INFO Fetch successful Jan 23 23:56:04.274490 coreos-metadata[2078]: Jan 23 23:56:04.274 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Jan 23 23:56:04.278259 coreos-metadata[2078]: Jan 23 23:56:04.278 INFO Fetch successful Jan 23 23:56:04.278259 coreos-metadata[2078]: Jan 23 23:56:04.278 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Jan 23 23:56:04.280018 coreos-metadata[2078]: Jan 23 23:56:04.279 INFO Fetch successful Jan 23 23:56:04.280018 coreos-metadata[2078]: Jan 23 23:56:04.279 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Jan 23 23:56:04.287668 coreos-metadata[2078]: Jan 23 23:56:04.285 INFO Fetch successful Jan 23 23:56:04.287668 coreos-metadata[2078]: Jan 23 23:56:04.285 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Jan 23 23:56:04.287668 coreos-metadata[2078]: Jan 23 23:56:04.287 INFO Fetch successful Jan 23 23:56:04.320411 update_engine[2110]: I20260123 23:56:04.319988 2110 main.cc:92] Flatcar Update Engine starting Jan 23 23:56:04.339473 systemd[1]: Started update-engine.service - Update Engine. 
Jan 23 23:56:04.343242 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 23 23:56:04.349085 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 23 23:56:04.355941 update_engine[2110]: I20260123 23:56:04.354127 2110 update_check_scheduler.cc:74] Next update check in 6m42s Jan 23 23:56:04.375912 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 3587067 Jan 23 23:56:04.389715 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 23 23:56:04.406556 extend-filesystems[2117]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jan 23 23:56:04.406556 extend-filesystems[2117]: old_desc_blocks = 1, new_desc_blocks = 2 Jan 23 23:56:04.406556 extend-filesystems[2117]: The filesystem on /dev/nvme0n1p9 is now 3587067 (4k) blocks long. Jan 23 23:56:04.417582 extend-filesystems[2082]: Resized filesystem in /dev/nvme0n1p9 Jan 23 23:56:04.427352 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 23 23:56:04.434439 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 23 23:56:04.454523 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 23 23:56:04.457430 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 23 23:56:04.480395 bash[2192]: Updated "/home/core/.ssh/authorized_keys" Jan 23 23:56:04.497660 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 23 23:56:04.529962 systemd[1]: Starting sshkeys.service... Jan 23 23:56:04.564033 amazon-ssm-agent[2159]: Initializing new seelog logger Jan 23 23:56:04.564637 amazon-ssm-agent[2159]: New Seelog Logger Creation Complete Jan 23 23:56:04.567281 amazon-ssm-agent[2159]: 2026/01/23 23:56:04 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 23:56:04.567281 amazon-ssm-agent[2159]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 23:56:04.567281 amazon-ssm-agent[2159]: 2026/01/23 23:56:04 processing appconfig overrides Jan 23 23:56:04.569705 amazon-ssm-agent[2159]: 2026/01/23 23:56:04 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 23:56:04.575922 amazon-ssm-agent[2159]: 2026-01-23 23:56:04 INFO Proxy environment variables: Jan 23 23:56:04.575922 amazon-ssm-agent[2159]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 23:56:04.575922 amazon-ssm-agent[2159]: 2026/01/23 23:56:04 processing appconfig overrides Jan 23 23:56:04.575922 amazon-ssm-agent[2159]: 2026/01/23 23:56:04 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 23:56:04.575922 amazon-ssm-agent[2159]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 23:56:04.575922 amazon-ssm-agent[2159]: 2026/01/23 23:56:04 processing appconfig overrides Jan 23 23:56:04.590076 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 23 23:56:04.623170 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 34 scanned by (udev-worker) (2200) Jan 23 23:56:04.620576 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 23 23:56:04.633286 amazon-ssm-agent[2159]: 2026/01/23 23:56:04 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. 
Jan 23 23:56:04.633286 amazon-ssm-agent[2159]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 23:56:04.633286 amazon-ssm-agent[2159]: 2026/01/23 23:56:04 processing appconfig overrides Jan 23 23:56:04.680958 amazon-ssm-agent[2159]: 2026-01-23 23:56:04 INFO https_proxy: Jan 23 23:56:04.784173 amazon-ssm-agent[2159]: 2026-01-23 23:56:04 INFO http_proxy: Jan 23 23:56:04.827083 systemd-logind[2101]: Watching system buttons on /dev/input/event0 (Power Button) Jan 23 23:56:04.827138 systemd-logind[2101]: Watching system buttons on /dev/input/event1 (Sleep Button) Jan 23 23:56:04.832613 systemd-logind[2101]: New seat seat0. Jan 23 23:56:04.838114 systemd[1]: Started systemd-logind.service - User Login Management. Jan 23 23:56:04.899968 amazon-ssm-agent[2159]: 2026-01-23 23:56:04 INFO no_proxy: Jan 23 23:56:04.997881 amazon-ssm-agent[2159]: 2026-01-23 23:56:04 INFO Checking if agent identity type OnPrem can be assumed Jan 23 23:56:05.055774 coreos-metadata[2206]: Jan 23 23:56:05.055 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 23 23:56:05.055774 coreos-metadata[2206]: Jan 23 23:56:05.055 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jan 23 23:56:05.055774 coreos-metadata[2206]: Jan 23 23:56:05.055 INFO Fetch successful Jan 23 23:56:05.055774 coreos-metadata[2206]: Jan 23 23:56:05.055 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 23 23:56:05.056460 coreos-metadata[2206]: Jan 23 23:56:05.055 INFO Fetch successful Jan 23 23:56:05.058804 unknown[2206]: wrote ssh authorized keys file for user: core Jan 23 23:56:05.098176 amazon-ssm-agent[2159]: 2026-01-23 23:56:04 INFO Checking if agent identity type EC2 can be assumed Jan 23 23:56:05.138147 update-ssh-keys[2267]: Updated "/home/core/.ssh/authorized_keys" Jan 23 23:56:05.137520 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 23 23:56:05.158666 systemd[1]: Finished sshkeys.service. Jan 23 23:56:05.165379 dbus-daemon[2080]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 23 23:56:05.167604 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jan 23 23:56:05.179069 dbus-daemon[2080]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=2149 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 23 23:56:05.193385 systemd[1]: Starting polkit.service - Authorization Manager... Jan 23 23:56:05.199869 amazon-ssm-agent[2159]: 2026-01-23 23:56:05 INFO Agent will take identity from EC2 Jan 23 23:56:05.261887 polkitd[2288]: Started polkitd version 121 Jan 23 23:56:05.300169 amazon-ssm-agent[2159]: 2026-01-23 23:56:05 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 23 23:56:05.327753 polkitd[2288]: Loading rules from directory /etc/polkit-1/rules.d Jan 23 23:56:05.327895 polkitd[2288]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 23 23:56:05.333168 polkitd[2288]: Finished loading, compiling and executing 2 rules Jan 23 23:56:05.344802 dbus-daemon[2080]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 23 23:56:05.346447 systemd[1]: Started polkit.service - Authorization Manager. 
Jan 23 23:56:05.348690 polkitd[2288]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 23 23:56:05.390800 containerd[2135]: time="2026-01-23T23:56:05.383887199Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 23 23:56:05.399523 amazon-ssm-agent[2159]: 2026-01-23 23:56:05 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 23 23:56:05.427600 systemd-hostnamed[2149]: Hostname set to (transient) Jan 23 23:56:05.427873 systemd-resolved[2026]: System hostname changed to 'ip-172-31-16-109'. Jan 23 23:56:05.430584 locksmithd[2172]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 23 23:56:05.501861 amazon-ssm-agent[2159]: 2026-01-23 23:56:05 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 23 23:56:05.599937 amazon-ssm-agent[2159]: 2026-01-23 23:56:05 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Jan 23 23:56:05.621696 containerd[2135]: time="2026-01-23T23:56:05.621522697Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 23 23:56:05.644429 containerd[2135]: time="2026-01-23T23:56:05.643556665Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.119-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 23 23:56:05.644429 containerd[2135]: time="2026-01-23T23:56:05.643624429Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 23 23:56:05.644429 containerd[2135]: time="2026-01-23T23:56:05.643658893Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 23 23:56:05.644429 containerd[2135]: time="2026-01-23T23:56:05.644080237Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 23 23:56:05.644429 containerd[2135]: time="2026-01-23T23:56:05.644124877Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 23 23:56:05.644429 containerd[2135]: time="2026-01-23T23:56:05.644301709Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 23 23:56:05.644429 containerd[2135]: time="2026-01-23T23:56:05.644357101Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 23 23:56:05.660876 containerd[2135]: time="2026-01-23T23:56:05.658037269Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 23 23:56:05.660876 containerd[2135]: time="2026-01-23T23:56:05.658089577Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 23 23:56:05.660876 containerd[2135]: time="2026-01-23T23:56:05.658125685Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 23 23:56:05.660876 containerd[2135]: time="2026-01-23T23:56:05.658150837Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 23 23:56:05.660876 containerd[2135]: time="2026-01-23T23:56:05.658377337Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 23 23:56:05.660876 containerd[2135]: time="2026-01-23T23:56:05.658796449Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 23 23:56:05.660876 containerd[2135]: time="2026-01-23T23:56:05.660460825Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 23 23:56:05.660876 containerd[2135]: time="2026-01-23T23:56:05.660508153Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 23 23:56:05.660876 containerd[2135]: time="2026-01-23T23:56:05.660745897Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 23 23:56:05.669197 containerd[2135]: time="2026-01-23T23:56:05.668909149Z" level=info msg="metadata content store policy set" policy=shared Jan 23 23:56:05.682884 containerd[2135]: time="2026-01-23T23:56:05.680901877Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 23 23:56:05.682884 containerd[2135]: time="2026-01-23T23:56:05.681025777Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 23 23:56:05.682884 containerd[2135]: time="2026-01-23T23:56:05.681064057Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 23 23:56:05.682884 containerd[2135]: time="2026-01-23T23:56:05.681101749Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 23 23:56:05.682884 containerd[2135]: time="2026-01-23T23:56:05.681139321Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 23 23:56:05.682884 containerd[2135]: time="2026-01-23T23:56:05.681410821Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 23 23:56:05.695872 containerd[2135]: time="2026-01-23T23:56:05.693444673Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 23 23:56:05.698597 amazon-ssm-agent[2159]: 2026-01-23 23:56:05 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Jan 23 23:56:05.699614 containerd[2135]: time="2026-01-23T23:56:05.699421933Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 23 23:56:05.699614 containerd[2135]: time="2026-01-23T23:56:05.699484897Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 23 23:56:05.699614 containerd[2135]: time="2026-01-23T23:56:05.699531037Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." 
type=io.containerd.sandbox.controller.v1 Jan 23 23:56:05.699614 containerd[2135]: time="2026-01-23T23:56:05.699564229Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 23 23:56:05.699614 containerd[2135]: time="2026-01-23T23:56:05.699598201Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 23 23:56:05.699917 containerd[2135]: time="2026-01-23T23:56:05.699636757Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 23 23:56:05.699917 containerd[2135]: time="2026-01-23T23:56:05.699670717Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 23 23:56:05.699917 containerd[2135]: time="2026-01-23T23:56:05.699703201Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 23 23:56:05.699917 containerd[2135]: time="2026-01-23T23:56:05.699736201Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 23 23:56:05.699917 containerd[2135]: time="2026-01-23T23:56:05.699765709Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 23 23:56:05.699917 containerd[2135]: time="2026-01-23T23:56:05.699794761Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 23 23:56:05.699917 containerd[2135]: time="2026-01-23T23:56:05.699862945Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 23 23:56:05.699917 containerd[2135]: time="2026-01-23T23:56:05.699909133Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 23 23:56:05.700256 containerd[2135]: time="2026-01-23T23:56:05.699939925Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 23 23:56:05.700256 containerd[2135]: time="2026-01-23T23:56:05.699971929Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 23 23:56:05.700256 containerd[2135]: time="2026-01-23T23:56:05.700001653Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 23 23:56:05.700256 containerd[2135]: time="2026-01-23T23:56:05.700032505Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 23 23:56:05.700256 containerd[2135]: time="2026-01-23T23:56:05.700061485Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 23 23:56:05.700256 containerd[2135]: time="2026-01-23T23:56:05.700091437Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 23 23:56:05.700256 containerd[2135]: time="2026-01-23T23:56:05.700122217Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 23 23:56:05.700256 containerd[2135]: time="2026-01-23T23:56:05.700156393Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 23 23:56:05.700256 containerd[2135]: time="2026-01-23T23:56:05.700198933Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." 
type=io.containerd.grpc.v1 Jan 23 23:56:05.700647 containerd[2135]: time="2026-01-23T23:56:05.700269013Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 23 23:56:05.700647 containerd[2135]: time="2026-01-23T23:56:05.700300849Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 23 23:56:05.700647 containerd[2135]: time="2026-01-23T23:56:05.700337185Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 23 23:56:05.700647 containerd[2135]: time="2026-01-23T23:56:05.700380337Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 23 23:56:05.700647 containerd[2135]: time="2026-01-23T23:56:05.700410109Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 23 23:56:05.700647 containerd[2135]: time="2026-01-23T23:56:05.700436857Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 23 23:56:05.700939 containerd[2135]: time="2026-01-23T23:56:05.700681429Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 23 23:56:05.700939 containerd[2135]: time="2026-01-23T23:56:05.700721653Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 23 23:56:05.700939 containerd[2135]: time="2026-01-23T23:56:05.700748869Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 23 23:56:05.700939 containerd[2135]: time="2026-01-23T23:56:05.700779205Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 23 23:56:05.700939 containerd[2135]: time="2026-01-23T23:56:05.700811137Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 23 23:56:05.710865 containerd[2135]: time="2026-01-23T23:56:05.705994273Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 23 23:56:05.710865 containerd[2135]: time="2026-01-23T23:56:05.706059553Z" level=info msg="NRI interface is disabled by configuration." Jan 23 23:56:05.710865 containerd[2135]: time="2026-01-23T23:56:05.706115113Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 23 23:56:05.711065 containerd[2135]: time="2026-01-23T23:56:05.706785961Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 23 23:56:05.711065 containerd[2135]: time="2026-01-23T23:56:05.706939369Z" level=info msg="Connect containerd service" Jan 23 23:56:05.711065 containerd[2135]: time="2026-01-23T23:56:05.707218729Z" level=info msg="using legacy CRI server" Jan 23 23:56:05.711065 containerd[2135]: time="2026-01-23T23:56:05.707251573Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 23 23:56:05.711065 containerd[2135]: time="2026-01-23T23:56:05.707453437Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 23 23:56:05.711065 containerd[2135]: time="2026-01-23T23:56:05.708567673Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 23 
23:56:05.711065 containerd[2135]: time="2026-01-23T23:56:05.710715721Z" level=info msg="Start subscribing containerd event" Jan 23 23:56:05.711065 containerd[2135]: time="2026-01-23T23:56:05.710802121Z" level=info msg="Start recovering state" Jan 23 23:56:05.711065 containerd[2135]: time="2026-01-23T23:56:05.710955385Z" level=info msg="Start event monitor" Jan 23 23:56:05.711065 containerd[2135]: time="2026-01-23T23:56:05.710980945Z" level=info msg="Start snapshots syncer" Jan 23 23:56:05.711065 containerd[2135]: time="2026-01-23T23:56:05.711003805Z" level=info msg="Start cni network conf syncer for default" Jan 23 23:56:05.711065 containerd[2135]: time="2026-01-23T23:56:05.711022825Z" level=info msg="Start streaming server" Jan 23 23:56:05.722942 containerd[2135]: time="2026-01-23T23:56:05.717046645Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 23 23:56:05.722942 containerd[2135]: time="2026-01-23T23:56:05.717184141Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 23 23:56:05.722942 containerd[2135]: time="2026-01-23T23:56:05.718491205Z" level=info msg="containerd successfully booted in 0.342172s" Jan 23 23:56:05.717451 systemd[1]: Started containerd.service - containerd container runtime. Jan 23 23:56:05.800859 amazon-ssm-agent[2159]: 2026-01-23 23:56:05 INFO [amazon-ssm-agent] Starting Core Agent Jan 23 23:56:05.899263 amazon-ssm-agent[2159]: 2026-01-23 23:56:05 INFO [amazon-ssm-agent] registrar detected. Attempting registration Jan 23 23:56:06.000931 amazon-ssm-agent[2159]: 2026-01-23 23:56:05 INFO [Registrar] Starting registrar module Jan 23 23:56:06.101496 amazon-ssm-agent[2159]: 2026-01-23 23:56:05 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Jan 23 23:56:06.202009 amazon-ssm-agent[2159]: 2026-01-23 23:56:06 INFO [EC2Identity] EC2 registration was successful. Jan 23 23:56:06.210864 amazon-ssm-agent[2159]: 2026-01-23 23:56:06 INFO [CredentialRefresher] credentialRefresher has started Jan 23 23:56:06.210864 amazon-ssm-agent[2159]: 2026-01-23 23:56:06 INFO [CredentialRefresher] Starting credentials refresher loop Jan 23 23:56:06.210864 amazon-ssm-agent[2159]: 2026-01-23 23:56:06 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jan 23 23:56:06.302488 amazon-ssm-agent[2159]: 2026-01-23 23:56:06 INFO [CredentialRefresher] Next credential rotation will be in 31.7249920369 minutes Jan 23 23:56:06.368643 tar[2121]: linux-arm64/README.md Jan 23 23:56:06.409989 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 23 23:56:06.826540 sshd_keygen[2123]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 23 23:56:06.873053 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 23 23:56:06.885419 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 23 23:56:06.898917 systemd[1]: Started sshd@0-172.31.16.109:22-4.153.228.146:36222.service - OpenSSH per-connection server daemon (4.153.228.146:36222). Jan 23 23:56:06.923221 systemd[1]: issuegen.service: Deactivated successfully. Jan 23 23:56:06.923744 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 23 23:56:06.944096 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 23 23:56:06.982480 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 23 23:56:07.007820 systemd[1]: Started getty@tty1.service - Getty on tty1. 
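The long "Start cri plugin with config" record above is containerd echoing its effective CRI configuration; among other things it shows SystemdCgroup:false for the runc runtime and registry.k8s.io/pause:3.8 as the sandbox image. A hedged sketch for reading that runc option back out of a config file, assuming the conventional /etc/containerd/config.toml location and Python 3.11+ for tomllib; on an unmodified image the dump may simply reflect built-in defaults rather than a file on disk:

```python
# Illustrative only: inspect the CRI runc options that containerd reported above.
# The config path is an assumption, not taken from this log.
import tomllib

with open("/etc/containerd/config.toml", "rb") as f:
    cfg = tomllib.load(f)

runc_opts = (
    cfg.get("plugins", {})
       .get("io.containerd.grpc.v1.cri", {})
       .get("containerd", {})
       .get("runtimes", {})
       .get("runc", {})
       .get("options", {})
)
print("SystemdCgroup:", runc_opts.get("SystemdCgroup", False))
```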
Jan 23 23:56:07.015341 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 23 23:56:07.019613 systemd[1]: Reached target getty.target - Login Prompts. Jan 23 23:56:07.030096 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 23:56:07.040665 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 23 23:56:07.044625 systemd[1]: Startup finished in 9.921s (kernel) + 9.537s (userspace) = 19.458s. Jan 23 23:56:07.055083 (kubelet)[2372]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 23:56:07.238986 amazon-ssm-agent[2159]: 2026-01-23 23:56:07 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jan 23 23:56:07.341168 amazon-ssm-agent[2159]: 2026-01-23 23:56:07 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2380) started Jan 23 23:56:07.441635 amazon-ssm-agent[2159]: 2026-01-23 23:56:07 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jan 23 23:56:07.466096 sshd[2354]: Accepted publickey for core from 4.153.228.146 port 36222 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:56:07.470542 sshd[2354]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:56:07.500323 systemd-logind[2101]: New session 1 of user core. Jan 23 23:56:07.501449 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 23 23:56:07.511633 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 23 23:56:07.550179 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 23 23:56:07.565416 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 23 23:56:07.580792 (systemd)[2397]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 23 23:56:07.819471 systemd[2397]: Queued start job for default target default.target. Jan 23 23:56:07.820695 systemd[2397]: Created slice app.slice - User Application Slice. Jan 23 23:56:07.820738 systemd[2397]: Reached target paths.target - Paths. Jan 23 23:56:07.820771 systemd[2397]: Reached target timers.target - Timers. Jan 23 23:56:07.827030 systemd[2397]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 23 23:56:07.863802 systemd[2397]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 23 23:56:07.864154 systemd[2397]: Reached target sockets.target - Sockets. Jan 23 23:56:07.864192 systemd[2397]: Reached target basic.target - Basic System. Jan 23 23:56:07.864278 systemd[2397]: Reached target default.target - Main User Target. Jan 23 23:56:07.864337 systemd[2397]: Startup finished in 271ms. Jan 23 23:56:07.864943 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 23 23:56:07.874487 systemd[1]: Started session-1.scope - Session 1 of User core. 
Jan 23 23:56:08.180812 kubelet[2372]: E0123 23:56:08.180662 2372 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 23:56:08.186160 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 23:56:08.186582 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 23:56:08.266281 systemd[1]: Started sshd@1-172.31.16.109:22-4.153.228.146:36230.service - OpenSSH per-connection server daemon (4.153.228.146:36230). Jan 23 23:56:08.795377 sshd[2413]: Accepted publickey for core from 4.153.228.146 port 36230 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:56:08.797994 sshd[2413]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:56:08.805934 systemd-logind[2101]: New session 2 of user core. Jan 23 23:56:08.817452 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 23 23:56:09.175179 sshd[2413]: pam_unix(sshd:session): session closed for user core Jan 23 23:56:09.180017 systemd[1]: sshd@1-172.31.16.109:22-4.153.228.146:36230.service: Deactivated successfully. Jan 23 23:56:09.186973 systemd[1]: session-2.scope: Deactivated successfully. Jan 23 23:56:09.188483 systemd-logind[2101]: Session 2 logged out. Waiting for processes to exit. Jan 23 23:56:09.190270 systemd-logind[2101]: Removed session 2. Jan 23 23:56:09.256299 systemd[1]: Started sshd@2-172.31.16.109:22-4.153.228.146:36246.service - OpenSSH per-connection server daemon (4.153.228.146:36246). Jan 23 23:56:09.749410 sshd[2421]: Accepted publickey for core from 4.153.228.146 port 36246 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:56:09.752208 sshd[2421]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:56:09.760766 systemd-logind[2101]: New session 3 of user core. Jan 23 23:56:09.773463 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 23 23:56:10.097982 sshd[2421]: pam_unix(sshd:session): session closed for user core Jan 23 23:56:10.105242 systemd-logind[2101]: Session 3 logged out. Waiting for processes to exit. Jan 23 23:56:10.106559 systemd[1]: sshd@2-172.31.16.109:22-4.153.228.146:36246.service: Deactivated successfully. Jan 23 23:56:10.110311 systemd[1]: session-3.scope: Deactivated successfully. Jan 23 23:56:10.114302 systemd-logind[2101]: Removed session 3. Jan 23 23:56:10.188279 systemd[1]: Started sshd@3-172.31.16.109:22-4.153.228.146:36262.service - OpenSSH per-connection server daemon (4.153.228.146:36262). Jan 23 23:56:10.686243 sshd[2429]: Accepted publickey for core from 4.153.228.146 port 36262 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:56:10.690305 sshd[2429]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:56:10.699745 systemd-logind[2101]: New session 4 of user core. Jan 23 23:56:10.706684 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 23 23:56:11.451823 systemd-resolved[2026]: Clock change detected. Flushing caches. Jan 23 23:56:11.517998 sshd[2429]: pam_unix(sshd:session): session closed for user core Jan 23 23:56:11.522891 systemd[1]: sshd@3-172.31.16.109:22-4.153.228.146:36262.service: Deactivated successfully. 
Jan 23 23:56:11.530023 systemd-logind[2101]: Session 4 logged out. Waiting for processes to exit. Jan 23 23:56:11.531406 systemd[1]: session-4.scope: Deactivated successfully. Jan 23 23:56:11.533384 systemd-logind[2101]: Removed session 4. Jan 23 23:56:11.616136 systemd[1]: Started sshd@4-172.31.16.109:22-4.153.228.146:36278.service - OpenSSH per-connection server daemon (4.153.228.146:36278). Jan 23 23:56:12.145444 sshd[2437]: Accepted publickey for core from 4.153.228.146 port 36278 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:56:12.148107 sshd[2437]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:56:12.155628 systemd-logind[2101]: New session 5 of user core. Jan 23 23:56:12.161229 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 23 23:56:12.463772 sudo[2441]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 23 23:56:12.464390 sudo[2441]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 23:56:12.485223 sudo[2441]: pam_unix(sudo:session): session closed for user root Jan 23 23:56:12.570985 sshd[2437]: pam_unix(sshd:session): session closed for user core Jan 23 23:56:12.578675 systemd[1]: sshd@4-172.31.16.109:22-4.153.228.146:36278.service: Deactivated successfully. Jan 23 23:56:12.580044 systemd-logind[2101]: Session 5 logged out. Waiting for processes to exit. Jan 23 23:56:12.584384 systemd[1]: session-5.scope: Deactivated successfully. Jan 23 23:56:12.586525 systemd-logind[2101]: Removed session 5. Jan 23 23:56:12.670108 systemd[1]: Started sshd@5-172.31.16.109:22-4.153.228.146:36282.service - OpenSSH per-connection server daemon (4.153.228.146:36282). Jan 23 23:56:13.198363 sshd[2446]: Accepted publickey for core from 4.153.228.146 port 36282 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:56:13.201039 sshd[2446]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:56:13.208578 systemd-logind[2101]: New session 6 of user core. Jan 23 23:56:13.216133 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 23 23:56:13.495926 sudo[2451]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 23 23:56:13.497069 sudo[2451]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 23:56:13.503842 sudo[2451]: pam_unix(sudo:session): session closed for user root Jan 23 23:56:13.513803 sudo[2450]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 23 23:56:13.514440 sudo[2450]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 23:56:13.541118 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 23 23:56:13.544924 auditctl[2454]: No rules Jan 23 23:56:13.548022 systemd[1]: audit-rules.service: Deactivated successfully. Jan 23 23:56:13.548778 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 23 23:56:13.559260 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 23 23:56:13.603905 augenrules[2473]: No rules Jan 23 23:56:13.607331 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. 
Jan 23 23:56:13.611442 sudo[2450]: pam_unix(sudo:session): session closed for user root Jan 23 23:56:13.695985 sshd[2446]: pam_unix(sshd:session): session closed for user core Jan 23 23:56:13.700633 systemd[1]: sshd@5-172.31.16.109:22-4.153.228.146:36282.service: Deactivated successfully. Jan 23 23:56:13.708263 systemd[1]: session-6.scope: Deactivated successfully. Jan 23 23:56:13.709836 systemd-logind[2101]: Session 6 logged out. Waiting for processes to exit. Jan 23 23:56:13.711922 systemd-logind[2101]: Removed session 6. Jan 23 23:56:13.785227 systemd[1]: Started sshd@6-172.31.16.109:22-4.153.228.146:36288.service - OpenSSH per-connection server daemon (4.153.228.146:36288). Jan 23 23:56:14.326019 sshd[2482]: Accepted publickey for core from 4.153.228.146 port 36288 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:56:14.328566 sshd[2482]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:56:14.338738 systemd-logind[2101]: New session 7 of user core. Jan 23 23:56:14.345302 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 23 23:56:14.623089 sudo[2486]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 23 23:56:14.624448 sudo[2486]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 23:56:15.148125 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 23 23:56:15.158338 (dockerd)[2503]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 23 23:56:15.577840 dockerd[2503]: time="2026-01-23T23:56:15.577625160Z" level=info msg="Starting up" Jan 23 23:56:16.057969 dockerd[2503]: time="2026-01-23T23:56:16.057510070Z" level=info msg="Loading containers: start." Jan 23 23:56:16.220698 kernel: Initializing XFRM netlink socket Jan 23 23:56:16.260785 (udev-worker)[2524]: Network interface NamePolicy= disabled on kernel command line. Jan 23 23:56:16.346228 systemd-networkd[1692]: docker0: Link UP Jan 23 23:56:16.372342 dockerd[2503]: time="2026-01-23T23:56:16.372077052Z" level=info msg="Loading containers: done." Jan 23 23:56:16.399125 dockerd[2503]: time="2026-01-23T23:56:16.399046848Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 23 23:56:16.399446 dockerd[2503]: time="2026-01-23T23:56:16.399209304Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 23 23:56:16.399446 dockerd[2503]: time="2026-01-23T23:56:16.399399420Z" level=info msg="Daemon has completed initialization" Jan 23 23:56:16.463718 dockerd[2503]: time="2026-01-23T23:56:16.463431024Z" level=info msg="API listen on /run/docker.sock" Jan 23 23:56:16.464847 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 23 23:56:17.720533 containerd[2135]: time="2026-01-23T23:56:17.720477423Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\"" Jan 23 23:56:18.362909 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1332989434.mount: Deactivated successfully. Jan 23 23:56:18.907834 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 23 23:56:18.915986 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 23 23:56:19.336984 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 23:56:19.357895 (kubelet)[2711]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 23:56:19.450883 kubelet[2711]: E0123 23:56:19.450825 2711 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 23:56:19.460607 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 23:56:19.461036 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 23:56:20.112678 containerd[2135]: time="2026-01-23T23:56:20.110673003Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:56:20.114534 containerd[2135]: time="2026-01-23T23:56:20.114473631Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.11: active requests=0, bytes read=26441982" Jan 23 23:56:20.117206 containerd[2135]: time="2026-01-23T23:56:20.117144471Z" level=info msg="ImageCreate event name:\"sha256:58951ea1a0b5de44646ea292c94b9350f33f22d147fccfd84bdc405eaabc442c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:56:20.123610 containerd[2135]: time="2026-01-23T23:56:20.123554343Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:56:20.125958 containerd[2135]: time="2026-01-23T23:56:20.125877663Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.11\" with image id \"sha256:58951ea1a0b5de44646ea292c94b9350f33f22d147fccfd84bdc405eaabc442c\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\", size \"26438581\" in 2.405335092s" Jan 23 23:56:20.125958 containerd[2135]: time="2026-01-23T23:56:20.125960271Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\" returns image reference \"sha256:58951ea1a0b5de44646ea292c94b9350f33f22d147fccfd84bdc405eaabc442c\"" Jan 23 23:56:20.127303 containerd[2135]: time="2026-01-23T23:56:20.127223583Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\"" Jan 23 23:56:21.524023 containerd[2135]: time="2026-01-23T23:56:21.523963386Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:56:21.526444 containerd[2135]: time="2026-01-23T23:56:21.526380846Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.11: active requests=0, bytes read=22622086" Jan 23 23:56:21.526969 containerd[2135]: time="2026-01-23T23:56:21.526933290Z" level=info msg="ImageCreate event name:\"sha256:82766e5f2d560b930b7069c03ec1366dc8fdb4a490c3005266d2fdc4ca21c2fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:56:21.532857 containerd[2135]: time="2026-01-23T23:56:21.532799478Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:56:21.535288 containerd[2135]: time="2026-01-23T23:56:21.535237326Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.11\" with image id \"sha256:82766e5f2d560b930b7069c03ec1366dc8fdb4a490c3005266d2fdc4ca21c2fc\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\", size \"24206567\" in 1.407930367s" Jan 23 23:56:21.535463 containerd[2135]: time="2026-01-23T23:56:21.535434366Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\" returns image reference \"sha256:82766e5f2d560b930b7069c03ec1366dc8fdb4a490c3005266d2fdc4ca21c2fc\"" Jan 23 23:56:21.536590 containerd[2135]: time="2026-01-23T23:56:21.536190618Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\"" Jan 23 23:56:22.751679 containerd[2135]: time="2026-01-23T23:56:22.749794256Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:56:22.752304 containerd[2135]: time="2026-01-23T23:56:22.751709828Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.11: active requests=0, bytes read=17616747" Jan 23 23:56:22.754727 containerd[2135]: time="2026-01-23T23:56:22.754679684Z" level=info msg="ImageCreate event name:\"sha256:cfa17ff3d66343f03eadbc235264b0615de49cc1f43da12cddba27d80c61f2c6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:56:22.760468 containerd[2135]: time="2026-01-23T23:56:22.760420328Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:56:22.765075 containerd[2135]: time="2026-01-23T23:56:22.765028568Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.11\" with image id \"sha256:cfa17ff3d66343f03eadbc235264b0615de49cc1f43da12cddba27d80c61f2c6\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\", size \"19201246\" in 1.228781838s" Jan 23 23:56:22.765284 containerd[2135]: time="2026-01-23T23:56:22.765253628Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\" returns image reference \"sha256:cfa17ff3d66343f03eadbc235264b0615de49cc1f43da12cddba27d80c61f2c6\"" Jan 23 23:56:22.766024 containerd[2135]: time="2026-01-23T23:56:22.765967280Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\"" Jan 23 23:56:23.989733 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2982691607.mount: Deactivated successfully. 
Jan 23 23:56:24.613041 containerd[2135]: time="2026-01-23T23:56:24.612964041Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:56:24.614444 containerd[2135]: time="2026-01-23T23:56:24.614391945Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=27558724" Jan 23 23:56:24.616250 containerd[2135]: time="2026-01-23T23:56:24.616175205Z" level=info msg="ImageCreate event name:\"sha256:dcdb790dc2bfe6e0b86f702c7f336a38eaef34f6370eb6ff68f4e5b03ed4d425\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:56:24.620697 containerd[2135]: time="2026-01-23T23:56:24.619741629Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:56:24.621449 containerd[2135]: time="2026-01-23T23:56:24.621211449Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:dcdb790dc2bfe6e0b86f702c7f336a38eaef34f6370eb6ff68f4e5b03ed4d425\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"27557743\" in 1.855186497s" Jan 23 23:56:24.621449 containerd[2135]: time="2026-01-23T23:56:24.621272025Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:dcdb790dc2bfe6e0b86f702c7f336a38eaef34f6370eb6ff68f4e5b03ed4d425\"" Jan 23 23:56:24.622239 containerd[2135]: time="2026-01-23T23:56:24.622164693Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 23 23:56:25.136989 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2927848805.mount: Deactivated successfully. 
Jan 23 23:56:26.463681 containerd[2135]: time="2026-01-23T23:56:26.463594006Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:56:26.466890 containerd[2135]: time="2026-01-23T23:56:26.466834258Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951622" Jan 23 23:56:26.469001 containerd[2135]: time="2026-01-23T23:56:26.468936982Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:56:26.475745 containerd[2135]: time="2026-01-23T23:56:26.475636702Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:56:26.478187 containerd[2135]: time="2026-01-23T23:56:26.478136326Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.855909929s" Jan 23 23:56:26.479804 containerd[2135]: time="2026-01-23T23:56:26.478341130Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Jan 23 23:56:26.479956 containerd[2135]: time="2026-01-23T23:56:26.479887042Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 23 23:56:26.958729 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3188840116.mount: Deactivated successfully. 
Jan 23 23:56:26.972916 containerd[2135]: time="2026-01-23T23:56:26.971608369Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:56:26.974951 containerd[2135]: time="2026-01-23T23:56:26.974909053Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Jan 23 23:56:26.977447 containerd[2135]: time="2026-01-23T23:56:26.977405869Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:56:26.983827 containerd[2135]: time="2026-01-23T23:56:26.983761381Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:56:26.985595 containerd[2135]: time="2026-01-23T23:56:26.985547281Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 505.611135ms" Jan 23 23:56:26.985810 containerd[2135]: time="2026-01-23T23:56:26.985778449Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jan 23 23:56:26.987146 containerd[2135]: time="2026-01-23T23:56:26.987084637Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 23 23:56:27.549804 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2742294195.mount: Deactivated successfully. Jan 23 23:56:29.650846 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 23 23:56:29.662029 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 23:56:30.138921 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 23:56:30.162887 (kubelet)[2856]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 23:56:30.273500 kubelet[2856]: E0123 23:56:30.272873 2856 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 23:56:30.280730 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 23:56:30.281690 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
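Both kubelet failures above reduce to the same condition: /var/lib/kubelet/config.yaml does not exist yet, since that file is normally written by kubeadm init or kubeadm join, so the unit exits with status 1 and systemd schedules another restart. An illustrative Python equivalent of the check; the path and error wording mirror what kubelet reports in the log:

```python
# Illustrative only: the restart loop above ends once this file exists
# (kubeadm init/join writes it), after which kubelet starts normally.
from pathlib import Path

KUBELET_CONFIG = Path("/var/lib/kubelet/config.yaml")

if not KUBELET_CONFIG.exists():
    raise SystemExit(
        f"failed to load kubelet config file, path: {KUBELET_CONFIG}: no such file or directory"
    )
```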
Jan 23 23:56:30.985720 containerd[2135]: time="2026-01-23T23:56:30.985453973Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:56:30.988019 containerd[2135]: time="2026-01-23T23:56:30.987948365Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67943165" Jan 23 23:56:30.990205 containerd[2135]: time="2026-01-23T23:56:30.990118997Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:56:30.996911 containerd[2135]: time="2026-01-23T23:56:30.996822437Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:56:31.005407 containerd[2135]: time="2026-01-23T23:56:31.004157821Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 4.017001988s" Jan 23 23:56:31.005407 containerd[2135]: time="2026-01-23T23:56:31.004233001Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Jan 23 23:56:35.938022 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jan 23 23:56:39.156708 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 23:56:39.171110 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 23:56:39.232988 systemd[1]: Reloading requested from client PID 2899 ('systemctl') (unit session-7.scope)... Jan 23 23:56:39.233021 systemd[1]: Reloading... Jan 23 23:56:39.442679 zram_generator::config[2942]: No configuration found. Jan 23 23:56:39.705578 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 23 23:56:39.876606 systemd[1]: Reloading finished in 642 ms. Jan 23 23:56:39.966463 systemd[1]: kubelet.service: Deactivated successfully. Jan 23 23:56:39.967403 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 23:56:39.977632 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 23:56:40.307055 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 23:56:40.327362 (kubelet)[3015]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 23 23:56:40.398403 kubelet[3015]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 23:56:40.400676 kubelet[3015]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Jan 23 23:56:40.400676 kubelet[3015]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 23:56:40.400676 kubelet[3015]: I0123 23:56:40.399188 3015 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 23 23:56:41.716822 kubelet[3015]: I0123 23:56:41.716753 3015 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 23 23:56:41.716822 kubelet[3015]: I0123 23:56:41.716804 3015 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 23 23:56:41.717498 kubelet[3015]: I0123 23:56:41.717271 3015 server.go:954] "Client rotation is on, will bootstrap in background" Jan 23 23:56:41.764732 kubelet[3015]: E0123 23:56:41.764683 3015 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.16.109:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.16.109:6443: connect: connection refused" logger="UnhandledError" Jan 23 23:56:41.769093 kubelet[3015]: I0123 23:56:41.768658 3015 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 23 23:56:41.780914 kubelet[3015]: E0123 23:56:41.780822 3015 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 23 23:56:41.780914 kubelet[3015]: I0123 23:56:41.780901 3015 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 23 23:56:41.786990 kubelet[3015]: I0123 23:56:41.786939 3015 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 23 23:56:41.790582 kubelet[3015]: I0123 23:56:41.790498 3015 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 23 23:56:41.790931 kubelet[3015]: I0123 23:56:41.790569 3015 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-16-109","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Jan 23 23:56:41.791106 kubelet[3015]: I0123 23:56:41.791075 3015 topology_manager.go:138] "Creating topology manager with none policy" Jan 23 23:56:41.791106 kubelet[3015]: I0123 23:56:41.791098 3015 container_manager_linux.go:304] "Creating device plugin manager" Jan 23 23:56:41.791494 kubelet[3015]: I0123 23:56:41.791450 3015 state_mem.go:36] "Initialized new in-memory state store" Jan 23 23:56:41.799938 kubelet[3015]: I0123 23:56:41.799881 3015 kubelet.go:446] "Attempting to sync node with API server" Jan 23 23:56:41.800042 kubelet[3015]: I0123 23:56:41.799963 3015 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 23 23:56:41.800042 kubelet[3015]: I0123 23:56:41.799999 3015 kubelet.go:352] "Adding apiserver pod source" Jan 23 23:56:41.800042 kubelet[3015]: I0123 23:56:41.800019 3015 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 23 23:56:41.809226 kubelet[3015]: W0123 23:56:41.809099 3015 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.16.109:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.16.109:6443: connect: connection refused Jan 23 23:56:41.809347 kubelet[3015]: E0123 23:56:41.809240 3015 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.16.109:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.16.109:6443: connect: connection refused" logger="UnhandledError" Jan 23 23:56:41.809424 kubelet[3015]: W0123 
23:56:41.809386 3015 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.16.109:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-16-109&limit=500&resourceVersion=0": dial tcp 172.31.16.109:6443: connect: connection refused Jan 23 23:56:41.809482 kubelet[3015]: E0123 23:56:41.809442 3015 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.16.109:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-16-109&limit=500&resourceVersion=0\": dial tcp 172.31.16.109:6443: connect: connection refused" logger="UnhandledError" Jan 23 23:56:41.811674 kubelet[3015]: I0123 23:56:41.809615 3015 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 23 23:56:41.811674 kubelet[3015]: I0123 23:56:41.810715 3015 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 23 23:56:41.811674 kubelet[3015]: W0123 23:56:41.810951 3015 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 23 23:56:41.813354 kubelet[3015]: I0123 23:56:41.813299 3015 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 23 23:56:41.813471 kubelet[3015]: I0123 23:56:41.813373 3015 server.go:1287] "Started kubelet" Jan 23 23:56:41.821463 kubelet[3015]: E0123 23:56:41.820985 3015 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.16.109:6443/api/v1/namespaces/default/events\": dial tcp 172.31.16.109:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-16-109.188d817cd73a9546 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-16-109,UID:ip-172-31-16-109,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-16-109,},FirstTimestamp:2026-01-23 23:56:41.81334151 +0000 UTC m=+1.479103232,LastTimestamp:2026-01-23 23:56:41.81334151 +0000 UTC m=+1.479103232,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-16-109,}" Jan 23 23:56:41.824543 kubelet[3015]: I0123 23:56:41.824486 3015 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 23 23:56:41.828147 kubelet[3015]: I0123 23:56:41.828086 3015 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 23 23:56:41.829985 kubelet[3015]: I0123 23:56:41.829950 3015 server.go:479] "Adding debug handlers to kubelet server" Jan 23 23:56:41.830929 kubelet[3015]: I0123 23:56:41.830899 3015 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 23 23:56:41.831768 kubelet[3015]: E0123 23:56:41.831723 3015 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-16-109\" not found" Jan 23 23:56:41.834903 kubelet[3015]: I0123 23:56:41.833435 3015 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 23 23:56:41.835499 kubelet[3015]: I0123 23:56:41.833753 3015 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 23 23:56:41.835499 kubelet[3015]: I0123 23:56:41.833834 3015 reconciler.go:26] "Reconciler: start to sync state" Jan 23 23:56:41.835499 kubelet[3015]: I0123 23:56:41.833853 3015 
dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 23 23:56:41.836779 kubelet[3015]: I0123 23:56:41.836738 3015 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 23 23:56:41.837704 kubelet[3015]: W0123 23:56:41.837611 3015 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.16.109:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.16.109:6443: connect: connection refused Jan 23 23:56:41.837957 kubelet[3015]: E0123 23:56:41.837905 3015 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.16.109:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.16.109:6443: connect: connection refused" logger="UnhandledError" Jan 23 23:56:41.838270 kubelet[3015]: E0123 23:56:41.838213 3015 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.109:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-109?timeout=10s\": dial tcp 172.31.16.109:6443: connect: connection refused" interval="200ms" Jan 23 23:56:41.840113 kubelet[3015]: I0123 23:56:41.840071 3015 factory.go:221] Registration of the systemd container factory successfully Jan 23 23:56:41.840377 kubelet[3015]: I0123 23:56:41.840333 3015 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 23 23:56:41.842639 kubelet[3015]: E0123 23:56:41.842524 3015 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 23 23:56:41.844765 kubelet[3015]: I0123 23:56:41.844625 3015 factory.go:221] Registration of the containerd container factory successfully Jan 23 23:56:41.887105 kubelet[3015]: I0123 23:56:41.886861 3015 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 23 23:56:41.889286 kubelet[3015]: I0123 23:56:41.889225 3015 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 23 23:56:41.889286 kubelet[3015]: I0123 23:56:41.889278 3015 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 23 23:56:41.889489 kubelet[3015]: I0123 23:56:41.889317 3015 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 23 23:56:41.889489 kubelet[3015]: I0123 23:56:41.889333 3015 kubelet.go:2382] "Starting kubelet main sync loop" Jan 23 23:56:41.889489 kubelet[3015]: E0123 23:56:41.889397 3015 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 23 23:56:41.901811 kubelet[3015]: W0123 23:56:41.901583 3015 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.16.109:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.16.109:6443: connect: connection refused Jan 23 23:56:41.902218 kubelet[3015]: E0123 23:56:41.902134 3015 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.16.109:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.16.109:6443: connect: connection refused" logger="UnhandledError" Jan 23 23:56:41.907762 kubelet[3015]: I0123 23:56:41.907294 3015 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 23 23:56:41.907762 kubelet[3015]: I0123 23:56:41.907331 3015 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 23 23:56:41.907762 kubelet[3015]: I0123 23:56:41.907365 3015 state_mem.go:36] "Initialized new in-memory state store" Jan 23 23:56:41.909865 kubelet[3015]: I0123 23:56:41.909811 3015 policy_none.go:49] "None policy: Start" Jan 23 23:56:41.909865 kubelet[3015]: I0123 23:56:41.909856 3015 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 23 23:56:41.910061 kubelet[3015]: I0123 23:56:41.909898 3015 state_mem.go:35] "Initializing new in-memory state store" Jan 23 23:56:41.922693 kubelet[3015]: I0123 23:56:41.922587 3015 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 23 23:56:41.923477 kubelet[3015]: I0123 23:56:41.923439 3015 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 23 23:56:41.923754 kubelet[3015]: I0123 23:56:41.923593 3015 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 23 23:56:41.926003 kubelet[3015]: I0123 23:56:41.925954 3015 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 23 23:56:41.926868 kubelet[3015]: E0123 23:56:41.926754 3015 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 23 23:56:41.926868 kubelet[3015]: E0123 23:56:41.926832 3015 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-16-109\" not found" Jan 23 23:56:42.003097 kubelet[3015]: E0123 23:56:42.002940 3015 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-109\" not found" node="ip-172-31-16-109" Jan 23 23:56:42.006379 kubelet[3015]: E0123 23:56:42.006310 3015 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-109\" not found" node="ip-172-31-16-109" Jan 23 23:56:42.012210 kubelet[3015]: E0123 23:56:42.012172 3015 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-109\" not found" node="ip-172-31-16-109" Jan 23 23:56:42.026444 kubelet[3015]: I0123 23:56:42.026370 3015 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-16-109" Jan 23 23:56:42.027354 kubelet[3015]: E0123 23:56:42.027303 3015 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.16.109:6443/api/v1/nodes\": dial tcp 172.31.16.109:6443: connect: connection refused" node="ip-172-31-16-109" Jan 23 23:56:42.037077 kubelet[3015]: I0123 23:56:42.037038 3015 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/61c1222c206bdb7e2ae9f9238e4613ca-k8s-certs\") pod \"kube-apiserver-ip-172-31-16-109\" (UID: \"61c1222c206bdb7e2ae9f9238e4613ca\") " pod="kube-system/kube-apiserver-ip-172-31-16-109" Jan 23 23:56:42.037215 kubelet[3015]: I0123 23:56:42.037098 3015 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/61c1222c206bdb7e2ae9f9238e4613ca-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-16-109\" (UID: \"61c1222c206bdb7e2ae9f9238e4613ca\") " pod="kube-system/kube-apiserver-ip-172-31-16-109" Jan 23 23:56:42.037215 kubelet[3015]: I0123 23:56:42.037144 3015 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/afed73d03875d6ca0dd9e924aa2b9ec0-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-16-109\" (UID: \"afed73d03875d6ca0dd9e924aa2b9ec0\") " pod="kube-system/kube-controller-manager-ip-172-31-16-109" Jan 23 23:56:42.037215 kubelet[3015]: I0123 23:56:42.037181 3015 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/afed73d03875d6ca0dd9e924aa2b9ec0-kubeconfig\") pod \"kube-controller-manager-ip-172-31-16-109\" (UID: \"afed73d03875d6ca0dd9e924aa2b9ec0\") " pod="kube-system/kube-controller-manager-ip-172-31-16-109" Jan 23 23:56:42.037372 kubelet[3015]: I0123 23:56:42.037216 3015 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/470f72ad16e3815c090d4dfa2cc57e26-kubeconfig\") pod \"kube-scheduler-ip-172-31-16-109\" (UID: \"470f72ad16e3815c090d4dfa2cc57e26\") " pod="kube-system/kube-scheduler-ip-172-31-16-109" Jan 23 23:56:42.037372 kubelet[3015]: I0123 23:56:42.037249 3015 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/61c1222c206bdb7e2ae9f9238e4613ca-ca-certs\") pod \"kube-apiserver-ip-172-31-16-109\" (UID: \"61c1222c206bdb7e2ae9f9238e4613ca\") " pod="kube-system/kube-apiserver-ip-172-31-16-109" Jan 23 23:56:42.037372 kubelet[3015]: I0123 23:56:42.037281 3015 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/afed73d03875d6ca0dd9e924aa2b9ec0-ca-certs\") pod \"kube-controller-manager-ip-172-31-16-109\" (UID: \"afed73d03875d6ca0dd9e924aa2b9ec0\") " pod="kube-system/kube-controller-manager-ip-172-31-16-109" Jan 23 23:56:42.037372 kubelet[3015]: I0123 23:56:42.037316 3015 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/afed73d03875d6ca0dd9e924aa2b9ec0-k8s-certs\") pod \"kube-controller-manager-ip-172-31-16-109\" (UID: \"afed73d03875d6ca0dd9e924aa2b9ec0\") " pod="kube-system/kube-controller-manager-ip-172-31-16-109" Jan 23 23:56:42.037372 kubelet[3015]: I0123 23:56:42.037351 3015 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/afed73d03875d6ca0dd9e924aa2b9ec0-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-16-109\" (UID: \"afed73d03875d6ca0dd9e924aa2b9ec0\") " pod="kube-system/kube-controller-manager-ip-172-31-16-109" Jan 23 23:56:42.039326 kubelet[3015]: E0123 23:56:42.039275 3015 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.109:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-109?timeout=10s\": dial tcp 172.31.16.109:6443: connect: connection refused" interval="400ms" Jan 23 23:56:42.230093 kubelet[3015]: I0123 23:56:42.230049 3015 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-16-109" Jan 23 23:56:42.230599 kubelet[3015]: E0123 23:56:42.230523 3015 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.16.109:6443/api/v1/nodes\": dial tcp 172.31.16.109:6443: connect: connection refused" node="ip-172-31-16-109" Jan 23 23:56:42.316279 containerd[2135]: time="2026-01-23T23:56:42.315226177Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-16-109,Uid:470f72ad16e3815c090d4dfa2cc57e26,Namespace:kube-system,Attempt:0,}" Jan 23 23:56:42.316279 containerd[2135]: time="2026-01-23T23:56:42.315368293Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-16-109,Uid:61c1222c206bdb7e2ae9f9238e4613ca,Namespace:kube-system,Attempt:0,}" Jan 23 23:56:42.316279 containerd[2135]: time="2026-01-23T23:56:42.315629737Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-16-109,Uid:afed73d03875d6ca0dd9e924aa2b9ec0,Namespace:kube-system,Attempt:0,}" Jan 23 23:56:42.441213 kubelet[3015]: E0123 23:56:42.441145 3015 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.109:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-109?timeout=10s\": dial tcp 172.31.16.109:6443: connect: connection refused" interval="800ms" Jan 23 23:56:42.591774 kubelet[3015]: E0123 23:56:42.591493 3015 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.16.109:6443/api/v1/namespaces/default/events\": dial 
tcp 172.31.16.109:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-16-109.188d817cd73a9546 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-16-109,UID:ip-172-31-16-109,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-16-109,},FirstTimestamp:2026-01-23 23:56:41.81334151 +0000 UTC m=+1.479103232,LastTimestamp:2026-01-23 23:56:41.81334151 +0000 UTC m=+1.479103232,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-16-109,}" Jan 23 23:56:42.634038 kubelet[3015]: I0123 23:56:42.633450 3015 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-16-109" Jan 23 23:56:42.634038 kubelet[3015]: E0123 23:56:42.633964 3015 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.16.109:6443/api/v1/nodes\": dial tcp 172.31.16.109:6443: connect: connection refused" node="ip-172-31-16-109" Jan 23 23:56:42.801925 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount619635481.mount: Deactivated successfully. Jan 23 23:56:42.812929 containerd[2135]: time="2026-01-23T23:56:42.812849955Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 23:56:42.814946 containerd[2135]: time="2026-01-23T23:56:42.814872447Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 23:56:42.816972 containerd[2135]: time="2026-01-23T23:56:42.816583443Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Jan 23 23:56:42.818680 containerd[2135]: time="2026-01-23T23:56:42.818538543Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 23 23:56:42.820690 containerd[2135]: time="2026-01-23T23:56:42.820623183Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 23:56:42.824673 containerd[2135]: time="2026-01-23T23:56:42.824097963Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 23:56:42.826141 containerd[2135]: time="2026-01-23T23:56:42.826102731Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 23 23:56:42.831904 containerd[2135]: time="2026-01-23T23:56:42.831822387Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 23:56:42.835939 containerd[2135]: time="2026-01-23T23:56:42.835873671Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 520.496078ms" Jan 23 23:56:42.840699 containerd[2135]: time="2026-01-23T23:56:42.840395283Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 524.955122ms" Jan 23 23:56:42.851121 containerd[2135]: time="2026-01-23T23:56:42.850955439Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 535.230326ms" Jan 23 23:56:42.932304 kubelet[3015]: W0123 23:56:42.932206 3015 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.16.109:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-16-109&limit=500&resourceVersion=0": dial tcp 172.31.16.109:6443: connect: connection refused Jan 23 23:56:42.933369 kubelet[3015]: E0123 23:56:42.932307 3015 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.16.109:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-16-109&limit=500&resourceVersion=0\": dial tcp 172.31.16.109:6443: connect: connection refused" logger="UnhandledError" Jan 23 23:56:43.063272 containerd[2135]: time="2026-01-23T23:56:43.063095185Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:56:43.064550 containerd[2135]: time="2026-01-23T23:56:43.064158097Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:56:43.064550 containerd[2135]: time="2026-01-23T23:56:43.064244677Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:56:43.064550 containerd[2135]: time="2026-01-23T23:56:43.064270681Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:56:43.064550 containerd[2135]: time="2026-01-23T23:56:43.064414825Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:56:43.064550 containerd[2135]: time="2026-01-23T23:56:43.063850309Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:56:43.064550 containerd[2135]: time="2026-01-23T23:56:43.063904933Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:56:43.064550 containerd[2135]: time="2026-01-23T23:56:43.064251517Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:56:43.068493 containerd[2135]: time="2026-01-23T23:56:43.068143693Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:56:43.068725 containerd[2135]: time="2026-01-23T23:56:43.068605369Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:56:43.069599 containerd[2135]: time="2026-01-23T23:56:43.068807677Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:56:43.070138 containerd[2135]: time="2026-01-23T23:56:43.069974029Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:56:43.079831 kubelet[3015]: W0123 23:56:43.079768 3015 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.16.109:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.16.109:6443: connect: connection refused Jan 23 23:56:43.080001 kubelet[3015]: E0123 23:56:43.079844 3015 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.16.109:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.16.109:6443: connect: connection refused" logger="UnhandledError" Jan 23 23:56:43.139006 kubelet[3015]: W0123 23:56:43.137167 3015 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.16.109:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.16.109:6443: connect: connection refused Jan 23 23:56:43.139006 kubelet[3015]: E0123 23:56:43.137260 3015 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.16.109:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.16.109:6443: connect: connection refused" logger="UnhandledError" Jan 23 23:56:43.240845 containerd[2135]: time="2026-01-23T23:56:43.240720277Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-16-109,Uid:afed73d03875d6ca0dd9e924aa2b9ec0,Namespace:kube-system,Attempt:0,} returns sandbox id \"8688987446ec4c641e2aa99f9cf48a1a8a55c4c7b6bcae69ff94ef5e42bf2f24\"" Jan 23 23:56:43.242694 kubelet[3015]: E0123 23:56:43.242267 3015 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.109:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-109?timeout=10s\": dial tcp 172.31.16.109:6443: connect: connection refused" interval="1.6s" Jan 23 23:56:43.251312 containerd[2135]: time="2026-01-23T23:56:43.251239009Z" level=info msg="CreateContainer within sandbox \"8688987446ec4c641e2aa99f9cf48a1a8a55c4c7b6bcae69ff94ef5e42bf2f24\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 23 23:56:43.251744 containerd[2135]: time="2026-01-23T23:56:43.251689345Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-16-109,Uid:470f72ad16e3815c090d4dfa2cc57e26,Namespace:kube-system,Attempt:0,} returns sandbox id \"6e2fba2ab67795b72e53f415f6e72afbad6fb119eebc9f088231363f8fc1d609\"" Jan 23 23:56:43.261285 containerd[2135]: time="2026-01-23T23:56:43.261059989Z" level=info msg="CreateContainer within sandbox 
\"6e2fba2ab67795b72e53f415f6e72afbad6fb119eebc9f088231363f8fc1d609\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 23 23:56:43.262117 containerd[2135]: time="2026-01-23T23:56:43.262073630Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-16-109,Uid:61c1222c206bdb7e2ae9f9238e4613ca,Namespace:kube-system,Attempt:0,} returns sandbox id \"ef3853c39b53189177daf41c61fca7e36f775ad844637f6b2310ec450f90001b\"" Jan 23 23:56:43.270946 containerd[2135]: time="2026-01-23T23:56:43.270744422Z" level=info msg="CreateContainer within sandbox \"ef3853c39b53189177daf41c61fca7e36f775ad844637f6b2310ec450f90001b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 23 23:56:43.292974 kubelet[3015]: W0123 23:56:43.292899 3015 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.16.109:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.16.109:6443: connect: connection refused Jan 23 23:56:43.293210 kubelet[3015]: E0123 23:56:43.293179 3015 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.16.109:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.16.109:6443: connect: connection refused" logger="UnhandledError" Jan 23 23:56:43.303030 containerd[2135]: time="2026-01-23T23:56:43.302973986Z" level=info msg="CreateContainer within sandbox \"8688987446ec4c641e2aa99f9cf48a1a8a55c4c7b6bcae69ff94ef5e42bf2f24\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"873f32991c476ae2612e110e7a15c141e4af2c4add44808e1a63c4e385293cee\"" Jan 23 23:56:43.305467 containerd[2135]: time="2026-01-23T23:56:43.305418782Z" level=info msg="StartContainer for \"873f32991c476ae2612e110e7a15c141e4af2c4add44808e1a63c4e385293cee\"" Jan 23 23:56:43.320415 containerd[2135]: time="2026-01-23T23:56:43.320323022Z" level=info msg="CreateContainer within sandbox \"6e2fba2ab67795b72e53f415f6e72afbad6fb119eebc9f088231363f8fc1d609\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"41742f070aacb8644a8b9a624459f3f6cfa41258da4ae6a0bb04f7d5b6308b3a\"" Jan 23 23:56:43.322718 containerd[2135]: time="2026-01-23T23:56:43.321380858Z" level=info msg="StartContainer for \"41742f070aacb8644a8b9a624459f3f6cfa41258da4ae6a0bb04f7d5b6308b3a\"" Jan 23 23:56:43.327919 containerd[2135]: time="2026-01-23T23:56:43.327843374Z" level=info msg="CreateContainer within sandbox \"ef3853c39b53189177daf41c61fca7e36f775ad844637f6b2310ec450f90001b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"32ae1b7c854a82ab80fb1a55a25203bdb046de7c9b185739e09574ce4e97c4b5\"" Jan 23 23:56:43.328604 containerd[2135]: time="2026-01-23T23:56:43.328550510Z" level=info msg="StartContainer for \"32ae1b7c854a82ab80fb1a55a25203bdb046de7c9b185739e09574ce4e97c4b5\"" Jan 23 23:56:43.439007 kubelet[3015]: I0123 23:56:43.438949 3015 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-16-109" Jan 23 23:56:43.441712 kubelet[3015]: E0123 23:56:43.441576 3015 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.16.109:6443/api/v1/nodes\": dial tcp 172.31.16.109:6443: connect: connection refused" node="ip-172-31-16-109" Jan 23 23:56:43.535111 containerd[2135]: time="2026-01-23T23:56:43.534006543Z" level=info 
msg="StartContainer for \"873f32991c476ae2612e110e7a15c141e4af2c4add44808e1a63c4e385293cee\" returns successfully" Jan 23 23:56:43.567951 containerd[2135]: time="2026-01-23T23:56:43.567870603Z" level=info msg="StartContainer for \"32ae1b7c854a82ab80fb1a55a25203bdb046de7c9b185739e09574ce4e97c4b5\" returns successfully" Jan 23 23:56:43.570058 containerd[2135]: time="2026-01-23T23:56:43.569767623Z" level=info msg="StartContainer for \"41742f070aacb8644a8b9a624459f3f6cfa41258da4ae6a0bb04f7d5b6308b3a\" returns successfully" Jan 23 23:56:43.922114 kubelet[3015]: E0123 23:56:43.919486 3015 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-109\" not found" node="ip-172-31-16-109" Jan 23 23:56:43.925665 kubelet[3015]: E0123 23:56:43.925594 3015 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-109\" not found" node="ip-172-31-16-109" Jan 23 23:56:43.932661 kubelet[3015]: E0123 23:56:43.932602 3015 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-109\" not found" node="ip-172-31-16-109" Jan 23 23:56:44.934559 kubelet[3015]: E0123 23:56:44.934510 3015 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-109\" not found" node="ip-172-31-16-109" Jan 23 23:56:44.938702 kubelet[3015]: E0123 23:56:44.936691 3015 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-109\" not found" node="ip-172-31-16-109" Jan 23 23:56:45.044953 kubelet[3015]: I0123 23:56:45.044900 3015 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-16-109" Jan 23 23:56:45.936691 kubelet[3015]: E0123 23:56:45.935831 3015 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-109\" not found" node="ip-172-31-16-109" Jan 23 23:56:47.811488 kubelet[3015]: I0123 23:56:47.811168 3015 apiserver.go:52] "Watching apiserver" Jan 23 23:56:47.836188 kubelet[3015]: I0123 23:56:47.836107 3015 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 23 23:56:47.840255 kubelet[3015]: E0123 23:56:47.840202 3015 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-16-109\" not found" node="ip-172-31-16-109" Jan 23 23:56:47.903718 kubelet[3015]: I0123 23:56:47.903421 3015 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-16-109" Jan 23 23:56:47.903718 kubelet[3015]: E0123 23:56:47.903483 3015 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ip-172-31-16-109\": node \"ip-172-31-16-109\" not found" Jan 23 23:56:47.935833 kubelet[3015]: I0123 23:56:47.934397 3015 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-16-109" Jan 23 23:56:47.966821 kubelet[3015]: E0123 23:56:47.966773 3015 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-16-109\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-16-109" Jan 23 23:56:47.967306 kubelet[3015]: I0123 23:56:47.967047 3015 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-16-109" Jan 23 23:56:47.971682 
kubelet[3015]: E0123 23:56:47.970966 3015 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-16-109\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-16-109" Jan 23 23:56:47.971682 kubelet[3015]: I0123 23:56:47.971015 3015 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-16-109" Jan 23 23:56:47.974996 kubelet[3015]: E0123 23:56:47.974942 3015 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-16-109\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-16-109" Jan 23 23:56:48.412859 kubelet[3015]: I0123 23:56:48.412814 3015 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-16-109" Jan 23 23:56:48.418192 kubelet[3015]: E0123 23:56:48.417874 3015 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-16-109\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-16-109" Jan 23 23:56:50.068808 update_engine[2110]: I20260123 23:56:50.068710 2110 update_attempter.cc:509] Updating boot flags... Jan 23 23:56:50.104712 kubelet[3015]: I0123 23:56:50.101208 3015 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-16-109" Jan 23 23:56:50.178280 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 34 scanned by (udev-worker) (3302) Jan 23 23:56:50.353382 systemd[1]: Reloading requested from client PID 3386 ('systemctl') (unit session-7.scope)... Jan 23 23:56:50.353413 systemd[1]: Reloading... Jan 23 23:56:50.516697 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 34 scanned by (udev-worker) (3303) Jan 23 23:56:50.636699 zram_generator::config[3458]: No configuration found. Jan 23 23:56:50.937208 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 23 23:56:51.161784 systemd[1]: Reloading finished in 807 ms. Jan 23 23:56:51.309409 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 23:56:51.339340 systemd[1]: kubelet.service: Deactivated successfully. Jan 23 23:56:51.340211 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 23:56:51.352444 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 23:56:51.723050 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 23:56:51.724005 (kubelet)[3582]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 23 23:56:51.852601 kubelet[3582]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 23:56:51.853191 kubelet[3582]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Jan 23 23:56:51.853511 kubelet[3582]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 23:56:51.855682 kubelet[3582]: I0123 23:56:51.854086 3582 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 23 23:56:51.868677 kubelet[3582]: I0123 23:56:51.868009 3582 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 23 23:56:51.868677 kubelet[3582]: I0123 23:56:51.868055 3582 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 23 23:56:51.868677 kubelet[3582]: I0123 23:56:51.868547 3582 server.go:954] "Client rotation is on, will bootstrap in background" Jan 23 23:56:51.871410 kubelet[3582]: I0123 23:56:51.871373 3582 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 23 23:56:51.875919 kubelet[3582]: I0123 23:56:51.875859 3582 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 23 23:56:51.881321 sudo[3597]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 23 23:56:51.882063 sudo[3597]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 23 23:56:51.889670 kubelet[3582]: E0123 23:56:51.889296 3582 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 23 23:56:51.890094 kubelet[3582]: I0123 23:56:51.889902 3582 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 23 23:56:51.902100 kubelet[3582]: I0123 23:56:51.902017 3582 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 23 23:56:51.903339 kubelet[3582]: I0123 23:56:51.903272 3582 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 23 23:56:51.905828 kubelet[3582]: I0123 23:56:51.903333 3582 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-16-109","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Jan 23 23:56:51.905828 kubelet[3582]: I0123 23:56:51.905531 3582 topology_manager.go:138] "Creating topology manager with none policy" Jan 23 23:56:51.905828 kubelet[3582]: I0123 23:56:51.905558 3582 container_manager_linux.go:304] "Creating device plugin manager" Jan 23 23:56:51.905828 kubelet[3582]: I0123 23:56:51.905682 3582 state_mem.go:36] "Initialized new in-memory state store" Jan 23 23:56:51.906210 kubelet[3582]: I0123 23:56:51.905962 3582 kubelet.go:446] "Attempting to sync node with API server" Jan 23 23:56:51.906210 kubelet[3582]: I0123 23:56:51.905990 3582 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 23 23:56:51.906210 kubelet[3582]: I0123 23:56:51.906030 3582 kubelet.go:352] "Adding apiserver pod source" Jan 23 23:56:51.906210 kubelet[3582]: I0123 23:56:51.906053 3582 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 23 23:56:51.914853 kubelet[3582]: I0123 23:56:51.914454 3582 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 23 23:56:51.920200 kubelet[3582]: I0123 23:56:51.916751 3582 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 23 23:56:51.920200 kubelet[3582]: I0123 23:56:51.918822 3582 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 23 23:56:51.920200 kubelet[3582]: I0123 23:56:51.918877 3582 server.go:1287] "Started kubelet" Jan 23 23:56:51.928874 kubelet[3582]: I0123 23:56:51.928822 3582 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 23 23:56:51.959159 kubelet[3582]: I0123 23:56:51.958920 3582 server.go:169] "Starting 
to listen" address="0.0.0.0" port=10250 Jan 23 23:56:51.965764 kubelet[3582]: I0123 23:56:51.965623 3582 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 23 23:56:51.966435 kubelet[3582]: I0123 23:56:51.966170 3582 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 23 23:56:51.966557 kubelet[3582]: I0123 23:56:51.966540 3582 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 23 23:56:51.971541 kubelet[3582]: I0123 23:56:51.971494 3582 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 23 23:56:51.974111 kubelet[3582]: E0123 23:56:51.973973 3582 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-16-109\" not found" Jan 23 23:56:51.979366 kubelet[3582]: I0123 23:56:51.978290 3582 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 23 23:56:51.980150 kubelet[3582]: I0123 23:56:51.980105 3582 reconciler.go:26] "Reconciler: start to sync state" Jan 23 23:56:51.996617 kubelet[3582]: I0123 23:56:51.996569 3582 server.go:479] "Adding debug handlers to kubelet server" Jan 23 23:56:52.005090 kubelet[3582]: I0123 23:56:52.005030 3582 factory.go:221] Registration of the systemd container factory successfully Jan 23 23:56:52.005267 kubelet[3582]: I0123 23:56:52.005222 3582 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 23 23:56:52.013373 kubelet[3582]: I0123 23:56:52.012563 3582 factory.go:221] Registration of the containerd container factory successfully Jan 23 23:56:52.033935 kubelet[3582]: I0123 23:56:52.033859 3582 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 23 23:56:52.048022 kubelet[3582]: I0123 23:56:52.047968 3582 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 23 23:56:52.048022 kubelet[3582]: I0123 23:56:52.048013 3582 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 23 23:56:52.048236 kubelet[3582]: I0123 23:56:52.048057 3582 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 23 23:56:52.048236 kubelet[3582]: I0123 23:56:52.048073 3582 kubelet.go:2382] "Starting kubelet main sync loop" Jan 23 23:56:52.048236 kubelet[3582]: E0123 23:56:52.048141 3582 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 23 23:56:52.150695 kubelet[3582]: E0123 23:56:52.149499 3582 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 23 23:56:52.251315 kubelet[3582]: I0123 23:56:52.251166 3582 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 23 23:56:52.251692 kubelet[3582]: I0123 23:56:52.251207 3582 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 23 23:56:52.251692 kubelet[3582]: I0123 23:56:52.251476 3582 state_mem.go:36] "Initialized new in-memory state store" Jan 23 23:56:52.252893 kubelet[3582]: I0123 23:56:52.252683 3582 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 23 23:56:52.252893 kubelet[3582]: I0123 23:56:52.252714 3582 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 23 23:56:52.252893 kubelet[3582]: I0123 23:56:52.252793 3582 policy_none.go:49] "None policy: Start" Jan 23 23:56:52.253622 kubelet[3582]: I0123 23:56:52.252816 3582 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 23 23:56:52.253622 kubelet[3582]: I0123 23:56:52.253225 3582 state_mem.go:35] "Initializing new in-memory state store" Jan 23 23:56:52.254116 kubelet[3582]: I0123 23:56:52.253856 3582 state_mem.go:75] "Updated machine memory state" Jan 23 23:56:52.259178 kubelet[3582]: I0123 23:56:52.258986 3582 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 23 23:56:52.259737 kubelet[3582]: I0123 23:56:52.259630 3582 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 23 23:56:52.259903 kubelet[3582]: I0123 23:56:52.259843 3582 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 23 23:56:52.260554 kubelet[3582]: I0123 23:56:52.260508 3582 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 23 23:56:52.268509 kubelet[3582]: E0123 23:56:52.266707 3582 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 23 23:56:52.350415 kubelet[3582]: I0123 23:56:52.350362 3582 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-16-109" Jan 23 23:56:52.350679 kubelet[3582]: I0123 23:56:52.350617 3582 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-16-109" Jan 23 23:56:52.350845 kubelet[3582]: I0123 23:56:52.350393 3582 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-16-109" Jan 23 23:56:52.363524 kubelet[3582]: E0123 23:56:52.363471 3582 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-16-109\" already exists" pod="kube-system/kube-apiserver-ip-172-31-16-109" Jan 23 23:56:52.387925 kubelet[3582]: I0123 23:56:52.386966 3582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/afed73d03875d6ca0dd9e924aa2b9ec0-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-16-109\" (UID: \"afed73d03875d6ca0dd9e924aa2b9ec0\") " pod="kube-system/kube-controller-manager-ip-172-31-16-109" Jan 23 23:56:52.387925 kubelet[3582]: I0123 23:56:52.387031 3582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/afed73d03875d6ca0dd9e924aa2b9ec0-k8s-certs\") pod \"kube-controller-manager-ip-172-31-16-109\" (UID: \"afed73d03875d6ca0dd9e924aa2b9ec0\") " pod="kube-system/kube-controller-manager-ip-172-31-16-109" Jan 23 23:56:52.387925 kubelet[3582]: I0123 23:56:52.387076 3582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/470f72ad16e3815c090d4dfa2cc57e26-kubeconfig\") pod \"kube-scheduler-ip-172-31-16-109\" (UID: \"470f72ad16e3815c090d4dfa2cc57e26\") " pod="kube-system/kube-scheduler-ip-172-31-16-109" Jan 23 23:56:52.387925 kubelet[3582]: I0123 23:56:52.387113 3582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/61c1222c206bdb7e2ae9f9238e4613ca-ca-certs\") pod \"kube-apiserver-ip-172-31-16-109\" (UID: \"61c1222c206bdb7e2ae9f9238e4613ca\") " pod="kube-system/kube-apiserver-ip-172-31-16-109" Jan 23 23:56:52.387925 kubelet[3582]: I0123 23:56:52.387156 3582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/61c1222c206bdb7e2ae9f9238e4613ca-k8s-certs\") pod \"kube-apiserver-ip-172-31-16-109\" (UID: \"61c1222c206bdb7e2ae9f9238e4613ca\") " pod="kube-system/kube-apiserver-ip-172-31-16-109" Jan 23 23:56:52.388291 kubelet[3582]: I0123 23:56:52.387193 3582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/61c1222c206bdb7e2ae9f9238e4613ca-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-16-109\" (UID: \"61c1222c206bdb7e2ae9f9238e4613ca\") " pod="kube-system/kube-apiserver-ip-172-31-16-109" Jan 23 23:56:52.388291 kubelet[3582]: I0123 23:56:52.387231 3582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/afed73d03875d6ca0dd9e924aa2b9ec0-ca-certs\") pod \"kube-controller-manager-ip-172-31-16-109\" (UID: 
\"afed73d03875d6ca0dd9e924aa2b9ec0\") " pod="kube-system/kube-controller-manager-ip-172-31-16-109" Jan 23 23:56:52.388291 kubelet[3582]: I0123 23:56:52.387265 3582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/afed73d03875d6ca0dd9e924aa2b9ec0-kubeconfig\") pod \"kube-controller-manager-ip-172-31-16-109\" (UID: \"afed73d03875d6ca0dd9e924aa2b9ec0\") " pod="kube-system/kube-controller-manager-ip-172-31-16-109" Jan 23 23:56:52.388291 kubelet[3582]: I0123 23:56:52.387304 3582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/afed73d03875d6ca0dd9e924aa2b9ec0-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-16-109\" (UID: \"afed73d03875d6ca0dd9e924aa2b9ec0\") " pod="kube-system/kube-controller-manager-ip-172-31-16-109" Jan 23 23:56:52.394629 kubelet[3582]: I0123 23:56:52.394365 3582 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-16-109" Jan 23 23:56:52.412396 kubelet[3582]: I0123 23:56:52.411598 3582 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-16-109" Jan 23 23:56:52.412396 kubelet[3582]: I0123 23:56:52.411736 3582 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-16-109" Jan 23 23:56:52.868714 sudo[3597]: pam_unix(sudo:session): session closed for user root Jan 23 23:56:52.910539 kubelet[3582]: I0123 23:56:52.909145 3582 apiserver.go:52] "Watching apiserver" Jan 23 23:56:52.978893 kubelet[3582]: I0123 23:56:52.978488 3582 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 23 23:56:53.167028 kubelet[3582]: I0123 23:56:53.166682 3582 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-16-109" podStartSLOduration=1.166601927 podStartE2EDuration="1.166601927s" podCreationTimestamp="2026-01-23 23:56:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 23:56:53.147333935 +0000 UTC m=+1.414735772" watchObservedRunningTime="2026-01-23 23:56:53.166601927 +0000 UTC m=+1.434003764" Jan 23 23:56:53.167718 kubelet[3582]: I0123 23:56:53.167362 3582 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-16-109" podStartSLOduration=1.167342615 podStartE2EDuration="1.167342615s" podCreationTimestamp="2026-01-23 23:56:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 23:56:53.166272167 +0000 UTC m=+1.433674016" watchObservedRunningTime="2026-01-23 23:56:53.167342615 +0000 UTC m=+1.434744476" Jan 23 23:56:53.190417 kubelet[3582]: I0123 23:56:53.189139 3582 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-16-109" podStartSLOduration=3.189117731 podStartE2EDuration="3.189117731s" podCreationTimestamp="2026-01-23 23:56:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 23:56:53.187699595 +0000 UTC m=+1.455101444" watchObservedRunningTime="2026-01-23 23:56:53.189117731 +0000 UTC m=+1.456519568" Jan 23 23:56:55.135784 sudo[2486]: pam_unix(sudo:session): session closed for user root Jan 23 
23:56:55.220113 sshd[2482]: pam_unix(sshd:session): session closed for user core Jan 23 23:56:55.231339 systemd[1]: sshd@6-172.31.16.109:22-4.153.228.146:36288.service: Deactivated successfully. Jan 23 23:56:55.244743 systemd[1]: session-7.scope: Deactivated successfully. Jan 23 23:56:55.248037 systemd-logind[2101]: Session 7 logged out. Waiting for processes to exit. Jan 23 23:56:55.252571 systemd-logind[2101]: Removed session 7. Jan 23 23:56:55.594714 kubelet[3582]: I0123 23:56:55.594526 3582 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 23 23:56:55.595872 kubelet[3582]: I0123 23:56:55.595505 3582 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 23 23:56:55.595966 containerd[2135]: time="2026-01-23T23:56:55.595191603Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 23 23:56:56.515032 kubelet[3582]: I0123 23:56:56.514972 3582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/63cc08a2-e61d-44ea-8c15-f900399de0b3-xtables-lock\") pod \"kube-proxy-4c96q\" (UID: \"63cc08a2-e61d-44ea-8c15-f900399de0b3\") " pod="kube-system/kube-proxy-4c96q" Jan 23 23:56:56.515190 kubelet[3582]: I0123 23:56:56.515043 3582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/933dfb45-99a9-4d36-ad6d-924571aec70a-etc-cni-netd\") pod \"cilium-8pkzm\" (UID: \"933dfb45-99a9-4d36-ad6d-924571aec70a\") " pod="kube-system/cilium-8pkzm" Jan 23 23:56:56.515190 kubelet[3582]: I0123 23:56:56.515085 3582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/63cc08a2-e61d-44ea-8c15-f900399de0b3-lib-modules\") pod \"kube-proxy-4c96q\" (UID: \"63cc08a2-e61d-44ea-8c15-f900399de0b3\") " pod="kube-system/kube-proxy-4c96q" Jan 23 23:56:56.515190 kubelet[3582]: I0123 23:56:56.515120 3582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/933dfb45-99a9-4d36-ad6d-924571aec70a-clustermesh-secrets\") pod \"cilium-8pkzm\" (UID: \"933dfb45-99a9-4d36-ad6d-924571aec70a\") " pod="kube-system/cilium-8pkzm" Jan 23 23:56:56.515190 kubelet[3582]: I0123 23:56:56.515161 3582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/933dfb45-99a9-4d36-ad6d-924571aec70a-hubble-tls\") pod \"cilium-8pkzm\" (UID: \"933dfb45-99a9-4d36-ad6d-924571aec70a\") " pod="kube-system/cilium-8pkzm" Jan 23 23:56:56.515424 kubelet[3582]: I0123 23:56:56.515205 3582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/933dfb45-99a9-4d36-ad6d-924571aec70a-bpf-maps\") pod \"cilium-8pkzm\" (UID: \"933dfb45-99a9-4d36-ad6d-924571aec70a\") " pod="kube-system/cilium-8pkzm" Jan 23 23:56:56.515424 kubelet[3582]: I0123 23:56:56.515252 3582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/933dfb45-99a9-4d36-ad6d-924571aec70a-cni-path\") pod \"cilium-8pkzm\" (UID: \"933dfb45-99a9-4d36-ad6d-924571aec70a\") " pod="kube-system/cilium-8pkzm" Jan 23 
23:56:56.515424 kubelet[3582]: I0123 23:56:56.515288 3582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/933dfb45-99a9-4d36-ad6d-924571aec70a-host-proc-sys-kernel\") pod \"cilium-8pkzm\" (UID: \"933dfb45-99a9-4d36-ad6d-924571aec70a\") " pod="kube-system/cilium-8pkzm" Jan 23 23:56:56.515424 kubelet[3582]: I0123 23:56:56.515333 3582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/933dfb45-99a9-4d36-ad6d-924571aec70a-cilium-cgroup\") pod \"cilium-8pkzm\" (UID: \"933dfb45-99a9-4d36-ad6d-924571aec70a\") " pod="kube-system/cilium-8pkzm" Jan 23 23:56:56.515424 kubelet[3582]: I0123 23:56:56.515370 3582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/933dfb45-99a9-4d36-ad6d-924571aec70a-cilium-config-path\") pod \"cilium-8pkzm\" (UID: \"933dfb45-99a9-4d36-ad6d-924571aec70a\") " pod="kube-system/cilium-8pkzm" Jan 23 23:56:56.515424 kubelet[3582]: I0123 23:56:56.515414 3582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/63cc08a2-e61d-44ea-8c15-f900399de0b3-kube-proxy\") pod \"kube-proxy-4c96q\" (UID: \"63cc08a2-e61d-44ea-8c15-f900399de0b3\") " pod="kube-system/kube-proxy-4c96q" Jan 23 23:56:56.515866 kubelet[3582]: I0123 23:56:56.515448 3582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/933dfb45-99a9-4d36-ad6d-924571aec70a-host-proc-sys-net\") pod \"cilium-8pkzm\" (UID: \"933dfb45-99a9-4d36-ad6d-924571aec70a\") " pod="kube-system/cilium-8pkzm" Jan 23 23:56:56.515866 kubelet[3582]: I0123 23:56:56.515490 3582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtv7j\" (UniqueName: \"kubernetes.io/projected/933dfb45-99a9-4d36-ad6d-924571aec70a-kube-api-access-qtv7j\") pod \"cilium-8pkzm\" (UID: \"933dfb45-99a9-4d36-ad6d-924571aec70a\") " pod="kube-system/cilium-8pkzm" Jan 23 23:56:56.515866 kubelet[3582]: I0123 23:56:56.515527 3582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9fsq\" (UniqueName: \"kubernetes.io/projected/63cc08a2-e61d-44ea-8c15-f900399de0b3-kube-api-access-l9fsq\") pod \"kube-proxy-4c96q\" (UID: \"63cc08a2-e61d-44ea-8c15-f900399de0b3\") " pod="kube-system/kube-proxy-4c96q" Jan 23 23:56:56.515866 kubelet[3582]: I0123 23:56:56.515560 3582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/933dfb45-99a9-4d36-ad6d-924571aec70a-hostproc\") pod \"cilium-8pkzm\" (UID: \"933dfb45-99a9-4d36-ad6d-924571aec70a\") " pod="kube-system/cilium-8pkzm" Jan 23 23:56:56.515866 kubelet[3582]: I0123 23:56:56.515612 3582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/933dfb45-99a9-4d36-ad6d-924571aec70a-lib-modules\") pod \"cilium-8pkzm\" (UID: \"933dfb45-99a9-4d36-ad6d-924571aec70a\") " pod="kube-system/cilium-8pkzm" Jan 23 23:56:56.516154 kubelet[3582]: I0123 23:56:56.515675 3582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/933dfb45-99a9-4d36-ad6d-924571aec70a-cilium-run\") pod \"cilium-8pkzm\" (UID: \"933dfb45-99a9-4d36-ad6d-924571aec70a\") " pod="kube-system/cilium-8pkzm" Jan 23 23:56:56.516154 kubelet[3582]: I0123 23:56:56.515716 3582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/933dfb45-99a9-4d36-ad6d-924571aec70a-xtables-lock\") pod \"cilium-8pkzm\" (UID: \"933dfb45-99a9-4d36-ad6d-924571aec70a\") " pod="kube-system/cilium-8pkzm" Jan 23 23:56:56.771571 containerd[2135]: time="2026-01-23T23:56:56.767459381Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4c96q,Uid:63cc08a2-e61d-44ea-8c15-f900399de0b3,Namespace:kube-system,Attempt:0,}" Jan 23 23:56:56.786907 containerd[2135]: time="2026-01-23T23:56:56.786840413Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8pkzm,Uid:933dfb45-99a9-4d36-ad6d-924571aec70a,Namespace:kube-system,Attempt:0,}" Jan 23 23:56:56.826674 kubelet[3582]: I0123 23:56:56.824395 3582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9742g\" (UniqueName: \"kubernetes.io/projected/43b841e6-3993-4d47-8028-013eb3640157-kube-api-access-9742g\") pod \"cilium-operator-6c4d7847fc-knm26\" (UID: \"43b841e6-3993-4d47-8028-013eb3640157\") " pod="kube-system/cilium-operator-6c4d7847fc-knm26" Jan 23 23:56:56.833902 kubelet[3582]: I0123 23:56:56.830054 3582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/43b841e6-3993-4d47-8028-013eb3640157-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-knm26\" (UID: \"43b841e6-3993-4d47-8028-013eb3640157\") " pod="kube-system/cilium-operator-6c4d7847fc-knm26" Jan 23 23:56:56.904734 containerd[2135]: time="2026-01-23T23:56:56.902372897Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:56:56.904734 containerd[2135]: time="2026-01-23T23:56:56.902479841Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:56:56.904734 containerd[2135]: time="2026-01-23T23:56:56.902515925Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:56:56.904734 containerd[2135]: time="2026-01-23T23:56:56.902744921Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:56:56.913607 containerd[2135]: time="2026-01-23T23:56:56.913365713Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:56:56.913607 containerd[2135]: time="2026-01-23T23:56:56.913487393Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:56:56.913989 containerd[2135]: time="2026-01-23T23:56:56.913563305Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:56:56.913989 containerd[2135]: time="2026-01-23T23:56:56.913792277Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:56:57.017224 containerd[2135]: time="2026-01-23T23:56:57.017145506Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8pkzm,Uid:933dfb45-99a9-4d36-ad6d-924571aec70a,Namespace:kube-system,Attempt:0,} returns sandbox id \"db5be60d230425ec06aff951cd5dfccdfda83d06ab1872603cc8b24614cc81d9\"" Jan 23 23:56:57.023567 containerd[2135]: time="2026-01-23T23:56:57.022988738Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 23 23:56:57.053409 containerd[2135]: time="2026-01-23T23:56:57.053338586Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4c96q,Uid:63cc08a2-e61d-44ea-8c15-f900399de0b3,Namespace:kube-system,Attempt:0,} returns sandbox id \"b24cfeb4124584ce198b45c1aa42b46c75c2b2c1e0e69f6af1e80541ac5564f9\"" Jan 23 23:56:57.060448 containerd[2135]: time="2026-01-23T23:56:57.060377798Z" level=info msg="CreateContainer within sandbox \"b24cfeb4124584ce198b45c1aa42b46c75c2b2c1e0e69f6af1e80541ac5564f9\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 23 23:56:57.083633 containerd[2135]: time="2026-01-23T23:56:57.083526302Z" level=info msg="CreateContainer within sandbox \"b24cfeb4124584ce198b45c1aa42b46c75c2b2c1e0e69f6af1e80541ac5564f9\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"18cb0a2b708d254fcd193dbfb54f2326c61a017d7bcdc980030cb9f77613ebb5\"" Jan 23 23:56:57.087601 containerd[2135]: time="2026-01-23T23:56:57.086737298Z" level=info msg="StartContainer for \"18cb0a2b708d254fcd193dbfb54f2326c61a017d7bcdc980030cb9f77613ebb5\"" Jan 23 23:56:57.113022 containerd[2135]: time="2026-01-23T23:56:57.112957466Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-knm26,Uid:43b841e6-3993-4d47-8028-013eb3640157,Namespace:kube-system,Attempt:0,}" Jan 23 23:56:57.193807 containerd[2135]: time="2026-01-23T23:56:57.193300083Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:56:57.195476 containerd[2135]: time="2026-01-23T23:56:57.194127051Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:56:57.195476 containerd[2135]: time="2026-01-23T23:56:57.194416767Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:56:57.195476 containerd[2135]: time="2026-01-23T23:56:57.194785719Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:56:57.243067 containerd[2135]: time="2026-01-23T23:56:57.242986623Z" level=info msg="StartContainer for \"18cb0a2b708d254fcd193dbfb54f2326c61a017d7bcdc980030cb9f77613ebb5\" returns successfully" Jan 23 23:56:57.351775 containerd[2135]: time="2026-01-23T23:56:57.350581203Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-knm26,Uid:43b841e6-3993-4d47-8028-013eb3640157,Namespace:kube-system,Attempt:0,} returns sandbox id \"eb88ed3c77bbe6182597d61547af2635c3183240a3014cc99b2bc67eb25d5068\"" Jan 23 23:56:58.183683 kubelet[3582]: I0123 23:56:58.183336 3582 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-4c96q" podStartSLOduration=2.18330802 podStartE2EDuration="2.18330802s" podCreationTimestamp="2026-01-23 23:56:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 23:56:58.182544424 +0000 UTC m=+6.450075297" watchObservedRunningTime="2026-01-23 23:56:58.18330802 +0000 UTC m=+6.450709941" Jan 23 23:57:02.442010 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount571168970.mount: Deactivated successfully. Jan 23 23:57:04.996768 containerd[2135]: time="2026-01-23T23:57:04.996684481Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:57:05.000140 containerd[2135]: time="2026-01-23T23:57:05.000069813Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Jan 23 23:57:05.001700 containerd[2135]: time="2026-01-23T23:57:05.001597533Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:57:05.006601 containerd[2135]: time="2026-01-23T23:57:05.006500434Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 7.983130156s" Jan 23 23:57:05.006601 containerd[2135]: time="2026-01-23T23:57:05.006591934Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jan 23 23:57:05.010111 containerd[2135]: time="2026-01-23T23:57:05.010010290Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 23 23:57:05.011806 containerd[2135]: time="2026-01-23T23:57:05.011736154Z" level=info msg="CreateContainer within sandbox \"db5be60d230425ec06aff951cd5dfccdfda83d06ab1872603cc8b24614cc81d9\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 23 23:57:05.038112 containerd[2135]: time="2026-01-23T23:57:05.038056990Z" level=info msg="CreateContainer within sandbox \"db5be60d230425ec06aff951cd5dfccdfda83d06ab1872603cc8b24614cc81d9\" for 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"dfa5951ad263842dfe6a370a2ef97c892190e902462500fb7c465e0675668f28\"" Jan 23 23:57:05.040998 containerd[2135]: time="2026-01-23T23:57:05.040928650Z" level=info msg="StartContainer for \"dfa5951ad263842dfe6a370a2ef97c892190e902462500fb7c465e0675668f28\"" Jan 23 23:57:05.163472 containerd[2135]: time="2026-01-23T23:57:05.162952558Z" level=info msg="StartContainer for \"dfa5951ad263842dfe6a370a2ef97c892190e902462500fb7c465e0675668f28\" returns successfully" Jan 23 23:57:06.026994 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dfa5951ad263842dfe6a370a2ef97c892190e902462500fb7c465e0675668f28-rootfs.mount: Deactivated successfully. Jan 23 23:57:06.503693 containerd[2135]: time="2026-01-23T23:57:06.503579221Z" level=info msg="shim disconnected" id=dfa5951ad263842dfe6a370a2ef97c892190e902462500fb7c465e0675668f28 namespace=k8s.io Jan 23 23:57:06.503693 containerd[2135]: time="2026-01-23T23:57:06.503688709Z" level=warning msg="cleaning up after shim disconnected" id=dfa5951ad263842dfe6a370a2ef97c892190e902462500fb7c465e0675668f28 namespace=k8s.io Jan 23 23:57:06.504492 containerd[2135]: time="2026-01-23T23:57:06.503710957Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 23 23:57:06.902531 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2490248078.mount: Deactivated successfully. Jan 23 23:57:07.212852 containerd[2135]: time="2026-01-23T23:57:07.212190492Z" level=info msg="CreateContainer within sandbox \"db5be60d230425ec06aff951cd5dfccdfda83d06ab1872603cc8b24614cc81d9\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 23 23:57:07.256832 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1319039131.mount: Deactivated successfully. Jan 23 23:57:07.267060 containerd[2135]: time="2026-01-23T23:57:07.266308429Z" level=info msg="CreateContainer within sandbox \"db5be60d230425ec06aff951cd5dfccdfda83d06ab1872603cc8b24614cc81d9\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"edab0e7ed5bed600ef899b13c9abeeee70980e1eed44e8f9da8e99832ca57b92\"" Jan 23 23:57:07.269621 containerd[2135]: time="2026-01-23T23:57:07.267914821Z" level=info msg="StartContainer for \"edab0e7ed5bed600ef899b13c9abeeee70980e1eed44e8f9da8e99832ca57b92\"" Jan 23 23:57:07.350523 systemd[1]: run-containerd-runc-k8s.io-edab0e7ed5bed600ef899b13c9abeeee70980e1eed44e8f9da8e99832ca57b92-runc.Cq3fSf.mount: Deactivated successfully. Jan 23 23:57:07.446858 containerd[2135]: time="2026-01-23T23:57:07.446779190Z" level=info msg="StartContainer for \"edab0e7ed5bed600ef899b13c9abeeee70980e1eed44e8f9da8e99832ca57b92\" returns successfully" Jan 23 23:57:07.474397 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 23 23:57:07.475403 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 23 23:57:07.475529 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 23 23:57:07.492548 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 23 23:57:07.546195 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jan 23 23:57:07.638202 containerd[2135]: time="2026-01-23T23:57:07.637904259Z" level=info msg="shim disconnected" id=edab0e7ed5bed600ef899b13c9abeeee70980e1eed44e8f9da8e99832ca57b92 namespace=k8s.io Jan 23 23:57:07.638202 containerd[2135]: time="2026-01-23T23:57:07.637983387Z" level=warning msg="cleaning up after shim disconnected" id=edab0e7ed5bed600ef899b13c9abeeee70980e1eed44e8f9da8e99832ca57b92 namespace=k8s.io Jan 23 23:57:07.638202 containerd[2135]: time="2026-01-23T23:57:07.638006415Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 23 23:57:07.817155 containerd[2135]: time="2026-01-23T23:57:07.816985551Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:57:07.819201 containerd[2135]: time="2026-01-23T23:57:07.819130563Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Jan 23 23:57:07.821440 containerd[2135]: time="2026-01-23T23:57:07.821342115Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:57:07.825768 containerd[2135]: time="2026-01-23T23:57:07.825551140Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 2.815472522s" Jan 23 23:57:07.825768 containerd[2135]: time="2026-01-23T23:57:07.825633784Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jan 23 23:57:07.831448 containerd[2135]: time="2026-01-23T23:57:07.830662780Z" level=info msg="CreateContainer within sandbox \"eb88ed3c77bbe6182597d61547af2635c3183240a3014cc99b2bc67eb25d5068\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 23 23:57:07.851057 containerd[2135]: time="2026-01-23T23:57:07.850975408Z" level=info msg="CreateContainer within sandbox \"eb88ed3c77bbe6182597d61547af2635c3183240a3014cc99b2bc67eb25d5068\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"d0ef27b6086e558d7e729545f4452e0737064352b96d431c21dc3cae3d802192\"" Jan 23 23:57:07.852280 containerd[2135]: time="2026-01-23T23:57:07.852210820Z" level=info msg="StartContainer for \"d0ef27b6086e558d7e729545f4452e0737064352b96d431c21dc3cae3d802192\"" Jan 23 23:57:07.953606 containerd[2135]: time="2026-01-23T23:57:07.953544460Z" level=info msg="StartContainer for \"d0ef27b6086e558d7e729545f4452e0737064352b96d431c21dc3cae3d802192\" returns successfully" Jan 23 23:57:08.229066 containerd[2135]: time="2026-01-23T23:57:08.228845654Z" level=info msg="CreateContainer within sandbox \"db5be60d230425ec06aff951cd5dfccdfda83d06ab1872603cc8b24614cc81d9\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 23 23:57:08.265820 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-edab0e7ed5bed600ef899b13c9abeeee70980e1eed44e8f9da8e99832ca57b92-rootfs.mount: Deactivated successfully. Jan 23 23:57:08.282920 containerd[2135]: time="2026-01-23T23:57:08.282542918Z" level=info msg="CreateContainer within sandbox \"db5be60d230425ec06aff951cd5dfccdfda83d06ab1872603cc8b24614cc81d9\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"c787f680fd3f3410c192597ecc70a56664ec2e8ab0ef6514f596af912cf0b228\"" Jan 23 23:57:08.294633 containerd[2135]: time="2026-01-23T23:57:08.292287602Z" level=info msg="StartContainer for \"c787f680fd3f3410c192597ecc70a56664ec2e8ab0ef6514f596af912cf0b228\"" Jan 23 23:57:08.642798 containerd[2135]: time="2026-01-23T23:57:08.642323980Z" level=info msg="StartContainer for \"c787f680fd3f3410c192597ecc70a56664ec2e8ab0ef6514f596af912cf0b228\" returns successfully" Jan 23 23:57:08.796327 containerd[2135]: time="2026-01-23T23:57:08.794895244Z" level=info msg="shim disconnected" id=c787f680fd3f3410c192597ecc70a56664ec2e8ab0ef6514f596af912cf0b228 namespace=k8s.io Jan 23 23:57:08.796327 containerd[2135]: time="2026-01-23T23:57:08.794970160Z" level=warning msg="cleaning up after shim disconnected" id=c787f680fd3f3410c192597ecc70a56664ec2e8ab0ef6514f596af912cf0b228 namespace=k8s.io Jan 23 23:57:08.796327 containerd[2135]: time="2026-01-23T23:57:08.794990524Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 23 23:57:09.256131 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c787f680fd3f3410c192597ecc70a56664ec2e8ab0ef6514f596af912cf0b228-rootfs.mount: Deactivated successfully. Jan 23 23:57:09.273566 containerd[2135]: time="2026-01-23T23:57:09.273489891Z" level=info msg="CreateContainer within sandbox \"db5be60d230425ec06aff951cd5dfccdfda83d06ab1872603cc8b24614cc81d9\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 23 23:57:09.321241 containerd[2135]: time="2026-01-23T23:57:09.318489879Z" level=info msg="CreateContainer within sandbox \"db5be60d230425ec06aff951cd5dfccdfda83d06ab1872603cc8b24614cc81d9\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"35be388526cde0c9934f85aaa9fdedc9a233da2a2bcb89cc046661207a708f4f\"" Jan 23 23:57:09.323956 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount626911129.mount: Deactivated successfully. 
Jan 23 23:57:09.350716 containerd[2135]: time="2026-01-23T23:57:09.350402487Z" level=info msg="StartContainer for \"35be388526cde0c9934f85aaa9fdedc9a233da2a2bcb89cc046661207a708f4f\"" Jan 23 23:57:09.520528 kubelet[3582]: I0123 23:57:09.519569 3582 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-knm26" podStartSLOduration=3.049424608 podStartE2EDuration="13.519544744s" podCreationTimestamp="2026-01-23 23:56:56 +0000 UTC" firstStartedPulling="2026-01-23 23:56:57.357168112 +0000 UTC m=+5.624569937" lastFinishedPulling="2026-01-23 23:57:07.82728826 +0000 UTC m=+16.094690073" observedRunningTime="2026-01-23 23:57:08.409449134 +0000 UTC m=+16.676850995" watchObservedRunningTime="2026-01-23 23:57:09.519544744 +0000 UTC m=+17.786946569" Jan 23 23:57:09.658813 containerd[2135]: time="2026-01-23T23:57:09.657181445Z" level=info msg="StartContainer for \"35be388526cde0c9934f85aaa9fdedc9a233da2a2bcb89cc046661207a708f4f\" returns successfully" Jan 23 23:57:09.723131 containerd[2135]: time="2026-01-23T23:57:09.723054869Z" level=info msg="shim disconnected" id=35be388526cde0c9934f85aaa9fdedc9a233da2a2bcb89cc046661207a708f4f namespace=k8s.io Jan 23 23:57:09.724006 containerd[2135]: time="2026-01-23T23:57:09.723446885Z" level=warning msg="cleaning up after shim disconnected" id=35be388526cde0c9934f85aaa9fdedc9a233da2a2bcb89cc046661207a708f4f namespace=k8s.io Jan 23 23:57:09.724006 containerd[2135]: time="2026-01-23T23:57:09.723480353Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 23 23:57:10.247755 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-35be388526cde0c9934f85aaa9fdedc9a233da2a2bcb89cc046661207a708f4f-rootfs.mount: Deactivated successfully. Jan 23 23:57:10.268413 containerd[2135]: time="2026-01-23T23:57:10.268095064Z" level=info msg="CreateContainer within sandbox \"db5be60d230425ec06aff951cd5dfccdfda83d06ab1872603cc8b24614cc81d9\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 23 23:57:10.305061 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2547003241.mount: Deactivated successfully. 
Jan 23 23:57:10.308789 containerd[2135]: time="2026-01-23T23:57:10.308367424Z" level=info msg="CreateContainer within sandbox \"db5be60d230425ec06aff951cd5dfccdfda83d06ab1872603cc8b24614cc81d9\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"bcfeddcfe99d57b2a024a50bd5aba60771eea8b4af7f92637fd80ac7659eb5ba\"" Jan 23 23:57:10.310392 containerd[2135]: time="2026-01-23T23:57:10.310221376Z" level=info msg="StartContainer for \"bcfeddcfe99d57b2a024a50bd5aba60771eea8b4af7f92637fd80ac7659eb5ba\"" Jan 23 23:57:10.417969 containerd[2135]: time="2026-01-23T23:57:10.417894052Z" level=info msg="StartContainer for \"bcfeddcfe99d57b2a024a50bd5aba60771eea8b4af7f92637fd80ac7659eb5ba\" returns successfully" Jan 23 23:57:10.562783 kubelet[3582]: I0123 23:57:10.561759 3582 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 23 23:57:10.638360 kubelet[3582]: I0123 23:57:10.638314 3582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5ps2\" (UniqueName: \"kubernetes.io/projected/16abc80a-136d-47da-9f60-ff56fe5b0049-kube-api-access-z5ps2\") pod \"coredns-668d6bf9bc-75gc8\" (UID: \"16abc80a-136d-47da-9f60-ff56fe5b0049\") " pod="kube-system/coredns-668d6bf9bc-75gc8" Jan 23 23:57:10.640110 kubelet[3582]: I0123 23:57:10.639769 3582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jq9p8\" (UniqueName: \"kubernetes.io/projected/20d233a9-1d5a-4fe4-97d5-336e3200aa0a-kube-api-access-jq9p8\") pod \"coredns-668d6bf9bc-cn8pz\" (UID: \"20d233a9-1d5a-4fe4-97d5-336e3200aa0a\") " pod="kube-system/coredns-668d6bf9bc-cn8pz" Jan 23 23:57:10.640110 kubelet[3582]: I0123 23:57:10.639844 3582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/20d233a9-1d5a-4fe4-97d5-336e3200aa0a-config-volume\") pod \"coredns-668d6bf9bc-cn8pz\" (UID: \"20d233a9-1d5a-4fe4-97d5-336e3200aa0a\") " pod="kube-system/coredns-668d6bf9bc-cn8pz" Jan 23 23:57:10.640110 kubelet[3582]: I0123 23:57:10.639891 3582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/16abc80a-136d-47da-9f60-ff56fe5b0049-config-volume\") pod \"coredns-668d6bf9bc-75gc8\" (UID: \"16abc80a-136d-47da-9f60-ff56fe5b0049\") " pod="kube-system/coredns-668d6bf9bc-75gc8" Jan 23 23:57:10.938821 containerd[2135]: time="2026-01-23T23:57:10.936929671Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-75gc8,Uid:16abc80a-136d-47da-9f60-ff56fe5b0049,Namespace:kube-system,Attempt:0,}" Jan 23 23:57:10.951935 containerd[2135]: time="2026-01-23T23:57:10.949988287Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-cn8pz,Uid:20d233a9-1d5a-4fe4-97d5-336e3200aa0a,Namespace:kube-system,Attempt:0,}" Jan 23 23:57:11.393344 kubelet[3582]: I0123 23:57:11.392546 3582 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-8pkzm" podStartSLOduration=7.405068049 podStartE2EDuration="15.392520245s" podCreationTimestamp="2026-01-23 23:56:56 +0000 UTC" firstStartedPulling="2026-01-23 23:56:57.020476514 +0000 UTC m=+5.287878339" lastFinishedPulling="2026-01-23 23:57:05.007928722 +0000 UTC m=+13.275330535" observedRunningTime="2026-01-23 23:57:11.389029685 +0000 UTC m=+19.656431510" watchObservedRunningTime="2026-01-23 23:57:11.392520245 +0000 UTC 
m=+19.659922082" Jan 23 23:57:13.512588 (udev-worker)[4377]: Network interface NamePolicy= disabled on kernel command line. Jan 23 23:57:13.513001 systemd-networkd[1692]: cilium_host: Link UP Jan 23 23:57:13.513290 systemd-networkd[1692]: cilium_net: Link UP Jan 23 23:57:13.513298 systemd-networkd[1692]: cilium_net: Gained carrier Jan 23 23:57:13.516316 systemd-networkd[1692]: cilium_host: Gained carrier Jan 23 23:57:13.519265 (udev-worker)[4410]: Network interface NamePolicy= disabled on kernel command line. Jan 23 23:57:13.520849 systemd-networkd[1692]: cilium_net: Gained IPv6LL Jan 23 23:57:13.697926 systemd-networkd[1692]: cilium_vxlan: Link UP Jan 23 23:57:13.697941 systemd-networkd[1692]: cilium_vxlan: Gained carrier Jan 23 23:57:14.168743 systemd-networkd[1692]: cilium_host: Gained IPv6LL Jan 23 23:57:14.279894 kernel: NET: Registered PF_ALG protocol family Jan 23 23:57:15.385204 systemd-networkd[1692]: cilium_vxlan: Gained IPv6LL Jan 23 23:57:15.613938 systemd-networkd[1692]: lxc_health: Link UP Jan 23 23:57:15.621854 systemd-networkd[1692]: lxc_health: Gained carrier Jan 23 23:57:16.083895 systemd-networkd[1692]: lxcb9f35da1d146: Link UP Jan 23 23:57:16.093694 kernel: eth0: renamed from tmp57a61 Jan 23 23:57:16.099163 systemd-networkd[1692]: lxcb9f35da1d146: Gained carrier Jan 23 23:57:16.099356 (udev-worker)[4422]: Network interface NamePolicy= disabled on kernel command line. Jan 23 23:57:16.123100 systemd-networkd[1692]: lxcbc080db6b835: Link UP Jan 23 23:57:16.133699 kernel: eth0: renamed from tmp81623 Jan 23 23:57:16.142017 (udev-worker)[4425]: Network interface NamePolicy= disabled on kernel command line. Jan 23 23:57:16.150740 systemd-networkd[1692]: lxcbc080db6b835: Gained carrier Jan 23 23:57:17.370860 systemd-networkd[1692]: lxc_health: Gained IPv6LL Jan 23 23:57:17.561834 systemd-networkd[1692]: lxcb9f35da1d146: Gained IPv6LL Jan 23 23:57:17.816454 systemd-networkd[1692]: lxcbc080db6b835: Gained IPv6LL Jan 23 23:57:20.451725 ntpd[2087]: Listen normally on 6 cilium_host 192.168.0.159:123 Jan 23 23:57:20.453018 ntpd[2087]: 23 Jan 23:57:20 ntpd[2087]: Listen normally on 6 cilium_host 192.168.0.159:123 Jan 23 23:57:20.453018 ntpd[2087]: 23 Jan 23:57:20 ntpd[2087]: Listen normally on 7 cilium_net [fe80::b4a5:b3ff:feab:473a%4]:123 Jan 23 23:57:20.453018 ntpd[2087]: 23 Jan 23:57:20 ntpd[2087]: Listen normally on 8 cilium_host [fe80::481b:fdff:fe00:615b%5]:123 Jan 23 23:57:20.453018 ntpd[2087]: 23 Jan 23:57:20 ntpd[2087]: Listen normally on 9 cilium_vxlan [fe80::1c49:53ff:fe30:230b%6]:123 Jan 23 23:57:20.453018 ntpd[2087]: 23 Jan 23:57:20 ntpd[2087]: Listen normally on 10 lxc_health [fe80::28f7:75ff:fed7:84fb%8]:123 Jan 23 23:57:20.453018 ntpd[2087]: 23 Jan 23:57:20 ntpd[2087]: Listen normally on 11 lxcb9f35da1d146 [fe80::6ce5:36ff:fe9c:8bbe%10]:123 Jan 23 23:57:20.453018 ntpd[2087]: 23 Jan 23:57:20 ntpd[2087]: Listen normally on 12 lxcbc080db6b835 [fe80::683e:5bff:fe9c:783e%12]:123 Jan 23 23:57:20.451849 ntpd[2087]: Listen normally on 7 cilium_net [fe80::b4a5:b3ff:feab:473a%4]:123 Jan 23 23:57:20.451929 ntpd[2087]: Listen normally on 8 cilium_host [fe80::481b:fdff:fe00:615b%5]:123 Jan 23 23:57:20.451999 ntpd[2087]: Listen normally on 9 cilium_vxlan [fe80::1c49:53ff:fe30:230b%6]:123 Jan 23 23:57:20.452067 ntpd[2087]: Listen normally on 10 lxc_health [fe80::28f7:75ff:fed7:84fb%8]:123 Jan 23 23:57:20.452133 ntpd[2087]: Listen normally on 11 lxcb9f35da1d146 [fe80::6ce5:36ff:fe9c:8bbe%10]:123 Jan 23 23:57:20.452205 ntpd[2087]: Listen normally on 12 lxcbc080db6b835 
[fe80::683e:5bff:fe9c:783e%12]:123 Jan 23 23:57:24.505928 containerd[2135]: time="2026-01-23T23:57:24.504769026Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:57:24.505928 containerd[2135]: time="2026-01-23T23:57:24.504868602Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:57:24.505928 containerd[2135]: time="2026-01-23T23:57:24.504904866Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:57:24.505928 containerd[2135]: time="2026-01-23T23:57:24.505080510Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:57:24.547265 containerd[2135]: time="2026-01-23T23:57:24.542036971Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:57:24.547265 containerd[2135]: time="2026-01-23T23:57:24.542149231Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:57:24.547265 containerd[2135]: time="2026-01-23T23:57:24.542186899Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:57:24.547265 containerd[2135]: time="2026-01-23T23:57:24.542354599Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:57:24.740181 containerd[2135]: time="2026-01-23T23:57:24.740096336Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-75gc8,Uid:16abc80a-136d-47da-9f60-ff56fe5b0049,Namespace:kube-system,Attempt:0,} returns sandbox id \"57a6164a15fcfdbf1701429f8c9bb48d764f8c0d46ed91aad607e43fe3b846a2\"" Jan 23 23:57:24.748732 containerd[2135]: time="2026-01-23T23:57:24.747576872Z" level=info msg="CreateContainer within sandbox \"57a6164a15fcfdbf1701429f8c9bb48d764f8c0d46ed91aad607e43fe3b846a2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 23 23:57:24.801727 containerd[2135]: time="2026-01-23T23:57:24.801458792Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-cn8pz,Uid:20d233a9-1d5a-4fe4-97d5-336e3200aa0a,Namespace:kube-system,Attempt:0,} returns sandbox id \"81623e08454a682b094c6b7444845905d3ff17946f955c21f09d741e28116ad6\"" Jan 23 23:57:24.803585 containerd[2135]: time="2026-01-23T23:57:24.803522936Z" level=info msg="CreateContainer within sandbox \"57a6164a15fcfdbf1701429f8c9bb48d764f8c0d46ed91aad607e43fe3b846a2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7d9cc1b3179c646f043fd5c99892a385a784ef3efd0fef3e23c5016a02420199\"" Jan 23 23:57:24.806344 containerd[2135]: time="2026-01-23T23:57:24.806157608Z" level=info msg="StartContainer for \"7d9cc1b3179c646f043fd5c99892a385a784ef3efd0fef3e23c5016a02420199\"" Jan 23 23:57:24.813968 containerd[2135]: time="2026-01-23T23:57:24.813834608Z" level=info msg="CreateContainer within sandbox \"81623e08454a682b094c6b7444845905d3ff17946f955c21f09d741e28116ad6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 23 23:57:24.855694 containerd[2135]: time="2026-01-23T23:57:24.855535604Z" level=info msg="CreateContainer within sandbox 
\"81623e08454a682b094c6b7444845905d3ff17946f955c21f09d741e28116ad6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8af482f2588bde23f28e66ba6baf4da2c75609263efab7df92d1594a4fb2801f\"" Jan 23 23:57:24.859051 containerd[2135]: time="2026-01-23T23:57:24.858792392Z" level=info msg="StartContainer for \"8af482f2588bde23f28e66ba6baf4da2c75609263efab7df92d1594a4fb2801f\"" Jan 23 23:57:24.986280 containerd[2135]: time="2026-01-23T23:57:24.985817817Z" level=info msg="StartContainer for \"7d9cc1b3179c646f043fd5c99892a385a784ef3efd0fef3e23c5016a02420199\" returns successfully" Jan 23 23:57:25.046838 containerd[2135]: time="2026-01-23T23:57:25.046761017Z" level=info msg="StartContainer for \"8af482f2588bde23f28e66ba6baf4da2c75609263efab7df92d1594a4fb2801f\" returns successfully" Jan 23 23:57:25.384429 kubelet[3582]: I0123 23:57:25.382717 3582 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-cn8pz" podStartSLOduration=29.382692775 podStartE2EDuration="29.382692775s" podCreationTimestamp="2026-01-23 23:56:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 23:57:25.381839347 +0000 UTC m=+33.649241196" watchObservedRunningTime="2026-01-23 23:57:25.382692775 +0000 UTC m=+33.650094660" Jan 23 23:57:25.410558 kubelet[3582]: I0123 23:57:25.410463 3582 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-75gc8" podStartSLOduration=29.410436847 podStartE2EDuration="29.410436847s" podCreationTimestamp="2026-01-23 23:56:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 23:57:25.406619407 +0000 UTC m=+33.674021244" watchObservedRunningTime="2026-01-23 23:57:25.410436847 +0000 UTC m=+33.677838672" Jan 23 23:57:25.548558 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount528757977.mount: Deactivated successfully. Jan 23 23:57:33.391291 systemd[1]: Started sshd@7-172.31.16.109:22-4.153.228.146:39456.service - OpenSSH per-connection server daemon (4.153.228.146:39456). Jan 23 23:57:33.937509 sshd[4955]: Accepted publickey for core from 4.153.228.146 port 39456 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:57:33.940869 sshd[4955]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:57:33.949375 systemd-logind[2101]: New session 8 of user core. Jan 23 23:57:33.959134 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 23 23:57:34.459527 sshd[4955]: pam_unix(sshd:session): session closed for user core Jan 23 23:57:34.464928 systemd[1]: sshd@7-172.31.16.109:22-4.153.228.146:39456.service: Deactivated successfully. Jan 23 23:57:34.474556 systemd[1]: session-8.scope: Deactivated successfully. Jan 23 23:57:34.478446 systemd-logind[2101]: Session 8 logged out. Waiting for processes to exit. Jan 23 23:57:34.481132 systemd-logind[2101]: Removed session 8. Jan 23 23:57:39.555300 systemd[1]: Started sshd@8-172.31.16.109:22-4.153.228.146:54422.service - OpenSSH per-connection server daemon (4.153.228.146:54422). 
Jan 23 23:57:40.085330 sshd[4969]: Accepted publickey for core from 4.153.228.146 port 54422 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:57:40.088487 sshd[4969]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:57:40.097977 systemd-logind[2101]: New session 9 of user core. Jan 23 23:57:40.104492 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 23 23:57:40.582014 sshd[4969]: pam_unix(sshd:session): session closed for user core Jan 23 23:57:40.588012 systemd[1]: sshd@8-172.31.16.109:22-4.153.228.146:54422.service: Deactivated successfully. Jan 23 23:57:40.597029 systemd[1]: session-9.scope: Deactivated successfully. Jan 23 23:57:40.600264 systemd-logind[2101]: Session 9 logged out. Waiting for processes to exit. Jan 23 23:57:40.603405 systemd-logind[2101]: Removed session 9. Jan 23 23:57:45.660151 systemd[1]: Started sshd@9-172.31.16.109:22-4.153.228.146:49300.service - OpenSSH per-connection server daemon (4.153.228.146:49300). Jan 23 23:57:46.162186 sshd[4984]: Accepted publickey for core from 4.153.228.146 port 49300 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:57:46.164380 sshd[4984]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:57:46.175405 systemd-logind[2101]: New session 10 of user core. Jan 23 23:57:46.185346 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 23 23:57:46.628001 sshd[4984]: pam_unix(sshd:session): session closed for user core Jan 23 23:57:46.636099 systemd[1]: sshd@9-172.31.16.109:22-4.153.228.146:49300.service: Deactivated successfully. Jan 23 23:57:46.643627 systemd[1]: session-10.scope: Deactivated successfully. Jan 23 23:57:46.646161 systemd-logind[2101]: Session 10 logged out. Waiting for processes to exit. Jan 23 23:57:46.648175 systemd-logind[2101]: Removed session 10. Jan 23 23:57:51.729200 systemd[1]: Started sshd@10-172.31.16.109:22-4.153.228.146:49304.service - OpenSSH per-connection server daemon (4.153.228.146:49304). Jan 23 23:57:52.261171 sshd[4999]: Accepted publickey for core from 4.153.228.146 port 49304 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:57:52.263843 sshd[4999]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:57:52.279379 systemd-logind[2101]: New session 11 of user core. Jan 23 23:57:52.283480 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 23 23:57:52.778559 sshd[4999]: pam_unix(sshd:session): session closed for user core Jan 23 23:57:52.784049 systemd-logind[2101]: Session 11 logged out. Waiting for processes to exit. Jan 23 23:57:52.786222 systemd[1]: sshd@10-172.31.16.109:22-4.153.228.146:49304.service: Deactivated successfully. Jan 23 23:57:52.792619 systemd[1]: session-11.scope: Deactivated successfully. Jan 23 23:57:52.795752 systemd-logind[2101]: Removed session 11. Jan 23 23:57:52.855092 systemd[1]: Started sshd@11-172.31.16.109:22-4.153.228.146:49306.service - OpenSSH per-connection server daemon (4.153.228.146:49306). Jan 23 23:57:53.360793 sshd[5017]: Accepted publickey for core from 4.153.228.146 port 49306 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:57:53.363389 sshd[5017]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:57:53.372248 systemd-logind[2101]: New session 12 of user core. Jan 23 23:57:53.379717 systemd[1]: Started session-12.scope - Session 12 of User core. 
Jan 23 23:57:53.909285 sshd[5017]: pam_unix(sshd:session): session closed for user core Jan 23 23:57:53.927261 systemd-logind[2101]: Session 12 logged out. Waiting for processes to exit. Jan 23 23:57:53.932137 systemd[1]: sshd@11-172.31.16.109:22-4.153.228.146:49306.service: Deactivated successfully. Jan 23 23:57:53.943018 systemd[1]: session-12.scope: Deactivated successfully. Jan 23 23:57:53.947535 systemd-logind[2101]: Removed session 12. Jan 23 23:57:53.998142 systemd[1]: Started sshd@12-172.31.16.109:22-4.153.228.146:49314.service - OpenSSH per-connection server daemon (4.153.228.146:49314). Jan 23 23:57:54.493342 sshd[5029]: Accepted publickey for core from 4.153.228.146 port 49314 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:57:54.495954 sshd[5029]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:57:54.505264 systemd-logind[2101]: New session 13 of user core. Jan 23 23:57:54.514136 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 23 23:57:54.960004 sshd[5029]: pam_unix(sshd:session): session closed for user core Jan 23 23:57:54.966147 systemd-logind[2101]: Session 13 logged out. Waiting for processes to exit. Jan 23 23:57:54.967189 systemd[1]: sshd@12-172.31.16.109:22-4.153.228.146:49314.service: Deactivated successfully. Jan 23 23:57:54.975755 systemd[1]: session-13.scope: Deactivated successfully. Jan 23 23:57:54.977615 systemd-logind[2101]: Removed session 13. Jan 23 23:58:00.044140 systemd[1]: Started sshd@13-172.31.16.109:22-4.153.228.146:35828.service - OpenSSH per-connection server daemon (4.153.228.146:35828). Jan 23 23:58:00.544278 sshd[5046]: Accepted publickey for core from 4.153.228.146 port 35828 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:58:00.547095 sshd[5046]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:58:00.554733 systemd-logind[2101]: New session 14 of user core. Jan 23 23:58:00.564128 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 23 23:58:01.005991 sshd[5046]: pam_unix(sshd:session): session closed for user core Jan 23 23:58:01.012630 systemd[1]: sshd@13-172.31.16.109:22-4.153.228.146:35828.service: Deactivated successfully. Jan 23 23:58:01.019791 systemd-logind[2101]: Session 14 logged out. Waiting for processes to exit. Jan 23 23:58:01.020215 systemd[1]: session-14.scope: Deactivated successfully. Jan 23 23:58:01.023795 systemd-logind[2101]: Removed session 14. Jan 23 23:58:06.104111 systemd[1]: Started sshd@14-172.31.16.109:22-4.153.228.146:36442.service - OpenSSH per-connection server daemon (4.153.228.146:36442). Jan 23 23:58:06.636823 sshd[5060]: Accepted publickey for core from 4.153.228.146 port 36442 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:58:06.639482 sshd[5060]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:58:06.647432 systemd-logind[2101]: New session 15 of user core. Jan 23 23:58:06.659259 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 23 23:58:07.126609 sshd[5060]: pam_unix(sshd:session): session closed for user core Jan 23 23:58:07.133622 systemd[1]: sshd@14-172.31.16.109:22-4.153.228.146:36442.service: Deactivated successfully. Jan 23 23:58:07.143592 systemd-logind[2101]: Session 15 logged out. Waiting for processes to exit. Jan 23 23:58:07.144599 systemd[1]: session-15.scope: Deactivated successfully. Jan 23 23:58:07.148150 systemd-logind[2101]: Removed session 15. 
Jan 23 23:58:07.217151 systemd[1]: Started sshd@15-172.31.16.109:22-4.153.228.146:36454.service - OpenSSH per-connection server daemon (4.153.228.146:36454). Jan 23 23:58:07.761361 sshd[5074]: Accepted publickey for core from 4.153.228.146 port 36454 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:58:07.764039 sshd[5074]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:58:07.772702 systemd-logind[2101]: New session 16 of user core. Jan 23 23:58:07.782158 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 23 23:58:08.345751 sshd[5074]: pam_unix(sshd:session): session closed for user core Jan 23 23:58:08.351108 systemd[1]: sshd@15-172.31.16.109:22-4.153.228.146:36454.service: Deactivated successfully. Jan 23 23:58:08.352264 systemd-logind[2101]: Session 16 logged out. Waiting for processes to exit. Jan 23 23:58:08.360713 systemd[1]: session-16.scope: Deactivated successfully. Jan 23 23:58:08.363921 systemd-logind[2101]: Removed session 16. Jan 23 23:58:08.427124 systemd[1]: Started sshd@16-172.31.16.109:22-4.153.228.146:36470.service - OpenSSH per-connection server daemon (4.153.228.146:36470). Jan 23 23:58:08.928163 sshd[5087]: Accepted publickey for core from 4.153.228.146 port 36470 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:58:08.930942 sshd[5087]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:58:08.940211 systemd-logind[2101]: New session 17 of user core. Jan 23 23:58:08.948189 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 23 23:58:10.107009 sshd[5087]: pam_unix(sshd:session): session closed for user core Jan 23 23:58:10.112691 systemd-logind[2101]: Session 17 logged out. Waiting for processes to exit. Jan 23 23:58:10.118925 systemd[1]: sshd@16-172.31.16.109:22-4.153.228.146:36470.service: Deactivated successfully. Jan 23 23:58:10.129378 systemd[1]: session-17.scope: Deactivated successfully. Jan 23 23:58:10.132809 systemd-logind[2101]: Removed session 17. Jan 23 23:58:10.191136 systemd[1]: Started sshd@17-172.31.16.109:22-4.153.228.146:36480.service - OpenSSH per-connection server daemon (4.153.228.146:36480). Jan 23 23:58:10.696025 sshd[5108]: Accepted publickey for core from 4.153.228.146 port 36480 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:58:10.698781 sshd[5108]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:58:10.707478 systemd-logind[2101]: New session 18 of user core. Jan 23 23:58:10.714290 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 23 23:58:11.412074 sshd[5108]: pam_unix(sshd:session): session closed for user core Jan 23 23:58:11.419771 systemd-logind[2101]: Session 18 logged out. Waiting for processes to exit. Jan 23 23:58:11.420383 systemd[1]: sshd@17-172.31.16.109:22-4.153.228.146:36480.service: Deactivated successfully. Jan 23 23:58:11.429045 systemd[1]: session-18.scope: Deactivated successfully. Jan 23 23:58:11.431831 systemd-logind[2101]: Removed session 18. Jan 23 23:58:11.511135 systemd[1]: Started sshd@18-172.31.16.109:22-4.153.228.146:36484.service - OpenSSH per-connection server daemon (4.153.228.146:36484). 
Jan 23 23:58:12.039791 sshd[5120]: Accepted publickey for core from 4.153.228.146 port 36484 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:58:12.042564 sshd[5120]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:58:12.051592 systemd-logind[2101]: New session 19 of user core. Jan 23 23:58:12.058303 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 23 23:58:12.527479 sshd[5120]: pam_unix(sshd:session): session closed for user core Jan 23 23:58:12.535269 systemd[1]: sshd@18-172.31.16.109:22-4.153.228.146:36484.service: Deactivated successfully. Jan 23 23:58:12.541211 systemd[1]: session-19.scope: Deactivated successfully. Jan 23 23:58:12.542547 systemd-logind[2101]: Session 19 logged out. Waiting for processes to exit. Jan 23 23:58:12.544356 systemd-logind[2101]: Removed session 19. Jan 23 23:58:17.607153 systemd[1]: Started sshd@19-172.31.16.109:22-4.153.228.146:46790.service - OpenSSH per-connection server daemon (4.153.228.146:46790). Jan 23 23:58:18.118184 sshd[5136]: Accepted publickey for core from 4.153.228.146 port 46790 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:58:18.120964 sshd[5136]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:58:18.130024 systemd-logind[2101]: New session 20 of user core. Jan 23 23:58:18.138317 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 23 23:58:18.588002 sshd[5136]: pam_unix(sshd:session): session closed for user core Jan 23 23:58:18.594337 systemd[1]: sshd@19-172.31.16.109:22-4.153.228.146:46790.service: Deactivated successfully. Jan 23 23:58:18.602429 systemd[1]: session-20.scope: Deactivated successfully. Jan 23 23:58:18.602514 systemd-logind[2101]: Session 20 logged out. Waiting for processes to exit. Jan 23 23:58:18.606561 systemd-logind[2101]: Removed session 20. Jan 23 23:58:23.671124 systemd[1]: Started sshd@20-172.31.16.109:22-4.153.228.146:46804.service - OpenSSH per-connection server daemon (4.153.228.146:46804). Jan 23 23:58:24.168868 sshd[5150]: Accepted publickey for core from 4.153.228.146 port 46804 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:58:24.173179 sshd[5150]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:58:24.188426 systemd-logind[2101]: New session 21 of user core. Jan 23 23:58:24.193277 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 23 23:58:24.638488 sshd[5150]: pam_unix(sshd:session): session closed for user core Jan 23 23:58:24.645544 systemd[1]: sshd@20-172.31.16.109:22-4.153.228.146:46804.service: Deactivated successfully. Jan 23 23:58:24.655864 systemd[1]: session-21.scope: Deactivated successfully. Jan 23 23:58:24.657574 systemd-logind[2101]: Session 21 logged out. Waiting for processes to exit. Jan 23 23:58:24.660441 systemd-logind[2101]: Removed session 21. Jan 23 23:58:29.737104 systemd[1]: Started sshd@21-172.31.16.109:22-4.153.228.146:37134.service - OpenSSH per-connection server daemon (4.153.228.146:37134). Jan 23 23:58:30.288400 sshd[5165]: Accepted publickey for core from 4.153.228.146 port 37134 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:58:30.291292 sshd[5165]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:58:30.302069 systemd-logind[2101]: New session 22 of user core. Jan 23 23:58:30.316210 systemd[1]: Started session-22.scope - Session 22 of User core. 
Jan 23 23:58:30.780003 sshd[5165]: pam_unix(sshd:session): session closed for user core Jan 23 23:58:30.785899 systemd[1]: sshd@21-172.31.16.109:22-4.153.228.146:37134.service: Deactivated successfully. Jan 23 23:58:30.794976 systemd-logind[2101]: Session 22 logged out. Waiting for processes to exit. Jan 23 23:58:30.795950 systemd[1]: session-22.scope: Deactivated successfully. Jan 23 23:58:30.798450 systemd-logind[2101]: Removed session 22. Jan 23 23:58:30.870330 systemd[1]: Started sshd@22-172.31.16.109:22-4.153.228.146:37136.service - OpenSSH per-connection server daemon (4.153.228.146:37136). Jan 23 23:58:31.406751 sshd[5179]: Accepted publickey for core from 4.153.228.146 port 37136 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:58:31.411890 sshd[5179]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:58:31.419838 systemd-logind[2101]: New session 23 of user core. Jan 23 23:58:31.429181 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 23 23:58:35.409860 containerd[2135]: time="2026-01-23T23:58:35.409770231Z" level=info msg="StopContainer for \"d0ef27b6086e558d7e729545f4452e0737064352b96d431c21dc3cae3d802192\" with timeout 30 (s)" Jan 23 23:58:35.412694 containerd[2135]: time="2026-01-23T23:58:35.412352511Z" level=info msg="Stop container \"d0ef27b6086e558d7e729545f4452e0737064352b96d431c21dc3cae3d802192\" with signal terminated" Jan 23 23:58:35.477040 systemd[1]: run-containerd-runc-k8s.io-bcfeddcfe99d57b2a024a50bd5aba60771eea8b4af7f92637fd80ac7659eb5ba-runc.DTObvM.mount: Deactivated successfully. Jan 23 23:58:35.499996 containerd[2135]: time="2026-01-23T23:58:35.499941927Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 23 23:58:35.514929 containerd[2135]: time="2026-01-23T23:58:35.514772487Z" level=info msg="StopContainer for \"bcfeddcfe99d57b2a024a50bd5aba60771eea8b4af7f92637fd80ac7659eb5ba\" with timeout 2 (s)" Jan 23 23:58:35.516425 containerd[2135]: time="2026-01-23T23:58:35.516237327Z" level=info msg="Stop container \"bcfeddcfe99d57b2a024a50bd5aba60771eea8b4af7f92637fd80ac7659eb5ba\" with signal terminated" Jan 23 23:58:35.533009 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d0ef27b6086e558d7e729545f4452e0737064352b96d431c21dc3cae3d802192-rootfs.mount: Deactivated successfully. 
Jan 23 23:58:35.549205 systemd-networkd[1692]: lxc_health: Link DOWN Jan 23 23:58:35.549218 systemd-networkd[1692]: lxc_health: Lost carrier Jan 23 23:58:35.562029 containerd[2135]: time="2026-01-23T23:58:35.560275431Z" level=info msg="shim disconnected" id=d0ef27b6086e558d7e729545f4452e0737064352b96d431c21dc3cae3d802192 namespace=k8s.io Jan 23 23:58:35.562784 containerd[2135]: time="2026-01-23T23:58:35.562036647Z" level=warning msg="cleaning up after shim disconnected" id=d0ef27b6086e558d7e729545f4452e0737064352b96d431c21dc3cae3d802192 namespace=k8s.io Jan 23 23:58:35.562784 containerd[2135]: time="2026-01-23T23:58:35.562064619Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 23 23:58:35.607604 containerd[2135]: time="2026-01-23T23:58:35.607534480Z" level=info msg="StopContainer for \"d0ef27b6086e558d7e729545f4452e0737064352b96d431c21dc3cae3d802192\" returns successfully" Jan 23 23:58:35.608878 containerd[2135]: time="2026-01-23T23:58:35.608490832Z" level=info msg="StopPodSandbox for \"eb88ed3c77bbe6182597d61547af2635c3183240a3014cc99b2bc67eb25d5068\"" Jan 23 23:58:35.608878 containerd[2135]: time="2026-01-23T23:58:35.608563264Z" level=info msg="Container to stop \"d0ef27b6086e558d7e729545f4452e0737064352b96d431c21dc3cae3d802192\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 23 23:58:35.615605 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-eb88ed3c77bbe6182597d61547af2635c3183240a3014cc99b2bc67eb25d5068-shm.mount: Deactivated successfully. Jan 23 23:58:35.637939 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bcfeddcfe99d57b2a024a50bd5aba60771eea8b4af7f92637fd80ac7659eb5ba-rootfs.mount: Deactivated successfully. Jan 23 23:58:35.656421 containerd[2135]: time="2026-01-23T23:58:35.656093452Z" level=info msg="shim disconnected" id=bcfeddcfe99d57b2a024a50bd5aba60771eea8b4af7f92637fd80ac7659eb5ba namespace=k8s.io Jan 23 23:58:35.656421 containerd[2135]: time="2026-01-23T23:58:35.656181376Z" level=warning msg="cleaning up after shim disconnected" id=bcfeddcfe99d57b2a024a50bd5aba60771eea8b4af7f92637fd80ac7659eb5ba namespace=k8s.io Jan 23 23:58:35.656421 containerd[2135]: time="2026-01-23T23:58:35.656203900Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 23 23:58:35.701691 containerd[2135]: time="2026-01-23T23:58:35.701116936Z" level=info msg="shim disconnected" id=eb88ed3c77bbe6182597d61547af2635c3183240a3014cc99b2bc67eb25d5068 namespace=k8s.io Jan 23 23:58:35.701691 containerd[2135]: time="2026-01-23T23:58:35.701189620Z" level=warning msg="cleaning up after shim disconnected" id=eb88ed3c77bbe6182597d61547af2635c3183240a3014cc99b2bc67eb25d5068 namespace=k8s.io Jan 23 23:58:35.701691 containerd[2135]: time="2026-01-23T23:58:35.701209336Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 23 23:58:35.704706 containerd[2135]: time="2026-01-23T23:58:35.703296808Z" level=info msg="StopContainer for \"bcfeddcfe99d57b2a024a50bd5aba60771eea8b4af7f92637fd80ac7659eb5ba\" returns successfully" Jan 23 23:58:35.705934 containerd[2135]: time="2026-01-23T23:58:35.705619444Z" level=info msg="StopPodSandbox for \"db5be60d230425ec06aff951cd5dfccdfda83d06ab1872603cc8b24614cc81d9\"" Jan 23 23:58:35.705934 containerd[2135]: time="2026-01-23T23:58:35.705725572Z" level=info msg="Container to stop \"dfa5951ad263842dfe6a370a2ef97c892190e902462500fb7c465e0675668f28\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 23 23:58:35.705934 containerd[2135]: time="2026-01-23T23:58:35.705754696Z" 
level=info msg="Container to stop \"edab0e7ed5bed600ef899b13c9abeeee70980e1eed44e8f9da8e99832ca57b92\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 23 23:58:35.705934 containerd[2135]: time="2026-01-23T23:58:35.705777268Z" level=info msg="Container to stop \"c787f680fd3f3410c192597ecc70a56664ec2e8ab0ef6514f596af912cf0b228\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 23 23:58:35.705934 containerd[2135]: time="2026-01-23T23:58:35.705799324Z" level=info msg="Container to stop \"35be388526cde0c9934f85aaa9fdedc9a233da2a2bcb89cc046661207a708f4f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 23 23:58:35.705934 containerd[2135]: time="2026-01-23T23:58:35.705821872Z" level=info msg="Container to stop \"bcfeddcfe99d57b2a024a50bd5aba60771eea8b4af7f92637fd80ac7659eb5ba\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 23 23:58:35.741336 containerd[2135]: time="2026-01-23T23:58:35.740295688Z" level=info msg="TearDown network for sandbox \"eb88ed3c77bbe6182597d61547af2635c3183240a3014cc99b2bc67eb25d5068\" successfully" Jan 23 23:58:35.741336 containerd[2135]: time="2026-01-23T23:58:35.740343916Z" level=info msg="StopPodSandbox for \"eb88ed3c77bbe6182597d61547af2635c3183240a3014cc99b2bc67eb25d5068\" returns successfully" Jan 23 23:58:35.784818 containerd[2135]: time="2026-01-23T23:58:35.784727584Z" level=info msg="shim disconnected" id=db5be60d230425ec06aff951cd5dfccdfda83d06ab1872603cc8b24614cc81d9 namespace=k8s.io Jan 23 23:58:35.784818 containerd[2135]: time="2026-01-23T23:58:35.784810804Z" level=warning msg="cleaning up after shim disconnected" id=db5be60d230425ec06aff951cd5dfccdfda83d06ab1872603cc8b24614cc81d9 namespace=k8s.io Jan 23 23:58:35.785141 containerd[2135]: time="2026-01-23T23:58:35.784835164Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 23 23:58:35.808842 containerd[2135]: time="2026-01-23T23:58:35.808765241Z" level=info msg="TearDown network for sandbox \"db5be60d230425ec06aff951cd5dfccdfda83d06ab1872603cc8b24614cc81d9\" successfully" Jan 23 23:58:35.808842 containerd[2135]: time="2026-01-23T23:58:35.808820489Z" level=info msg="StopPodSandbox for \"db5be60d230425ec06aff951cd5dfccdfda83d06ab1872603cc8b24614cc81d9\" returns successfully" Jan 23 23:58:35.840678 kubelet[3582]: I0123 23:58:35.839116 3582 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9742g\" (UniqueName: \"kubernetes.io/projected/43b841e6-3993-4d47-8028-013eb3640157-kube-api-access-9742g\") pod \"43b841e6-3993-4d47-8028-013eb3640157\" (UID: \"43b841e6-3993-4d47-8028-013eb3640157\") " Jan 23 23:58:35.840678 kubelet[3582]: I0123 23:58:35.839189 3582 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/43b841e6-3993-4d47-8028-013eb3640157-cilium-config-path\") pod \"43b841e6-3993-4d47-8028-013eb3640157\" (UID: \"43b841e6-3993-4d47-8028-013eb3640157\") " Jan 23 23:58:35.850738 kubelet[3582]: I0123 23:58:35.850628 3582 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43b841e6-3993-4d47-8028-013eb3640157-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "43b841e6-3993-4d47-8028-013eb3640157" (UID: "43b841e6-3993-4d47-8028-013eb3640157"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 23:58:35.850996 kubelet[3582]: I0123 23:58:35.850766 3582 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43b841e6-3993-4d47-8028-013eb3640157-kube-api-access-9742g" (OuterVolumeSpecName: "kube-api-access-9742g") pod "43b841e6-3993-4d47-8028-013eb3640157" (UID: "43b841e6-3993-4d47-8028-013eb3640157"). InnerVolumeSpecName "kube-api-access-9742g". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 23:58:35.939672 kubelet[3582]: I0123 23:58:35.939608 3582 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/933dfb45-99a9-4d36-ad6d-924571aec70a-cilium-run\") pod \"933dfb45-99a9-4d36-ad6d-924571aec70a\" (UID: \"933dfb45-99a9-4d36-ad6d-924571aec70a\") " Jan 23 23:58:35.939873 kubelet[3582]: I0123 23:58:35.939693 3582 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/933dfb45-99a9-4d36-ad6d-924571aec70a-etc-cni-netd\") pod \"933dfb45-99a9-4d36-ad6d-924571aec70a\" (UID: \"933dfb45-99a9-4d36-ad6d-924571aec70a\") " Jan 23 23:58:35.939873 kubelet[3582]: I0123 23:58:35.939737 3582 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/933dfb45-99a9-4d36-ad6d-924571aec70a-cilium-config-path\") pod \"933dfb45-99a9-4d36-ad6d-924571aec70a\" (UID: \"933dfb45-99a9-4d36-ad6d-924571aec70a\") " Jan 23 23:58:35.939873 kubelet[3582]: I0123 23:58:35.939804 3582 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/933dfb45-99a9-4d36-ad6d-924571aec70a-cni-path\") pod \"933dfb45-99a9-4d36-ad6d-924571aec70a\" (UID: \"933dfb45-99a9-4d36-ad6d-924571aec70a\") " Jan 23 23:58:35.939873 kubelet[3582]: I0123 23:58:35.939840 3582 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/933dfb45-99a9-4d36-ad6d-924571aec70a-cilium-cgroup\") pod \"933dfb45-99a9-4d36-ad6d-924571aec70a\" (UID: \"933dfb45-99a9-4d36-ad6d-924571aec70a\") " Jan 23 23:58:35.939873 kubelet[3582]: I0123 23:58:35.939871 3582 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/933dfb45-99a9-4d36-ad6d-924571aec70a-lib-modules\") pod \"933dfb45-99a9-4d36-ad6d-924571aec70a\" (UID: \"933dfb45-99a9-4d36-ad6d-924571aec70a\") " Jan 23 23:58:35.940154 kubelet[3582]: I0123 23:58:35.939908 3582 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/933dfb45-99a9-4d36-ad6d-924571aec70a-xtables-lock\") pod \"933dfb45-99a9-4d36-ad6d-924571aec70a\" (UID: \"933dfb45-99a9-4d36-ad6d-924571aec70a\") " Jan 23 23:58:35.940154 kubelet[3582]: I0123 23:58:35.939947 3582 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qtv7j\" (UniqueName: \"kubernetes.io/projected/933dfb45-99a9-4d36-ad6d-924571aec70a-kube-api-access-qtv7j\") pod \"933dfb45-99a9-4d36-ad6d-924571aec70a\" (UID: \"933dfb45-99a9-4d36-ad6d-924571aec70a\") " Jan 23 23:58:35.940154 kubelet[3582]: I0123 23:58:35.939979 3582 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/933dfb45-99a9-4d36-ad6d-924571aec70a-hostproc\") 
pod \"933dfb45-99a9-4d36-ad6d-924571aec70a\" (UID: \"933dfb45-99a9-4d36-ad6d-924571aec70a\") " Jan 23 23:58:35.940154 kubelet[3582]: I0123 23:58:35.940018 3582 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/933dfb45-99a9-4d36-ad6d-924571aec70a-clustermesh-secrets\") pod \"933dfb45-99a9-4d36-ad6d-924571aec70a\" (UID: \"933dfb45-99a9-4d36-ad6d-924571aec70a\") " Jan 23 23:58:35.940154 kubelet[3582]: I0123 23:58:35.940051 3582 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/933dfb45-99a9-4d36-ad6d-924571aec70a-host-proc-sys-kernel\") pod \"933dfb45-99a9-4d36-ad6d-924571aec70a\" (UID: \"933dfb45-99a9-4d36-ad6d-924571aec70a\") " Jan 23 23:58:35.940154 kubelet[3582]: I0123 23:58:35.940089 3582 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/933dfb45-99a9-4d36-ad6d-924571aec70a-hubble-tls\") pod \"933dfb45-99a9-4d36-ad6d-924571aec70a\" (UID: \"933dfb45-99a9-4d36-ad6d-924571aec70a\") " Jan 23 23:58:35.940472 kubelet[3582]: I0123 23:58:35.940120 3582 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/933dfb45-99a9-4d36-ad6d-924571aec70a-host-proc-sys-net\") pod \"933dfb45-99a9-4d36-ad6d-924571aec70a\" (UID: \"933dfb45-99a9-4d36-ad6d-924571aec70a\") " Jan 23 23:58:35.940472 kubelet[3582]: I0123 23:58:35.940157 3582 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/933dfb45-99a9-4d36-ad6d-924571aec70a-bpf-maps\") pod \"933dfb45-99a9-4d36-ad6d-924571aec70a\" (UID: \"933dfb45-99a9-4d36-ad6d-924571aec70a\") " Jan 23 23:58:35.940472 kubelet[3582]: I0123 23:58:35.940217 3582 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9742g\" (UniqueName: \"kubernetes.io/projected/43b841e6-3993-4d47-8028-013eb3640157-kube-api-access-9742g\") on node \"ip-172-31-16-109\" DevicePath \"\"" Jan 23 23:58:35.940472 kubelet[3582]: I0123 23:58:35.940242 3582 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/43b841e6-3993-4d47-8028-013eb3640157-cilium-config-path\") on node \"ip-172-31-16-109\" DevicePath \"\"" Jan 23 23:58:35.940472 kubelet[3582]: I0123 23:58:35.940306 3582 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/933dfb45-99a9-4d36-ad6d-924571aec70a-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "933dfb45-99a9-4d36-ad6d-924571aec70a" (UID: "933dfb45-99a9-4d36-ad6d-924571aec70a"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 23:58:35.940472 kubelet[3582]: I0123 23:58:35.940365 3582 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/933dfb45-99a9-4d36-ad6d-924571aec70a-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "933dfb45-99a9-4d36-ad6d-924571aec70a" (UID: "933dfb45-99a9-4d36-ad6d-924571aec70a"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 23:58:35.940837 kubelet[3582]: I0123 23:58:35.940401 3582 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/933dfb45-99a9-4d36-ad6d-924571aec70a-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "933dfb45-99a9-4d36-ad6d-924571aec70a" (UID: "933dfb45-99a9-4d36-ad6d-924571aec70a"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 23:58:35.943690 kubelet[3582]: I0123 23:58:35.942147 3582 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/933dfb45-99a9-4d36-ad6d-924571aec70a-hostproc" (OuterVolumeSpecName: "hostproc") pod "933dfb45-99a9-4d36-ad6d-924571aec70a" (UID: "933dfb45-99a9-4d36-ad6d-924571aec70a"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 23:58:35.944077 kubelet[3582]: I0123 23:58:35.944012 3582 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/933dfb45-99a9-4d36-ad6d-924571aec70a-cni-path" (OuterVolumeSpecName: "cni-path") pod "933dfb45-99a9-4d36-ad6d-924571aec70a" (UID: "933dfb45-99a9-4d36-ad6d-924571aec70a"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 23:58:35.944251 kubelet[3582]: I0123 23:58:35.944225 3582 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/933dfb45-99a9-4d36-ad6d-924571aec70a-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "933dfb45-99a9-4d36-ad6d-924571aec70a" (UID: "933dfb45-99a9-4d36-ad6d-924571aec70a"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 23:58:35.944404 kubelet[3582]: I0123 23:58:35.944378 3582 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/933dfb45-99a9-4d36-ad6d-924571aec70a-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "933dfb45-99a9-4d36-ad6d-924571aec70a" (UID: "933dfb45-99a9-4d36-ad6d-924571aec70a"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 23:58:35.944557 kubelet[3582]: I0123 23:58:35.944531 3582 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/933dfb45-99a9-4d36-ad6d-924571aec70a-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "933dfb45-99a9-4d36-ad6d-924571aec70a" (UID: "933dfb45-99a9-4d36-ad6d-924571aec70a"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 23:58:35.945849 kubelet[3582]: I0123 23:58:35.945782 3582 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/933dfb45-99a9-4d36-ad6d-924571aec70a-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "933dfb45-99a9-4d36-ad6d-924571aec70a" (UID: "933dfb45-99a9-4d36-ad6d-924571aec70a"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 23:58:35.946200 kubelet[3582]: I0123 23:58:35.946072 3582 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/933dfb45-99a9-4d36-ad6d-924571aec70a-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "933dfb45-99a9-4d36-ad6d-924571aec70a" (UID: "933dfb45-99a9-4d36-ad6d-924571aec70a"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 23:58:35.949575 kubelet[3582]: I0123 23:58:35.949399 3582 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/933dfb45-99a9-4d36-ad6d-924571aec70a-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "933dfb45-99a9-4d36-ad6d-924571aec70a" (UID: "933dfb45-99a9-4d36-ad6d-924571aec70a"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 23:58:35.950960 kubelet[3582]: I0123 23:58:35.950861 3582 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/933dfb45-99a9-4d36-ad6d-924571aec70a-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "933dfb45-99a9-4d36-ad6d-924571aec70a" (UID: "933dfb45-99a9-4d36-ad6d-924571aec70a"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 23:58:35.952208 kubelet[3582]: I0123 23:58:35.951928 3582 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/933dfb45-99a9-4d36-ad6d-924571aec70a-kube-api-access-qtv7j" (OuterVolumeSpecName: "kube-api-access-qtv7j") pod "933dfb45-99a9-4d36-ad6d-924571aec70a" (UID: "933dfb45-99a9-4d36-ad6d-924571aec70a"). InnerVolumeSpecName "kube-api-access-qtv7j". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 23:58:35.955233 kubelet[3582]: I0123 23:58:35.955031 3582 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/933dfb45-99a9-4d36-ad6d-924571aec70a-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "933dfb45-99a9-4d36-ad6d-924571aec70a" (UID: "933dfb45-99a9-4d36-ad6d-924571aec70a"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 23:58:36.041302 kubelet[3582]: I0123 23:58:36.041235 3582 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/933dfb45-99a9-4d36-ad6d-924571aec70a-bpf-maps\") on node \"ip-172-31-16-109\" DevicePath \"\"" Jan 23 23:58:36.041302 kubelet[3582]: I0123 23:58:36.041295 3582 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/933dfb45-99a9-4d36-ad6d-924571aec70a-cilium-run\") on node \"ip-172-31-16-109\" DevicePath \"\"" Jan 23 23:58:36.041481 kubelet[3582]: I0123 23:58:36.041322 3582 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/933dfb45-99a9-4d36-ad6d-924571aec70a-etc-cni-netd\") on node \"ip-172-31-16-109\" DevicePath \"\"" Jan 23 23:58:36.041481 kubelet[3582]: I0123 23:58:36.041344 3582 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/933dfb45-99a9-4d36-ad6d-924571aec70a-cilium-config-path\") on node \"ip-172-31-16-109\" DevicePath \"\"" Jan 23 23:58:36.041481 kubelet[3582]: I0123 23:58:36.041369 3582 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/933dfb45-99a9-4d36-ad6d-924571aec70a-cni-path\") on node \"ip-172-31-16-109\" DevicePath \"\"" Jan 23 23:58:36.041481 kubelet[3582]: I0123 23:58:36.041390 3582 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/933dfb45-99a9-4d36-ad6d-924571aec70a-xtables-lock\") on node \"ip-172-31-16-109\" DevicePath \"\"" Jan 23 23:58:36.041481 kubelet[3582]: I0123 23:58:36.041411 3582 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/933dfb45-99a9-4d36-ad6d-924571aec70a-cilium-cgroup\") on node \"ip-172-31-16-109\" DevicePath \"\"" Jan 23 23:58:36.041481 kubelet[3582]: I0123 23:58:36.041431 3582 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/933dfb45-99a9-4d36-ad6d-924571aec70a-lib-modules\") on node \"ip-172-31-16-109\" DevicePath \"\"" Jan 23 23:58:36.041481 kubelet[3582]: I0123 23:58:36.041455 3582 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/933dfb45-99a9-4d36-ad6d-924571aec70a-clustermesh-secrets\") on node \"ip-172-31-16-109\" DevicePath \"\"" Jan 23 23:58:36.041481 kubelet[3582]: I0123 23:58:36.041476 3582 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/933dfb45-99a9-4d36-ad6d-924571aec70a-host-proc-sys-kernel\") on node \"ip-172-31-16-109\" DevicePath \"\"" Jan 23 23:58:36.041984 kubelet[3582]: I0123 23:58:36.041498 3582 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qtv7j\" (UniqueName: \"kubernetes.io/projected/933dfb45-99a9-4d36-ad6d-924571aec70a-kube-api-access-qtv7j\") on node \"ip-172-31-16-109\" DevicePath \"\"" Jan 23 23:58:36.041984 kubelet[3582]: I0123 23:58:36.041538 3582 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/933dfb45-99a9-4d36-ad6d-924571aec70a-hostproc\") on node \"ip-172-31-16-109\" DevicePath \"\"" Jan 23 23:58:36.041984 kubelet[3582]: I0123 23:58:36.041563 3582 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/933dfb45-99a9-4d36-ad6d-924571aec70a-hubble-tls\") on node \"ip-172-31-16-109\" DevicePath \"\"" Jan 23 23:58:36.041984 kubelet[3582]: I0123 23:58:36.041583 3582 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/933dfb45-99a9-4d36-ad6d-924571aec70a-host-proc-sys-net\") on node \"ip-172-31-16-109\" DevicePath \"\"" Jan 23 23:58:36.464006 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-eb88ed3c77bbe6182597d61547af2635c3183240a3014cc99b2bc67eb25d5068-rootfs.mount: Deactivated successfully. Jan 23 23:58:36.464264 systemd[1]: var-lib-kubelet-pods-43b841e6\x2d3993\x2d4d47\x2d8028\x2d013eb3640157-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d9742g.mount: Deactivated successfully. Jan 23 23:58:36.464498 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-db5be60d230425ec06aff951cd5dfccdfda83d06ab1872603cc8b24614cc81d9-rootfs.mount: Deactivated successfully. Jan 23 23:58:36.464743 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-db5be60d230425ec06aff951cd5dfccdfda83d06ab1872603cc8b24614cc81d9-shm.mount: Deactivated successfully. Jan 23 23:58:36.466010 systemd[1]: var-lib-kubelet-pods-933dfb45\x2d99a9\x2d4d36\x2dad6d\x2d924571aec70a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dqtv7j.mount: Deactivated successfully. Jan 23 23:58:36.466380 systemd[1]: var-lib-kubelet-pods-933dfb45\x2d99a9\x2d4d36\x2dad6d\x2d924571aec70a-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 23 23:58:36.466606 systemd[1]: var-lib-kubelet-pods-933dfb45\x2d99a9\x2d4d36\x2dad6d\x2d924571aec70a-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 23 23:58:36.549313 kubelet[3582]: I0123 23:58:36.549162 3582 scope.go:117] "RemoveContainer" containerID="bcfeddcfe99d57b2a024a50bd5aba60771eea8b4af7f92637fd80ac7659eb5ba" Jan 23 23:58:36.553884 containerd[2135]: time="2026-01-23T23:58:36.552887344Z" level=info msg="RemoveContainer for \"bcfeddcfe99d57b2a024a50bd5aba60771eea8b4af7f92637fd80ac7659eb5ba\"" Jan 23 23:58:36.571592 containerd[2135]: time="2026-01-23T23:58:36.570720580Z" level=info msg="RemoveContainer for \"bcfeddcfe99d57b2a024a50bd5aba60771eea8b4af7f92637fd80ac7659eb5ba\" returns successfully" Jan 23 23:58:36.572448 kubelet[3582]: I0123 23:58:36.571972 3582 scope.go:117] "RemoveContainer" containerID="35be388526cde0c9934f85aaa9fdedc9a233da2a2bcb89cc046661207a708f4f" Jan 23 23:58:36.575864 containerd[2135]: time="2026-01-23T23:58:36.575400868Z" level=info msg="RemoveContainer for \"35be388526cde0c9934f85aaa9fdedc9a233da2a2bcb89cc046661207a708f4f\"" Jan 23 23:58:36.587987 containerd[2135]: time="2026-01-23T23:58:36.586566376Z" level=info msg="RemoveContainer for \"35be388526cde0c9934f85aaa9fdedc9a233da2a2bcb89cc046661207a708f4f\" returns successfully" Jan 23 23:58:36.588351 kubelet[3582]: I0123 23:58:36.587061 3582 scope.go:117] "RemoveContainer" containerID="c787f680fd3f3410c192597ecc70a56664ec2e8ab0ef6514f596af912cf0b228" Jan 23 23:58:36.590718 containerd[2135]: time="2026-01-23T23:58:36.590627512Z" level=info msg="RemoveContainer for \"c787f680fd3f3410c192597ecc70a56664ec2e8ab0ef6514f596af912cf0b228\"" Jan 23 23:58:36.597543 containerd[2135]: time="2026-01-23T23:58:36.597448144Z" level=info msg="RemoveContainer for \"c787f680fd3f3410c192597ecc70a56664ec2e8ab0ef6514f596af912cf0b228\" returns successfully" Jan 23 23:58:36.598009 kubelet[3582]: I0123 23:58:36.597833 3582 scope.go:117] 
"RemoveContainer" containerID="edab0e7ed5bed600ef899b13c9abeeee70980e1eed44e8f9da8e99832ca57b92" Jan 23 23:58:36.604141 containerd[2135]: time="2026-01-23T23:58:36.604069900Z" level=info msg="RemoveContainer for \"edab0e7ed5bed600ef899b13c9abeeee70980e1eed44e8f9da8e99832ca57b92\"" Jan 23 23:58:36.610703 containerd[2135]: time="2026-01-23T23:58:36.610611029Z" level=info msg="RemoveContainer for \"edab0e7ed5bed600ef899b13c9abeeee70980e1eed44e8f9da8e99832ca57b92\" returns successfully" Jan 23 23:58:36.611160 kubelet[3582]: I0123 23:58:36.611115 3582 scope.go:117] "RemoveContainer" containerID="dfa5951ad263842dfe6a370a2ef97c892190e902462500fb7c465e0675668f28" Jan 23 23:58:36.613604 containerd[2135]: time="2026-01-23T23:58:36.613201169Z" level=info msg="RemoveContainer for \"dfa5951ad263842dfe6a370a2ef97c892190e902462500fb7c465e0675668f28\"" Jan 23 23:58:36.619232 containerd[2135]: time="2026-01-23T23:58:36.619142993Z" level=info msg="RemoveContainer for \"dfa5951ad263842dfe6a370a2ef97c892190e902462500fb7c465e0675668f28\" returns successfully" Jan 23 23:58:36.620033 kubelet[3582]: I0123 23:58:36.619977 3582 scope.go:117] "RemoveContainer" containerID="bcfeddcfe99d57b2a024a50bd5aba60771eea8b4af7f92637fd80ac7659eb5ba" Jan 23 23:58:36.620803 containerd[2135]: time="2026-01-23T23:58:36.620740877Z" level=error msg="ContainerStatus for \"bcfeddcfe99d57b2a024a50bd5aba60771eea8b4af7f92637fd80ac7659eb5ba\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bcfeddcfe99d57b2a024a50bd5aba60771eea8b4af7f92637fd80ac7659eb5ba\": not found" Jan 23 23:58:36.621086 kubelet[3582]: E0123 23:58:36.621030 3582 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bcfeddcfe99d57b2a024a50bd5aba60771eea8b4af7f92637fd80ac7659eb5ba\": not found" containerID="bcfeddcfe99d57b2a024a50bd5aba60771eea8b4af7f92637fd80ac7659eb5ba" Jan 23 23:58:36.621328 kubelet[3582]: I0123 23:58:36.621128 3582 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bcfeddcfe99d57b2a024a50bd5aba60771eea8b4af7f92637fd80ac7659eb5ba"} err="failed to get container status \"bcfeddcfe99d57b2a024a50bd5aba60771eea8b4af7f92637fd80ac7659eb5ba\": rpc error: code = NotFound desc = an error occurred when try to find container \"bcfeddcfe99d57b2a024a50bd5aba60771eea8b4af7f92637fd80ac7659eb5ba\": not found" Jan 23 23:58:36.621328 kubelet[3582]: I0123 23:58:36.621317 3582 scope.go:117] "RemoveContainer" containerID="35be388526cde0c9934f85aaa9fdedc9a233da2a2bcb89cc046661207a708f4f" Jan 23 23:58:36.621792 containerd[2135]: time="2026-01-23T23:58:36.621688865Z" level=error msg="ContainerStatus for \"35be388526cde0c9934f85aaa9fdedc9a233da2a2bcb89cc046661207a708f4f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"35be388526cde0c9934f85aaa9fdedc9a233da2a2bcb89cc046661207a708f4f\": not found" Jan 23 23:58:36.622153 kubelet[3582]: E0123 23:58:36.622113 3582 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"35be388526cde0c9934f85aaa9fdedc9a233da2a2bcb89cc046661207a708f4f\": not found" containerID="35be388526cde0c9934f85aaa9fdedc9a233da2a2bcb89cc046661207a708f4f" Jan 23 23:58:36.622236 kubelet[3582]: I0123 23:58:36.622165 3582 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"35be388526cde0c9934f85aaa9fdedc9a233da2a2bcb89cc046661207a708f4f"} err="failed to get container status \"35be388526cde0c9934f85aaa9fdedc9a233da2a2bcb89cc046661207a708f4f\": rpc error: code = NotFound desc = an error occurred when try to find container \"35be388526cde0c9934f85aaa9fdedc9a233da2a2bcb89cc046661207a708f4f\": not found" Jan 23 23:58:36.622236 kubelet[3582]: I0123 23:58:36.622209 3582 scope.go:117] "RemoveContainer" containerID="c787f680fd3f3410c192597ecc70a56664ec2e8ab0ef6514f596af912cf0b228" Jan 23 23:58:36.622660 containerd[2135]: time="2026-01-23T23:58:36.622534289Z" level=error msg="ContainerStatus for \"c787f680fd3f3410c192597ecc70a56664ec2e8ab0ef6514f596af912cf0b228\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c787f680fd3f3410c192597ecc70a56664ec2e8ab0ef6514f596af912cf0b228\": not found" Jan 23 23:58:36.623161 kubelet[3582]: E0123 23:58:36.623052 3582 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c787f680fd3f3410c192597ecc70a56664ec2e8ab0ef6514f596af912cf0b228\": not found" containerID="c787f680fd3f3410c192597ecc70a56664ec2e8ab0ef6514f596af912cf0b228" Jan 23 23:58:36.623240 kubelet[3582]: I0123 23:58:36.623173 3582 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c787f680fd3f3410c192597ecc70a56664ec2e8ab0ef6514f596af912cf0b228"} err="failed to get container status \"c787f680fd3f3410c192597ecc70a56664ec2e8ab0ef6514f596af912cf0b228\": rpc error: code = NotFound desc = an error occurred when try to find container \"c787f680fd3f3410c192597ecc70a56664ec2e8ab0ef6514f596af912cf0b228\": not found" Jan 23 23:58:36.623240 kubelet[3582]: I0123 23:58:36.623209 3582 scope.go:117] "RemoveContainer" containerID="edab0e7ed5bed600ef899b13c9abeeee70980e1eed44e8f9da8e99832ca57b92" Jan 23 23:58:36.623795 containerd[2135]: time="2026-01-23T23:58:36.623621213Z" level=error msg="ContainerStatus for \"edab0e7ed5bed600ef899b13c9abeeee70980e1eed44e8f9da8e99832ca57b92\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"edab0e7ed5bed600ef899b13c9abeeee70980e1eed44e8f9da8e99832ca57b92\": not found" Jan 23 23:58:36.624218 kubelet[3582]: E0123 23:58:36.624013 3582 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"edab0e7ed5bed600ef899b13c9abeeee70980e1eed44e8f9da8e99832ca57b92\": not found" containerID="edab0e7ed5bed600ef899b13c9abeeee70980e1eed44e8f9da8e99832ca57b92" Jan 23 23:58:36.624218 kubelet[3582]: I0123 23:58:36.624065 3582 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"edab0e7ed5bed600ef899b13c9abeeee70980e1eed44e8f9da8e99832ca57b92"} err="failed to get container status \"edab0e7ed5bed600ef899b13c9abeeee70980e1eed44e8f9da8e99832ca57b92\": rpc error: code = NotFound desc = an error occurred when try to find container \"edab0e7ed5bed600ef899b13c9abeeee70980e1eed44e8f9da8e99832ca57b92\": not found" Jan 23 23:58:36.624218 kubelet[3582]: I0123 23:58:36.624096 3582 scope.go:117] "RemoveContainer" containerID="dfa5951ad263842dfe6a370a2ef97c892190e902462500fb7c465e0675668f28" Jan 23 23:58:36.624591 containerd[2135]: time="2026-01-23T23:58:36.624387437Z" level=error msg="ContainerStatus for \"dfa5951ad263842dfe6a370a2ef97c892190e902462500fb7c465e0675668f28\" failed" error="rpc 
error: code = NotFound desc = an error occurred when try to find container \"dfa5951ad263842dfe6a370a2ef97c892190e902462500fb7c465e0675668f28\": not found" Jan 23 23:58:36.625100 kubelet[3582]: E0123 23:58:36.624873 3582 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"dfa5951ad263842dfe6a370a2ef97c892190e902462500fb7c465e0675668f28\": not found" containerID="dfa5951ad263842dfe6a370a2ef97c892190e902462500fb7c465e0675668f28" Jan 23 23:58:36.625100 kubelet[3582]: I0123 23:58:36.624913 3582 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"dfa5951ad263842dfe6a370a2ef97c892190e902462500fb7c465e0675668f28"} err="failed to get container status \"dfa5951ad263842dfe6a370a2ef97c892190e902462500fb7c465e0675668f28\": rpc error: code = NotFound desc = an error occurred when try to find container \"dfa5951ad263842dfe6a370a2ef97c892190e902462500fb7c465e0675668f28\": not found" Jan 23 23:58:36.625100 kubelet[3582]: I0123 23:58:36.624971 3582 scope.go:117] "RemoveContainer" containerID="d0ef27b6086e558d7e729545f4452e0737064352b96d431c21dc3cae3d802192" Jan 23 23:58:36.627574 containerd[2135]: time="2026-01-23T23:58:36.627143393Z" level=info msg="RemoveContainer for \"d0ef27b6086e558d7e729545f4452e0737064352b96d431c21dc3cae3d802192\"" Jan 23 23:58:36.633135 containerd[2135]: time="2026-01-23T23:58:36.633035249Z" level=info msg="RemoveContainer for \"d0ef27b6086e558d7e729545f4452e0737064352b96d431c21dc3cae3d802192\" returns successfully" Jan 23 23:58:36.633669 kubelet[3582]: I0123 23:58:36.633620 3582 scope.go:117] "RemoveContainer" containerID="d0ef27b6086e558d7e729545f4452e0737064352b96d431c21dc3cae3d802192" Jan 23 23:58:36.634104 containerd[2135]: time="2026-01-23T23:58:36.634046309Z" level=error msg="ContainerStatus for \"d0ef27b6086e558d7e729545f4452e0737064352b96d431c21dc3cae3d802192\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d0ef27b6086e558d7e729545f4452e0737064352b96d431c21dc3cae3d802192\": not found" Jan 23 23:58:36.634453 kubelet[3582]: E0123 23:58:36.634275 3582 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d0ef27b6086e558d7e729545f4452e0737064352b96d431c21dc3cae3d802192\": not found" containerID="d0ef27b6086e558d7e729545f4452e0737064352b96d431c21dc3cae3d802192" Jan 23 23:58:36.634453 kubelet[3582]: I0123 23:58:36.634316 3582 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d0ef27b6086e558d7e729545f4452e0737064352b96d431c21dc3cae3d802192"} err="failed to get container status \"d0ef27b6086e558d7e729545f4452e0737064352b96d431c21dc3cae3d802192\": rpc error: code = NotFound desc = an error occurred when try to find container \"d0ef27b6086e558d7e729545f4452e0737064352b96d431c21dc3cae3d802192\": not found" Jan 23 23:58:37.308716 kubelet[3582]: E0123 23:58:37.308456 3582 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 23 23:58:37.404683 sshd[5179]: pam_unix(sshd:session): session closed for user core Jan 23 23:58:37.412927 systemd[1]: sshd@22-172.31.16.109:22-4.153.228.146:37136.service: Deactivated successfully. Jan 23 23:58:37.419607 systemd[1]: session-23.scope: Deactivated successfully. 
Jan 23 23:58:37.421595 systemd-logind[2101]: Session 23 logged out. Waiting for processes to exit. Jan 23 23:58:37.424008 systemd-logind[2101]: Removed session 23. Jan 23 23:58:37.484226 systemd[1]: Started sshd@23-172.31.16.109:22-4.153.228.146:48042.service - OpenSSH per-connection server daemon (4.153.228.146:48042). Jan 23 23:58:37.983878 sshd[5345]: Accepted publickey for core from 4.153.228.146 port 48042 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:58:37.986434 sshd[5345]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:58:37.994401 systemd-logind[2101]: New session 24 of user core. Jan 23 23:58:38.001287 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 23 23:58:38.052446 kubelet[3582]: I0123 23:58:38.052375 3582 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43b841e6-3993-4d47-8028-013eb3640157" path="/var/lib/kubelet/pods/43b841e6-3993-4d47-8028-013eb3640157/volumes" Jan 23 23:58:38.054194 kubelet[3582]: I0123 23:58:38.053997 3582 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="933dfb45-99a9-4d36-ad6d-924571aec70a" path="/var/lib/kubelet/pods/933dfb45-99a9-4d36-ad6d-924571aec70a/volumes" Jan 23 23:58:38.451744 ntpd[2087]: Deleting interface #10 lxc_health, fe80::28f7:75ff:fed7:84fb%8#123, interface stats: received=0, sent=0, dropped=0, active_time=78 secs Jan 23 23:58:38.452335 ntpd[2087]: 23 Jan 23:58:38 ntpd[2087]: Deleting interface #10 lxc_health, fe80::28f7:75ff:fed7:84fb%8#123, interface stats: received=0, sent=0, dropped=0, active_time=78 secs Jan 23 23:58:39.367830 kubelet[3582]: I0123 23:58:39.367752 3582 memory_manager.go:355] "RemoveStaleState removing state" podUID="43b841e6-3993-4d47-8028-013eb3640157" containerName="cilium-operator" Jan 23 23:58:39.367830 kubelet[3582]: I0123 23:58:39.367809 3582 memory_manager.go:355] "RemoveStaleState removing state" podUID="933dfb45-99a9-4d36-ad6d-924571aec70a" containerName="cilium-agent" Jan 23 23:58:39.398128 sshd[5345]: pam_unix(sshd:session): session closed for user core Jan 23 23:58:39.409066 kubelet[3582]: W0123 23:58:39.409005 3582 reflector.go:569] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ip-172-31-16-109" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-16-109' and this object Jan 23 23:58:39.409226 kubelet[3582]: E0123 23:58:39.409082 3582 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-clustermesh\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:ip-172-31-16-109\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-16-109' and this object" logger="UnhandledError" Jan 23 23:58:39.409226 kubelet[3582]: I0123 23:58:39.409170 3582 status_manager.go:890] "Failed to get status for pod" podUID="2c8418b7-b2d9-43e5-a97d-31c01729bfbd" pod="kube-system/cilium-6h6fh" err="pods \"cilium-6h6fh\" is forbidden: User \"system:node:ip-172-31-16-109\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-16-109' and this object" Jan 23 23:58:39.409356 kubelet[3582]: W0123 23:58:39.409258 3582 reflector.go:569] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets 
"cilium-ipsec-keys" is forbidden: User "system:node:ip-172-31-16-109" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-16-109' and this object Jan 23 23:58:39.409356 kubelet[3582]: E0123 23:58:39.409285 3582 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-ipsec-keys\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-ipsec-keys\" is forbidden: User \"system:node:ip-172-31-16-109\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-16-109' and this object" logger="UnhandledError" Jan 23 23:58:39.409455 kubelet[3582]: W0123 23:58:39.409371 3582 reflector.go:569] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ip-172-31-16-109" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-16-109' and this object Jan 23 23:58:39.409455 kubelet[3582]: E0123 23:58:39.409396 3582 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"hubble-server-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"hubble-server-certs\" is forbidden: User \"system:node:ip-172-31-16-109\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-16-109' and this object" logger="UnhandledError" Jan 23 23:58:39.410695 kubelet[3582]: W0123 23:58:39.409467 3582 reflector.go:569] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ip-172-31-16-109" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-16-109' and this object Jan 23 23:58:39.410695 kubelet[3582]: E0123 23:58:39.409493 3582 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cilium-config\" is forbidden: User \"system:node:ip-172-31-16-109\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-16-109' and this object" logger="UnhandledError" Jan 23 23:58:39.415333 systemd[1]: sshd@23-172.31.16.109:22-4.153.228.146:48042.service: Deactivated successfully. Jan 23 23:58:39.428477 systemd[1]: session-24.scope: Deactivated successfully. Jan 23 23:58:39.437089 systemd-logind[2101]: Session 24 logged out. Waiting for processes to exit. Jan 23 23:58:39.442139 systemd-logind[2101]: Removed session 24. 
Jan 23 23:58:39.467426 kubelet[3582]: I0123 23:58:39.467350 3582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2c8418b7-b2d9-43e5-a97d-31c01729bfbd-xtables-lock\") pod \"cilium-6h6fh\" (UID: \"2c8418b7-b2d9-43e5-a97d-31c01729bfbd\") " pod="kube-system/cilium-6h6fh" Jan 23 23:58:39.467591 kubelet[3582]: I0123 23:58:39.467436 3582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2c8418b7-b2d9-43e5-a97d-31c01729bfbd-cni-path\") pod \"cilium-6h6fh\" (UID: \"2c8418b7-b2d9-43e5-a97d-31c01729bfbd\") " pod="kube-system/cilium-6h6fh" Jan 23 23:58:39.467591 kubelet[3582]: I0123 23:58:39.467481 3582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2c8418b7-b2d9-43e5-a97d-31c01729bfbd-etc-cni-netd\") pod \"cilium-6h6fh\" (UID: \"2c8418b7-b2d9-43e5-a97d-31c01729bfbd\") " pod="kube-system/cilium-6h6fh" Jan 23 23:58:39.467591 kubelet[3582]: I0123 23:58:39.467521 3582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2c8418b7-b2d9-43e5-a97d-31c01729bfbd-clustermesh-secrets\") pod \"cilium-6h6fh\" (UID: \"2c8418b7-b2d9-43e5-a97d-31c01729bfbd\") " pod="kube-system/cilium-6h6fh" Jan 23 23:58:39.467591 kubelet[3582]: I0123 23:58:39.467557 3582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2c8418b7-b2d9-43e5-a97d-31c01729bfbd-bpf-maps\") pod \"cilium-6h6fh\" (UID: \"2c8418b7-b2d9-43e5-a97d-31c01729bfbd\") " pod="kube-system/cilium-6h6fh" Jan 23 23:58:39.467848 kubelet[3582]: I0123 23:58:39.467594 3582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2c8418b7-b2d9-43e5-a97d-31c01729bfbd-cilium-run\") pod \"cilium-6h6fh\" (UID: \"2c8418b7-b2d9-43e5-a97d-31c01729bfbd\") " pod="kube-system/cilium-6h6fh" Jan 23 23:58:39.467848 kubelet[3582]: I0123 23:58:39.467631 3582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2c8418b7-b2d9-43e5-a97d-31c01729bfbd-hostproc\") pod \"cilium-6h6fh\" (UID: \"2c8418b7-b2d9-43e5-a97d-31c01729bfbd\") " pod="kube-system/cilium-6h6fh" Jan 23 23:58:39.467848 kubelet[3582]: I0123 23:58:39.467693 3582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/2c8418b7-b2d9-43e5-a97d-31c01729bfbd-cilium-ipsec-secrets\") pod \"cilium-6h6fh\" (UID: \"2c8418b7-b2d9-43e5-a97d-31c01729bfbd\") " pod="kube-system/cilium-6h6fh" Jan 23 23:58:39.467848 kubelet[3582]: I0123 23:58:39.467729 3582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2c8418b7-b2d9-43e5-a97d-31c01729bfbd-host-proc-sys-net\") pod \"cilium-6h6fh\" (UID: \"2c8418b7-b2d9-43e5-a97d-31c01729bfbd\") " pod="kube-system/cilium-6h6fh" Jan 23 23:58:39.468973 kubelet[3582]: I0123 23:58:39.467766 3582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/2c8418b7-b2d9-43e5-a97d-31c01729bfbd-host-proc-sys-kernel\") pod \"cilium-6h6fh\" (UID: \"2c8418b7-b2d9-43e5-a97d-31c01729bfbd\") " pod="kube-system/cilium-6h6fh" Jan 23 23:58:39.469928 kubelet[3582]: I0123 23:58:39.469852 3582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2c8418b7-b2d9-43e5-a97d-31c01729bfbd-hubble-tls\") pod \"cilium-6h6fh\" (UID: \"2c8418b7-b2d9-43e5-a97d-31c01729bfbd\") " pod="kube-system/cilium-6h6fh" Jan 23 23:58:39.470240 kubelet[3582]: I0123 23:58:39.469956 3582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2c8418b7-b2d9-43e5-a97d-31c01729bfbd-lib-modules\") pod \"cilium-6h6fh\" (UID: \"2c8418b7-b2d9-43e5-a97d-31c01729bfbd\") " pod="kube-system/cilium-6h6fh" Jan 23 23:58:39.471505 kubelet[3582]: I0123 23:58:39.471426 3582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lcdwl\" (UniqueName: \"kubernetes.io/projected/2c8418b7-b2d9-43e5-a97d-31c01729bfbd-kube-api-access-lcdwl\") pod \"cilium-6h6fh\" (UID: \"2c8418b7-b2d9-43e5-a97d-31c01729bfbd\") " pod="kube-system/cilium-6h6fh" Jan 23 23:58:39.471703 kubelet[3582]: I0123 23:58:39.471594 3582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2c8418b7-b2d9-43e5-a97d-31c01729bfbd-cilium-config-path\") pod \"cilium-6h6fh\" (UID: \"2c8418b7-b2d9-43e5-a97d-31c01729bfbd\") " pod="kube-system/cilium-6h6fh" Jan 23 23:58:39.473016 kubelet[3582]: I0123 23:58:39.472945 3582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2c8418b7-b2d9-43e5-a97d-31c01729bfbd-cilium-cgroup\") pod \"cilium-6h6fh\" (UID: \"2c8418b7-b2d9-43e5-a97d-31c01729bfbd\") " pod="kube-system/cilium-6h6fh" Jan 23 23:58:39.488153 systemd[1]: Started sshd@24-172.31.16.109:22-4.153.228.146:48048.service - OpenSSH per-connection server daemon (4.153.228.146:48048). Jan 23 23:58:40.027072 sshd[5357]: Accepted publickey for core from 4.153.228.146 port 48048 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:58:40.029844 sshd[5357]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:58:40.037783 systemd-logind[2101]: New session 25 of user core. Jan 23 23:58:40.048251 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 23 23:58:40.375920 sshd[5357]: pam_unix(sshd:session): session closed for user core Jan 23 23:58:40.383007 systemd-logind[2101]: Session 25 logged out. Waiting for processes to exit. Jan 23 23:58:40.385601 systemd[1]: sshd@24-172.31.16.109:22-4.153.228.146:48048.service: Deactivated successfully. Jan 23 23:58:40.390978 systemd[1]: session-25.scope: Deactivated successfully. Jan 23 23:58:40.393957 systemd-logind[2101]: Removed session 25. Jan 23 23:58:40.474131 systemd[1]: Started sshd@25-172.31.16.109:22-4.153.228.146:48064.service - OpenSSH per-connection server daemon (4.153.228.146:48064). 
Jan 23 23:58:40.576830 kubelet[3582]: E0123 23:58:40.576691 3582 projected.go:263] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition Jan 23 23:58:40.576830 kubelet[3582]: E0123 23:58:40.576733 3582 projected.go:194] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-6h6fh: failed to sync secret cache: timed out waiting for the condition Jan 23 23:58:40.579182 kubelet[3582]: E0123 23:58:40.576837 3582 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2c8418b7-b2d9-43e5-a97d-31c01729bfbd-hubble-tls podName:2c8418b7-b2d9-43e5-a97d-31c01729bfbd nodeName:}" failed. No retries permitted until 2026-01-23 23:58:41.07680414 +0000 UTC m=+109.344205965 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/2c8418b7-b2d9-43e5-a97d-31c01729bfbd-hubble-tls") pod "cilium-6h6fh" (UID: "2c8418b7-b2d9-43e5-a97d-31c01729bfbd") : failed to sync secret cache: timed out waiting for the condition Jan 23 23:58:41.012225 sshd[5368]: Accepted publickey for core from 4.153.228.146 port 48064 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:58:41.014881 sshd[5368]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:58:41.024143 systemd-logind[2101]: New session 26 of user core. Jan 23 23:58:41.032285 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 23 23:58:41.186337 containerd[2135]: time="2026-01-23T23:58:41.186280591Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6h6fh,Uid:2c8418b7-b2d9-43e5-a97d-31c01729bfbd,Namespace:kube-system,Attempt:0,}" Jan 23 23:58:41.232046 containerd[2135]: time="2026-01-23T23:58:41.230750911Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:58:41.232046 containerd[2135]: time="2026-01-23T23:58:41.231979363Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:58:41.232320 containerd[2135]: time="2026-01-23T23:58:41.232116931Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:58:41.233129 containerd[2135]: time="2026-01-23T23:58:41.232864483Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:58:41.275588 systemd[1]: run-containerd-runc-k8s.io-053dedc43f6493d185ff4853143f1fb938032c31d56505595e5a713c620cf7ff-runc.qo59b1.mount: Deactivated successfully. 
Jan 23 23:58:41.322928 containerd[2135]: time="2026-01-23T23:58:41.322639316Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6h6fh,Uid:2c8418b7-b2d9-43e5-a97d-31c01729bfbd,Namespace:kube-system,Attempt:0,} returns sandbox id \"053dedc43f6493d185ff4853143f1fb938032c31d56505595e5a713c620cf7ff\"" Jan 23 23:58:41.335018 containerd[2135]: time="2026-01-23T23:58:41.334947464Z" level=info msg="CreateContainer within sandbox \"053dedc43f6493d185ff4853143f1fb938032c31d56505595e5a713c620cf7ff\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 23 23:58:41.361908 containerd[2135]: time="2026-01-23T23:58:41.361830932Z" level=info msg="CreateContainer within sandbox \"053dedc43f6493d185ff4853143f1fb938032c31d56505595e5a713c620cf7ff\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"62b31faecbf2673f69b357e0652297e949a88343b94ebc33e5b848ffb83e81f4\"" Jan 23 23:58:41.364411 containerd[2135]: time="2026-01-23T23:58:41.362989844Z" level=info msg="StartContainer for \"62b31faecbf2673f69b357e0652297e949a88343b94ebc33e5b848ffb83e81f4\"" Jan 23 23:58:41.517026 containerd[2135]: time="2026-01-23T23:58:41.516943197Z" level=info msg="StartContainer for \"62b31faecbf2673f69b357e0652297e949a88343b94ebc33e5b848ffb83e81f4\" returns successfully" Jan 23 23:58:41.591012 containerd[2135]: time="2026-01-23T23:58:41.590546229Z" level=info msg="shim disconnected" id=62b31faecbf2673f69b357e0652297e949a88343b94ebc33e5b848ffb83e81f4 namespace=k8s.io Jan 23 23:58:41.591012 containerd[2135]: time="2026-01-23T23:58:41.590614545Z" level=warning msg="cleaning up after shim disconnected" id=62b31faecbf2673f69b357e0652297e949a88343b94ebc33e5b848ffb83e81f4 namespace=k8s.io Jan 23 23:58:41.591012 containerd[2135]: time="2026-01-23T23:58:41.590633841Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 23 23:58:42.310617 kubelet[3582]: E0123 23:58:42.310542 3582 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 23 23:58:42.609271 containerd[2135]: time="2026-01-23T23:58:42.608622982Z" level=info msg="CreateContainer within sandbox \"053dedc43f6493d185ff4853143f1fb938032c31d56505595e5a713c620cf7ff\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 23 23:58:42.643205 containerd[2135]: time="2026-01-23T23:58:42.643020466Z" level=info msg="CreateContainer within sandbox \"053dedc43f6493d185ff4853143f1fb938032c31d56505595e5a713c620cf7ff\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"0ec1d09bbc068778ffa4e3c55876372b04525346f254203fd3b1797997b4f23d\"" Jan 23 23:58:42.644074 containerd[2135]: time="2026-01-23T23:58:42.644027974Z" level=info msg="StartContainer for \"0ec1d09bbc068778ffa4e3c55876372b04525346f254203fd3b1797997b4f23d\"" Jan 23 23:58:42.758450 containerd[2135]: time="2026-01-23T23:58:42.758375135Z" level=info msg="StartContainer for \"0ec1d09bbc068778ffa4e3c55876372b04525346f254203fd3b1797997b4f23d\" returns successfully" Jan 23 23:58:42.821349 containerd[2135]: time="2026-01-23T23:58:42.820957427Z" level=info msg="shim disconnected" id=0ec1d09bbc068778ffa4e3c55876372b04525346f254203fd3b1797997b4f23d namespace=k8s.io Jan 23 23:58:42.822244 containerd[2135]: time="2026-01-23T23:58:42.821455487Z" level=warning msg="cleaning up after shim disconnected" id=0ec1d09bbc068778ffa4e3c55876372b04525346f254203fd3b1797997b4f23d namespace=k8s.io Jan 23 23:58:42.822244 
containerd[2135]: time="2026-01-23T23:58:42.821479979Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 23 23:58:43.049049 kubelet[3582]: E0123 23:58:43.048957 3582 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-cn8pz" podUID="20d233a9-1d5a-4fe4-97d5-336e3200aa0a" Jan 23 23:58:43.095830 systemd[1]: run-containerd-runc-k8s.io-0ec1d09bbc068778ffa4e3c55876372b04525346f254203fd3b1797997b4f23d-runc.nJ3f9T.mount: Deactivated successfully. Jan 23 23:58:43.096113 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0ec1d09bbc068778ffa4e3c55876372b04525346f254203fd3b1797997b4f23d-rootfs.mount: Deactivated successfully. Jan 23 23:58:43.616773 containerd[2135]: time="2026-01-23T23:58:43.616615979Z" level=info msg="CreateContainer within sandbox \"053dedc43f6493d185ff4853143f1fb938032c31d56505595e5a713c620cf7ff\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 23 23:58:43.648861 containerd[2135]: time="2026-01-23T23:58:43.648721559Z" level=info msg="CreateContainer within sandbox \"053dedc43f6493d185ff4853143f1fb938032c31d56505595e5a713c620cf7ff\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"92e2e053bca3a29e47ff23c53a420ff64181a7045e93cab98e276a685c4d26fe\"" Jan 23 23:58:43.656004 containerd[2135]: time="2026-01-23T23:58:43.655206720Z" level=info msg="StartContainer for \"92e2e053bca3a29e47ff23c53a420ff64181a7045e93cab98e276a685c4d26fe\"" Jan 23 23:58:43.658594 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2866717951.mount: Deactivated successfully. Jan 23 23:58:43.771691 containerd[2135]: time="2026-01-23T23:58:43.771567060Z" level=info msg="StartContainer for \"92e2e053bca3a29e47ff23c53a420ff64181a7045e93cab98e276a685c4d26fe\" returns successfully" Jan 23 23:58:43.819163 containerd[2135]: time="2026-01-23T23:58:43.819056748Z" level=info msg="shim disconnected" id=92e2e053bca3a29e47ff23c53a420ff64181a7045e93cab98e276a685c4d26fe namespace=k8s.io Jan 23 23:58:43.819163 containerd[2135]: time="2026-01-23T23:58:43.819137412Z" level=warning msg="cleaning up after shim disconnected" id=92e2e053bca3a29e47ff23c53a420ff64181a7045e93cab98e276a685c4d26fe namespace=k8s.io Jan 23 23:58:43.819163 containerd[2135]: time="2026-01-23T23:58:43.819159492Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 23 23:58:44.093353 systemd[1]: run-containerd-runc-k8s.io-92e2e053bca3a29e47ff23c53a420ff64181a7045e93cab98e276a685c4d26fe-runc.8IpZf6.mount: Deactivated successfully. Jan 23 23:58:44.093975 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-92e2e053bca3a29e47ff23c53a420ff64181a7045e93cab98e276a685c4d26fe-rootfs.mount: Deactivated successfully. 
Jan 23 23:58:44.623685 containerd[2135]: time="2026-01-23T23:58:44.621821580Z" level=info msg="CreateContainer within sandbox \"053dedc43f6493d185ff4853143f1fb938032c31d56505595e5a713c620cf7ff\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 23 23:58:44.653188 containerd[2135]: time="2026-01-23T23:58:44.650712564Z" level=info msg="CreateContainer within sandbox \"053dedc43f6493d185ff4853143f1fb938032c31d56505595e5a713c620cf7ff\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"a1b2ad61ca9ae9ec8d7969c92c9cf8ec9abddcddd01b8b4f17e5b18f42c56a9f\"" Jan 23 23:58:44.653188 containerd[2135]: time="2026-01-23T23:58:44.651863436Z" level=info msg="StartContainer for \"a1b2ad61ca9ae9ec8d7969c92c9cf8ec9abddcddd01b8b4f17e5b18f42c56a9f\"" Jan 23 23:58:44.763979 containerd[2135]: time="2026-01-23T23:58:44.763058653Z" level=info msg="StartContainer for \"a1b2ad61ca9ae9ec8d7969c92c9cf8ec9abddcddd01b8b4f17e5b18f42c56a9f\" returns successfully" Jan 23 23:58:44.806490 containerd[2135]: time="2026-01-23T23:58:44.806358769Z" level=info msg="shim disconnected" id=a1b2ad61ca9ae9ec8d7969c92c9cf8ec9abddcddd01b8b4f17e5b18f42c56a9f namespace=k8s.io Jan 23 23:58:44.806490 containerd[2135]: time="2026-01-23T23:58:44.806476849Z" level=warning msg="cleaning up after shim disconnected" id=a1b2ad61ca9ae9ec8d7969c92c9cf8ec9abddcddd01b8b4f17e5b18f42c56a9f namespace=k8s.io Jan 23 23:58:44.806996 containerd[2135]: time="2026-01-23T23:58:44.806498857Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 23 23:58:44.814131 kubelet[3582]: I0123 23:58:44.814063 3582 setters.go:602] "Node became not ready" node="ip-172-31-16-109" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T23:58:44Z","lastTransitionTime":"2026-01-23T23:58:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jan 23 23:58:45.048903 kubelet[3582]: E0123 23:58:45.048823 3582 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-cn8pz" podUID="20d233a9-1d5a-4fe4-97d5-336e3200aa0a" Jan 23 23:58:45.092820 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a1b2ad61ca9ae9ec8d7969c92c9cf8ec9abddcddd01b8b4f17e5b18f42c56a9f-rootfs.mount: Deactivated successfully. 
Jan 23 23:58:45.628314 containerd[2135]: time="2026-01-23T23:58:45.628087705Z" level=info msg="CreateContainer within sandbox \"053dedc43f6493d185ff4853143f1fb938032c31d56505595e5a713c620cf7ff\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 23 23:58:45.662213 containerd[2135]: time="2026-01-23T23:58:45.662121073Z" level=info msg="CreateContainer within sandbox \"053dedc43f6493d185ff4853143f1fb938032c31d56505595e5a713c620cf7ff\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"8c1def2740d4d9494b87d47e63a84abb127919074ac1d94f5ee2c62570dea345\""
Jan 23 23:58:45.669962 containerd[2135]: time="2026-01-23T23:58:45.667701793Z" level=info msg="StartContainer for \"8c1def2740d4d9494b87d47e63a84abb127919074ac1d94f5ee2c62570dea345\""
Jan 23 23:58:45.780861 containerd[2135]: time="2026-01-23T23:58:45.780575918Z" level=info msg="StartContainer for \"8c1def2740d4d9494b87d47e63a84abb127919074ac1d94f5ee2c62570dea345\" returns successfully"
Jan 23 23:58:46.599688 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Jan 23 23:58:46.676319 kubelet[3582]: I0123 23:58:46.676207 3582 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-6h6fh" podStartSLOduration=7.676184091 podStartE2EDuration="7.676184091s" podCreationTimestamp="2026-01-23 23:58:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 23:58:46.676157415 +0000 UTC m=+114.943559252" watchObservedRunningTime="2026-01-23 23:58:46.676184091 +0000 UTC m=+114.943586000"
Jan 23 23:58:47.049727 kubelet[3582]: E0123 23:58:47.049070 3582 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-cn8pz" podUID="20d233a9-1d5a-4fe4-97d5-336e3200aa0a"
Jan 23 23:58:50.880710 systemd-networkd[1692]: lxc_health: Link UP
Jan 23 23:58:50.891943 (udev-worker)[6223]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 23:58:50.898414 systemd-networkd[1692]: lxc_health: Gained carrier
Jan 23 23:58:52.020619 containerd[2135]: time="2026-01-23T23:58:52.020551733Z" level=info msg="StopPodSandbox for \"eb88ed3c77bbe6182597d61547af2635c3183240a3014cc99b2bc67eb25d5068\""
Jan 23 23:58:52.021240 containerd[2135]: time="2026-01-23T23:58:52.020741501Z" level=info msg="TearDown network for sandbox \"eb88ed3c77bbe6182597d61547af2635c3183240a3014cc99b2bc67eb25d5068\" successfully"
Jan 23 23:58:52.021240 containerd[2135]: time="2026-01-23T23:58:52.020769665Z" level=info msg="StopPodSandbox for \"eb88ed3c77bbe6182597d61547af2635c3183240a3014cc99b2bc67eb25d5068\" returns successfully"
Jan 23 23:58:52.024540 containerd[2135]: time="2026-01-23T23:58:52.024475853Z" level=info msg="RemovePodSandbox for \"eb88ed3c77bbe6182597d61547af2635c3183240a3014cc99b2bc67eb25d5068\""
Jan 23 23:58:52.024755 containerd[2135]: time="2026-01-23T23:58:52.024543461Z" level=info msg="Forcibly stopping sandbox \"eb88ed3c77bbe6182597d61547af2635c3183240a3014cc99b2bc67eb25d5068\""
Jan 23 23:58:52.025831 containerd[2135]: time="2026-01-23T23:58:52.025758821Z" level=info msg="TearDown network for sandbox \"eb88ed3c77bbe6182597d61547af2635c3183240a3014cc99b2bc67eb25d5068\" successfully"
Jan 23 23:58:52.037513 containerd[2135]: time="2026-01-23T23:58:52.037407209Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"eb88ed3c77bbe6182597d61547af2635c3183240a3014cc99b2bc67eb25d5068\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 23 23:58:52.038591 containerd[2135]: time="2026-01-23T23:58:52.037533281Z" level=info msg="RemovePodSandbox \"eb88ed3c77bbe6182597d61547af2635c3183240a3014cc99b2bc67eb25d5068\" returns successfully"
Jan 23 23:58:52.040323 containerd[2135]: time="2026-01-23T23:58:52.039995477Z" level=info msg="StopPodSandbox for \"db5be60d230425ec06aff951cd5dfccdfda83d06ab1872603cc8b24614cc81d9\""
Jan 23 23:58:52.040323 containerd[2135]: time="2026-01-23T23:58:52.040146029Z" level=info msg="TearDown network for sandbox \"db5be60d230425ec06aff951cd5dfccdfda83d06ab1872603cc8b24614cc81d9\" successfully"
Jan 23 23:58:52.040323 containerd[2135]: time="2026-01-23T23:58:52.040170461Z" level=info msg="StopPodSandbox for \"db5be60d230425ec06aff951cd5dfccdfda83d06ab1872603cc8b24614cc81d9\" returns successfully"
Jan 23 23:58:52.045726 containerd[2135]: time="2026-01-23T23:58:52.042375857Z" level=info msg="RemovePodSandbox for \"db5be60d230425ec06aff951cd5dfccdfda83d06ab1872603cc8b24614cc81d9\""
Jan 23 23:58:52.045726 containerd[2135]: time="2026-01-23T23:58:52.042435773Z" level=info msg="Forcibly stopping sandbox \"db5be60d230425ec06aff951cd5dfccdfda83d06ab1872603cc8b24614cc81d9\""
Jan 23 23:58:52.045726 containerd[2135]: time="2026-01-23T23:58:52.042539369Z" level=info msg="TearDown network for sandbox \"db5be60d230425ec06aff951cd5dfccdfda83d06ab1872603cc8b24614cc81d9\" successfully"
Jan 23 23:58:52.055109 containerd[2135]: time="2026-01-23T23:58:52.055040489Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"db5be60d230425ec06aff951cd5dfccdfda83d06ab1872603cc8b24614cc81d9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 23 23:58:52.055377 containerd[2135]: time="2026-01-23T23:58:52.055343081Z" level=info msg="RemovePodSandbox \"db5be60d230425ec06aff951cd5dfccdfda83d06ab1872603cc8b24614cc81d9\" returns successfully"
Jan 23 23:58:52.599874 systemd-networkd[1692]: lxc_health: Gained IPv6LL
Jan 23 23:58:54.712925 systemd[1]: run-containerd-runc-k8s.io-8c1def2740d4d9494b87d47e63a84abb127919074ac1d94f5ee2c62570dea345-runc.STiwCX.mount: Deactivated successfully.
Jan 23 23:58:55.451836 ntpd[2087]: Listen normally on 13 lxc_health [fe80::f0ef:95ff:fe3c:7fc3%14]:123
Jan 23 23:58:55.452477 ntpd[2087]: 23 Jan 23:58:55 ntpd[2087]: Listen normally on 13 lxc_health [fe80::f0ef:95ff:fe3c:7fc3%14]:123
Jan 23 23:58:57.159467 sshd[5368]: pam_unix(sshd:session): session closed for user core
Jan 23 23:58:57.170988 systemd[1]: sshd@25-172.31.16.109:22-4.153.228.146:48064.service: Deactivated successfully.
Jan 23 23:58:57.182592 systemd-logind[2101]: Session 26 logged out. Waiting for processes to exit.
Jan 23 23:58:57.183192 systemd[1]: session-26.scope: Deactivated successfully.
Jan 23 23:58:57.189955 systemd-logind[2101]: Removed session 26.
Jan 23 23:59:11.451170 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-873f32991c476ae2612e110e7a15c141e4af2c4add44808e1a63c4e385293cee-rootfs.mount: Deactivated successfully.
Jan 23 23:59:11.493957 containerd[2135]: time="2026-01-23T23:59:11.493817450Z" level=info msg="shim disconnected" id=873f32991c476ae2612e110e7a15c141e4af2c4add44808e1a63c4e385293cee namespace=k8s.io
Jan 23 23:59:11.494713 containerd[2135]: time="2026-01-23T23:59:11.493957826Z" level=warning msg="cleaning up after shim disconnected" id=873f32991c476ae2612e110e7a15c141e4af2c4add44808e1a63c4e385293cee namespace=k8s.io
Jan 23 23:59:11.494713 containerd[2135]: time="2026-01-23T23:59:11.493980326Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 23 23:59:11.709062 kubelet[3582]: I0123 23:59:11.708464 3582 scope.go:117] "RemoveContainer" containerID="873f32991c476ae2612e110e7a15c141e4af2c4add44808e1a63c4e385293cee"
Jan 23 23:59:11.711912 containerd[2135]: time="2026-01-23T23:59:11.711848439Z" level=info msg="CreateContainer within sandbox \"8688987446ec4c641e2aa99f9cf48a1a8a55c4c7b6bcae69ff94ef5e42bf2f24\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Jan 23 23:59:11.733616 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1791562325.mount: Deactivated successfully.
Jan 23 23:59:11.741015 containerd[2135]: time="2026-01-23T23:59:11.740879463Z" level=info msg="CreateContainer within sandbox \"8688987446ec4c641e2aa99f9cf48a1a8a55c4c7b6bcae69ff94ef5e42bf2f24\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"3b698fcb7e4b2459f8977d4dc893baccae7dbb34b9d70f504f4fa7d2ebd74693\""
Jan 23 23:59:11.742831 containerd[2135]: time="2026-01-23T23:59:11.741637071Z" level=info msg="StartContainer for \"3b698fcb7e4b2459f8977d4dc893baccae7dbb34b9d70f504f4fa7d2ebd74693\""
Jan 23 23:59:11.867590 containerd[2135]: time="2026-01-23T23:59:11.867397912Z" level=info msg="StartContainer for \"3b698fcb7e4b2459f8977d4dc893baccae7dbb34b9d70f504f4fa7d2ebd74693\" returns successfully"
Jan 23 23:59:14.622187 kubelet[3582]: E0123 23:59:14.622110 3582 controller.go:195] "Failed to update lease" err="Put \"https://172.31.16.109:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-109?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 23 23:59:17.627358 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-41742f070aacb8644a8b9a624459f3f6cfa41258da4ae6a0bb04f7d5b6308b3a-rootfs.mount: Deactivated successfully.
Jan 23 23:59:17.642206 containerd[2135]: time="2026-01-23T23:59:17.641876780Z" level=info msg="shim disconnected" id=41742f070aacb8644a8b9a624459f3f6cfa41258da4ae6a0bb04f7d5b6308b3a namespace=k8s.io
Jan 23 23:59:17.642206 containerd[2135]: time="2026-01-23T23:59:17.641962052Z" level=warning msg="cleaning up after shim disconnected" id=41742f070aacb8644a8b9a624459f3f6cfa41258da4ae6a0bb04f7d5b6308b3a namespace=k8s.io
Jan 23 23:59:17.642206 containerd[2135]: time="2026-01-23T23:59:17.641982512Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 23 23:59:17.734810 kubelet[3582]: I0123 23:59:17.734459 3582 scope.go:117] "RemoveContainer" containerID="41742f070aacb8644a8b9a624459f3f6cfa41258da4ae6a0bb04f7d5b6308b3a"
Jan 23 23:59:17.737241 containerd[2135]: time="2026-01-23T23:59:17.737163177Z" level=info msg="CreateContainer within sandbox \"6e2fba2ab67795b72e53f415f6e72afbad6fb119eebc9f088231363f8fc1d609\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Jan 23 23:59:17.767504 containerd[2135]: time="2026-01-23T23:59:17.767426613Z" level=info msg="CreateContainer within sandbox \"6e2fba2ab67795b72e53f415f6e72afbad6fb119eebc9f088231363f8fc1d609\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"75097bf1c0eac579609ac9035356296fbf88bfce46c40d074eeaf1bc7dbd4f78\""
Jan 23 23:59:17.768231 containerd[2135]: time="2026-01-23T23:59:17.768170697Z" level=info msg="StartContainer for \"75097bf1c0eac579609ac9035356296fbf88bfce46c40d074eeaf1bc7dbd4f78\""
Jan 23 23:59:17.889765 containerd[2135]: time="2026-01-23T23:59:17.887750350Z" level=info msg="StartContainer for \"75097bf1c0eac579609ac9035356296fbf88bfce46c40d074eeaf1bc7dbd4f78\" returns successfully"
Jan 23 23:59:24.623017 kubelet[3582]: E0123 23:59:24.622416 3582 controller.go:195] "Failed to update lease" err="Put \"https://172.31.16.109:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-109?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"