Jan 17 00:31:11.925791 kernel: Linux version 6.6.119-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Jan 16 22:25:55 -00 2026
Jan 17 00:31:11.925830 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=5950c0a3c50d11b7bc07a3e3bf06049ed0b5a605b5e0b52a981b78f1c63eeedd
Jan 17 00:31:11.925851 kernel: BIOS-provided physical RAM map:
Jan 17 00:31:11.925863 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 17 00:31:11.925875 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable
Jan 17 00:31:11.925887 kernel: BIOS-e820: [mem 0x00000000786ce000-0x00000000787cdfff] type 20
Jan 17 00:31:11.925902 kernel: BIOS-e820: [mem 0x00000000787ce000-0x000000007894dfff] reserved
Jan 17 00:31:11.925916 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data
Jan 17 00:31:11.925929 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS
Jan 17 00:31:11.925945 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable
Jan 17 00:31:11.925958 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved
Jan 17 00:31:11.925971 kernel: NX (Execute Disable) protection: active
Jan 17 00:31:11.925984 kernel: APIC: Static calls initialized
Jan 17 00:31:11.925997 kernel: efi: EFI v2.7 by EDK II
Jan 17 00:31:11.926014 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x77015518
Jan 17 00:31:11.926032 kernel: SMBIOS 2.7 present.
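(Aside, not part of the log: the kernel command line recorded above is a whitespace-separated mix of key=value options and bare flags. A minimal, hypothetical sketch of recovering those values on the booted host from /proc/cmdline; the helper name and printed keys are illustrative, and repeated keys such as console= keep only the last value here.)

```python
# Illustrative helper: split a kernel command line like the one logged above
# into key=value options and bare flags. Not part of Flatcar or dracut.
def parse_cmdline(text):
    opts, flags = {}, []
    for token in text.split():
        if "=" in token:
            key, value = token.split("=", 1)
            opts[key] = value          # repeated keys keep the last value
        else:
            flags.append(token)
    return opts, flags

if __name__ == "__main__":
    with open("/proc/cmdline") as f:
        opts, flags = parse_cmdline(f.read())
    print(opts.get("root"), opts.get("verity.usrhash"), flags)
```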
Jan 17 00:31:11.926047 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Jan 17 00:31:11.926061 kernel: Hypervisor detected: KVM
Jan 17 00:31:11.926075 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 17 00:31:11.926090 kernel: kvm-clock: using sched offset of 3952627900 cycles
Jan 17 00:31:11.926105 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 17 00:31:11.926120 kernel: tsc: Detected 2499.998 MHz processor
Jan 17 00:31:11.926135 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 17 00:31:11.926150 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 17 00:31:11.926164 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000
Jan 17 00:31:11.926207 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jan 17 00:31:11.926219 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 17 00:31:11.926233 kernel: Using GB pages for direct mapping
Jan 17 00:31:11.926248 kernel: Secure boot disabled
Jan 17 00:31:11.926262 kernel: ACPI: Early table checksum verification disabled
Jan 17 00:31:11.926278 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON)
Jan 17 00:31:11.926292 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013)
Jan 17 00:31:11.926307 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Jan 17 00:31:11.926321 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Jan 17 00:31:11.926340 kernel: ACPI: FACS 0x00000000789D0000 000040
Jan 17 00:31:11.926355 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Jan 17 00:31:11.926370 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Jan 17 00:31:11.926384 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Jan 17 00:31:11.926398 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Jan 17 00:31:11.926412 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Jan 17 00:31:11.926432 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Jan 17 00:31:11.926451 kernel: ACPI: SSDT 0x0000000078952000 00007F (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Jan 17 00:31:11.926467 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013)
Jan 17 00:31:11.926483 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113]
Jan 17 00:31:11.926499 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159]
Jan 17 00:31:11.926514 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f]
Jan 17 00:31:11.926529 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027]
Jan 17 00:31:11.926545 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b]
Jan 17 00:31:11.926563 kernel: ACPI: Reserving APIC table memory at [mem 0x78959000-0x78959075]
Jan 17 00:31:11.926579 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f]
Jan 17 00:31:11.926595 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037]
Jan 17 00:31:11.926610 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758]
Jan 17 00:31:11.926626 kernel: ACPI: Reserving SSDT table memory at [mem 0x78952000-0x7895207e]
Jan 17 00:31:11.926642 kernel: ACPI: Reserving BGRT table memory at [mem 0x78951000-0x78951037]
Jan 17 00:31:11.926657 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jan 17 00:31:11.926672 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jan 17 00:31:11.926688 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Jan 17 00:31:11.926706 kernel: NUMA: Initialized distance table, cnt=1
Jan 17 00:31:11.926721 kernel: NODE_DATA(0) allocated [mem 0x7a8f0000-0x7a8f5fff]
Jan 17 00:31:11.926736 kernel: Zone ranges:
Jan 17 00:31:11.926752 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 17 00:31:11.926768 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff]
Jan 17 00:31:11.926783 kernel: Normal empty
Jan 17 00:31:11.926798 kernel: Movable zone start for each node
Jan 17 00:31:11.926814 kernel: Early memory node ranges
Jan 17 00:31:11.926829 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jan 17 00:31:11.926848 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff]
Jan 17 00:31:11.926863 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff]
Jan 17 00:31:11.926878 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff]
Jan 17 00:31:11.926894 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 17 00:31:11.926909 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jan 17 00:31:11.926924 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Jan 17 00:31:11.926940 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges
Jan 17 00:31:11.926955 kernel: ACPI: PM-Timer IO Port: 0xb008
Jan 17 00:31:11.926971 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 17 00:31:11.926989 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Jan 17 00:31:11.927003 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 17 00:31:11.927016 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 17 00:31:11.927030 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 17 00:31:11.927044 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 17 00:31:11.927058 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 17 00:31:11.927070 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 17 00:31:11.927082 kernel: TSC deadline timer available
Jan 17 00:31:11.927098 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jan 17 00:31:11.927116 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 17 00:31:11.927140 kernel: [mem 0x7ca00000-0xffffffff] available for PCI devices
Jan 17 00:31:11.927157 kernel: Booting paravirtualized kernel on KVM
Jan 17 00:31:11.927174 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 17 00:31:11.930723 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 17 00:31:11.930739 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u1048576
Jan 17 00:31:11.930755 kernel: pcpu-alloc: s196328 r8192 d28952 u1048576 alloc=1*2097152
Jan 17 00:31:11.930770 kernel: pcpu-alloc: [0] 0 1
Jan 17 00:31:11.930784 kernel: kvm-guest: PV spinlocks enabled
Jan 17 00:31:11.930799 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 17 00:31:11.930825 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=5950c0a3c50d11b7bc07a3e3bf06049ed0b5a605b5e0b52a981b78f1c63eeedd
Jan 17 00:31:11.930841 kernel: random: crng init done
Jan 17 00:31:11.930855 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 17 00:31:11.930869 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 17 00:31:11.930884 kernel: Fallback order for Node 0: 0
Jan 17 00:31:11.930898 kernel: Built 1 zonelists, mobility grouping on. Total pages: 501318
Jan 17 00:31:11.930912 kernel: Policy zone: DMA32
Jan 17 00:31:11.930927 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 17 00:31:11.930945 kernel: Memory: 1874624K/2037804K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42884K init, 2312K bss, 162920K reserved, 0K cma-reserved)
Jan 17 00:31:11.930961 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 17 00:31:11.930975 kernel: Kernel/User page tables isolation: enabled
Jan 17 00:31:11.930990 kernel: ftrace: allocating 37989 entries in 149 pages
Jan 17 00:31:11.931004 kernel: ftrace: allocated 149 pages with 4 groups
Jan 17 00:31:11.931019 kernel: Dynamic Preempt: voluntary
Jan 17 00:31:11.931033 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 17 00:31:11.931049 kernel: rcu: RCU event tracing is enabled.
Jan 17 00:31:11.931063 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 17 00:31:11.931081 kernel: Trampoline variant of Tasks RCU enabled.
Jan 17 00:31:11.931096 kernel: Rude variant of Tasks RCU enabled.
Jan 17 00:31:11.931110 kernel: Tracing variant of Tasks RCU enabled.
Jan 17 00:31:11.931123 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 17 00:31:11.931138 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 17 00:31:11.931153 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jan 17 00:31:11.931169 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 17 00:31:11.931210 kernel: Console: colour dummy device 80x25
Jan 17 00:31:11.931225 kernel: printk: console [tty0] enabled
Jan 17 00:31:11.931240 kernel: printk: console [ttyS0] enabled
Jan 17 00:31:11.931254 kernel: ACPI: Core revision 20230628
Jan 17 00:31:11.931267 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Jan 17 00:31:11.931286 kernel: APIC: Switch to symmetric I/O mode setup
Jan 17 00:31:11.931299 kernel: x2apic enabled
Jan 17 00:31:11.931313 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 17 00:31:11.931327 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Jan 17 00:31:11.931345 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998)
Jan 17 00:31:11.931359 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Jan 17 00:31:11.931373 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Jan 17 00:31:11.931388 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 17 00:31:11.931401 kernel: Spectre V2 : Mitigation: Retpolines
Jan 17 00:31:11.931415 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 17 00:31:11.931429 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Jan 17 00:31:11.931444 kernel: RETBleed: Vulnerable
Jan 17 00:31:11.931458 kernel: Speculative Store Bypass: Vulnerable
Jan 17 00:31:11.931473 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 17 00:31:11.931492 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 17 00:31:11.931507 kernel: GDS: Unknown: Dependent on hypervisor status
Jan 17 00:31:11.931522 kernel: active return thunk: its_return_thunk
Jan 17 00:31:11.931537 kernel: ITS: Mitigation: Aligned branch/return thunks
Jan 17 00:31:11.931552 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 17 00:31:11.931568 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 17 00:31:11.931583 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 17 00:31:11.931598 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Jan 17 00:31:11.931614 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Jan 17 00:31:11.931629 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Jan 17 00:31:11.931644 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Jan 17 00:31:11.931662 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Jan 17 00:31:11.931677 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Jan 17 00:31:11.931692 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 17 00:31:11.931707 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Jan 17 00:31:11.931723 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Jan 17 00:31:11.931738 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Jan 17 00:31:11.931753 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Jan 17 00:31:11.931768 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Jan 17 00:31:11.931783 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Jan 17 00:31:11.931798 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Jan 17 00:31:11.931813 kernel: Freeing SMP alternatives memory: 32K
Jan 17 00:31:11.931830 kernel: pid_max: default: 32768 minimum: 301
Jan 17 00:31:11.931849 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 17 00:31:11.931864 kernel: landlock: Up and running.
Jan 17 00:31:11.931881 kernel: SELinux: Initializing.
Jan 17 00:31:11.931897 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 17 00:31:11.931914 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 17 00:31:11.931941 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Jan 17 00:31:11.931957 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 17 00:31:11.931974 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 17 00:31:11.931989 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 17 00:31:11.932004 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Jan 17 00:31:11.932022 kernel: signal: max sigframe size: 3632
Jan 17 00:31:11.932038 kernel: rcu: Hierarchical SRCU implementation.
Jan 17 00:31:11.932054 kernel: rcu: Max phase no-delay instances is 400.
Jan 17 00:31:11.932069 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 17 00:31:11.932084 kernel: smp: Bringing up secondary CPUs ...
Jan 17 00:31:11.932099 kernel: smpboot: x86: Booting SMP configuration:
Jan 17 00:31:11.932114 kernel: .... node #0, CPUs: #1
Jan 17 00:31:11.932130 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Jan 17 00:31:11.932146 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Jan 17 00:31:11.932164 kernel: smp: Brought up 1 node, 2 CPUs
Jan 17 00:31:11.934232 kernel: smpboot: Max logical packages: 1
Jan 17 00:31:11.934260 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS)
Jan 17 00:31:11.934277 kernel: devtmpfs: initialized
Jan 17 00:31:11.934294 kernel: x86/mm: Memory block size: 128MB
Jan 17 00:31:11.934311 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes)
Jan 17 00:31:11.934328 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 17 00:31:11.934344 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 17 00:31:11.934367 kernel: pinctrl core: initialized pinctrl subsystem
Jan 17 00:31:11.934383 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 17 00:31:11.934399 kernel: audit: initializing netlink subsys (disabled)
Jan 17 00:31:11.934415 kernel: audit: type=2000 audit(1768609871.009:1): state=initialized audit_enabled=0 res=1
Jan 17 00:31:11.934431 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 17 00:31:11.934447 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 17 00:31:11.934463 kernel: cpuidle: using governor menu
Jan 17 00:31:11.934480 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 17 00:31:11.934496 kernel: dca service started, version 1.12.1
Jan 17 00:31:11.934512 kernel: PCI: Using configuration type 1 for base access
Jan 17 00:31:11.934532 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 17 00:31:11.934548 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 17 00:31:11.934564 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 17 00:31:11.934581 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 17 00:31:11.934597 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 17 00:31:11.934613 kernel: ACPI: Added _OSI(Module Device)
Jan 17 00:31:11.934629 kernel: ACPI: Added _OSI(Processor Device)
Jan 17 00:31:11.934646 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 17 00:31:11.934665 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Jan 17 00:31:11.934681 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 17 00:31:11.934698 kernel: ACPI: Interpreter enabled
Jan 17 00:31:11.934714 kernel: ACPI: PM: (supports S0 S5)
Jan 17 00:31:11.934729 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 17 00:31:11.934746 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 17 00:31:11.934762 kernel: PCI: Using E820 reservations for host bridge windows
Jan 17 00:31:11.934778 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jan 17 00:31:11.934794 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 17 00:31:11.935040 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Jan 17 00:31:11.935202 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Jan 17 00:31:11.935371 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Jan 17 00:31:11.935392 kernel: acpiphp: Slot [3] registered
Jan 17 00:31:11.935408 kernel: acpiphp: Slot [4] registered
Jan 17 00:31:11.935424 kernel: acpiphp: Slot [5] registered
Jan 17 00:31:11.935437 kernel: acpiphp: Slot [6] registered
Jan 17 00:31:11.935451 kernel: acpiphp: Slot [7] registered
Jan 17 00:31:11.935472 kernel: acpiphp: Slot [8] registered
Jan 17 00:31:11.935487 kernel: acpiphp: Slot [9] registered
Jan 17 00:31:11.935503 kernel: acpiphp: Slot [10] registered
Jan 17 00:31:11.935517 kernel: acpiphp: Slot [11] registered
Jan 17 00:31:11.935529 kernel: acpiphp: Slot [12] registered
Jan 17 00:31:11.935545 kernel: acpiphp: Slot [13] registered
Jan 17 00:31:11.935559 kernel: acpiphp: Slot [14] registered
Jan 17 00:31:11.935572 kernel: acpiphp: Slot [15] registered
Jan 17 00:31:11.935594 kernel: acpiphp: Slot [16] registered
Jan 17 00:31:11.935617 kernel: acpiphp: Slot [17] registered
Jan 17 00:31:11.935630 kernel: acpiphp: Slot [18] registered
Jan 17 00:31:11.935643 kernel: acpiphp: Slot [19] registered
Jan 17 00:31:11.935656 kernel: acpiphp: Slot [20] registered
Jan 17 00:31:11.935671 kernel: acpiphp: Slot [21] registered
Jan 17 00:31:11.935688 kernel: acpiphp: Slot [22] registered
Jan 17 00:31:11.935701 kernel: acpiphp: Slot [23] registered
Jan 17 00:31:11.935714 kernel: acpiphp: Slot [24] registered
Jan 17 00:31:11.935727 kernel: acpiphp: Slot [25] registered
Jan 17 00:31:11.935742 kernel: acpiphp: Slot [26] registered
Jan 17 00:31:11.935762 kernel: acpiphp: Slot [27] registered
Jan 17 00:31:11.935778 kernel: acpiphp: Slot [28] registered
Jan 17 00:31:11.935791 kernel: acpiphp: Slot [29] registered
Jan 17 00:31:11.935805 kernel: acpiphp: Slot [30] registered
Jan 17 00:31:11.935821 kernel: acpiphp: Slot [31] registered
Jan 17 00:31:11.935838 kernel: PCI host bridge to bus 0000:00
Jan 17 00:31:11.936016 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 17 00:31:11.936149 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 17 00:31:11.939670 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 17 00:31:11.939824 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Jan 17 00:31:11.939960 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window]
Jan 17 00:31:11.940077 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 17 00:31:11.940251 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Jan 17 00:31:11.940402 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Jan 17 00:31:11.940556 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000
Jan 17 00:31:11.940703 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Jan 17 00:31:11.940848 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Jan 17 00:31:11.941003 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Jan 17 00:31:11.944343 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Jan 17 00:31:11.944518 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Jan 17 00:31:11.944661 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Jan 17 00:31:11.944810 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Jan 17 00:31:11.944961 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000
Jan 17 00:31:11.945102 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x80000000-0x803fffff pref]
Jan 17 00:31:11.945319 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Jan 17 00:31:11.945461 kernel: pci 0000:00:03.0: BAR 0: assigned to efifb
Jan 17 00:31:11.945610 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 17 00:31:11.945769 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Jan 17 00:31:11.945915 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80404000-0x80407fff]
Jan 17 00:31:11.946064 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Jan 17 00:31:11.946242 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80400000-0x80403fff]
Jan 17 00:31:11.946267 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 17 00:31:11.946284 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 17 00:31:11.946301 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 17 00:31:11.946315 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 17 00:31:11.946332 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jan 17 00:31:11.946353 kernel: iommu: Default domain type: Translated
Jan 17 00:31:11.946370 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 17 00:31:11.946386 kernel: efivars: Registered efivars operations
Jan 17 00:31:11.946403 kernel: PCI: Using ACPI for IRQ routing
Jan 17 00:31:11.946419 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 17 00:31:11.946436 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff]
Jan 17 00:31:11.946451 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff]
Jan 17 00:31:11.946596 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Jan 17 00:31:11.946742 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Jan 17 00:31:11.946882 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 17 00:31:11.946903 kernel: vgaarb: loaded
Jan 17 00:31:11.946920 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Jan 17 00:31:11.946936 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Jan 17 00:31:11.946952 kernel: clocksource: Switched to clocksource kvm-clock
Jan 17 00:31:11.946969 kernel: VFS: Disk quotas dquot_6.6.0
Jan 17 00:31:11.946985 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 17 00:31:11.947002 kernel: pnp: PnP ACPI init
Jan 17 00:31:11.947022 kernel: pnp: PnP ACPI: found 5 devices
Jan 17 00:31:11.947039 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 17 00:31:11.947056 kernel: NET: Registered PF_INET protocol family
Jan 17 00:31:11.947072 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 17 00:31:11.947089 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jan 17 00:31:11.947106 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 17 00:31:11.947122 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 17 00:31:11.947140 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jan 17 00:31:11.947157 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jan 17 00:31:11.947209 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 17 00:31:11.947223 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 17 00:31:11.947237 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 17 00:31:11.947251 kernel: NET: Registered PF_XDP protocol family
Jan 17 00:31:11.947456 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 17 00:31:11.947592 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 17 00:31:11.947730 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 17 00:31:11.947867 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Jan 17 00:31:11.948027 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window]
Jan 17 00:31:11.949299 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 17 00:31:11.949336 kernel: PCI: CLS 0 bytes, default 64
Jan 17 00:31:11.949355 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jan 17 00:31:11.949373 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Jan 17 00:31:11.949389 kernel: clocksource: Switched to clocksource tsc
Jan 17 00:31:11.949406 kernel: Initialise system trusted keyrings
Jan 17 00:31:11.949422 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Jan 17 00:31:11.949438 kernel: Key type asymmetric registered
Jan 17 00:31:11.949460 kernel: Asymmetric key parser 'x509' registered
Jan 17 00:31:11.949477 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 17 00:31:11.949493 kernel: io scheduler mq-deadline registered
Jan 17 00:31:11.949510 kernel: io scheduler kyber registered
Jan 17 00:31:11.949526 kernel: io scheduler bfq registered
Jan 17 00:31:11.949543 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 17 00:31:11.949559 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 17 00:31:11.949575 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 17 00:31:11.949591 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 17 00:31:11.949611 kernel: i8042: Warning: Keylock active
Jan 17 00:31:11.949627 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 17 00:31:11.949644 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 17 00:31:11.949815 kernel: rtc_cmos 00:00: RTC can wake from S4
Jan 17 00:31:11.949952 kernel: rtc_cmos 00:00: registered as rtc0
Jan 17 00:31:11.950083 kernel: rtc_cmos 00:00: setting system clock to 2026-01-17T00:31:11 UTC (1768609871)
Jan 17 00:31:11.950241 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Jan 17 00:31:11.950262 kernel: intel_pstate: CPU model not supported
Jan 17 00:31:11.950284 kernel: efifb: probing for efifb
Jan 17 00:31:11.950302 kernel: efifb: framebuffer at 0x80000000, using 1920k, total 1920k
Jan 17 00:31:11.950319 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1
Jan 17 00:31:11.950336 kernel: efifb: scrolling: redraw
Jan 17 00:31:11.950353 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jan 17 00:31:11.950370 kernel: Console: switching to colour frame buffer device 100x37
Jan 17 00:31:11.950387 kernel: fb0: EFI VGA frame buffer device
Jan 17 00:31:11.950404 kernel: pstore: Using crash dump compression: deflate
Jan 17 00:31:11.950422 kernel: pstore: Registered efi_pstore as persistent store backend
Jan 17 00:31:11.950442 kernel: NET: Registered PF_INET6 protocol family
Jan 17 00:31:11.950459 kernel: Segment Routing with IPv6
Jan 17 00:31:11.950475 kernel: In-situ OAM (IOAM) with IPv6
Jan 17 00:31:11.950492 kernel: NET: Registered PF_PACKET protocol family
Jan 17 00:31:11.950509 kernel: Key type dns_resolver registered
Jan 17 00:31:11.950526 kernel: IPI shorthand broadcast: enabled
Jan 17 00:31:11.950570 kernel: sched_clock: Marking stable (468006777, 135027915)->(693677706, -90643014)
Jan 17 00:31:11.950591 kernel: registered taskstats version 1
Jan 17 00:31:11.950609 kernel: Loading compiled-in X.509 certificates
Jan 17 00:31:11.950630 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: b6a847a3a522371f15b0d5425f12279a240740e4'
Jan 17 00:31:11.950650 kernel: Key type .fscrypt registered
Jan 17 00:31:11.950667 kernel: Key type fscrypt-provisioning registered
Jan 17 00:31:11.950685 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 17 00:31:11.950704 kernel: ima: Allocated hash algorithm: sha1
Jan 17 00:31:11.950720 kernel: ima: No architecture policies found
Jan 17 00:31:11.950738 kernel: clk: Disabling unused clocks
Jan 17 00:31:11.950756 kernel: Freeing unused kernel image (initmem) memory: 42884K
Jan 17 00:31:11.950774 kernel: Write protecting the kernel read-only data: 36864k
Jan 17 00:31:11.950796 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Jan 17 00:31:11.950814 kernel: Run /init as init process
Jan 17 00:31:11.950832 kernel: with arguments:
Jan 17 00:31:11.950849 kernel: /init
Jan 17 00:31:11.950867 kernel: with environment:
Jan 17 00:31:11.950884 kernel: HOME=/
Jan 17 00:31:11.950901 kernel: TERM=linux
Jan 17 00:31:11.950922 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 17 00:31:11.950947 systemd[1]: Detected virtualization amazon.
Jan 17 00:31:11.950963 systemd[1]: Detected architecture x86-64.
Jan 17 00:31:11.950981 systemd[1]: Running in initrd.
Jan 17 00:31:11.950999 systemd[1]: No hostname configured, using default hostname.
Jan 17 00:31:11.951017 systemd[1]: Hostname set to .
Jan 17 00:31:11.951036 systemd[1]: Initializing machine ID from VM UUID.
Jan 17 00:31:11.951054 systemd[1]: Queued start job for default target initrd.target.
Jan 17 00:31:11.951072 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 17 00:31:11.951094 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 17 00:31:11.951113 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 17 00:31:11.951132 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 17 00:31:11.951151 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 17 00:31:11.951173 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 17 00:31:11.952283 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 17 00:31:11.952302 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 17 00:31:11.952318 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 17 00:31:11.952335 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 17 00:31:11.952351 systemd[1]: Reached target paths.target - Path Units.
Jan 17 00:31:11.952366 systemd[1]: Reached target slices.target - Slice Units.
Jan 17 00:31:11.952383 systemd[1]: Reached target swap.target - Swaps.
Jan 17 00:31:11.952403 systemd[1]: Reached target timers.target - Timer Units.
Jan 17 00:31:11.952420 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 17 00:31:11.952436 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 17 00:31:11.952454 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 17 00:31:11.952472 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 17 00:31:11.952491 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 17 00:31:11.952511 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 17 00:31:11.952531 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 17 00:31:11.952550 systemd[1]: Reached target sockets.target - Socket Units.
Jan 17 00:31:11.952574 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 17 00:31:11.952592 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 17 00:31:11.952612 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 17 00:31:11.952631 systemd[1]: Starting systemd-fsck-usr.service...
Jan 17 00:31:11.952650 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 17 00:31:11.952670 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 17 00:31:11.952689 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 00:31:11.952751 systemd-journald[179]: Collecting audit messages is disabled.
Jan 17 00:31:11.952799 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 17 00:31:11.952818 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 17 00:31:11.952837 systemd[1]: Finished systemd-fsck-usr.service.
Jan 17 00:31:11.952859 systemd-journald[179]: Journal started
Jan 17 00:31:11.952894 systemd-journald[179]: Runtime Journal (/run/log/journal/ec2208766dc5780789ce1309b5b20432) is 4.7M, max 38.2M, 33.4M free.
Jan 17 00:31:11.956601 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 17 00:31:11.935643 systemd-modules-load[180]: Inserted module 'overlay'
Jan 17 00:31:11.963226 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 17 00:31:11.972475 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 17 00:31:11.974836 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 00:31:11.995193 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 17 00:31:11.996534 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 00:31:11.999305 kernel: Bridge firewalling registered
Jan 17 00:31:11.998443 systemd-modules-load[180]: Inserted module 'br_netfilter'
Jan 17 00:31:12.000404 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 17 00:31:12.001942 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 17 00:31:12.004034 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 17 00:31:12.010516 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 17 00:31:12.013489 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 17 00:31:12.016220 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 17 00:31:12.029336 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 00:31:12.040456 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 17 00:31:12.042711 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 17 00:31:12.044662 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 17 00:31:12.054526 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 17 00:31:12.056046 dracut-cmdline[212]: dracut-dracut-053
Jan 17 00:31:12.059612 dracut-cmdline[212]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=5950c0a3c50d11b7bc07a3e3bf06049ed0b5a605b5e0b52a981b78f1c63eeedd
Jan 17 00:31:12.105923 systemd-resolved[220]: Positive Trust Anchors:
Jan 17 00:31:12.105949 systemd-resolved[220]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 17 00:31:12.106011 systemd-resolved[220]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 17 00:31:12.114313 systemd-resolved[220]: Defaulting to hostname 'linux'.
Jan 17 00:31:12.117703 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 17 00:31:12.118439 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 17 00:31:12.151212 kernel: SCSI subsystem initialized
Jan 17 00:31:12.161209 kernel: Loading iSCSI transport class v2.0-870.
Jan 17 00:31:12.173280 kernel: iscsi: registered transport (tcp)
Jan 17 00:31:12.194225 kernel: iscsi: registered transport (qla4xxx)
Jan 17 00:31:12.194312 kernel: QLogic iSCSI HBA Driver
Jan 17 00:31:12.235216 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 17 00:31:12.240570 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 17 00:31:12.275655 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 17 00:31:12.275737 kernel: device-mapper: uevent: version 1.0.3
Jan 17 00:31:12.275760 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 17 00:31:12.319231 kernel: raid6: avx512x4 gen() 18001 MB/s
Jan 17 00:31:12.337209 kernel: raid6: avx512x2 gen() 18064 MB/s
Jan 17 00:31:12.355213 kernel: raid6: avx512x1 gen() 17954 MB/s
Jan 17 00:31:12.373215 kernel: raid6: avx2x4 gen() 17920 MB/s
Jan 17 00:31:12.391213 kernel: raid6: avx2x2 gen() 16998 MB/s
Jan 17 00:31:12.409524 kernel: raid6: avx2x1 gen() 13492 MB/s
Jan 17 00:31:12.409593 kernel: raid6: using algorithm avx512x2 gen() 18064 MB/s
Jan 17 00:31:12.428478 kernel: raid6: .... xor() 24613 MB/s, rmw enabled
Jan 17 00:31:12.428534 kernel: raid6: using avx512x2 recovery algorithm
Jan 17 00:31:12.450224 kernel: xor: automatically using best checksumming function avx
Jan 17 00:31:12.611213 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 17 00:31:12.621837 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 17 00:31:12.626418 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 17 00:31:12.653385 systemd-udevd[399]: Using default interface naming scheme 'v255'.
Jan 17 00:31:12.658293 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 17 00:31:12.665805 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 17 00:31:12.681068 dracut-pre-trigger[405]: rd.md=0: removing MD RAID activation
Jan 17 00:31:12.716549 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 17 00:31:12.722421 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 17 00:31:12.774809 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 17 00:31:12.783412 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 17 00:31:12.806398 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 17 00:31:12.809738 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 17 00:31:12.810393 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 17 00:31:12.815914 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 17 00:31:12.821386 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 17 00:31:12.848220 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 17 00:31:12.887208 kernel: cryptd: max_cpu_qlen set to 1000
Jan 17 00:31:12.899965 kernel: ena 0000:00:05.0: ENA device version: 0.10
Jan 17 00:31:12.900353 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Jan 17 00:31:12.917290 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
Jan 17 00:31:12.917621 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 17 00:31:12.918258 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 17 00:31:12.919324 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 00:31:12.930344 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:90:b1:71:34:e1
Jan 17 00:31:12.930611 kernel: AES CTR mode by8 optimization enabled
Jan 17 00:31:12.923630 (udev-worker)[460]: Network interface NamePolicy= disabled on kernel command line.
Jan 17 00:31:12.928618 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 00:31:12.929361 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 17 00:31:12.929713 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 00:31:12.933009 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 00:31:12.941242 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 00:31:12.950209 kernel: nvme nvme0: pci function 0000:00:04.0
Jan 17 00:31:12.950504 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jan 17 00:31:12.955848 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 17 00:31:12.956767 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 00:31:12.968518 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Jan 17 00:31:12.967509 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 00:31:12.980998 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 17 00:31:12.981067 kernel: GPT:9289727 != 33554431
Jan 17 00:31:12.981088 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 17 00:31:12.981116 kernel: GPT:9289727 != 33554431
Jan 17 00:31:12.981740 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 17 00:31:12.981770 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 17 00:31:12.989131 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 00:31:12.997470 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 00:31:13.017710 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 00:31:13.086206 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/nvme0n1p6 scanned by (udev-worker) (455)
Jan 17 00:31:13.092975 kernel: BTRFS: device fsid a67b5ac0-cdfd-426d-9386-e029282f433a devid 1 transid 33 /dev/nvme0n1p3 scanned by (udev-worker) (449)
Jan 17 00:31:13.113491 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Jan 17 00:31:13.127020 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Jan 17 00:31:13.138437 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Jan 17 00:31:13.138966 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Jan 17 00:31:13.145968 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Jan 17 00:31:13.153371 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 17 00:31:13.160278 disk-uuid[631]: Primary Header is updated.
Jan 17 00:31:13.160278 disk-uuid[631]: Secondary Entries is updated.
Jan 17 00:31:13.160278 disk-uuid[631]: Secondary Header is updated.
Jan 17 00:31:13.168285 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 17 00:31:13.175618 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 17 00:31:14.186242 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 17 00:31:14.186608 disk-uuid[632]: The operation has completed successfully.
Jan 17 00:31:14.304245 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 17 00:31:14.304347 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 17 00:31:14.322474 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 17 00:31:14.328368 sh[977]: Success
Jan 17 00:31:14.343584 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Jan 17 00:31:14.445895 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 17 00:31:14.453423 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 17 00:31:14.456867 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 17 00:31:14.500525 kernel: BTRFS info (device dm-0): first mount of filesystem a67b5ac0-cdfd-426d-9386-e029282f433a
Jan 17 00:31:14.500594 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 17 00:31:14.500608 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 17 00:31:14.502542 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 17 00:31:14.503849 kernel: BTRFS info (device dm-0): using free space tree
Jan 17 00:31:14.528225 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jan 17 00:31:14.542445 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 17 00:31:14.543548 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 17 00:31:14.550423 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 17 00:31:14.553386 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 17 00:31:14.578259 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0
Jan 17 00:31:14.578331 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Jan 17 00:31:14.580493 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 17 00:31:14.598207 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 17 00:31:14.609602 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 17 00:31:14.614217 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0
Jan 17 00:31:14.621104 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 17 00:31:14.628480 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 17 00:31:14.662546 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 17 00:31:14.669400 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 17 00:31:14.703667 systemd-networkd[1169]: lo: Link UP
Jan 17 00:31:14.705310 systemd-networkd[1169]: lo: Gained carrier
Jan 17 00:31:14.710090 systemd-networkd[1169]: Enumeration completed
Jan 17 00:31:14.711465 systemd-networkd[1169]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 17 00:31:14.711471 systemd-networkd[1169]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 17 00:31:14.713598 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 17 00:31:14.714409 systemd[1]: Reached target network.target - Network.
Jan 17 00:31:14.716205 systemd-networkd[1169]: eth0: Link UP
Jan 17 00:31:14.716210 systemd-networkd[1169]: eth0: Gained carrier
Jan 17 00:31:14.716226 systemd-networkd[1169]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 17 00:31:14.729305 systemd-networkd[1169]: eth0: DHCPv4 address 172.31.24.155/20, gateway 172.31.16.1 acquired from 172.31.16.1
Jan 17 00:31:14.848908 ignition[1123]: Ignition 2.19.0
Jan 17 00:31:14.848920 ignition[1123]: Stage: fetch-offline
Jan 17 00:31:14.849130 ignition[1123]: no configs at "/usr/lib/ignition/base.d"
Jan 17 00:31:14.849139 ignition[1123]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 17 00:31:14.849597 ignition[1123]: Ignition finished successfully
Jan 17 00:31:14.852170 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 17 00:31:14.857386 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 17 00:31:14.873604 ignition[1179]: Ignition 2.19.0
Jan 17 00:31:14.873618 ignition[1179]: Stage: fetch
Jan 17 00:31:14.874082 ignition[1179]: no configs at "/usr/lib/ignition/base.d"
Jan 17 00:31:14.874097 ignition[1179]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 17 00:31:14.874237 ignition[1179]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 17 00:31:14.882991 ignition[1179]: PUT result: OK
Jan 17 00:31:14.890922 ignition[1179]: parsed url from cmdline: ""
Jan 17 00:31:14.890931 ignition[1179]: no config URL provided
Jan 17 00:31:14.890939 ignition[1179]: reading system config file "/usr/lib/ignition/user.ign"
Jan 17 00:31:14.890952 ignition[1179]: no config at "/usr/lib/ignition/user.ign"
Jan 17 00:31:14.890972 ignition[1179]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 17 00:31:14.891793 ignition[1179]: PUT result: OK
Jan 17 00:31:14.891849 ignition[1179]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Jan 17 00:31:14.892737 ignition[1179]: GET result: OK
Jan 17 00:31:14.892791 ignition[1179]: parsing config with SHA512: bd0e9ee2533d324a15bbf73a2380c355f3cc9511682869d92f6f63ce8403e6ab1b0f6311d8adf93870c3aa7098e760f3741d3618764325cc01104985cfbd8e24
Jan 17 00:31:14.895676 unknown[1179]: fetched base config from "system"
Jan 17 00:31:14.895685 unknown[1179]: fetched base config from "system"
Jan 17 00:31:14.896274 ignition[1179]: fetch: fetch complete
Jan 17 00:31:14.895697 unknown[1179]: fetched user config from "aws"
Jan 17 00:31:14.896279 ignition[1179]: fetch: fetch passed
Jan 17 00:31:14.896331 ignition[1179]: Ignition finished successfully
Jan 17 00:31:14.898301 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 17 00:31:14.902455 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 17 00:31:14.930650 ignition[1186]: Ignition 2.19.0
Jan 17 00:31:14.930664 ignition[1186]: Stage: kargs
Jan 17 00:31:14.931148 ignition[1186]: no configs at "/usr/lib/ignition/base.d"
Jan 17 00:31:14.931162 ignition[1186]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 17 00:31:14.931303 ignition[1186]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 17 00:31:14.932475 ignition[1186]: PUT result: OK
Jan 17 00:31:14.935097 ignition[1186]: kargs: kargs passed
Jan 17 00:31:14.935197 ignition[1186]: Ignition finished successfully
Jan 17 00:31:14.936745 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 17 00:31:14.942420 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 17 00:31:14.958444 ignition[1192]: Ignition 2.19.0
Jan 17 00:31:14.958458 ignition[1192]: Stage: disks
Jan 17 00:31:14.958937 ignition[1192]: no configs at "/usr/lib/ignition/base.d"
Jan 17 00:31:14.958952 ignition[1192]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 17 00:31:14.959067 ignition[1192]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 17 00:31:14.960053 ignition[1192]: PUT result: OK
Jan 17 00:31:14.962862 ignition[1192]: disks: disks passed
Jan 17 00:31:14.962938 ignition[1192]: Ignition finished successfully
Jan 17 00:31:14.965273 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 17 00:31:14.965922 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 17 00:31:14.966356 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 17 00:31:14.966913 systemd[1]: Reached target local-fs.target - Local File Systems.
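(Aside, not part of the log: the Ignition fetch stage above follows the standard EC2 IMDSv2 sequence, a PUT to the token endpoint followed by a GET of user-data with the returned token. A minimal sketch of the same two requests, assuming Python's urllib; the endpoint paths are taken verbatim from the log, while the TTL header value and timeouts are illustrative.)

```python
import urllib.request

IMDS = "http://169.254.169.254"

# Step 1: obtain an IMDSv2 session token (the PUT logged by Ignition above).
token_req = urllib.request.Request(
    IMDS + "/latest/api/token",
    method="PUT",
    headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
)
token = urllib.request.urlopen(token_req, timeout=2).read().decode()

# Step 2: fetch user-data using the token (the GET logged by Ignition above).
data_req = urllib.request.Request(
    IMDS + "/2019-10-01/user-data",
    headers={"X-aws-ec2-metadata-token": token},
)
user_data = urllib.request.urlopen(data_req, timeout=2).read()
print(len(user_data), "bytes of user-data")
```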
Jan 17 00:31:14.967529 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 17 00:31:14.968260 systemd[1]: Reached target basic.target - Basic System.
Jan 17 00:31:14.973407 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 17 00:31:15.002513 systemd-fsck[1200]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 17 00:31:15.005491 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 17 00:31:15.011351 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 17 00:31:15.115212 kernel: EXT4-fs (nvme0n1p9): mounted filesystem ab055cfb-d92d-4784-aa05-26ea844796bc r/w with ordered data mode. Quota mode: none.
Jan 17 00:31:15.115826 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 17 00:31:15.116967 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 17 00:31:15.123306 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 17 00:31:15.126761 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 17 00:31:15.128027 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 17 00:31:15.128854 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 17 00:31:15.128887 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 17 00:31:15.134511 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 17 00:31:15.136268 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 17 00:31:15.149198 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/nvme0n1p6 scanned by mount (1219)
Jan 17 00:31:15.152232 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0
Jan 17 00:31:15.152293 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Jan 17 00:31:15.154659 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 17 00:31:15.169203 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 17 00:31:15.171548 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 17 00:31:15.326009 initrd-setup-root[1243]: cut: /sysroot/etc/passwd: No such file or directory
Jan 17 00:31:15.355199 initrd-setup-root[1250]: cut: /sysroot/etc/group: No such file or directory
Jan 17 00:31:15.359846 initrd-setup-root[1257]: cut: /sysroot/etc/shadow: No such file or directory
Jan 17 00:31:15.364583 initrd-setup-root[1264]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 17 00:31:15.541759 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 17 00:31:15.548336 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 17 00:31:15.551137 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 17 00:31:15.557333 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0
Jan 17 00:31:15.556997 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 17 00:31:15.587067 ignition[1331]: INFO : Ignition 2.19.0 Jan 17 00:31:15.587067 ignition[1331]: INFO : Stage: mount Jan 17 00:31:15.587067 ignition[1331]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 00:31:15.587067 ignition[1331]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 17 00:31:15.587067 ignition[1331]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 17 00:31:15.590908 ignition[1331]: INFO : PUT result: OK Jan 17 00:31:15.593126 ignition[1331]: INFO : mount: mount passed Jan 17 00:31:15.594846 ignition[1331]: INFO : Ignition finished successfully Jan 17 00:31:15.595520 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 17 00:31:15.602308 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 17 00:31:15.606338 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 17 00:31:15.616441 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 17 00:31:15.639240 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 scanned by mount (1345) Jan 17 00:31:15.643743 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0 Jan 17 00:31:15.643806 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jan 17 00:31:15.643830 kernel: BTRFS info (device nvme0n1p6): using free space tree Jan 17 00:31:15.651210 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 17 00:31:15.654025 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 17 00:31:15.673629 ignition[1361]: INFO : Ignition 2.19.0 Jan 17 00:31:15.673629 ignition[1361]: INFO : Stage: files Jan 17 00:31:15.675153 ignition[1361]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 00:31:15.675153 ignition[1361]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 17 00:31:15.675153 ignition[1361]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 17 00:31:15.675153 ignition[1361]: INFO : PUT result: OK Jan 17 00:31:15.677606 ignition[1361]: DEBUG : files: compiled without relabeling support, skipping Jan 17 00:31:15.678801 ignition[1361]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 17 00:31:15.678801 ignition[1361]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 17 00:31:15.684919 ignition[1361]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 17 00:31:15.685755 ignition[1361]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 17 00:31:15.685755 ignition[1361]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 17 00:31:15.685649 unknown[1361]: wrote ssh authorized keys file for user: core Jan 17 00:31:15.688899 ignition[1361]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" Jan 17 00:31:15.688899 ignition[1361]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" Jan 17 00:31:15.688899 ignition[1361]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 17 00:31:15.690694 ignition[1361]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 17 00:31:15.690694 ignition[1361]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 17 00:31:15.690694 ignition[1361]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 17 00:31:15.690694 ignition[1361]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 17 00:31:15.690694 ignition[1361]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Jan 17 00:31:16.229570 ignition[1361]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Jan 17 00:31:16.244433 systemd-networkd[1169]: eth0: Gained IPv6LL Jan 17 00:31:16.556163 ignition[1361]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 17 00:31:16.557141 ignition[1361]: INFO : files: createResultFile: createFiles: op(7): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 17 00:31:16.557141 ignition[1361]: INFO : files: createResultFile: createFiles: op(7): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 17 00:31:16.557141 ignition[1361]: INFO : files: files passed Jan 17 00:31:16.557141 ignition[1361]: INFO : Ignition finished successfully Jan 17 00:31:16.559299 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 17 00:31:16.566473 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 17 00:31:16.570413 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 17 00:31:16.574850 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 17 00:31:16.575618 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 17 00:31:16.595283 initrd-setup-root-after-ignition[1391]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 00:31:16.595283 initrd-setup-root-after-ignition[1391]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 17 00:31:16.597544 initrd-setup-root-after-ignition[1395]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 00:31:16.598829 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 00:31:16.599994 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 17 00:31:16.606429 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 17 00:31:16.645152 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 17 00:31:16.645305 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 17 00:31:16.646604 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 17 00:31:16.647782 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 17 00:31:16.648816 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 17 00:31:16.657602 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 17 00:31:16.674784 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. 
Jan 17 00:31:16.681441 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 17 00:31:16.694901 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 17 00:31:16.695627 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 00:31:16.696803 systemd[1]: Stopped target timers.target - Timer Units. Jan 17 00:31:16.697702 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 17 00:31:16.697887 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 00:31:16.699118 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 17 00:31:16.700160 systemd[1]: Stopped target basic.target - Basic System. Jan 17 00:31:16.700911 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 17 00:31:16.701706 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 17 00:31:16.702500 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 17 00:31:16.703309 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 17 00:31:16.704259 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 17 00:31:16.705063 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 17 00:31:16.706279 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 17 00:31:16.707038 systemd[1]: Stopped target swap.target - Swaps. Jan 17 00:31:16.707777 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 17 00:31:16.708119 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 17 00:31:16.709192 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 17 00:31:16.709995 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 00:31:16.710703 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 17 00:31:16.711048 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 00:31:16.711553 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 17 00:31:16.711730 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 17 00:31:16.713285 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 17 00:31:16.713473 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 00:31:16.714228 systemd[1]: ignition-files.service: Deactivated successfully. Jan 17 00:31:16.714382 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 17 00:31:16.721569 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 17 00:31:16.722949 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 17 00:31:16.723193 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 00:31:16.727573 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 17 00:31:16.729122 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 17 00:31:16.730896 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 00:31:16.734678 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 17 00:31:16.736597 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. 
Jan 17 00:31:16.744853 ignition[1415]: INFO : Ignition 2.19.0 Jan 17 00:31:16.744853 ignition[1415]: INFO : Stage: umount Jan 17 00:31:16.744853 ignition[1415]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 00:31:16.744853 ignition[1415]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 17 00:31:16.744853 ignition[1415]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 17 00:31:16.745537 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 17 00:31:16.750854 ignition[1415]: INFO : PUT result: OK Jan 17 00:31:16.750854 ignition[1415]: INFO : umount: umount passed Jan 17 00:31:16.750854 ignition[1415]: INFO : Ignition finished successfully Jan 17 00:31:16.745641 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 17 00:31:16.752725 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 17 00:31:16.752858 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 17 00:31:16.756677 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 17 00:31:16.756800 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 17 00:31:16.757430 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 17 00:31:16.757492 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 17 00:31:16.757994 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 17 00:31:16.758055 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 17 00:31:16.758611 systemd[1]: Stopped target network.target - Network. Jan 17 00:31:16.759311 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 17 00:31:16.759378 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 17 00:31:16.760672 systemd[1]: Stopped target paths.target - Path Units. Jan 17 00:31:16.760931 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 17 00:31:16.761453 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 00:31:16.762279 systemd[1]: Stopped target slices.target - Slice Units. Jan 17 00:31:16.762766 systemd[1]: Stopped target sockets.target - Socket Units. Jan 17 00:31:16.765725 systemd[1]: iscsid.socket: Deactivated successfully. Jan 17 00:31:16.765788 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 17 00:31:16.766294 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 17 00:31:16.766340 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 17 00:31:16.766800 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 17 00:31:16.766863 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 17 00:31:16.767421 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 17 00:31:16.767476 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 17 00:31:16.768937 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 17 00:31:16.770413 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 17 00:31:16.775247 systemd-networkd[1169]: eth0: DHCPv6 lease lost Jan 17 00:31:16.776727 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 17 00:31:16.778597 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 17 00:31:16.778829 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 17 00:31:16.783907 systemd[1]: systemd-networkd.service: Deactivated successfully. 
Jan 17 00:31:16.784077 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 17 00:31:16.785956 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 17 00:31:16.786090 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 17 00:31:16.788517 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 17 00:31:16.788581 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 17 00:31:16.789351 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 17 00:31:16.789415 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 17 00:31:16.794323 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 17 00:31:16.794915 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 17 00:31:16.794995 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 17 00:31:16.795674 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 17 00:31:16.795731 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 17 00:31:16.796486 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 17 00:31:16.796543 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 17 00:31:16.797157 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 17 00:31:16.797234 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 00:31:16.802393 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 00:31:16.815802 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 17 00:31:16.816086 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 17 00:31:16.821110 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 17 00:31:16.821339 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 00:31:16.822551 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 17 00:31:16.822616 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 17 00:31:16.823447 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 17 00:31:16.823496 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 00:31:16.824381 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 17 00:31:16.824447 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 17 00:31:16.825570 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 17 00:31:16.825633 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 17 00:31:16.826814 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 17 00:31:16.826875 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 00:31:16.832466 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 17 00:31:16.833785 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 17 00:31:16.834310 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 00:31:16.837659 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 00:31:16.837744 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:31:16.842800 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. 
Jan 17 00:31:16.842906 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 17 00:31:16.843661 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 17 00:31:16.850391 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 17 00:31:16.859821 systemd[1]: Switching root. Jan 17 00:31:16.893205 systemd-journald[179]: Received SIGTERM from PID 1 (systemd). Jan 17 00:31:16.893280 systemd-journald[179]: Journal stopped Jan 17 00:31:18.536422 kernel: SELinux: policy capability network_peer_controls=1 Jan 17 00:31:18.536550 kernel: SELinux: policy capability open_perms=1 Jan 17 00:31:18.536578 kernel: SELinux: policy capability extended_socket_class=1 Jan 17 00:31:18.536600 kernel: SELinux: policy capability always_check_network=0 Jan 17 00:31:18.536624 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 17 00:31:18.536647 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 17 00:31:18.536672 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 17 00:31:18.536709 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 17 00:31:18.536733 kernel: audit: type=1403 audit(1768609877.386:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 17 00:31:18.536759 systemd[1]: Successfully loaded SELinux policy in 64.834ms. Jan 17 00:31:18.536795 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.512ms. Jan 17 00:31:18.536824 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 17 00:31:18.536851 systemd[1]: Detected virtualization amazon. Jan 17 00:31:18.536876 systemd[1]: Detected architecture x86-64. Jan 17 00:31:18.536904 systemd[1]: Detected first boot. Jan 17 00:31:18.536931 systemd[1]: Initializing machine ID from VM UUID. Jan 17 00:31:18.536956 zram_generator::config[1459]: No configuration found. Jan 17 00:31:18.536983 systemd[1]: Populated /etc with preset unit settings. Jan 17 00:31:18.537007 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 17 00:31:18.537031 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 17 00:31:18.537061 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 17 00:31:18.537089 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 17 00:31:18.537115 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 17 00:31:18.537141 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 17 00:31:18.537167 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 17 00:31:18.539943 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 17 00:31:18.539980 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 17 00:31:18.540008 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 17 00:31:18.540035 systemd[1]: Created slice user.slice - User and Session Slice. Jan 17 00:31:18.540071 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Jan 17 00:31:18.540106 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 00:31:18.540140 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 17 00:31:18.540166 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 17 00:31:18.540212 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 17 00:31:18.540238 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 17 00:31:18.540263 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 17 00:31:18.540286 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 00:31:18.540320 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 17 00:31:18.540352 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 17 00:31:18.540378 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 17 00:31:18.540404 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 17 00:31:18.540429 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 00:31:18.540454 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 17 00:31:18.540480 systemd[1]: Reached target slices.target - Slice Units. Jan 17 00:31:18.540504 systemd[1]: Reached target swap.target - Swaps. Jan 17 00:31:18.540530 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 17 00:31:18.540560 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 17 00:31:18.540584 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 17 00:31:18.540610 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 17 00:31:18.540635 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 00:31:18.540660 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 17 00:31:18.540685 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 17 00:31:18.540711 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 17 00:31:18.540734 systemd[1]: Mounting media.mount - External Media Directory... Jan 17 00:31:18.540761 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:31:18.540791 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 17 00:31:18.540817 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 17 00:31:18.540843 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 17 00:31:18.540868 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 17 00:31:18.540891 systemd[1]: Reached target machines.target - Containers. Jan 17 00:31:18.540918 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 17 00:31:18.540944 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 00:31:18.540969 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... 
Jan 17 00:31:18.540998 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 17 00:31:18.541023 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 00:31:18.541048 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 17 00:31:18.541073 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 00:31:18.541101 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 17 00:31:18.541126 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 00:31:18.541151 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 17 00:31:18.545228 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 17 00:31:18.545312 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 17 00:31:18.545341 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 17 00:31:18.545368 systemd[1]: Stopped systemd-fsck-usr.service. Jan 17 00:31:18.545393 kernel: loop: module loaded Jan 17 00:31:18.545419 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 17 00:31:18.545444 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 17 00:31:18.545469 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 17 00:31:18.545496 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 17 00:31:18.545522 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 17 00:31:18.545548 systemd[1]: verity-setup.service: Deactivated successfully. Jan 17 00:31:18.545579 systemd[1]: Stopped verity-setup.service. Jan 17 00:31:18.545603 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:31:18.545629 kernel: ACPI: bus type drm_connector registered Jan 17 00:31:18.545653 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 17 00:31:18.545678 kernel: fuse: init (API version 7.39) Jan 17 00:31:18.545702 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 17 00:31:18.545728 systemd[1]: Mounted media.mount - External Media Directory. Jan 17 00:31:18.545752 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 17 00:31:18.545783 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 17 00:31:18.545809 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 17 00:31:18.545837 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 00:31:18.545861 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 17 00:31:18.545888 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 17 00:31:18.545917 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 00:31:18.545944 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 00:31:18.545969 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 17 00:31:18.545995 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 17 00:31:18.546021 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Jan 17 00:31:18.546055 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 00:31:18.546081 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 17 00:31:18.546108 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 17 00:31:18.546169 systemd-journald[1548]: Collecting audit messages is disabled. Jan 17 00:31:18.546262 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 17 00:31:18.546285 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 00:31:18.546308 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 00:31:18.546339 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 17 00:31:18.546365 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 17 00:31:18.546391 systemd-journald[1548]: Journal started Jan 17 00:31:18.546438 systemd-journald[1548]: Runtime Journal (/run/log/journal/ec2208766dc5780789ce1309b5b20432) is 4.7M, max 38.2M, 33.4M free. Jan 17 00:31:18.144422 systemd[1]: Queued start job for default target multi-user.target. Jan 17 00:31:18.162665 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jan 17 00:31:18.163271 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 17 00:31:18.550248 systemd[1]: Started systemd-journald.service - Journal Service. Jan 17 00:31:18.553112 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 17 00:31:18.572442 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 17 00:31:18.584275 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 17 00:31:18.587646 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 17 00:31:18.588460 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 17 00:31:18.588510 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 17 00:31:18.590797 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 17 00:31:18.599112 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 17 00:31:18.602369 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 17 00:31:18.603428 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 00:31:18.610422 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 17 00:31:18.612526 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 17 00:31:18.613402 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 17 00:31:18.617373 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 17 00:31:18.618123 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 17 00:31:18.628428 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 17 00:31:18.635433 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 17 00:31:18.640479 systemd[1]: Starting systemd-sysusers.service - Create System Users... 
Jan 17 00:31:18.646202 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 00:31:18.647914 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 17 00:31:18.649482 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 17 00:31:18.664987 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 17 00:31:18.676819 systemd-journald[1548]: Time spent on flushing to /var/log/journal/ec2208766dc5780789ce1309b5b20432 is 118.603ms for 966 entries. Jan 17 00:31:18.676819 systemd-journald[1548]: System Journal (/var/log/journal/ec2208766dc5780789ce1309b5b20432) is 8.0M, max 195.6M, 187.6M free. Jan 17 00:31:18.806354 systemd-journald[1548]: Received client request to flush runtime journal. Jan 17 00:31:18.806429 kernel: loop0: detected capacity change from 0 to 140768 Jan 17 00:31:18.680038 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 17 00:31:18.681635 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 17 00:31:18.684053 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 17 00:31:18.694057 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 17 00:31:18.741679 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 17 00:31:18.775118 udevadm[1595]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 17 00:31:18.810711 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 17 00:31:18.828872 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 17 00:31:18.831487 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 17 00:31:18.836381 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 17 00:31:18.852500 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 17 00:31:18.855916 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 17 00:31:18.894140 kernel: loop1: detected capacity change from 0 to 142488 Jan 17 00:31:18.895572 systemd-tmpfiles[1609]: ACLs are not supported, ignoring. Jan 17 00:31:18.895601 systemd-tmpfiles[1609]: ACLs are not supported, ignoring. Jan 17 00:31:18.903607 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 00:31:18.966832 kernel: loop2: detected capacity change from 0 to 224512 Jan 17 00:31:19.070210 kernel: loop3: detected capacity change from 0 to 61336 Jan 17 00:31:19.160741 kernel: loop4: detected capacity change from 0 to 140768 Jan 17 00:31:19.198220 kernel: loop5: detected capacity change from 0 to 142488 Jan 17 00:31:19.242217 kernel: loop6: detected capacity change from 0 to 224512 Jan 17 00:31:19.281219 kernel: loop7: detected capacity change from 0 to 61336 Jan 17 00:31:19.296898 (sd-merge)[1615]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Jan 17 00:31:19.297608 (sd-merge)[1615]: Merged extensions into '/usr'. Jan 17 00:31:19.304908 systemd[1]: Reloading requested from client PID 1588 ('systemd-sysext') (unit systemd-sysext.service)... Jan 17 00:31:19.305082 systemd[1]: Reloading... Jan 17 00:31:19.379327 zram_generator::config[1638]: No configuration found. 
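[editor's note] The loop0-loop7 capacity changes and the "(sd-merge)" lines above are systemd-sysext at work: it discovers the staged extension images (containerd-flatcar, docker-flatcar, kubernetes, oem-ami), attaches them as loop devices, and merges them into /usr, which is what triggers the daemon reload that follows. A small sketch, assuming the documented default sysext search directories, of the discovery step that decides which images such a merge would pick up.

# Rough sketch of systemd-sysext's discovery step: list *.raw images (and
# symlinks to them, such as the kubernetes.raw link written by the files
# stage) from the usual search directories. The directory set is the
# documented default and is assumed here, not taken from this log.
from pathlib import Path

SEARCH_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

def discover_extensions():
    found = {}
    for d in map(Path, SEARCH_DIRS):
        if not d.is_dir():
            continue
        for entry in sorted(d.glob("*.raw")):
            target = entry.resolve()  # follow symlinks like kubernetes.raw
            found[entry.stem] = target
    return found

if __name__ == "__main__":
    for name, image in discover_extensions().items():
        print(f"{name}: {image}")
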
Jan 17 00:31:19.626536 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 00:31:19.713119 systemd[1]: Reloading finished in 407 ms. Jan 17 00:31:19.744212 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 17 00:31:19.745263 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 17 00:31:19.756564 systemd[1]: Starting ensure-sysext.service... Jan 17 00:31:19.760428 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 17 00:31:19.773406 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 00:31:19.779368 systemd[1]: Reloading requested from client PID 1693 ('systemctl') (unit ensure-sysext.service)... Jan 17 00:31:19.779538 systemd[1]: Reloading... Jan 17 00:31:19.811605 systemd-tmpfiles[1694]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 17 00:31:19.816644 systemd-tmpfiles[1694]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 17 00:31:19.820261 systemd-tmpfiles[1694]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 17 00:31:19.822036 systemd-tmpfiles[1694]: ACLs are not supported, ignoring. Jan 17 00:31:19.828757 systemd-tmpfiles[1694]: ACLs are not supported, ignoring. Jan 17 00:31:19.830429 systemd-udevd[1695]: Using default interface naming scheme 'v255'. Jan 17 00:31:19.841650 systemd-tmpfiles[1694]: Detected autofs mount point /boot during canonicalization of boot. Jan 17 00:31:19.841666 systemd-tmpfiles[1694]: Skipping /boot Jan 17 00:31:19.885510 systemd-tmpfiles[1694]: Detected autofs mount point /boot during canonicalization of boot. Jan 17 00:31:19.887256 systemd-tmpfiles[1694]: Skipping /boot Jan 17 00:31:19.889200 zram_generator::config[1719]: No configuration found. Jan 17 00:31:20.060566 (udev-worker)[1744]: Network interface NamePolicy= disabled on kernel command line. Jan 17 00:31:20.200203 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Jan 17 00:31:20.224233 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jan 17 00:31:20.243277 ldconfig[1583]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 17 00:31:20.251212 kernel: ACPI: button: Power Button [PWRF] Jan 17 00:31:20.261371 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 33 scanned by (udev-worker) (1744) Jan 17 00:31:20.261466 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input3 Jan 17 00:31:20.268571 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 00:31:20.276219 kernel: ACPI: button: Sleep Button [SLPF] Jan 17 00:31:20.312238 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input4 Jan 17 00:31:20.407842 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 17 00:31:20.409076 systemd[1]: Reloading finished in 628 ms. Jan 17 00:31:20.432040 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Jan 17 00:31:20.434661 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 17 00:31:20.437684 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 00:31:20.480710 systemd[1]: Finished ensure-sysext.service. Jan 17 00:31:20.498711 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:31:20.506481 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 17 00:31:20.514127 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 17 00:31:20.517927 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 00:31:20.528323 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 00:31:20.536515 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 17 00:31:20.541396 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 00:31:20.547400 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 00:31:20.548239 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 00:31:20.558624 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 17 00:31:20.570431 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 17 00:31:20.585866 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 17 00:31:20.588309 systemd[1]: Reached target time-set.target - System Time Set. Jan 17 00:31:20.591730 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 17 00:31:20.593268 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:31:20.594029 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 00:31:20.596435 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 00:31:20.612206 kernel: mousedev: PS/2 mouse device common for all mice Jan 17 00:31:20.636437 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 00:31:20.637024 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 00:31:20.649709 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jan 17 00:31:20.658089 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 17 00:31:20.658826 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 17 00:31:20.674831 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 00:31:20.675269 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 00:31:20.687663 augenrules[1914]: No rules Jan 17 00:31:20.689156 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 17 00:31:20.690928 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 17 00:31:20.694487 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 17 00:31:20.697140 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 17 00:31:20.707444 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... 
Jan 17 00:31:20.714466 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 17 00:31:20.715227 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 17 00:31:20.715318 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 17 00:31:20.718062 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 17 00:31:20.727465 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 17 00:31:20.730825 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 00:31:20.752523 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 17 00:31:20.753491 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 17 00:31:20.761022 lvm[1924]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 17 00:31:20.765248 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 17 00:31:20.771617 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 17 00:31:20.804606 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 17 00:31:20.805495 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 17 00:31:20.813552 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 17 00:31:20.816116 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 17 00:31:20.838108 lvm[1936]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 17 00:31:20.876838 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 17 00:31:20.918293 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:31:20.929839 systemd-networkd[1900]: lo: Link UP Jan 17 00:31:20.929851 systemd-networkd[1900]: lo: Gained carrier Jan 17 00:31:20.931326 systemd-networkd[1900]: Enumeration completed Jan 17 00:31:20.931967 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 17 00:31:20.932894 systemd-networkd[1900]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 00:31:20.932904 systemd-networkd[1900]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 17 00:31:20.939361 systemd-networkd[1900]: eth0: Link UP Jan 17 00:31:20.939853 systemd-networkd[1900]: eth0: Gained carrier Jan 17 00:31:20.939902 systemd-networkd[1900]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 00:31:20.941936 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 17 00:31:20.945547 systemd-resolved[1901]: Positive Trust Anchors: Jan 17 00:31:20.945938 systemd-resolved[1901]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 17 00:31:20.946003 systemd-resolved[1901]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 17 00:31:20.949263 systemd-networkd[1900]: eth0: DHCPv4 address 172.31.24.155/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jan 17 00:31:20.964315 systemd-resolved[1901]: Defaulting to hostname 'linux'. Jan 17 00:31:20.966488 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 17 00:31:20.967070 systemd[1]: Reached target network.target - Network. Jan 17 00:31:20.967551 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 17 00:31:20.968137 systemd[1]: Reached target sysinit.target - System Initialization. Jan 17 00:31:20.968681 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 17 00:31:20.969112 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 17 00:31:20.969678 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 17 00:31:20.970138 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 17 00:31:20.970615 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 17 00:31:20.970995 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 17 00:31:20.971040 systemd[1]: Reached target paths.target - Path Units. Jan 17 00:31:20.971459 systemd[1]: Reached target timers.target - Timer Units. Jan 17 00:31:20.972333 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 17 00:31:20.974087 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 17 00:31:20.985228 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 17 00:31:20.986412 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 17 00:31:20.986958 systemd[1]: Reached target sockets.target - Socket Units. Jan 17 00:31:20.987404 systemd[1]: Reached target basic.target - Basic System. Jan 17 00:31:20.987849 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 17 00:31:20.987985 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 17 00:31:20.989215 systemd[1]: Starting containerd.service - containerd container runtime... Jan 17 00:31:20.993400 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 17 00:31:20.998868 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 17 00:31:21.004344 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 17 00:31:21.007515 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
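[editor's note] The DHCPv4 lease logged above (172.31.24.155/20 with gateway 172.31.16.1, both offered by 172.31.16.1) can be sanity-checked with Python's standard ipaddress module: the /20 prefix places the host in 172.31.16.0/20, and the advertised gateway falls inside that same on-link subnet. A short worked check using only the values copied from the log line.

# Quick check of the lease logged above: interface address/prefix and the
# gateway it reports. Values are copied verbatim from the log line.
import ipaddress

iface = ipaddress.ip_interface("172.31.24.155/20")
gateway = ipaddress.ip_address("172.31.16.1")

print(iface.network)                # 172.31.16.0/20
print(gateway in iface.network)     # True: the gateway is on-link
print(iface.network.num_addresses)  # 4096 addresses in a /20
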
Jan 17 00:31:21.008131 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 17 00:31:21.011136 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 17 00:31:21.015052 systemd[1]: Started ntpd.service - Network Time Service. Jan 17 00:31:21.024310 systemd[1]: Starting setup-oem.service - Setup OEM... Jan 17 00:31:21.030389 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 17 00:31:21.039569 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 17 00:31:21.055475 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 17 00:31:21.060036 jq[1954]: false Jan 17 00:31:21.057638 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 17 00:31:21.065686 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 17 00:31:21.071080 systemd[1]: Starting update-engine.service - Update Engine... Jan 17 00:31:21.075028 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 17 00:31:21.096750 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 17 00:31:21.096984 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 17 00:31:21.131003 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 17 00:31:21.131940 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 17 00:31:21.138893 (ntainerd)[1973]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 17 00:31:21.144475 extend-filesystems[1955]: Found loop4 Jan 17 00:31:21.147498 extend-filesystems[1955]: Found loop5 Jan 17 00:31:21.147498 extend-filesystems[1955]: Found loop6 Jan 17 00:31:21.147498 extend-filesystems[1955]: Found loop7 Jan 17 00:31:21.147498 extend-filesystems[1955]: Found nvme0n1 Jan 17 00:31:21.147498 extend-filesystems[1955]: Found nvme0n1p1 Jan 17 00:31:21.147498 extend-filesystems[1955]: Found nvme0n1p2 Jan 17 00:31:21.147498 extend-filesystems[1955]: Found nvme0n1p3 Jan 17 00:31:21.147498 extend-filesystems[1955]: Found usr Jan 17 00:31:21.147498 extend-filesystems[1955]: Found nvme0n1p4 Jan 17 00:31:21.147498 extend-filesystems[1955]: Found nvme0n1p6 Jan 17 00:31:21.147498 extend-filesystems[1955]: Found nvme0n1p7 Jan 17 00:31:21.147498 extend-filesystems[1955]: Found nvme0n1p9 Jan 17 00:31:21.147498 extend-filesystems[1955]: Checking size of /dev/nvme0n1p9 Jan 17 00:31:21.199269 jq[1965]: true Jan 17 00:31:21.218039 systemd[1]: motdgen.service: Deactivated successfully. Jan 17 00:31:21.219553 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 17 00:31:21.233444 extend-filesystems[1955]: Resized partition /dev/nvme0n1p9 Jan 17 00:31:21.232328 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 17 00:31:21.232092 dbus-daemon[1953]: [system] SELinux support is enabled Jan 17 00:31:21.237472 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
Jan 17 00:31:21.237514 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 17 00:31:21.238358 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 17 00:31:21.238383 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 17 00:31:21.252524 systemd[1]: Finished setup-oem.service - Setup OEM. Jan 17 00:31:21.255093 dbus-daemon[1953]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1900 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 17 00:31:21.262805 extend-filesystems[2000]: resize2fs 1.47.1 (20-May-2024) Jan 17 00:31:21.275207 update_engine[1964]: I20260117 00:31:21.268293 1964 main.cc:92] Flatcar Update Engine starting Jan 17 00:31:21.275566 ntpd[1957]: 17 Jan 00:31:21 ntpd[1957]: ntpd 4.2.8p17@1.4004-o Fri Jan 16 21:54:12 UTC 2026 (1): Starting Jan 17 00:31:21.275566 ntpd[1957]: 17 Jan 00:31:21 ntpd[1957]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 17 00:31:21.275566 ntpd[1957]: 17 Jan 00:31:21 ntpd[1957]: ---------------------------------------------------- Jan 17 00:31:21.275566 ntpd[1957]: 17 Jan 00:31:21 ntpd[1957]: ntp-4 is maintained by Network Time Foundation, Jan 17 00:31:21.275566 ntpd[1957]: 17 Jan 00:31:21 ntpd[1957]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 17 00:31:21.275566 ntpd[1957]: 17 Jan 00:31:21 ntpd[1957]: corporation. Support and training for ntp-4 are Jan 17 00:31:21.275566 ntpd[1957]: 17 Jan 00:31:21 ntpd[1957]: available at https://www.nwtime.org/support Jan 17 00:31:21.275566 ntpd[1957]: 17 Jan 00:31:21 ntpd[1957]: ---------------------------------------------------- Jan 17 00:31:21.272323 ntpd[1957]: ntpd 4.2.8p17@1.4004-o Fri Jan 16 21:54:12 UTC 2026 (1): Starting Jan 17 00:31:21.271427 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jan 17 00:31:21.272351 ntpd[1957]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 17 00:31:21.272362 ntpd[1957]: ---------------------------------------------------- Jan 17 00:31:21.272373 ntpd[1957]: ntp-4 is maintained by Network Time Foundation, Jan 17 00:31:21.272383 ntpd[1957]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 17 00:31:21.272393 ntpd[1957]: corporation. 
Support and training for ntp-4 are Jan 17 00:31:21.272403 ntpd[1957]: available at https://www.nwtime.org/support Jan 17 00:31:21.272412 ntpd[1957]: ---------------------------------------------------- Jan 17 00:31:21.281201 jq[1993]: true Jan 17 00:31:21.281490 ntpd[1957]: 17 Jan 00:31:21 ntpd[1957]: proto: precision = 0.062 usec (-24) Jan 17 00:31:21.281490 ntpd[1957]: 17 Jan 00:31:21 ntpd[1957]: basedate set to 2026-01-04 Jan 17 00:31:21.281490 ntpd[1957]: 17 Jan 00:31:21 ntpd[1957]: gps base set to 2026-01-04 (week 2400) Jan 17 00:31:21.277372 ntpd[1957]: proto: precision = 0.062 usec (-24) Jan 17 00:31:21.279423 ntpd[1957]: basedate set to 2026-01-04 Jan 17 00:31:21.279443 ntpd[1957]: gps base set to 2026-01-04 (week 2400) Jan 17 00:31:21.284215 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 3587067 blocks Jan 17 00:31:21.286658 ntpd[1957]: Listen and drop on 0 v6wildcard [::]:123 Jan 17 00:31:21.286930 ntpd[1957]: 17 Jan 00:31:21 ntpd[1957]: Listen and drop on 0 v6wildcard [::]:123 Jan 17 00:31:21.286930 ntpd[1957]: 17 Jan 00:31:21 ntpd[1957]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 17 00:31:21.286727 ntpd[1957]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 17 00:31:21.287263 ntpd[1957]: Listen normally on 2 lo 127.0.0.1:123 Jan 17 00:31:21.288293 ntpd[1957]: 17 Jan 00:31:21 ntpd[1957]: Listen normally on 2 lo 127.0.0.1:123 Jan 17 00:31:21.288293 ntpd[1957]: 17 Jan 00:31:21 ntpd[1957]: Listen normally on 3 eth0 172.31.24.155:123 Jan 17 00:31:21.288293 ntpd[1957]: 17 Jan 00:31:21 ntpd[1957]: Listen normally on 4 lo [::1]:123 Jan 17 00:31:21.288293 ntpd[1957]: 17 Jan 00:31:21 ntpd[1957]: bind(21) AF_INET6 fe80::490:b1ff:fe71:34e1%2#123 flags 0x11 failed: Cannot assign requested address Jan 17 00:31:21.288293 ntpd[1957]: 17 Jan 00:31:21 ntpd[1957]: unable to create socket on eth0 (5) for fe80::490:b1ff:fe71:34e1%2#123 Jan 17 00:31:21.288293 ntpd[1957]: 17 Jan 00:31:21 ntpd[1957]: failed to init interface for address fe80::490:b1ff:fe71:34e1%2 Jan 17 00:31:21.288293 ntpd[1957]: 17 Jan 00:31:21 ntpd[1957]: Listening on routing socket on fd #21 for interface updates Jan 17 00:31:21.287310 ntpd[1957]: Listen normally on 3 eth0 172.31.24.155:123 Jan 17 00:31:21.287353 ntpd[1957]: Listen normally on 4 lo [::1]:123 Jan 17 00:31:21.287400 ntpd[1957]: bind(21) AF_INET6 fe80::490:b1ff:fe71:34e1%2#123 flags 0x11 failed: Cannot assign requested address Jan 17 00:31:21.287422 ntpd[1957]: unable to create socket on eth0 (5) for fe80::490:b1ff:fe71:34e1%2#123 Jan 17 00:31:21.287438 ntpd[1957]: failed to init interface for address fe80::490:b1ff:fe71:34e1%2 Jan 17 00:31:21.287470 ntpd[1957]: Listening on routing socket on fd #21 for interface updates Jan 17 00:31:21.296089 systemd[1]: Started update-engine.service - Update Engine. Jan 17 00:31:21.300838 ntpd[1957]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 17 00:31:21.301888 ntpd[1957]: 17 Jan 00:31:21 ntpd[1957]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 17 00:31:21.301888 ntpd[1957]: 17 Jan 00:31:21 ntpd[1957]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 17 00:31:21.300884 ntpd[1957]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 17 00:31:21.308433 update_engine[1964]: I20260117 00:31:21.304471 1964 update_check_scheduler.cc:74] Next update check in 9m49s Jan 17 00:31:21.305433 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Jan 17 00:31:21.309748 coreos-metadata[1952]: Jan 17 00:31:21.308 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 17 00:31:21.310127 coreos-metadata[1952]: Jan 17 00:31:21.309 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Jan 17 00:31:21.312795 coreos-metadata[1952]: Jan 17 00:31:21.310 INFO Fetch successful Jan 17 00:31:21.312795 coreos-metadata[1952]: Jan 17 00:31:21.310 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Jan 17 00:31:21.312795 coreos-metadata[1952]: Jan 17 00:31:21.311 INFO Fetch successful Jan 17 00:31:21.312795 coreos-metadata[1952]: Jan 17 00:31:21.311 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Jan 17 00:31:21.312795 coreos-metadata[1952]: Jan 17 00:31:21.312 INFO Fetch successful Jan 17 00:31:21.312795 coreos-metadata[1952]: Jan 17 00:31:21.312 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Jan 17 00:31:21.312795 coreos-metadata[1952]: Jan 17 00:31:21.312 INFO Fetch successful Jan 17 00:31:21.313170 coreos-metadata[1952]: Jan 17 00:31:21.312 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Jan 17 00:31:21.313829 coreos-metadata[1952]: Jan 17 00:31:21.313 INFO Fetch failed with 404: resource not found Jan 17 00:31:21.314288 coreos-metadata[1952]: Jan 17 00:31:21.314 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Jan 17 00:31:21.319678 coreos-metadata[1952]: Jan 17 00:31:21.318 INFO Fetch successful Jan 17 00:31:21.319678 coreos-metadata[1952]: Jan 17 00:31:21.319 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Jan 17 00:31:21.322574 coreos-metadata[1952]: Jan 17 00:31:21.322 INFO Fetch successful Jan 17 00:31:21.322574 coreos-metadata[1952]: Jan 17 00:31:21.322 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Jan 17 00:31:21.333613 coreos-metadata[1952]: Jan 17 00:31:21.326 INFO Fetch successful Jan 17 00:31:21.333613 coreos-metadata[1952]: Jan 17 00:31:21.326 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Jan 17 00:31:21.333613 coreos-metadata[1952]: Jan 17 00:31:21.333 INFO Fetch successful Jan 17 00:31:21.333613 coreos-metadata[1952]: Jan 17 00:31:21.333 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Jan 17 00:31:21.337407 systemd-logind[1961]: Watching system buttons on /dev/input/event1 (Power Button) Jan 17 00:31:21.341319 coreos-metadata[1952]: Jan 17 00:31:21.339 INFO Fetch successful Jan 17 00:31:21.342542 systemd-logind[1961]: Watching system buttons on /dev/input/event2 (Sleep Button) Jan 17 00:31:21.342579 systemd-logind[1961]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 17 00:31:21.349300 systemd-logind[1961]: New seat seat0. Jan 17 00:31:21.357576 systemd[1]: Started systemd-logind.service - User Login Management. Jan 17 00:31:21.427523 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 17 00:31:21.431427 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Jan 17 00:31:21.447207 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 33 scanned by (udev-worker) (1741) Jan 17 00:31:21.560492 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 3587067 Jan 17 00:31:21.577881 bash[2040]: Updated "/home/core/.ssh/authorized_keys" Jan 17 00:31:21.578561 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 17 00:31:21.588266 extend-filesystems[2000]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jan 17 00:31:21.588266 extend-filesystems[2000]: old_desc_blocks = 1, new_desc_blocks = 2 Jan 17 00:31:21.588266 extend-filesystems[2000]: The filesystem on /dev/nvme0n1p9 is now 3587067 (4k) blocks long. Jan 17 00:31:21.604941 extend-filesystems[1955]: Resized filesystem in /dev/nvme0n1p9 Jan 17 00:31:21.592817 systemd[1]: Starting sshkeys.service... Jan 17 00:31:21.595541 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 17 00:31:21.596132 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 17 00:31:21.604527 locksmithd[2005]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 17 00:31:21.618956 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 17 00:31:21.625749 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 17 00:31:21.785350 coreos-metadata[2094]: Jan 17 00:31:21.785 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 17 00:31:21.790140 dbus-daemon[1953]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 17 00:31:21.790923 coreos-metadata[2094]: Jan 17 00:31:21.790 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jan 17 00:31:21.791271 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jan 17 00:31:21.791774 coreos-metadata[2094]: Jan 17 00:31:21.791 INFO Fetch successful Jan 17 00:31:21.791774 coreos-metadata[2094]: Jan 17 00:31:21.791 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 17 00:31:21.794080 dbus-daemon[1953]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=2001 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 17 00:31:21.799166 coreos-metadata[2094]: Jan 17 00:31:21.798 INFO Fetch successful Jan 17 00:31:21.806667 systemd[1]: Starting polkit.service - Authorization Manager... Jan 17 00:31:21.810258 unknown[2094]: wrote ssh authorized keys file for user: core Jan 17 00:31:21.871168 update-ssh-keys[2138]: Updated "/home/core/.ssh/authorized_keys" Jan 17 00:31:21.873635 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 17 00:31:21.885233 systemd[1]: Finished sshkeys.service. 
Jan 17 00:31:21.908423 polkitd[2126]: Started polkitd version 121 Jan 17 00:31:21.924170 polkitd[2126]: Loading rules from directory /etc/polkit-1/rules.d Jan 17 00:31:21.924406 polkitd[2126]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 17 00:31:21.930694 polkitd[2126]: Finished loading, compiling and executing 2 rules Jan 17 00:31:21.933456 dbus-daemon[1953]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 17 00:31:21.933894 polkitd[2126]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 17 00:31:21.934627 systemd[1]: Started polkit.service - Authorization Manager. Jan 17 00:31:21.954468 sshd_keygen[1988]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 17 00:31:21.961769 systemd-hostnamed[2001]: Hostname set to (transient) Jan 17 00:31:21.961896 systemd-resolved[1901]: System hostname changed to 'ip-172-31-24-155'. Jan 17 00:31:21.966209 containerd[1973]: time="2026-01-17T00:31:21.964924143Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 17 00:31:21.994611 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 17 00:31:22.001567 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 17 00:31:22.015131 systemd[1]: issuegen.service: Deactivated successfully. Jan 17 00:31:22.016001 containerd[1973]: time="2026-01-17T00:31:22.012217632Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 17 00:31:22.016001 containerd[1973]: time="2026-01-17T00:31:22.014240610Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.119-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:31:22.016001 containerd[1973]: time="2026-01-17T00:31:22.014282555Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 17 00:31:22.016001 containerd[1973]: time="2026-01-17T00:31:22.014304953Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 17 00:31:22.016001 containerd[1973]: time="2026-01-17T00:31:22.014489226Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 17 00:31:22.016001 containerd[1973]: time="2026-01-17T00:31:22.014510185Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 17 00:31:22.016001 containerd[1973]: time="2026-01-17T00:31:22.014596797Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:31:22.016001 containerd[1973]: time="2026-01-17T00:31:22.014617074Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 17 00:31:22.016001 containerd[1973]: time="2026-01-17T00:31:22.014845623Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:31:22.016001 containerd[1973]: time="2026-01-17T00:31:22.014872054Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Jan 17 00:31:22.016001 containerd[1973]: time="2026-01-17T00:31:22.014893444Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:31:22.015383 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 17 00:31:22.016799 containerd[1973]: time="2026-01-17T00:31:22.014909663Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 17 00:31:22.016799 containerd[1973]: time="2026-01-17T00:31:22.015011113Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 17 00:31:22.016799 containerd[1973]: time="2026-01-17T00:31:22.015302915Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 17 00:31:22.016799 containerd[1973]: time="2026-01-17T00:31:22.015449272Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:31:22.016799 containerd[1973]: time="2026-01-17T00:31:22.015469607Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 17 00:31:22.016799 containerd[1973]: time="2026-01-17T00:31:22.015573534Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 17 00:31:22.016799 containerd[1973]: time="2026-01-17T00:31:22.015631286Z" level=info msg="metadata content store policy set" policy=shared Jan 17 00:31:22.024544 containerd[1973]: time="2026-01-17T00:31:22.023167794Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 17 00:31:22.024544 containerd[1973]: time="2026-01-17T00:31:22.023273752Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 17 00:31:22.024544 containerd[1973]: time="2026-01-17T00:31:22.023296951Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 17 00:31:22.024544 containerd[1973]: time="2026-01-17T00:31:22.023326025Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 17 00:31:22.024544 containerd[1973]: time="2026-01-17T00:31:22.023347051Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 17 00:31:22.024544 containerd[1973]: time="2026-01-17T00:31:22.023531354Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 17 00:31:22.024544 containerd[1973]: time="2026-01-17T00:31:22.023888533Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 17 00:31:22.024544 containerd[1973]: time="2026-01-17T00:31:22.024031967Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 17 00:31:22.024544 containerd[1973]: time="2026-01-17T00:31:22.024058583Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 17 00:31:22.024544 containerd[1973]: time="2026-01-17T00:31:22.024078820Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." 
type=io.containerd.sandbox.controller.v1 Jan 17 00:31:22.024544 containerd[1973]: time="2026-01-17T00:31:22.024100684Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 17 00:31:22.024544 containerd[1973]: time="2026-01-17T00:31:22.024119581Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 17 00:31:22.024544 containerd[1973]: time="2026-01-17T00:31:22.024137042Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 17 00:31:22.024544 containerd[1973]: time="2026-01-17T00:31:22.024157230Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 17 00:31:22.026540 containerd[1973]: time="2026-01-17T00:31:22.025235822Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 17 00:31:22.026540 containerd[1973]: time="2026-01-17T00:31:22.025280887Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 17 00:31:22.026540 containerd[1973]: time="2026-01-17T00:31:22.025300770Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 17 00:31:22.026540 containerd[1973]: time="2026-01-17T00:31:22.025318011Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 17 00:31:22.026540 containerd[1973]: time="2026-01-17T00:31:22.025349213Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 17 00:31:22.026540 containerd[1973]: time="2026-01-17T00:31:22.025370800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 17 00:31:22.026540 containerd[1973]: time="2026-01-17T00:31:22.025389674Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 17 00:31:22.026540 containerd[1973]: time="2026-01-17T00:31:22.025410607Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 17 00:31:22.026540 containerd[1973]: time="2026-01-17T00:31:22.025429296Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 17 00:31:22.026540 containerd[1973]: time="2026-01-17T00:31:22.025449376Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 17 00:31:22.026540 containerd[1973]: time="2026-01-17T00:31:22.025466955Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 17 00:31:22.026540 containerd[1973]: time="2026-01-17T00:31:22.025485944Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 17 00:31:22.026540 containerd[1973]: time="2026-01-17T00:31:22.025509843Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 17 00:31:22.026540 containerd[1973]: time="2026-01-17T00:31:22.025531135Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 17 00:31:22.025287 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
Jan 17 00:31:22.027218 containerd[1973]: time="2026-01-17T00:31:22.025552047Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 17 00:31:22.027218 containerd[1973]: time="2026-01-17T00:31:22.025574864Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 17 00:31:22.027218 containerd[1973]: time="2026-01-17T00:31:22.025594368Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 17 00:31:22.027218 containerd[1973]: time="2026-01-17T00:31:22.025621333Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 17 00:31:22.027218 containerd[1973]: time="2026-01-17T00:31:22.025652532Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 17 00:31:22.027218 containerd[1973]: time="2026-01-17T00:31:22.025679874Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 17 00:31:22.027218 containerd[1973]: time="2026-01-17T00:31:22.025695700Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 17 00:31:22.027218 containerd[1973]: time="2026-01-17T00:31:22.026237090Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 17 00:31:22.027218 containerd[1973]: time="2026-01-17T00:31:22.026648491Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 17 00:31:22.027218 containerd[1973]: time="2026-01-17T00:31:22.026674981Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 17 00:31:22.027218 containerd[1973]: time="2026-01-17T00:31:22.026697301Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 17 00:31:22.027218 containerd[1973]: time="2026-01-17T00:31:22.026717848Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 17 00:31:22.027218 containerd[1973]: time="2026-01-17T00:31:22.026739076Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 17 00:31:22.027218 containerd[1973]: time="2026-01-17T00:31:22.026761461Z" level=info msg="NRI interface is disabled by configuration." Jan 17 00:31:22.027814 containerd[1973]: time="2026-01-17T00:31:22.026777237Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 17 00:31:22.030686 containerd[1973]: time="2026-01-17T00:31:22.030243885Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 17 00:31:22.030686 containerd[1973]: time="2026-01-17T00:31:22.030409120Z" level=info msg="Connect containerd service" Jan 17 00:31:22.030686 containerd[1973]: time="2026-01-17T00:31:22.030475372Z" level=info msg="using legacy CRI server" Jan 17 00:31:22.030686 containerd[1973]: time="2026-01-17T00:31:22.030493952Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 17 00:31:22.033321 containerd[1973]: time="2026-01-17T00:31:22.030793035Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 17 00:31:22.033321 containerd[1973]: time="2026-01-17T00:31:22.031618473Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 17 00:31:22.033321 
containerd[1973]: time="2026-01-17T00:31:22.031751869Z" level=info msg="Start subscribing containerd event" Jan 17 00:31:22.033321 containerd[1973]: time="2026-01-17T00:31:22.031810642Z" level=info msg="Start recovering state" Jan 17 00:31:22.033321 containerd[1973]: time="2026-01-17T00:31:22.031893862Z" level=info msg="Start event monitor" Jan 17 00:31:22.033321 containerd[1973]: time="2026-01-17T00:31:22.031920876Z" level=info msg="Start snapshots syncer" Jan 17 00:31:22.033321 containerd[1973]: time="2026-01-17T00:31:22.031938536Z" level=info msg="Start cni network conf syncer for default" Jan 17 00:31:22.033321 containerd[1973]: time="2026-01-17T00:31:22.031951723Z" level=info msg="Start streaming server" Jan 17 00:31:22.033321 containerd[1973]: time="2026-01-17T00:31:22.032056871Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 17 00:31:22.033321 containerd[1973]: time="2026-01-17T00:31:22.032107823Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 17 00:31:22.032271 systemd[1]: Started containerd.service - containerd container runtime. Jan 17 00:31:22.034481 containerd[1973]: time="2026-01-17T00:31:22.034455632Z" level=info msg="containerd successfully booted in 0.070717s" Jan 17 00:31:22.045104 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 17 00:31:22.053667 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 17 00:31:22.056799 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 17 00:31:22.058474 systemd[1]: Reached target getty.target - Login Prompts. Jan 17 00:31:22.272841 ntpd[1957]: bind(24) AF_INET6 fe80::490:b1ff:fe71:34e1%2#123 flags 0x11 failed: Cannot assign requested address Jan 17 00:31:22.273270 ntpd[1957]: 17 Jan 00:31:22 ntpd[1957]: bind(24) AF_INET6 fe80::490:b1ff:fe71:34e1%2#123 flags 0x11 failed: Cannot assign requested address Jan 17 00:31:22.273270 ntpd[1957]: 17 Jan 00:31:22 ntpd[1957]: unable to create socket on eth0 (6) for fe80::490:b1ff:fe71:34e1%2#123 Jan 17 00:31:22.273270 ntpd[1957]: 17 Jan 00:31:22 ntpd[1957]: failed to init interface for address fe80::490:b1ff:fe71:34e1%2 Jan 17 00:31:22.272888 ntpd[1957]: unable to create socket on eth0 (6) for fe80::490:b1ff:fe71:34e1%2#123 Jan 17 00:31:22.272917 ntpd[1957]: failed to init interface for address fe80::490:b1ff:fe71:34e1%2 Jan 17 00:31:22.324372 systemd-networkd[1900]: eth0: Gained IPv6LL Jan 17 00:31:22.327391 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 17 00:31:22.328579 systemd[1]: Reached target network-online.target - Network is Online. Jan 17 00:31:22.334502 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Jan 17 00:31:22.345553 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:31:22.349448 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 17 00:31:22.400083 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 17 00:31:22.413692 amazon-ssm-agent[2171]: Initializing new seelog logger Jan 17 00:31:22.414264 amazon-ssm-agent[2171]: New Seelog Logger Creation Complete Jan 17 00:31:22.414418 amazon-ssm-agent[2171]: 2026/01/17 00:31:22 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 17 00:31:22.414418 amazon-ssm-agent[2171]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Jan 17 00:31:22.414829 amazon-ssm-agent[2171]: 2026/01/17 00:31:22 processing appconfig overrides Jan 17 00:31:22.415212 amazon-ssm-agent[2171]: 2026/01/17 00:31:22 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 17 00:31:22.415212 amazon-ssm-agent[2171]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 17 00:31:22.415306 amazon-ssm-agent[2171]: 2026/01/17 00:31:22 processing appconfig overrides Jan 17 00:31:22.415558 amazon-ssm-agent[2171]: 2026/01/17 00:31:22 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 17 00:31:22.415558 amazon-ssm-agent[2171]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 17 00:31:22.415664 amazon-ssm-agent[2171]: 2026/01/17 00:31:22 processing appconfig overrides Jan 17 00:31:22.416114 amazon-ssm-agent[2171]: 2026-01-17 00:31:22 INFO Proxy environment variables: Jan 17 00:31:22.419096 amazon-ssm-agent[2171]: 2026/01/17 00:31:22 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 17 00:31:22.419096 amazon-ssm-agent[2171]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 17 00:31:22.419279 amazon-ssm-agent[2171]: 2026/01/17 00:31:22 processing appconfig overrides Jan 17 00:31:22.516591 amazon-ssm-agent[2171]: 2026-01-17 00:31:22 INFO https_proxy: Jan 17 00:31:22.614697 amazon-ssm-agent[2171]: 2026-01-17 00:31:22 INFO http_proxy: Jan 17 00:31:22.681952 amazon-ssm-agent[2171]: 2026-01-17 00:31:22 INFO no_proxy: Jan 17 00:31:22.681952 amazon-ssm-agent[2171]: 2026-01-17 00:31:22 INFO Checking if agent identity type OnPrem can be assumed Jan 17 00:31:22.681952 amazon-ssm-agent[2171]: 2026-01-17 00:31:22 INFO Checking if agent identity type EC2 can be assumed Jan 17 00:31:22.681952 amazon-ssm-agent[2171]: 2026-01-17 00:31:22 INFO Agent will take identity from EC2 Jan 17 00:31:22.681952 amazon-ssm-agent[2171]: 2026-01-17 00:31:22 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 17 00:31:22.682151 amazon-ssm-agent[2171]: 2026-01-17 00:31:22 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 17 00:31:22.682151 amazon-ssm-agent[2171]: 2026-01-17 00:31:22 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 17 00:31:22.682151 amazon-ssm-agent[2171]: 2026-01-17 00:31:22 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Jan 17 00:31:22.682151 amazon-ssm-agent[2171]: 2026-01-17 00:31:22 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 Jan 17 00:31:22.682151 amazon-ssm-agent[2171]: 2026-01-17 00:31:22 INFO [amazon-ssm-agent] Starting Core Agent Jan 17 00:31:22.682151 amazon-ssm-agent[2171]: 2026-01-17 00:31:22 INFO [amazon-ssm-agent] registrar detected. Attempting registration Jan 17 00:31:22.682151 amazon-ssm-agent[2171]: 2026-01-17 00:31:22 INFO [Registrar] Starting registrar module Jan 17 00:31:22.682151 amazon-ssm-agent[2171]: 2026-01-17 00:31:22 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Jan 17 00:31:22.682151 amazon-ssm-agent[2171]: 2026-01-17 00:31:22 INFO [EC2Identity] EC2 registration was successful. 
Jan 17 00:31:22.682151 amazon-ssm-agent[2171]: 2026-01-17 00:31:22 INFO [CredentialRefresher] credentialRefresher has started Jan 17 00:31:22.682151 amazon-ssm-agent[2171]: 2026-01-17 00:31:22 INFO [CredentialRefresher] Starting credentials refresher loop Jan 17 00:31:22.682151 amazon-ssm-agent[2171]: 2026-01-17 00:31:22 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jan 17 00:31:22.712358 amazon-ssm-agent[2171]: 2026-01-17 00:31:22 INFO [CredentialRefresher] Next credential rotation will be in 30.966661025316668 minutes Jan 17 00:31:23.698793 amazon-ssm-agent[2171]: 2026-01-17 00:31:23 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jan 17 00:31:23.802223 amazon-ssm-agent[2171]: 2026-01-17 00:31:23 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2190) started Jan 17 00:31:23.900828 amazon-ssm-agent[2171]: 2026-01-17 00:31:23 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jan 17 00:31:24.280459 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:31:24.280657 (kubelet)[2205]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:31:24.281476 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 17 00:31:24.282053 systemd[1]: Startup finished in 598ms (kernel) + 5.665s (initrd) + 6.958s (userspace) = 13.222s. Jan 17 00:31:25.272794 ntpd[1957]: Listen normally on 7 eth0 [fe80::490:b1ff:fe71:34e1%2]:123 Jan 17 00:31:25.273124 ntpd[1957]: 17 Jan 00:31:25 ntpd[1957]: Listen normally on 7 eth0 [fe80::490:b1ff:fe71:34e1%2]:123 Jan 17 00:31:25.341643 kubelet[2205]: E0117 00:31:25.341560 2205 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:31:25.344261 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:31:25.344420 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 00:31:25.344708 systemd[1]: kubelet.service: Consumed 1.057s CPU time. Jan 17 00:31:25.885509 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 17 00:31:25.897654 systemd[1]: Started sshd@0-172.31.24.155:22-4.153.228.146:41420.service - OpenSSH per-connection server daemon (4.153.228.146:41420). Jan 17 00:31:26.430246 sshd[2217]: Accepted publickey for core from 4.153.228.146 port 41420 ssh2: RSA SHA256:sbILTD9G5iELn3Zwr53HzB3sU6rscuYx+TXC00D8O3s Jan 17 00:31:26.431755 sshd[2217]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:31:26.442607 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 17 00:31:26.457061 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 17 00:31:26.461310 systemd-logind[1961]: New session 1 of user core. Jan 17 00:31:26.498334 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 17 00:31:26.506836 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Jan 17 00:31:26.517129 (systemd)[2221]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 17 00:31:26.633345 systemd[2221]: Queued start job for default target default.target. Jan 17 00:31:26.643534 systemd[2221]: Created slice app.slice - User Application Slice. Jan 17 00:31:26.643580 systemd[2221]: Reached target paths.target - Paths. Jan 17 00:31:26.643601 systemd[2221]: Reached target timers.target - Timers. Jan 17 00:31:26.645228 systemd[2221]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 17 00:31:26.658297 systemd[2221]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 17 00:31:26.658451 systemd[2221]: Reached target sockets.target - Sockets. Jan 17 00:31:26.658473 systemd[2221]: Reached target basic.target - Basic System. Jan 17 00:31:26.658527 systemd[2221]: Reached target default.target - Main User Target. Jan 17 00:31:26.658566 systemd[2221]: Startup finished in 134ms. Jan 17 00:31:26.658903 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 17 00:31:26.675441 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 17 00:31:27.050277 systemd[1]: Started sshd@1-172.31.24.155:22-4.153.228.146:41436.service - OpenSSH per-connection server daemon (4.153.228.146:41436). Jan 17 00:31:27.541929 sshd[2232]: Accepted publickey for core from 4.153.228.146 port 41436 ssh2: RSA SHA256:sbILTD9G5iELn3Zwr53HzB3sU6rscuYx+TXC00D8O3s Jan 17 00:31:27.543444 sshd[2232]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:31:27.550246 systemd-logind[1961]: New session 2 of user core. Jan 17 00:31:27.553506 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 17 00:31:27.896118 sshd[2232]: pam_unix(sshd:session): session closed for user core Jan 17 00:31:27.908990 systemd[1]: sshd@1-172.31.24.155:22-4.153.228.146:41436.service: Deactivated successfully. Jan 17 00:31:27.912289 systemd[1]: session-2.scope: Deactivated successfully. Jan 17 00:31:27.914082 systemd-logind[1961]: Session 2 logged out. Waiting for processes to exit. Jan 17 00:31:27.916878 systemd-logind[1961]: Removed session 2. Jan 17 00:31:27.986657 systemd[1]: Started sshd@2-172.31.24.155:22-4.153.228.146:41446.service - OpenSSH per-connection server daemon (4.153.228.146:41446). Jan 17 00:31:28.465017 sshd[2239]: Accepted publickey for core from 4.153.228.146 port 41446 ssh2: RSA SHA256:sbILTD9G5iELn3Zwr53HzB3sU6rscuYx+TXC00D8O3s Jan 17 00:31:28.466625 sshd[2239]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:31:28.473744 systemd-logind[1961]: New session 3 of user core. Jan 17 00:31:28.479462 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 17 00:31:28.810743 sshd[2239]: pam_unix(sshd:session): session closed for user core Jan 17 00:31:28.815101 systemd[1]: sshd@2-172.31.24.155:22-4.153.228.146:41446.service: Deactivated successfully. Jan 17 00:31:28.817108 systemd[1]: session-3.scope: Deactivated successfully. Jan 17 00:31:28.817959 systemd-logind[1961]: Session 3 logged out. Waiting for processes to exit. Jan 17 00:31:28.818968 systemd-logind[1961]: Removed session 3. Jan 17 00:31:28.896047 systemd[1]: Started sshd@3-172.31.24.155:22-4.153.228.146:41454.service - OpenSSH per-connection server daemon (4.153.228.146:41454). 
Jan 17 00:31:29.377094 sshd[2246]: Accepted publickey for core from 4.153.228.146 port 41454 ssh2: RSA SHA256:sbILTD9G5iELn3Zwr53HzB3sU6rscuYx+TXC00D8O3s Jan 17 00:31:29.378502 sshd[2246]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:31:29.383978 systemd-logind[1961]: New session 4 of user core. Jan 17 00:31:29.393435 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 17 00:31:29.726742 sshd[2246]: pam_unix(sshd:session): session closed for user core Jan 17 00:31:29.730717 systemd[1]: sshd@3-172.31.24.155:22-4.153.228.146:41454.service: Deactivated successfully. Jan 17 00:31:29.732650 systemd[1]: session-4.scope: Deactivated successfully. Jan 17 00:31:29.734393 systemd-logind[1961]: Session 4 logged out. Waiting for processes to exit. Jan 17 00:31:29.735546 systemd-logind[1961]: Removed session 4. Jan 17 00:31:29.819603 systemd[1]: Started sshd@4-172.31.24.155:22-4.153.228.146:41470.service - OpenSSH per-connection server daemon (4.153.228.146:41470). Jan 17 00:31:30.307596 sshd[2253]: Accepted publickey for core from 4.153.228.146 port 41470 ssh2: RSA SHA256:sbILTD9G5iELn3Zwr53HzB3sU6rscuYx+TXC00D8O3s Jan 17 00:31:30.309087 sshd[2253]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:31:30.313226 systemd-logind[1961]: New session 5 of user core. Jan 17 00:31:30.318640 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 17 00:31:30.620551 sudo[2256]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 17 00:31:30.620980 sudo[2256]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 00:31:30.637015 sudo[2256]: pam_unix(sudo:session): session closed for user root Jan 17 00:31:30.715366 sshd[2253]: pam_unix(sshd:session): session closed for user core Jan 17 00:31:30.719944 systemd[1]: sshd@4-172.31.24.155:22-4.153.228.146:41470.service: Deactivated successfully. Jan 17 00:31:30.721839 systemd[1]: session-5.scope: Deactivated successfully. Jan 17 00:31:30.722763 systemd-logind[1961]: Session 5 logged out. Waiting for processes to exit. Jan 17 00:31:30.723936 systemd-logind[1961]: Removed session 5. Jan 17 00:31:30.804520 systemd[1]: Started sshd@5-172.31.24.155:22-4.153.228.146:41472.service - OpenSSH per-connection server daemon (4.153.228.146:41472). Jan 17 00:31:31.280698 sshd[2261]: Accepted publickey for core from 4.153.228.146 port 41472 ssh2: RSA SHA256:sbILTD9G5iELn3Zwr53HzB3sU6rscuYx+TXC00D8O3s Jan 17 00:31:31.282399 sshd[2261]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:31:31.287515 systemd-logind[1961]: New session 6 of user core. Jan 17 00:31:31.290366 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 17 00:31:31.553201 sudo[2265]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 17 00:31:31.553606 sudo[2265]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 00:31:31.557716 sudo[2265]: pam_unix(sudo:session): session closed for user root Jan 17 00:31:31.563317 sudo[2264]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 17 00:31:31.563710 sudo[2264]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 00:31:31.584700 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... 
Jan 17 00:31:31.586911 auditctl[2268]: No rules Jan 17 00:31:31.587346 systemd[1]: audit-rules.service: Deactivated successfully. Jan 17 00:31:31.587561 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 17 00:31:31.590719 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 17 00:31:31.629872 augenrules[2286]: No rules Jan 17 00:31:31.631333 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 17 00:31:31.632951 sudo[2264]: pam_unix(sudo:session): session closed for user root Jan 17 00:31:31.712265 sshd[2261]: pam_unix(sshd:session): session closed for user core Jan 17 00:31:31.716359 systemd[1]: sshd@5-172.31.24.155:22-4.153.228.146:41472.service: Deactivated successfully. Jan 17 00:31:31.718118 systemd[1]: session-6.scope: Deactivated successfully. Jan 17 00:31:31.719210 systemd-logind[1961]: Session 6 logged out. Waiting for processes to exit. Jan 17 00:31:31.720086 systemd-logind[1961]: Removed session 6. Jan 17 00:31:31.797465 systemd[1]: Started sshd@6-172.31.24.155:22-4.153.228.146:41478.service - OpenSSH per-connection server daemon (4.153.228.146:41478). Jan 17 00:31:32.279624 sshd[2294]: Accepted publickey for core from 4.153.228.146 port 41478 ssh2: RSA SHA256:sbILTD9G5iELn3Zwr53HzB3sU6rscuYx+TXC00D8O3s Jan 17 00:31:32.281036 sshd[2294]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:31:32.286116 systemd-logind[1961]: New session 7 of user core. Jan 17 00:31:32.292426 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 17 00:31:32.553476 sudo[2297]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 17 00:31:32.553807 sudo[2297]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 00:31:33.608651 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:31:33.608806 systemd[1]: kubelet.service: Consumed 1.057s CPU time. Jan 17 00:31:33.615974 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:31:33.649548 systemd[1]: Reloading requested from client PID 2330 ('systemctl') (unit session-7.scope)... Jan 17 00:31:33.649567 systemd[1]: Reloading... Jan 17 00:31:33.770277 zram_generator::config[2373]: No configuration found. Jan 17 00:31:33.897625 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 00:31:33.988513 systemd[1]: Reloading finished in 338 ms. Jan 17 00:31:34.036379 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 17 00:31:34.036478 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 17 00:31:34.036735 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:31:34.043588 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:31:35.569847 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:31:35.583662 (kubelet)[2430]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 00:31:35.630655 kubelet[2430]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 17 00:31:35.630655 kubelet[2430]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 17 00:31:35.630655 kubelet[2430]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 00:31:35.631210 kubelet[2430]: I0117 00:31:35.630748 2430 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 17 00:31:35.995120 kubelet[2430]: I0117 00:31:35.993313 2430 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 17 00:31:35.995120 kubelet[2430]: I0117 00:31:35.993350 2430 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 17 00:31:35.995120 kubelet[2430]: I0117 00:31:35.993722 2430 server.go:954] "Client rotation is on, will bootstrap in background" Jan 17 00:31:36.042887 kubelet[2430]: I0117 00:31:36.042840 2430 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 00:31:36.057784 kubelet[2430]: E0117 00:31:36.057722 2430 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 17 00:31:36.057784 kubelet[2430]: I0117 00:31:36.057760 2430 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 17 00:31:36.062155 kubelet[2430]: I0117 00:31:36.062121 2430 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 17 00:31:36.064211 kubelet[2430]: I0117 00:31:36.064022 2430 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 17 00:31:36.064337 kubelet[2430]: I0117 00:31:36.064076 2430 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172.31.24.155","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 17 00:31:36.064337 kubelet[2430]: I0117 00:31:36.064272 2430 topology_manager.go:138] "Creating topology manager with none policy" Jan 17 00:31:36.064337 kubelet[2430]: I0117 00:31:36.064280 2430 container_manager_linux.go:304] "Creating device plugin manager" Jan 17 00:31:36.064471 kubelet[2430]: I0117 00:31:36.064399 2430 state_mem.go:36] "Initialized new in-memory state store" Jan 17 00:31:36.072528 kubelet[2430]: I0117 00:31:36.072489 2430 kubelet.go:446] "Attempting to sync node with API server" Jan 17 00:31:36.072528 kubelet[2430]: I0117 00:31:36.072536 2430 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 17 00:31:36.072686 kubelet[2430]: I0117 00:31:36.072557 2430 kubelet.go:352] "Adding apiserver pod source" Jan 17 00:31:36.072686 kubelet[2430]: I0117 00:31:36.072570 2430 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 17 00:31:36.073112 kubelet[2430]: E0117 00:31:36.073074 2430 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:31:36.073351 kubelet[2430]: E0117 00:31:36.073317 2430 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:31:36.077304 kubelet[2430]: I0117 00:31:36.076531 2430 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 17 00:31:36.077304 kubelet[2430]: I0117 00:31:36.077083 2430 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 17 00:31:36.078197 kubelet[2430]: W0117 00:31:36.078146 2430 probe.go:272] 
Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 17 00:31:36.080897 kubelet[2430]: I0117 00:31:36.080864 2430 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 17 00:31:36.081015 kubelet[2430]: I0117 00:31:36.080909 2430 server.go:1287] "Started kubelet" Jan 17 00:31:36.087920 kubelet[2430]: I0117 00:31:36.086724 2430 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 17 00:31:36.087920 kubelet[2430]: I0117 00:31:36.087150 2430 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 17 00:31:36.088497 kubelet[2430]: I0117 00:31:36.088335 2430 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 17 00:31:36.090107 kubelet[2430]: I0117 00:31:36.090082 2430 server.go:479] "Adding debug handlers to kubelet server" Jan 17 00:31:36.090766 kubelet[2430]: I0117 00:31:36.090741 2430 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 17 00:31:36.094812 kubelet[2430]: I0117 00:31:36.094777 2430 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 17 00:31:36.101544 kubelet[2430]: I0117 00:31:36.101518 2430 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 17 00:31:36.101888 kubelet[2430]: E0117 00:31:36.101824 2430 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172.31.24.155\" not found" Jan 17 00:31:36.102079 kubelet[2430]: I0117 00:31:36.102066 2430 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 17 00:31:36.102143 kubelet[2430]: I0117 00:31:36.102127 2430 reconciler.go:26] "Reconciler: start to sync state" Jan 17 00:31:36.105995 kubelet[2430]: I0117 00:31:36.105650 2430 factory.go:221] Registration of the systemd container factory successfully Jan 17 00:31:36.105995 kubelet[2430]: I0117 00:31:36.105759 2430 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 17 00:31:36.106939 kubelet[2430]: E0117 00:31:36.106920 2430 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172.31.24.155\" not found" node="172.31.24.155" Jan 17 00:31:36.107756 kubelet[2430]: I0117 00:31:36.107741 2430 factory.go:221] Registration of the containerd container factory successfully Jan 17 00:31:36.116215 kubelet[2430]: E0117 00:31:36.116052 2430 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 17 00:31:36.123277 kubelet[2430]: I0117 00:31:36.122618 2430 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 17 00:31:36.123277 kubelet[2430]: I0117 00:31:36.122635 2430 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 17 00:31:36.123277 kubelet[2430]: I0117 00:31:36.122653 2430 state_mem.go:36] "Initialized new in-memory state store" Jan 17 00:31:36.127557 kubelet[2430]: I0117 00:31:36.127524 2430 policy_none.go:49] "None policy: Start" Jan 17 00:31:36.128174 kubelet[2430]: I0117 00:31:36.128156 2430 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 17 00:31:36.128309 kubelet[2430]: I0117 00:31:36.128299 2430 state_mem.go:35] "Initializing new in-memory state store" Jan 17 00:31:36.142058 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 17 00:31:36.155127 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 17 00:31:36.160335 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 17 00:31:36.168731 kubelet[2430]: I0117 00:31:36.168498 2430 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 17 00:31:36.169357 kubelet[2430]: I0117 00:31:36.169322 2430 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 17 00:31:36.170888 kubelet[2430]: I0117 00:31:36.169337 2430 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 17 00:31:36.172693 kubelet[2430]: I0117 00:31:36.172680 2430 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 00:31:36.184452 kubelet[2430]: E0117 00:31:36.184403 2430 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 17 00:31:36.184586 kubelet[2430]: E0117 00:31:36.184474 2430 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.31.24.155\" not found" Jan 17 00:31:36.200856 kubelet[2430]: I0117 00:31:36.200786 2430 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 17 00:31:36.202799 kubelet[2430]: I0117 00:31:36.202760 2430 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 17 00:31:36.202799 kubelet[2430]: I0117 00:31:36.202799 2430 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 17 00:31:36.202962 kubelet[2430]: I0117 00:31:36.202820 2430 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 17 00:31:36.202962 kubelet[2430]: I0117 00:31:36.202842 2430 kubelet.go:2382] "Starting kubelet main sync loop" Jan 17 00:31:36.202962 kubelet[2430]: E0117 00:31:36.202899 2430 kubelet.go:2406] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Jan 17 00:31:36.274116 kubelet[2430]: I0117 00:31:36.273400 2430 kubelet_node_status.go:75] "Attempting to register node" node="172.31.24.155" Jan 17 00:31:36.284473 kubelet[2430]: I0117 00:31:36.284437 2430 kubelet_node_status.go:78] "Successfully registered node" node="172.31.24.155" Jan 17 00:31:36.396399 kubelet[2430]: I0117 00:31:36.396346 2430 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Jan 17 00:31:36.396843 containerd[1973]: time="2026-01-17T00:31:36.396809764Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 17 00:31:36.397471 kubelet[2430]: I0117 00:31:36.397032 2430 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Jan 17 00:31:36.607265 sudo[2297]: pam_unix(sudo:session): session closed for user root Jan 17 00:31:36.684445 sshd[2294]: pam_unix(sshd:session): session closed for user core Jan 17 00:31:36.687855 systemd[1]: sshd@6-172.31.24.155:22-4.153.228.146:41478.service: Deactivated successfully. Jan 17 00:31:36.689828 systemd[1]: session-7.scope: Deactivated successfully. Jan 17 00:31:36.691965 systemd-logind[1961]: Session 7 logged out. Waiting for processes to exit. Jan 17 00:31:36.693438 systemd-logind[1961]: Removed session 7. Jan 17 00:31:36.996511 kubelet[2430]: I0117 00:31:36.995992 2430 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 17 00:31:36.997166 kubelet[2430]: W0117 00:31:36.996561 2430 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 17 00:31:36.997166 kubelet[2430]: W0117 00:31:36.996626 2430 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 17 00:31:36.997166 kubelet[2430]: W0117 00:31:36.996656 2430 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 17 00:31:37.074074 kubelet[2430]: I0117 00:31:37.073981 2430 apiserver.go:52] "Watching apiserver" Jan 17 00:31:37.074074 kubelet[2430]: E0117 00:31:37.074002 2430 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:31:37.097997 systemd[1]: Created slice kubepods-besteffort-pod184430d5_a82a_43ce_85bb_856bc0abed0c.slice - libcontainer container kubepods-besteffort-pod184430d5_a82a_43ce_85bb_856bc0abed0c.slice. 
Jan 17 00:31:37.103769 kubelet[2430]: I0117 00:31:37.103731 2430 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 17 00:31:37.107950 kubelet[2430]: I0117 00:31:37.107231 2430 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e1deb6cc-dc9c-4146-8150-2f9f18bceaf5-clustermesh-secrets\") pod \"cilium-xxghc\" (UID: \"e1deb6cc-dc9c-4146-8150-2f9f18bceaf5\") " pod="kube-system/cilium-xxghc" Jan 17 00:31:37.107950 kubelet[2430]: I0117 00:31:37.107269 2430 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e1deb6cc-dc9c-4146-8150-2f9f18bceaf5-cilium-run\") pod \"cilium-xxghc\" (UID: \"e1deb6cc-dc9c-4146-8150-2f9f18bceaf5\") " pod="kube-system/cilium-xxghc" Jan 17 00:31:37.107950 kubelet[2430]: I0117 00:31:37.107304 2430 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e1deb6cc-dc9c-4146-8150-2f9f18bceaf5-hostproc\") pod \"cilium-xxghc\" (UID: \"e1deb6cc-dc9c-4146-8150-2f9f18bceaf5\") " pod="kube-system/cilium-xxghc" Jan 17 00:31:37.107950 kubelet[2430]: I0117 00:31:37.107329 2430 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e1deb6cc-dc9c-4146-8150-2f9f18bceaf5-cilium-cgroup\") pod \"cilium-xxghc\" (UID: \"e1deb6cc-dc9c-4146-8150-2f9f18bceaf5\") " pod="kube-system/cilium-xxghc" Jan 17 00:31:37.107950 kubelet[2430]: I0117 00:31:37.107356 2430 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jr9zk\" (UniqueName: \"kubernetes.io/projected/184430d5-a82a-43ce-85bb-856bc0abed0c-kube-api-access-jr9zk\") pod \"kube-proxy-9vrlw\" (UID: \"184430d5-a82a-43ce-85bb-856bc0abed0c\") " pod="kube-system/kube-proxy-9vrlw" Jan 17 00:31:37.107950 kubelet[2430]: I0117 00:31:37.107382 2430 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e1deb6cc-dc9c-4146-8150-2f9f18bceaf5-bpf-maps\") pod \"cilium-xxghc\" (UID: \"e1deb6cc-dc9c-4146-8150-2f9f18bceaf5\") " pod="kube-system/cilium-xxghc" Jan 17 00:31:37.108304 kubelet[2430]: I0117 00:31:37.107401 2430 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e1deb6cc-dc9c-4146-8150-2f9f18bceaf5-cni-path\") pod \"cilium-xxghc\" (UID: \"e1deb6cc-dc9c-4146-8150-2f9f18bceaf5\") " pod="kube-system/cilium-xxghc" Jan 17 00:31:37.108304 kubelet[2430]: I0117 00:31:37.107433 2430 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e1deb6cc-dc9c-4146-8150-2f9f18bceaf5-cilium-config-path\") pod \"cilium-xxghc\" (UID: \"e1deb6cc-dc9c-4146-8150-2f9f18bceaf5\") " pod="kube-system/cilium-xxghc" Jan 17 00:31:37.108304 kubelet[2430]: I0117 00:31:37.107463 2430 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5stqx\" (UniqueName: \"kubernetes.io/projected/e1deb6cc-dc9c-4146-8150-2f9f18bceaf5-kube-api-access-5stqx\") pod \"cilium-xxghc\" (UID: \"e1deb6cc-dc9c-4146-8150-2f9f18bceaf5\") " pod="kube-system/cilium-xxghc" Jan 17 
00:31:37.108304 kubelet[2430]: I0117 00:31:37.107488 2430 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e1deb6cc-dc9c-4146-8150-2f9f18bceaf5-etc-cni-netd\") pod \"cilium-xxghc\" (UID: \"e1deb6cc-dc9c-4146-8150-2f9f18bceaf5\") " pod="kube-system/cilium-xxghc" Jan 17 00:31:37.108304 kubelet[2430]: I0117 00:31:37.107515 2430 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e1deb6cc-dc9c-4146-8150-2f9f18bceaf5-xtables-lock\") pod \"cilium-xxghc\" (UID: \"e1deb6cc-dc9c-4146-8150-2f9f18bceaf5\") " pod="kube-system/cilium-xxghc" Jan 17 00:31:37.108304 kubelet[2430]: I0117 00:31:37.107541 2430 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e1deb6cc-dc9c-4146-8150-2f9f18bceaf5-hubble-tls\") pod \"cilium-xxghc\" (UID: \"e1deb6cc-dc9c-4146-8150-2f9f18bceaf5\") " pod="kube-system/cilium-xxghc" Jan 17 00:31:37.108574 kubelet[2430]: I0117 00:31:37.107564 2430 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/184430d5-a82a-43ce-85bb-856bc0abed0c-kube-proxy\") pod \"kube-proxy-9vrlw\" (UID: \"184430d5-a82a-43ce-85bb-856bc0abed0c\") " pod="kube-system/kube-proxy-9vrlw" Jan 17 00:31:37.108574 kubelet[2430]: I0117 00:31:37.107589 2430 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/184430d5-a82a-43ce-85bb-856bc0abed0c-xtables-lock\") pod \"kube-proxy-9vrlw\" (UID: \"184430d5-a82a-43ce-85bb-856bc0abed0c\") " pod="kube-system/kube-proxy-9vrlw" Jan 17 00:31:37.108574 kubelet[2430]: I0117 00:31:37.107611 2430 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/184430d5-a82a-43ce-85bb-856bc0abed0c-lib-modules\") pod \"kube-proxy-9vrlw\" (UID: \"184430d5-a82a-43ce-85bb-856bc0abed0c\") " pod="kube-system/kube-proxy-9vrlw" Jan 17 00:31:37.108574 kubelet[2430]: I0117 00:31:37.107634 2430 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e1deb6cc-dc9c-4146-8150-2f9f18bceaf5-lib-modules\") pod \"cilium-xxghc\" (UID: \"e1deb6cc-dc9c-4146-8150-2f9f18bceaf5\") " pod="kube-system/cilium-xxghc" Jan 17 00:31:37.108574 kubelet[2430]: I0117 00:31:37.107659 2430 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e1deb6cc-dc9c-4146-8150-2f9f18bceaf5-host-proc-sys-net\") pod \"cilium-xxghc\" (UID: \"e1deb6cc-dc9c-4146-8150-2f9f18bceaf5\") " pod="kube-system/cilium-xxghc" Jan 17 00:31:37.108772 kubelet[2430]: I0117 00:31:37.107687 2430 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e1deb6cc-dc9c-4146-8150-2f9f18bceaf5-host-proc-sys-kernel\") pod \"cilium-xxghc\" (UID: \"e1deb6cc-dc9c-4146-8150-2f9f18bceaf5\") " pod="kube-system/cilium-xxghc" Jan 17 00:31:37.110639 systemd[1]: Created slice kubepods-burstable-pode1deb6cc_dc9c_4146_8150_2f9f18bceaf5.slice - libcontainer container 
kubepods-burstable-pode1deb6cc_dc9c_4146_8150_2f9f18bceaf5.slice. Jan 17 00:31:37.408726 containerd[1973]: time="2026-01-17T00:31:37.408677113Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9vrlw,Uid:184430d5-a82a-43ce-85bb-856bc0abed0c,Namespace:kube-system,Attempt:0,}" Jan 17 00:31:37.420298 containerd[1973]: time="2026-01-17T00:31:37.420248815Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xxghc,Uid:e1deb6cc-dc9c-4146-8150-2f9f18bceaf5,Namespace:kube-system,Attempt:0,}" Jan 17 00:31:37.959089 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount57096517.mount: Deactivated successfully. Jan 17 00:31:37.969596 containerd[1973]: time="2026-01-17T00:31:37.969528643Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:31:37.970733 containerd[1973]: time="2026-01-17T00:31:37.970685795Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:31:37.972010 containerd[1973]: time="2026-01-17T00:31:37.971973059Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 17 00:31:37.973071 containerd[1973]: time="2026-01-17T00:31:37.973035399Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 00:31:37.974814 containerd[1973]: time="2026-01-17T00:31:37.974764567Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:31:37.978563 containerd[1973]: time="2026-01-17T00:31:37.977382616Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:31:37.978563 containerd[1973]: time="2026-01-17T00:31:37.978258123Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 569.499052ms" Jan 17 00:31:37.980863 containerd[1973]: time="2026-01-17T00:31:37.980815814Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 560.472806ms" Jan 17 00:31:38.075108 kubelet[2430]: E0117 00:31:38.075040 2430 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:31:38.195358 containerd[1973]: time="2026-01-17T00:31:38.195036527Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:31:38.195358 containerd[1973]: time="2026-01-17T00:31:38.195110177Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:31:38.195358 containerd[1973]: time="2026-01-17T00:31:38.195144873Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:31:38.195358 containerd[1973]: time="2026-01-17T00:31:38.195250920Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:31:38.195358 containerd[1973]: time="2026-01-17T00:31:38.193767347Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:31:38.195358 containerd[1973]: time="2026-01-17T00:31:38.193851024Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:31:38.195358 containerd[1973]: time="2026-01-17T00:31:38.193875104Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:31:38.195358 containerd[1973]: time="2026-01-17T00:31:38.194509085Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:31:38.306945 systemd[1]: run-containerd-runc-k8s.io-2d28139164eb7c8da73f6595b264a0ec46a23be98e5f1f54a949b85cbbe43e69-runc.oNI9AE.mount: Deactivated successfully. Jan 17 00:31:38.322427 systemd[1]: Started cri-containerd-2d28139164eb7c8da73f6595b264a0ec46a23be98e5f1f54a949b85cbbe43e69.scope - libcontainer container 2d28139164eb7c8da73f6595b264a0ec46a23be98e5f1f54a949b85cbbe43e69. Jan 17 00:31:38.328412 systemd[1]: Started cri-containerd-1c7ed507caee7d9454faeb161308cafb4d01b85b7fd3d2829dd728afe512510d.scope - libcontainer container 1c7ed507caee7d9454faeb161308cafb4d01b85b7fd3d2829dd728afe512510d. 
Jan 17 00:31:38.369845 containerd[1973]: time="2026-01-17T00:31:38.369711755Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xxghc,Uid:e1deb6cc-dc9c-4146-8150-2f9f18bceaf5,Namespace:kube-system,Attempt:0,} returns sandbox id \"1c7ed507caee7d9454faeb161308cafb4d01b85b7fd3d2829dd728afe512510d\"" Jan 17 00:31:38.372587 containerd[1973]: time="2026-01-17T00:31:38.372548027Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9vrlw,Uid:184430d5-a82a-43ce-85bb-856bc0abed0c,Namespace:kube-system,Attempt:0,} returns sandbox id \"2d28139164eb7c8da73f6595b264a0ec46a23be98e5f1f54a949b85cbbe43e69\"" Jan 17 00:31:38.375869 containerd[1973]: time="2026-01-17T00:31:38.375829223Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 17 00:31:39.075446 kubelet[2430]: E0117 00:31:39.075384 2430 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:31:40.076037 kubelet[2430]: E0117 00:31:40.075959 2430 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:31:41.076305 kubelet[2430]: E0117 00:31:41.076256 2430 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:31:42.077515 kubelet[2430]: E0117 00:31:42.077406 2430 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:31:43.078233 kubelet[2430]: E0117 00:31:43.078194 2430 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:31:43.380248 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount354141337.mount: Deactivated successfully. 
Jan 17 00:31:44.078750 kubelet[2430]: E0117 00:31:44.078710 2430 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:31:45.079836 kubelet[2430]: E0117 00:31:45.079795 2430 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:31:45.737735 containerd[1973]: time="2026-01-17T00:31:45.737660658Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:31:45.738937 containerd[1973]: time="2026-01-17T00:31:45.738764968Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jan 17 00:31:45.740728 containerd[1973]: time="2026-01-17T00:31:45.740508271Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:31:45.741937 containerd[1973]: time="2026-01-17T00:31:45.741905702Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 7.366039901s" Jan 17 00:31:45.742019 containerd[1973]: time="2026-01-17T00:31:45.741940914Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jan 17 00:31:45.743827 containerd[1973]: time="2026-01-17T00:31:45.743798028Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\"" Jan 17 00:31:45.745337 containerd[1973]: time="2026-01-17T00:31:45.745306713Z" level=info msg="CreateContainer within sandbox \"1c7ed507caee7d9454faeb161308cafb4d01b85b7fd3d2829dd728afe512510d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 17 00:31:45.768571 containerd[1973]: time="2026-01-17T00:31:45.768516652Z" level=info msg="CreateContainer within sandbox \"1c7ed507caee7d9454faeb161308cafb4d01b85b7fd3d2829dd728afe512510d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"6c50203996fc2649deb1758b8a1e5ec4414491289de625528cb5a02a5a5695c1\"" Jan 17 00:31:45.769404 containerd[1973]: time="2026-01-17T00:31:45.769374497Z" level=info msg="StartContainer for \"6c50203996fc2649deb1758b8a1e5ec4414491289de625528cb5a02a5a5695c1\"" Jan 17 00:31:45.808436 systemd[1]: Started cri-containerd-6c50203996fc2649deb1758b8a1e5ec4414491289de625528cb5a02a5a5695c1.scope - libcontainer container 6c50203996fc2649deb1758b8a1e5ec4414491289de625528cb5a02a5a5695c1. Jan 17 00:31:45.838511 containerd[1973]: time="2026-01-17T00:31:45.838447077Z" level=info msg="StartContainer for \"6c50203996fc2649deb1758b8a1e5ec4414491289de625528cb5a02a5a5695c1\" returns successfully" Jan 17 00:31:45.851835 systemd[1]: cri-containerd-6c50203996fc2649deb1758b8a1e5ec4414491289de625528cb5a02a5a5695c1.scope: Deactivated successfully. 
Jan 17 00:31:45.876407 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6c50203996fc2649deb1758b8a1e5ec4414491289de625528cb5a02a5a5695c1-rootfs.mount: Deactivated successfully. Jan 17 00:31:46.080536 kubelet[2430]: E0117 00:31:46.080488 2430 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:31:46.159503 containerd[1973]: time="2026-01-17T00:31:46.159370913Z" level=info msg="shim disconnected" id=6c50203996fc2649deb1758b8a1e5ec4414491289de625528cb5a02a5a5695c1 namespace=k8s.io Jan 17 00:31:46.159503 containerd[1973]: time="2026-01-17T00:31:46.159497209Z" level=warning msg="cleaning up after shim disconnected" id=6c50203996fc2649deb1758b8a1e5ec4414491289de625528cb5a02a5a5695c1 namespace=k8s.io Jan 17 00:31:46.159503 containerd[1973]: time="2026-01-17T00:31:46.159511316Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:31:46.251252 containerd[1973]: time="2026-01-17T00:31:46.251212418Z" level=info msg="CreateContainer within sandbox \"1c7ed507caee7d9454faeb161308cafb4d01b85b7fd3d2829dd728afe512510d\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 17 00:31:46.267733 containerd[1973]: time="2026-01-17T00:31:46.267686088Z" level=info msg="CreateContainer within sandbox \"1c7ed507caee7d9454faeb161308cafb4d01b85b7fd3d2829dd728afe512510d\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"c2b66c2291d4c18b95bd381b5bf3cf95e5bc793d3c9048f557ba49348d022240\"" Jan 17 00:31:46.269110 containerd[1973]: time="2026-01-17T00:31:46.268207096Z" level=info msg="StartContainer for \"c2b66c2291d4c18b95bd381b5bf3cf95e5bc793d3c9048f557ba49348d022240\"" Jan 17 00:31:46.299409 systemd[1]: Started cri-containerd-c2b66c2291d4c18b95bd381b5bf3cf95e5bc793d3c9048f557ba49348d022240.scope - libcontainer container c2b66c2291d4c18b95bd381b5bf3cf95e5bc793d3c9048f557ba49348d022240. Jan 17 00:31:46.340993 containerd[1973]: time="2026-01-17T00:31:46.340718703Z" level=info msg="StartContainer for \"c2b66c2291d4c18b95bd381b5bf3cf95e5bc793d3c9048f557ba49348d022240\" returns successfully" Jan 17 00:31:46.353793 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 17 00:31:46.354390 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 17 00:31:46.354463 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 17 00:31:46.361762 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 17 00:31:46.362102 systemd[1]: cri-containerd-c2b66c2291d4c18b95bd381b5bf3cf95e5bc793d3c9048f557ba49348d022240.scope: Deactivated successfully. Jan 17 00:31:46.395041 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jan 17 00:31:46.404542 containerd[1973]: time="2026-01-17T00:31:46.404141343Z" level=info msg="shim disconnected" id=c2b66c2291d4c18b95bd381b5bf3cf95e5bc793d3c9048f557ba49348d022240 namespace=k8s.io Jan 17 00:31:46.404542 containerd[1973]: time="2026-01-17T00:31:46.404208855Z" level=warning msg="cleaning up after shim disconnected" id=c2b66c2291d4c18b95bd381b5bf3cf95e5bc793d3c9048f557ba49348d022240 namespace=k8s.io Jan 17 00:31:46.404542 containerd[1973]: time="2026-01-17T00:31:46.404218604Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:31:46.418269 containerd[1973]: time="2026-01-17T00:31:46.418210139Z" level=warning msg="cleanup warnings time=\"2026-01-17T00:31:46Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 17 00:31:47.080946 kubelet[2430]: E0117 00:31:47.080902 2430 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:31:47.121654 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1486598740.mount: Deactivated successfully. Jan 17 00:31:47.258578 containerd[1973]: time="2026-01-17T00:31:47.258533186Z" level=info msg="CreateContainer within sandbox \"1c7ed507caee7d9454faeb161308cafb4d01b85b7fd3d2829dd728afe512510d\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 17 00:31:47.289918 containerd[1973]: time="2026-01-17T00:31:47.289692452Z" level=info msg="CreateContainer within sandbox \"1c7ed507caee7d9454faeb161308cafb4d01b85b7fd3d2829dd728afe512510d\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"bf5f2a39120c428f4775449c063b87849c1d52ca79352e68359838bf6e46d420\"" Jan 17 00:31:47.291909 containerd[1973]: time="2026-01-17T00:31:47.290765025Z" level=info msg="StartContainer for \"bf5f2a39120c428f4775449c063b87849c1d52ca79352e68359838bf6e46d420\"" Jan 17 00:31:47.349566 systemd[1]: Started cri-containerd-bf5f2a39120c428f4775449c063b87849c1d52ca79352e68359838bf6e46d420.scope - libcontainer container bf5f2a39120c428f4775449c063b87849c1d52ca79352e68359838bf6e46d420. Jan 17 00:31:47.400206 systemd[1]: cri-containerd-bf5f2a39120c428f4775449c063b87849c1d52ca79352e68359838bf6e46d420.scope: Deactivated successfully. Jan 17 00:31:47.400841 containerd[1973]: time="2026-01-17T00:31:47.400487857Z" level=info msg="StartContainer for \"bf5f2a39120c428f4775449c063b87849c1d52ca79352e68359838bf6e46d420\" returns successfully" Jan 17 00:31:47.551097 containerd[1973]: time="2026-01-17T00:31:47.551033992Z" level=info msg="shim disconnected" id=bf5f2a39120c428f4775449c063b87849c1d52ca79352e68359838bf6e46d420 namespace=k8s.io Jan 17 00:31:47.551097 containerd[1973]: time="2026-01-17T00:31:47.551087853Z" level=warning msg="cleaning up after shim disconnected" id=bf5f2a39120c428f4775449c063b87849c1d52ca79352e68359838bf6e46d420 namespace=k8s.io Jan 17 00:31:47.551097 containerd[1973]: time="2026-01-17T00:31:47.551096334Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:31:47.757734 systemd[1]: run-containerd-runc-k8s.io-bf5f2a39120c428f4775449c063b87849c1d52ca79352e68359838bf6e46d420-runc.gFfBhw.mount: Deactivated successfully. Jan 17 00:31:47.757878 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bf5f2a39120c428f4775449c063b87849c1d52ca79352e68359838bf6e46d420-rootfs.mount: Deactivated successfully. 
Jan 17 00:31:47.866854 containerd[1973]: time="2026-01-17T00:31:47.866772331Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:31:47.867907 containerd[1973]: time="2026-01-17T00:31:47.867742758Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=31161899" Jan 17 00:31:47.870288 containerd[1973]: time="2026-01-17T00:31:47.869640157Z" level=info msg="ImageCreate event name:\"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:31:47.872208 containerd[1973]: time="2026-01-17T00:31:47.872152831Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:31:47.872743 containerd[1973]: time="2026-01-17T00:31:47.872712584Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"31160918\" in 2.128879265s" Jan 17 00:31:47.872808 containerd[1973]: time="2026-01-17T00:31:47.872747536Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\"" Jan 17 00:31:47.875539 containerd[1973]: time="2026-01-17T00:31:47.875496015Z" level=info msg="CreateContainer within sandbox \"2d28139164eb7c8da73f6595b264a0ec46a23be98e5f1f54a949b85cbbe43e69\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 17 00:31:47.908712 containerd[1973]: time="2026-01-17T00:31:47.908459926Z" level=info msg="CreateContainer within sandbox \"2d28139164eb7c8da73f6595b264a0ec46a23be98e5f1f54a949b85cbbe43e69\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b7543c2c39fdd9a013eea1092d05a88ee40ff4ad2fa25eef21b8f0d83a0927fb\"" Jan 17 00:31:47.911413 containerd[1973]: time="2026-01-17T00:31:47.911330119Z" level=info msg="StartContainer for \"b7543c2c39fdd9a013eea1092d05a88ee40ff4ad2fa25eef21b8f0d83a0927fb\"" Jan 17 00:31:47.971734 systemd[1]: Started cri-containerd-b7543c2c39fdd9a013eea1092d05a88ee40ff4ad2fa25eef21b8f0d83a0927fb.scope - libcontainer container b7543c2c39fdd9a013eea1092d05a88ee40ff4ad2fa25eef21b8f0d83a0927fb. 
Jan 17 00:31:48.013742 containerd[1973]: time="2026-01-17T00:31:48.012215501Z" level=info msg="StartContainer for \"b7543c2c39fdd9a013eea1092d05a88ee40ff4ad2fa25eef21b8f0d83a0927fb\" returns successfully" Jan 17 00:31:48.081034 kubelet[2430]: E0117 00:31:48.080998 2430 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:31:48.272134 containerd[1973]: time="2026-01-17T00:31:48.271124346Z" level=info msg="CreateContainer within sandbox \"1c7ed507caee7d9454faeb161308cafb4d01b85b7fd3d2829dd728afe512510d\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 17 00:31:48.297925 containerd[1973]: time="2026-01-17T00:31:48.296258670Z" level=info msg="CreateContainer within sandbox \"1c7ed507caee7d9454faeb161308cafb4d01b85b7fd3d2829dd728afe512510d\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"9cb85e2b35f3ccbfae6f4436022e05425ad551be0f44bcf0f32557ed7762ed77\"" Jan 17 00:31:48.300914 containerd[1973]: time="2026-01-17T00:31:48.300694161Z" level=info msg="StartContainer for \"9cb85e2b35f3ccbfae6f4436022e05425ad551be0f44bcf0f32557ed7762ed77\"" Jan 17 00:31:48.347852 systemd[1]: Started cri-containerd-9cb85e2b35f3ccbfae6f4436022e05425ad551be0f44bcf0f32557ed7762ed77.scope - libcontainer container 9cb85e2b35f3ccbfae6f4436022e05425ad551be0f44bcf0f32557ed7762ed77. Jan 17 00:31:48.389595 systemd[1]: cri-containerd-9cb85e2b35f3ccbfae6f4436022e05425ad551be0f44bcf0f32557ed7762ed77.scope: Deactivated successfully. Jan 17 00:31:48.395176 containerd[1973]: time="2026-01-17T00:31:48.395064448Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode1deb6cc_dc9c_4146_8150_2f9f18bceaf5.slice/cri-containerd-9cb85e2b35f3ccbfae6f4436022e05425ad551be0f44bcf0f32557ed7762ed77.scope/memory.events\": no such file or directory" Jan 17 00:31:48.397145 containerd[1973]: time="2026-01-17T00:31:48.396893034Z" level=info msg="StartContainer for \"9cb85e2b35f3ccbfae6f4436022e05425ad551be0f44bcf0f32557ed7762ed77\" returns successfully" Jan 17 00:31:48.491319 containerd[1973]: time="2026-01-17T00:31:48.491254330Z" level=info msg="shim disconnected" id=9cb85e2b35f3ccbfae6f4436022e05425ad551be0f44bcf0f32557ed7762ed77 namespace=k8s.io Jan 17 00:31:48.491319 containerd[1973]: time="2026-01-17T00:31:48.491303990Z" level=warning msg="cleaning up after shim disconnected" id=9cb85e2b35f3ccbfae6f4436022e05425ad551be0f44bcf0f32557ed7762ed77 namespace=k8s.io Jan 17 00:31:48.491319 containerd[1973]: time="2026-01-17T00:31:48.491312976Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:31:48.756730 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount598029485.mount: Deactivated successfully. 
Jan 17 00:31:49.081882 kubelet[2430]: E0117 00:31:49.081650 2430 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:31:49.277618 containerd[1973]: time="2026-01-17T00:31:49.277578270Z" level=info msg="CreateContainer within sandbox \"1c7ed507caee7d9454faeb161308cafb4d01b85b7fd3d2829dd728afe512510d\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 17 00:31:49.299557 kubelet[2430]: I0117 00:31:49.298648 2430 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-9vrlw" podStartSLOduration=3.799802835 podStartE2EDuration="13.298632313s" podCreationTimestamp="2026-01-17 00:31:36 +0000 UTC" firstStartedPulling="2026-01-17 00:31:38.375308806 +0000 UTC m=+2.785420237" lastFinishedPulling="2026-01-17 00:31:47.874138295 +0000 UTC m=+12.284249715" observedRunningTime="2026-01-17 00:31:48.31375517 +0000 UTC m=+12.723866610" watchObservedRunningTime="2026-01-17 00:31:49.298632313 +0000 UTC m=+13.708743754" Jan 17 00:31:49.301058 containerd[1973]: time="2026-01-17T00:31:49.301013813Z" level=info msg="CreateContainer within sandbox \"1c7ed507caee7d9454faeb161308cafb4d01b85b7fd3d2829dd728afe512510d\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e17866f7fb39adea01765b50d068d151eab49e8d2d304e46351ac528e00ba2f9\"" Jan 17 00:31:49.301506 containerd[1973]: time="2026-01-17T00:31:49.301478188Z" level=info msg="StartContainer for \"e17866f7fb39adea01765b50d068d151eab49e8d2d304e46351ac528e00ba2f9\"" Jan 17 00:31:49.334125 systemd[1]: run-containerd-runc-k8s.io-e17866f7fb39adea01765b50d068d151eab49e8d2d304e46351ac528e00ba2f9-runc.yxsVUB.mount: Deactivated successfully. Jan 17 00:31:49.345474 systemd[1]: Started cri-containerd-e17866f7fb39adea01765b50d068d151eab49e8d2d304e46351ac528e00ba2f9.scope - libcontainer container e17866f7fb39adea01765b50d068d151eab49e8d2d304e46351ac528e00ba2f9. Jan 17 00:31:49.389786 containerd[1973]: time="2026-01-17T00:31:49.389742606Z" level=info msg="StartContainer for \"e17866f7fb39adea01765b50d068d151eab49e8d2d304e46351ac528e00ba2f9\" returns successfully" Jan 17 00:31:49.521470 kubelet[2430]: I0117 00:31:49.521290 2430 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 17 00:31:49.881736 kernel: Initializing XFRM netlink socket Jan 17 00:31:50.082979 kubelet[2430]: E0117 00:31:50.082922 2430 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:31:51.083590 kubelet[2430]: E0117 00:31:51.083507 2430 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:31:51.374605 kubelet[2430]: I0117 00:31:51.374463 2430 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-xxghc" podStartSLOduration=8.006338729 podStartE2EDuration="15.374444834s" podCreationTimestamp="2026-01-17 00:31:36 +0000 UTC" firstStartedPulling="2026-01-17 00:31:38.3753093 +0000 UTC m=+2.785420731" lastFinishedPulling="2026-01-17 00:31:45.743415404 +0000 UTC m=+10.153526836" observedRunningTime="2026-01-17 00:31:50.306135027 +0000 UTC m=+14.716246467" watchObservedRunningTime="2026-01-17 00:31:51.374444834 +0000 UTC m=+15.784556265" Jan 17 00:31:51.381098 systemd[1]: Created slice kubepods-besteffort-pod6b7a22bf_3978_449f_a7c7_7165c6f64a27.slice - libcontainer container kubepods-besteffort-pod6b7a22bf_3978_449f_a7c7_7165c6f64a27.slice. 
Jan 17 00:31:51.409981 kubelet[2430]: I0117 00:31:51.409934 2430 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prxqv\" (UniqueName: \"kubernetes.io/projected/6b7a22bf-3978-449f-a7c7-7165c6f64a27-kube-api-access-prxqv\") pod \"nginx-deployment-7fcdb87857-nc6vg\" (UID: \"6b7a22bf-3978-449f-a7c7-7165c6f64a27\") " pod="default/nginx-deployment-7fcdb87857-nc6vg" Jan 17 00:31:51.539922 systemd-networkd[1900]: cilium_host: Link UP Jan 17 00:31:51.540356 systemd-networkd[1900]: cilium_net: Link UP Jan 17 00:31:51.540361 systemd-networkd[1900]: cilium_net: Gained carrier Jan 17 00:31:51.540555 systemd-networkd[1900]: cilium_host: Gained carrier Jan 17 00:31:51.548762 (udev-worker)[3125]: Network interface NamePolicy= disabled on kernel command line. Jan 17 00:31:51.550930 (udev-worker)[2851]: Network interface NamePolicy= disabled on kernel command line. Jan 17 00:31:51.672559 (udev-worker)[3150]: Network interface NamePolicy= disabled on kernel command line. Jan 17 00:31:51.678081 systemd-networkd[1900]: cilium_vxlan: Link UP Jan 17 00:31:51.678678 systemd-networkd[1900]: cilium_vxlan: Gained carrier Jan 17 00:31:51.685044 containerd[1973]: time="2026-01-17T00:31:51.685002348Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-nc6vg,Uid:6b7a22bf-3978-449f-a7c7-7165c6f64a27,Namespace:default,Attempt:0,}" Jan 17 00:31:51.937201 kernel: NET: Registered PF_ALG protocol family Jan 17 00:31:51.996957 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jan 17 00:31:52.083916 kubelet[2430]: E0117 00:31:52.083859 2430 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:31:52.212836 systemd-networkd[1900]: cilium_host: Gained IPv6LL Jan 17 00:31:52.277532 systemd-networkd[1900]: cilium_net: Gained IPv6LL Jan 17 00:31:52.696665 systemd-networkd[1900]: lxc_health: Link UP Jan 17 00:31:52.705011 systemd-networkd[1900]: lxc_health: Gained carrier Jan 17 00:31:53.084717 kubelet[2430]: E0117 00:31:53.084648 2430 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:31:53.172603 systemd-networkd[1900]: cilium_vxlan: Gained IPv6LL Jan 17 00:31:53.264142 systemd-networkd[1900]: lxcdc71fe142527: Link UP Jan 17 00:31:53.281477 kernel: eth0: renamed from tmp38957 Jan 17 00:31:53.282825 systemd-networkd[1900]: lxcdc71fe142527: Gained carrier Jan 17 00:31:53.812827 systemd-networkd[1900]: lxc_health: Gained IPv6LL Jan 17 00:31:54.085664 kubelet[2430]: E0117 00:31:54.085339 2430 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:31:55.029318 systemd-networkd[1900]: lxcdc71fe142527: Gained IPv6LL Jan 17 00:31:55.085696 kubelet[2430]: E0117 00:31:55.085642 2430 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:31:56.072865 kubelet[2430]: E0117 00:31:56.072814 2430 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:31:56.086515 kubelet[2430]: E0117 00:31:56.086455 2430 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:31:57.087264 kubelet[2430]: E0117 00:31:57.087201 2430 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Jan 17 00:31:57.272828 ntpd[1957]: Listen normally on 8 cilium_host 192.168.1.95:123 Jan 17 00:31:57.273747 ntpd[1957]: 17 Jan 00:31:57 ntpd[1957]: Listen normally on 8 cilium_host 192.168.1.95:123 Jan 17 00:31:57.273747 ntpd[1957]: 17 Jan 00:31:57 ntpd[1957]: Listen normally on 9 cilium_net [fe80::492:c4ff:feab:7fe4%3]:123 Jan 17 00:31:57.273747 ntpd[1957]: 17 Jan 00:31:57 ntpd[1957]: Listen normally on 10 cilium_host [fe80::684a:ddff:fecf:cb47%4]:123 Jan 17 00:31:57.273747 ntpd[1957]: 17 Jan 00:31:57 ntpd[1957]: Listen normally on 11 cilium_vxlan [fe80::5444:48ff:fe30:74a5%5]:123 Jan 17 00:31:57.273747 ntpd[1957]: 17 Jan 00:31:57 ntpd[1957]: Listen normally on 12 lxc_health [fe80::f0da:a0ff:fe52:36e5%7]:123 Jan 17 00:31:57.273747 ntpd[1957]: 17 Jan 00:31:57 ntpd[1957]: Listen normally on 13 lxcdc71fe142527 [fe80::64c4:2aff:fe46:64a4%9]:123 Jan 17 00:31:57.272906 ntpd[1957]: Listen normally on 9 cilium_net [fe80::492:c4ff:feab:7fe4%3]:123 Jan 17 00:31:57.272951 ntpd[1957]: Listen normally on 10 cilium_host [fe80::684a:ddff:fecf:cb47%4]:123 Jan 17 00:31:57.272981 ntpd[1957]: Listen normally on 11 cilium_vxlan [fe80::5444:48ff:fe30:74a5%5]:123 Jan 17 00:31:57.273013 ntpd[1957]: Listen normally on 12 lxc_health [fe80::f0da:a0ff:fe52:36e5%7]:123 Jan 17 00:31:57.273043 ntpd[1957]: Listen normally on 13 lxcdc71fe142527 [fe80::64c4:2aff:fe46:64a4%9]:123 Jan 17 00:31:57.490099 containerd[1973]: time="2026-01-17T00:31:57.489172517Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:31:57.490649 containerd[1973]: time="2026-01-17T00:31:57.489266745Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:31:57.490649 containerd[1973]: time="2026-01-17T00:31:57.489288629Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:31:57.490649 containerd[1973]: time="2026-01-17T00:31:57.489396187Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:31:57.553590 systemd[1]: run-containerd-runc-k8s.io-38957ac9893fb0dfc398b08321c9dc6a6c3b8eb946c7bdf30843cbd2f3543b5a-runc.hssoIV.mount: Deactivated successfully. Jan 17 00:31:57.564489 systemd[1]: Started cri-containerd-38957ac9893fb0dfc398b08321c9dc6a6c3b8eb946c7bdf30843cbd2f3543b5a.scope - libcontainer container 38957ac9893fb0dfc398b08321c9dc6a6c3b8eb946c7bdf30843cbd2f3543b5a. 
Jan 17 00:31:57.611651 containerd[1973]: time="2026-01-17T00:31:57.611607400Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-nc6vg,Uid:6b7a22bf-3978-449f-a7c7-7165c6f64a27,Namespace:default,Attempt:0,} returns sandbox id \"38957ac9893fb0dfc398b08321c9dc6a6c3b8eb946c7bdf30843cbd2f3543b5a\"" Jan 17 00:31:57.613990 containerd[1973]: time="2026-01-17T00:31:57.613597330Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 17 00:31:58.088399 kubelet[2430]: E0117 00:31:58.088324 2430 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:31:59.089210 kubelet[2430]: E0117 00:31:59.088989 2430 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:32:00.090037 kubelet[2430]: E0117 00:32:00.089993 2430 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:32:00.283305 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2882456660.mount: Deactivated successfully. Jan 17 00:32:01.090987 kubelet[2430]: E0117 00:32:01.090873 2430 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:32:02.094590 kubelet[2430]: E0117 00:32:02.094489 2430 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:32:02.425676 containerd[1973]: time="2026-01-17T00:32:02.425288792Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:32:02.429278 containerd[1973]: time="2026-01-17T00:32:02.429216160Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=63840319" Jan 17 00:32:02.438234 containerd[1973]: time="2026-01-17T00:32:02.431034132Z" level=info msg="ImageCreate event name:\"sha256:66e7a6d326b670d959427bd421ca82232b0b81b18d85abaecb4ab9823d35056e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:32:02.452737 containerd[1973]: time="2026-01-17T00:32:02.451175437Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:892d1d54ab079b8cffa2317ccb45829886a0c3c3edbdf92bb286904b09797767\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:32:02.458290 containerd[1973]: time="2026-01-17T00:32:02.458023506Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:66e7a6d326b670d959427bd421ca82232b0b81b18d85abaecb4ab9823d35056e\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:892d1d54ab079b8cffa2317ccb45829886a0c3c3edbdf92bb286904b09797767\", size \"63840197\" in 4.844219334s" Jan 17 00:32:02.458290 containerd[1973]: time="2026-01-17T00:32:02.458093953Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:66e7a6d326b670d959427bd421ca82232b0b81b18d85abaecb4ab9823d35056e\"" Jan 17 00:32:02.480347 containerd[1973]: time="2026-01-17T00:32:02.480268172Z" level=info msg="CreateContainer within sandbox \"38957ac9893fb0dfc398b08321c9dc6a6c3b8eb946c7bdf30843cbd2f3543b5a\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Jan 17 00:32:02.529590 containerd[1973]: time="2026-01-17T00:32:02.529530770Z" level=info msg="CreateContainer within sandbox \"38957ac9893fb0dfc398b08321c9dc6a6c3b8eb946c7bdf30843cbd2f3543b5a\" for 
&ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"2348b47e3bc0fc6343d05f03abdf6bca608946521567a35325c90116038d8899\"" Jan 17 00:32:02.530984 containerd[1973]: time="2026-01-17T00:32:02.530833669Z" level=info msg="StartContainer for \"2348b47e3bc0fc6343d05f03abdf6bca608946521567a35325c90116038d8899\"" Jan 17 00:32:02.732481 systemd[1]: Started cri-containerd-2348b47e3bc0fc6343d05f03abdf6bca608946521567a35325c90116038d8899.scope - libcontainer container 2348b47e3bc0fc6343d05f03abdf6bca608946521567a35325c90116038d8899. Jan 17 00:32:02.855052 containerd[1973]: time="2026-01-17T00:32:02.854967330Z" level=info msg="StartContainer for \"2348b47e3bc0fc6343d05f03abdf6bca608946521567a35325c90116038d8899\" returns successfully" Jan 17 00:32:03.095551 kubelet[2430]: E0117 00:32:03.095506 2430 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:32:03.361923 kubelet[2430]: I0117 00:32:03.361738 2430 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-7fcdb87857-nc6vg" podStartSLOduration=7.506943193 podStartE2EDuration="12.361715504s" podCreationTimestamp="2026-01-17 00:31:51 +0000 UTC" firstStartedPulling="2026-01-17 00:31:57.612703685 +0000 UTC m=+22.022815105" lastFinishedPulling="2026-01-17 00:32:02.467475982 +0000 UTC m=+26.877587416" observedRunningTime="2026-01-17 00:32:03.361662537 +0000 UTC m=+27.771773980" watchObservedRunningTime="2026-01-17 00:32:03.361715504 +0000 UTC m=+27.771826944" Jan 17 00:32:04.095866 kubelet[2430]: E0117 00:32:04.095808 2430 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:32:05.096698 kubelet[2430]: E0117 00:32:05.096624 2430 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:32:06.063495 update_engine[1964]: I20260117 00:32:06.063390 1964 update_attempter.cc:509] Updating boot flags... Jan 17 00:32:06.097757 kubelet[2430]: E0117 00:32:06.097695 2430 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:32:06.122477 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 33 scanned by (udev-worker) (3632) Jan 17 00:32:06.332223 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 33 scanned by (udev-worker) (3636) Jan 17 00:32:06.527219 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 33 scanned by (udev-worker) (3636) Jan 17 00:32:07.097927 kubelet[2430]: E0117 00:32:07.097871 2430 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:32:07.813160 systemd[1]: Created slice kubepods-besteffort-pod8eced921_bf5f_495b_abdf_7162f901d7e0.slice - libcontainer container kubepods-besteffort-pod8eced921_bf5f_495b_abdf_7162f901d7e0.slice. 
Jan 17 00:32:07.901959 kubelet[2430]: I0117 00:32:07.901845 2430 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c76m6\" (UniqueName: \"kubernetes.io/projected/8eced921-bf5f-495b-abdf-7162f901d7e0-kube-api-access-c76m6\") pod \"nfs-server-provisioner-0\" (UID: \"8eced921-bf5f-495b-abdf-7162f901d7e0\") " pod="default/nfs-server-provisioner-0" Jan 17 00:32:07.905798 kubelet[2430]: I0117 00:32:07.902072 2430 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/8eced921-bf5f-495b-abdf-7162f901d7e0-data\") pod \"nfs-server-provisioner-0\" (UID: \"8eced921-bf5f-495b-abdf-7162f901d7e0\") " pod="default/nfs-server-provisioner-0" Jan 17 00:32:08.098926 kubelet[2430]: E0117 00:32:08.098793 2430 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:32:08.116974 containerd[1973]: time="2026-01-17T00:32:08.116936883Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:8eced921-bf5f-495b-abdf-7162f901d7e0,Namespace:default,Attempt:0,}" Jan 17 00:32:08.172996 (udev-worker)[3635]: Network interface NamePolicy= disabled on kernel command line. Jan 17 00:32:08.174294 systemd-networkd[1900]: lxcec4444ddce2d: Link UP Jan 17 00:32:08.182778 (udev-worker)[3637]: Network interface NamePolicy= disabled on kernel command line. Jan 17 00:32:08.185227 kernel: eth0: renamed from tmp157aa Jan 17 00:32:08.189444 systemd-networkd[1900]: lxcec4444ddce2d: Gained carrier Jan 17 00:32:08.410728 containerd[1973]: time="2026-01-17T00:32:08.410465140Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:32:08.410728 containerd[1973]: time="2026-01-17T00:32:08.410525106Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:32:08.410728 containerd[1973]: time="2026-01-17T00:32:08.410546558Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:32:08.411031 containerd[1973]: time="2026-01-17T00:32:08.410666420Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:32:08.435220 systemd[1]: run-containerd-runc-k8s.io-157aaa9b8fb6d3d0548fc78677b74817e86411c69173974ec367c879750010ba-runc.Yq6Hcg.mount: Deactivated successfully. Jan 17 00:32:08.446457 systemd[1]: Started cri-containerd-157aaa9b8fb6d3d0548fc78677b74817e86411c69173974ec367c879750010ba.scope - libcontainer container 157aaa9b8fb6d3d0548fc78677b74817e86411c69173974ec367c879750010ba. 
Jan 17 00:32:08.493062 containerd[1973]: time="2026-01-17T00:32:08.492682766Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:8eced921-bf5f-495b-abdf-7162f901d7e0,Namespace:default,Attempt:0,} returns sandbox id \"157aaa9b8fb6d3d0548fc78677b74817e86411c69173974ec367c879750010ba\"" Jan 17 00:32:08.495348 containerd[1973]: time="2026-01-17T00:32:08.495100901Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Jan 17 00:32:09.102207 kubelet[2430]: E0117 00:32:09.099881 2430 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:32:10.100746 kubelet[2430]: E0117 00:32:10.100145 2430 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:32:10.196388 systemd-networkd[1900]: lxcec4444ddce2d: Gained IPv6LL Jan 17 00:32:11.093769 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount175568751.mount: Deactivated successfully. Jan 17 00:32:11.101204 kubelet[2430]: E0117 00:32:11.100498 2430 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:32:12.101709 kubelet[2430]: E0117 00:32:12.101631 2430 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:32:12.273066 ntpd[1957]: Listen normally on 14 lxcec4444ddce2d [fe80::2423:b1ff:fec0:18b1%11]:123 Jan 17 00:32:12.273836 ntpd[1957]: 17 Jan 00:32:12 ntpd[1957]: Listen normally on 14 lxcec4444ddce2d [fe80::2423:b1ff:fec0:18b1%11]:123 Jan 17 00:32:13.101913 kubelet[2430]: E0117 00:32:13.101868 2430 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:32:13.192779 containerd[1973]: time="2026-01-17T00:32:13.192713223Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:32:13.194253 containerd[1973]: time="2026-01-17T00:32:13.194200042Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039406" Jan 17 00:32:13.197199 containerd[1973]: time="2026-01-17T00:32:13.195680448Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:32:13.198906 containerd[1973]: time="2026-01-17T00:32:13.198714443Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:32:13.199586 containerd[1973]: time="2026-01-17T00:32:13.199552757Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 4.70441713s" Jan 17 00:32:13.199657 containerd[1973]: time="2026-01-17T00:32:13.199589754Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference 
\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Jan 17 00:32:13.202572 containerd[1973]: time="2026-01-17T00:32:13.202524757Z" level=info msg="CreateContainer within sandbox \"157aaa9b8fb6d3d0548fc78677b74817e86411c69173974ec367c879750010ba\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Jan 17 00:32:13.219201 containerd[1973]: time="2026-01-17T00:32:13.219135877Z" level=info msg="CreateContainer within sandbox \"157aaa9b8fb6d3d0548fc78677b74817e86411c69173974ec367c879750010ba\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"c6d53c1b5750f85aa5803f9e5ee72d77135479bb2699d552e22f8d67c7f55334\"" Jan 17 00:32:13.219811 containerd[1973]: time="2026-01-17T00:32:13.219784277Z" level=info msg="StartContainer for \"c6d53c1b5750f85aa5803f9e5ee72d77135479bb2699d552e22f8d67c7f55334\"" Jan 17 00:32:13.258448 systemd[1]: Started cri-containerd-c6d53c1b5750f85aa5803f9e5ee72d77135479bb2699d552e22f8d67c7f55334.scope - libcontainer container c6d53c1b5750f85aa5803f9e5ee72d77135479bb2699d552e22f8d67c7f55334. Jan 17 00:32:13.287544 containerd[1973]: time="2026-01-17T00:32:13.287497375Z" level=info msg="StartContainer for \"c6d53c1b5750f85aa5803f9e5ee72d77135479bb2699d552e22f8d67c7f55334\" returns successfully" Jan 17 00:32:13.372347 kubelet[2430]: I0117 00:32:13.371842 2430 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.665459059 podStartE2EDuration="6.371823212s" podCreationTimestamp="2026-01-17 00:32:07 +0000 UTC" firstStartedPulling="2026-01-17 00:32:08.494605689 +0000 UTC m=+32.904717111" lastFinishedPulling="2026-01-17 00:32:13.200969844 +0000 UTC m=+37.611081264" observedRunningTime="2026-01-17 00:32:13.371151848 +0000 UTC m=+37.781263284" watchObservedRunningTime="2026-01-17 00:32:13.371823212 +0000 UTC m=+37.781934653" Jan 17 00:32:14.102755 kubelet[2430]: E0117 00:32:14.102681 2430 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:32:15.103599 kubelet[2430]: E0117 00:32:15.103541 2430 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:32:16.073373 kubelet[2430]: E0117 00:32:16.073307 2430 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:32:16.104449 kubelet[2430]: E0117 00:32:16.104379 2430 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:32:17.105201 kubelet[2430]: E0117 00:32:17.105121 2430 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:32:18.106056 kubelet[2430]: E0117 00:32:18.105997 2430 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:32:19.106825 kubelet[2430]: E0117 00:32:19.106764 2430 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:32:20.107155 kubelet[2430]: E0117 00:32:20.107097 2430 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:32:21.108328 kubelet[2430]: E0117 00:32:21.108275 2430 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 
00:32:22.108797 kubelet[2430]: E0117 00:32:22.108713 2430 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:32:22.976286 systemd[1]: Created slice kubepods-besteffort-pod8a4e7a68_3e41_4f04_be8a_67223425eb14.slice - libcontainer container kubepods-besteffort-pod8a4e7a68_3e41_4f04_be8a_67223425eb14.slice. Jan 17 00:32:23.108927 kubelet[2430]: E0117 00:32:23.108835 2430 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:32:23.108927 kubelet[2430]: I0117 00:32:23.108838 2430 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stq2c\" (UniqueName: \"kubernetes.io/projected/8a4e7a68-3e41-4f04-be8a-67223425eb14-kube-api-access-stq2c\") pod \"test-pod-1\" (UID: \"8a4e7a68-3e41-4f04-be8a-67223425eb14\") " pod="default/test-pod-1" Jan 17 00:32:23.109475 kubelet[2430]: I0117 00:32:23.108949 2430 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-808cad7e-51b7-4c31-acca-8fd1c9835bc0\" (UniqueName: \"kubernetes.io/nfs/8a4e7a68-3e41-4f04-be8a-67223425eb14-pvc-808cad7e-51b7-4c31-acca-8fd1c9835bc0\") pod \"test-pod-1\" (UID: \"8a4e7a68-3e41-4f04-be8a-67223425eb14\") " pod="default/test-pod-1" Jan 17 00:32:23.276211 kernel: FS-Cache: Loaded Jan 17 00:32:23.360750 kernel: RPC: Registered named UNIX socket transport module. Jan 17 00:32:23.360888 kernel: RPC: Registered udp transport module. Jan 17 00:32:23.360937 kernel: RPC: Registered tcp transport module. Jan 17 00:32:23.361501 kernel: RPC: Registered tcp-with-tls transport module. Jan 17 00:32:23.362489 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Jan 17 00:32:23.662612 kernel: NFS: Registering the id_resolver key type Jan 17 00:32:23.662825 kernel: Key type id_resolver registered Jan 17 00:32:23.662866 kernel: Key type id_legacy registered Jan 17 00:32:23.714162 nfsidmap[4065]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal' Jan 17 00:32:23.718649 nfsidmap[4066]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal' Jan 17 00:32:23.881132 containerd[1973]: time="2026-01-17T00:32:23.881045232Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:8a4e7a68-3e41-4f04-be8a-67223425eb14,Namespace:default,Attempt:0,}" Jan 17 00:32:23.923243 systemd-networkd[1900]: lxc933eda24661d: Link UP Jan 17 00:32:23.927464 (udev-worker)[4062]: Network interface NamePolicy= disabled on kernel command line. Jan 17 00:32:23.930354 kernel: eth0: renamed from tmp18dfd Jan 17 00:32:23.935121 systemd-networkd[1900]: lxc933eda24661d: Gained carrier Jan 17 00:32:24.109677 kubelet[2430]: E0117 00:32:24.109627 2430 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:32:24.117241 containerd[1973]: time="2026-01-17T00:32:24.115888802Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:32:24.117241 containerd[1973]: time="2026-01-17T00:32:24.115940258Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:32:24.117241 containerd[1973]: time="2026-01-17T00:32:24.115951579Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:32:24.117241 containerd[1973]: time="2026-01-17T00:32:24.116030149Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:32:24.134459 systemd[1]: Started cri-containerd-18dfdebec2f59e99f00742eeb2a0f250fbfa5cae12c14cf1c8e07368da599635.scope - libcontainer container 18dfdebec2f59e99f00742eeb2a0f250fbfa5cae12c14cf1c8e07368da599635. Jan 17 00:32:24.182779 containerd[1973]: time="2026-01-17T00:32:24.182723796Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:8a4e7a68-3e41-4f04-be8a-67223425eb14,Namespace:default,Attempt:0,} returns sandbox id \"18dfdebec2f59e99f00742eeb2a0f250fbfa5cae12c14cf1c8e07368da599635\"" Jan 17 00:32:24.184785 containerd[1973]: time="2026-01-17T00:32:24.184702723Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 17 00:32:24.505365 containerd[1973]: time="2026-01-17T00:32:24.505239066Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:32:24.506873 containerd[1973]: time="2026-01-17T00:32:24.506281803Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Jan 17 00:32:24.509231 containerd[1973]: time="2026-01-17T00:32:24.509193717Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:66e7a6d326b670d959427bd421ca82232b0b81b18d85abaecb4ab9823d35056e\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:892d1d54ab079b8cffa2317ccb45829886a0c3c3edbdf92bb286904b09797767\", size \"63840197\" in 324.330601ms" Jan 17 00:32:24.509231 containerd[1973]: time="2026-01-17T00:32:24.509229103Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:66e7a6d326b670d959427bd421ca82232b0b81b18d85abaecb4ab9823d35056e\"" Jan 17 00:32:24.511027 containerd[1973]: time="2026-01-17T00:32:24.510982009Z" level=info msg="CreateContainer within sandbox \"18dfdebec2f59e99f00742eeb2a0f250fbfa5cae12c14cf1c8e07368da599635\" for container &ContainerMetadata{Name:test,Attempt:0,}" Jan 17 00:32:24.531793 containerd[1973]: time="2026-01-17T00:32:24.531724313Z" level=info msg="CreateContainer within sandbox \"18dfdebec2f59e99f00742eeb2a0f250fbfa5cae12c14cf1c8e07368da599635\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"3a5695f077dac42e7fb0902e281ba7aed715311f18b66e3de74137a1b28fe763\"" Jan 17 00:32:24.532266 containerd[1973]: time="2026-01-17T00:32:24.532238153Z" level=info msg="StartContainer for \"3a5695f077dac42e7fb0902e281ba7aed715311f18b66e3de74137a1b28fe763\"" Jan 17 00:32:24.620385 systemd[1]: Started cri-containerd-3a5695f077dac42e7fb0902e281ba7aed715311f18b66e3de74137a1b28fe763.scope - libcontainer container 3a5695f077dac42e7fb0902e281ba7aed715311f18b66e3de74137a1b28fe763. 
Jan 17 00:32:24.649148 containerd[1973]: time="2026-01-17T00:32:24.649102010Z" level=info msg="StartContainer for \"3a5695f077dac42e7fb0902e281ba7aed715311f18b66e3de74137a1b28fe763\" returns successfully" Jan 17 00:32:25.110797 kubelet[2430]: E0117 00:32:25.110742 2430 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:32:25.395251 kubelet[2430]: I0117 00:32:25.394990 2430 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=16.069389434 podStartE2EDuration="16.394972509s" podCreationTimestamp="2026-01-17 00:32:09 +0000 UTC" firstStartedPulling="2026-01-17 00:32:24.184248956 +0000 UTC m=+48.594360376" lastFinishedPulling="2026-01-17 00:32:24.50983203 +0000 UTC m=+48.919943451" observedRunningTime="2026-01-17 00:32:25.394465286 +0000 UTC m=+49.804576707" watchObservedRunningTime="2026-01-17 00:32:25.394972509 +0000 UTC m=+49.805083949" Jan 17 00:32:25.492479 systemd-networkd[1900]: lxc933eda24661d: Gained IPv6LL Jan 17 00:32:26.111642 kubelet[2430]: E0117 00:32:26.111588 2430 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:32:27.111792 kubelet[2430]: E0117 00:32:27.111726 2430 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:32:28.113210 kubelet[2430]: E0117 00:32:28.113080 2430 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:32:28.272895 ntpd[1957]: Listen normally on 15 lxc933eda24661d [fe80::6ceb:feff:fec3:5914%13]:123 Jan 17 00:32:28.273617 ntpd[1957]: 17 Jan 00:32:28 ntpd[1957]: Listen normally on 15 lxc933eda24661d [fe80::6ceb:feff:fec3:5914%13]:123 Jan 17 00:32:29.113880 kubelet[2430]: E0117 00:32:29.113800 2430 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:32:30.115081 kubelet[2430]: E0117 00:32:30.114981 2430 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:32:30.507548 systemd[1]: run-containerd-runc-k8s.io-e17866f7fb39adea01765b50d068d151eab49e8d2d304e46351ac528e00ba2f9-runc.s8XTox.mount: Deactivated successfully. Jan 17 00:32:30.529410 containerd[1973]: time="2026-01-17T00:32:30.529349938Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 17 00:32:30.537968 containerd[1973]: time="2026-01-17T00:32:30.537933731Z" level=info msg="StopContainer for \"e17866f7fb39adea01765b50d068d151eab49e8d2d304e46351ac528e00ba2f9\" with timeout 2 (s)" Jan 17 00:32:30.538444 containerd[1973]: time="2026-01-17T00:32:30.538415333Z" level=info msg="Stop container \"e17866f7fb39adea01765b50d068d151eab49e8d2d304e46351ac528e00ba2f9\" with signal terminated" Jan 17 00:32:30.546414 systemd-networkd[1900]: lxc_health: Link DOWN Jan 17 00:32:30.546423 systemd-networkd[1900]: lxc_health: Lost carrier Jan 17 00:32:30.562479 systemd[1]: cri-containerd-e17866f7fb39adea01765b50d068d151eab49e8d2d304e46351ac528e00ba2f9.scope: Deactivated successfully. 
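The pod_startup_latency_tracker entry just above for default/test-pod-1 reports both a podStartE2EDuration and a smaller podStartSLOduration next to the raw timestamps, and the figures are mutually consistent: the E2E value is watchObservedRunningTime minus podCreationTimestamp, and the SLO value is that span minus the image-pull window (lastFinishedPulling minus firstStartedPulling). A minimal Python check using only numbers copied from the entry (the relationship is read off these values, not taken from kubelet source):

# Seconds after 00:32:00 UTC, copied from the pod_startup_latency_tracker
# entry for default/test-pod-1 above.
created               = 9.0            # podCreationTimestamp 2026-01-17 00:32:09
first_started_pulling = 24.184248956   # firstStartedPulling
last_finished_pulling = 24.50983203    # lastFinishedPulling
observed_running      = 25.394972509   # watchObservedRunningTime

e2e  = observed_running - created                      # 16.394972509s, as logged
pull = last_finished_pulling - first_started_pulling   # time spent pulling the nginx image
slo  = e2e - pull                                      # ~16.069389434, as logged

print(f"E2E {e2e:.9f}s  pull {pull:.9f}s  SLO {slo:.9f}s")

The earlier nfs-server-provisioner-0 entry and the later cilium-operator entry satisfy the same relationship to within a few nanoseconds of print rounding.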
Jan 17 00:32:30.562789 systemd[1]: cri-containerd-e17866f7fb39adea01765b50d068d151eab49e8d2d304e46351ac528e00ba2f9.scope: Consumed 7.800s CPU time. Jan 17 00:32:30.587369 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e17866f7fb39adea01765b50d068d151eab49e8d2d304e46351ac528e00ba2f9-rootfs.mount: Deactivated successfully. Jan 17 00:32:30.604967 containerd[1973]: time="2026-01-17T00:32:30.604906410Z" level=info msg="shim disconnected" id=e17866f7fb39adea01765b50d068d151eab49e8d2d304e46351ac528e00ba2f9 namespace=k8s.io Jan 17 00:32:30.604967 containerd[1973]: time="2026-01-17T00:32:30.604959804Z" level=warning msg="cleaning up after shim disconnected" id=e17866f7fb39adea01765b50d068d151eab49e8d2d304e46351ac528e00ba2f9 namespace=k8s.io Jan 17 00:32:30.604967 containerd[1973]: time="2026-01-17T00:32:30.604972320Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:32:30.622635 containerd[1973]: time="2026-01-17T00:32:30.622593171Z" level=info msg="StopContainer for \"e17866f7fb39adea01765b50d068d151eab49e8d2d304e46351ac528e00ba2f9\" returns successfully" Jan 17 00:32:30.623372 containerd[1973]: time="2026-01-17T00:32:30.623316243Z" level=info msg="StopPodSandbox for \"1c7ed507caee7d9454faeb161308cafb4d01b85b7fd3d2829dd728afe512510d\"" Jan 17 00:32:30.623372 containerd[1973]: time="2026-01-17T00:32:30.623354659Z" level=info msg="Container to stop \"9cb85e2b35f3ccbfae6f4436022e05425ad551be0f44bcf0f32557ed7762ed77\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 00:32:30.623372 containerd[1973]: time="2026-01-17T00:32:30.623368263Z" level=info msg="Container to stop \"e17866f7fb39adea01765b50d068d151eab49e8d2d304e46351ac528e00ba2f9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 00:32:30.623372 containerd[1973]: time="2026-01-17T00:32:30.623377822Z" level=info msg="Container to stop \"6c50203996fc2649deb1758b8a1e5ec4414491289de625528cb5a02a5a5695c1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 00:32:30.625374 containerd[1973]: time="2026-01-17T00:32:30.623386910Z" level=info msg="Container to stop \"c2b66c2291d4c18b95bd381b5bf3cf95e5bc793d3c9048f557ba49348d022240\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 00:32:30.625374 containerd[1973]: time="2026-01-17T00:32:30.623396092Z" level=info msg="Container to stop \"bf5f2a39120c428f4775449c063b87849c1d52ca79352e68359838bf6e46d420\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 00:32:30.625628 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1c7ed507caee7d9454faeb161308cafb4d01b85b7fd3d2829dd728afe512510d-shm.mount: Deactivated successfully. Jan 17 00:32:30.631755 systemd[1]: cri-containerd-1c7ed507caee7d9454faeb161308cafb4d01b85b7fd3d2829dd728afe512510d.scope: Deactivated successfully. Jan 17 00:32:30.652880 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1c7ed507caee7d9454faeb161308cafb4d01b85b7fd3d2829dd728afe512510d-rootfs.mount: Deactivated successfully. 
Jan 17 00:32:30.661602 containerd[1973]: time="2026-01-17T00:32:30.661531186Z" level=info msg="shim disconnected" id=1c7ed507caee7d9454faeb161308cafb4d01b85b7fd3d2829dd728afe512510d namespace=k8s.io Jan 17 00:32:30.661602 containerd[1973]: time="2026-01-17T00:32:30.661586158Z" level=warning msg="cleaning up after shim disconnected" id=1c7ed507caee7d9454faeb161308cafb4d01b85b7fd3d2829dd728afe512510d namespace=k8s.io Jan 17 00:32:30.661602 containerd[1973]: time="2026-01-17T00:32:30.661595551Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:32:30.679962 containerd[1973]: time="2026-01-17T00:32:30.679789164Z" level=info msg="TearDown network for sandbox \"1c7ed507caee7d9454faeb161308cafb4d01b85b7fd3d2829dd728afe512510d\" successfully" Jan 17 00:32:30.679962 containerd[1973]: time="2026-01-17T00:32:30.679834831Z" level=info msg="StopPodSandbox for \"1c7ed507caee7d9454faeb161308cafb4d01b85b7fd3d2829dd728afe512510d\" returns successfully" Jan 17 00:32:30.760841 kubelet[2430]: I0117 00:32:30.760135 2430 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e1deb6cc-dc9c-4146-8150-2f9f18bceaf5-cilium-run\") pod \"e1deb6cc-dc9c-4146-8150-2f9f18bceaf5\" (UID: \"e1deb6cc-dc9c-4146-8150-2f9f18bceaf5\") " Jan 17 00:32:30.760841 kubelet[2430]: I0117 00:32:30.760173 2430 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e1deb6cc-dc9c-4146-8150-2f9f18bceaf5-cilium-cgroup\") pod \"e1deb6cc-dc9c-4146-8150-2f9f18bceaf5\" (UID: \"e1deb6cc-dc9c-4146-8150-2f9f18bceaf5\") " Jan 17 00:32:30.760841 kubelet[2430]: I0117 00:32:30.760223 2430 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e1deb6cc-dc9c-4146-8150-2f9f18bceaf5-hubble-tls\") pod \"e1deb6cc-dc9c-4146-8150-2f9f18bceaf5\" (UID: \"e1deb6cc-dc9c-4146-8150-2f9f18bceaf5\") " Jan 17 00:32:30.760841 kubelet[2430]: I0117 00:32:30.760241 2430 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e1deb6cc-dc9c-4146-8150-2f9f18bceaf5-host-proc-sys-net\") pod \"e1deb6cc-dc9c-4146-8150-2f9f18bceaf5\" (UID: \"e1deb6cc-dc9c-4146-8150-2f9f18bceaf5\") " Jan 17 00:32:30.760841 kubelet[2430]: I0117 00:32:30.760243 2430 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e1deb6cc-dc9c-4146-8150-2f9f18bceaf5-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "e1deb6cc-dc9c-4146-8150-2f9f18bceaf5" (UID: "e1deb6cc-dc9c-4146-8150-2f9f18bceaf5"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 17 00:32:30.760841 kubelet[2430]: I0117 00:32:30.760258 2430 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e1deb6cc-dc9c-4146-8150-2f9f18bceaf5-hostproc\") pod \"e1deb6cc-dc9c-4146-8150-2f9f18bceaf5\" (UID: \"e1deb6cc-dc9c-4146-8150-2f9f18bceaf5\") " Jan 17 00:32:30.761096 kubelet[2430]: I0117 00:32:30.760324 2430 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5stqx\" (UniqueName: \"kubernetes.io/projected/e1deb6cc-dc9c-4146-8150-2f9f18bceaf5-kube-api-access-5stqx\") pod \"e1deb6cc-dc9c-4146-8150-2f9f18bceaf5\" (UID: \"e1deb6cc-dc9c-4146-8150-2f9f18bceaf5\") " Jan 17 00:32:30.761096 kubelet[2430]: I0117 00:32:30.760341 2430 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e1deb6cc-dc9c-4146-8150-2f9f18bceaf5-xtables-lock\") pod \"e1deb6cc-dc9c-4146-8150-2f9f18bceaf5\" (UID: \"e1deb6cc-dc9c-4146-8150-2f9f18bceaf5\") " Jan 17 00:32:30.761096 kubelet[2430]: I0117 00:32:30.760357 2430 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e1deb6cc-dc9c-4146-8150-2f9f18bceaf5-host-proc-sys-kernel\") pod \"e1deb6cc-dc9c-4146-8150-2f9f18bceaf5\" (UID: \"e1deb6cc-dc9c-4146-8150-2f9f18bceaf5\") " Jan 17 00:32:30.761096 kubelet[2430]: I0117 00:32:30.760376 2430 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e1deb6cc-dc9c-4146-8150-2f9f18bceaf5-clustermesh-secrets\") pod \"e1deb6cc-dc9c-4146-8150-2f9f18bceaf5\" (UID: \"e1deb6cc-dc9c-4146-8150-2f9f18bceaf5\") " Jan 17 00:32:30.761096 kubelet[2430]: I0117 00:32:30.760390 2430 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e1deb6cc-dc9c-4146-8150-2f9f18bceaf5-bpf-maps\") pod \"e1deb6cc-dc9c-4146-8150-2f9f18bceaf5\" (UID: \"e1deb6cc-dc9c-4146-8150-2f9f18bceaf5\") " Jan 17 00:32:30.761096 kubelet[2430]: I0117 00:32:30.760409 2430 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e1deb6cc-dc9c-4146-8150-2f9f18bceaf5-cilium-config-path\") pod \"e1deb6cc-dc9c-4146-8150-2f9f18bceaf5\" (UID: \"e1deb6cc-dc9c-4146-8150-2f9f18bceaf5\") " Jan 17 00:32:30.761274 kubelet[2430]: I0117 00:32:30.760425 2430 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e1deb6cc-dc9c-4146-8150-2f9f18bceaf5-etc-cni-netd\") pod \"e1deb6cc-dc9c-4146-8150-2f9f18bceaf5\" (UID: \"e1deb6cc-dc9c-4146-8150-2f9f18bceaf5\") " Jan 17 00:32:30.761274 kubelet[2430]: I0117 00:32:30.760447 2430 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e1deb6cc-dc9c-4146-8150-2f9f18bceaf5-cni-path\") pod \"e1deb6cc-dc9c-4146-8150-2f9f18bceaf5\" (UID: \"e1deb6cc-dc9c-4146-8150-2f9f18bceaf5\") " Jan 17 00:32:30.761274 kubelet[2430]: I0117 00:32:30.760469 2430 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e1deb6cc-dc9c-4146-8150-2f9f18bceaf5-lib-modules\") pod \"e1deb6cc-dc9c-4146-8150-2f9f18bceaf5\" (UID: \"e1deb6cc-dc9c-4146-8150-2f9f18bceaf5\") 
" Jan 17 00:32:30.761274 kubelet[2430]: I0117 00:32:30.760505 2430 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e1deb6cc-dc9c-4146-8150-2f9f18bceaf5-cilium-run\") on node \"172.31.24.155\" DevicePath \"\"" Jan 17 00:32:30.761274 kubelet[2430]: I0117 00:32:30.760283 2430 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e1deb6cc-dc9c-4146-8150-2f9f18bceaf5-hostproc" (OuterVolumeSpecName: "hostproc") pod "e1deb6cc-dc9c-4146-8150-2f9f18bceaf5" (UID: "e1deb6cc-dc9c-4146-8150-2f9f18bceaf5"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 17 00:32:30.761274 kubelet[2430]: I0117 00:32:30.760297 2430 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e1deb6cc-dc9c-4146-8150-2f9f18bceaf5-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "e1deb6cc-dc9c-4146-8150-2f9f18bceaf5" (UID: "e1deb6cc-dc9c-4146-8150-2f9f18bceaf5"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 17 00:32:30.761431 kubelet[2430]: I0117 00:32:30.760525 2430 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e1deb6cc-dc9c-4146-8150-2f9f18bceaf5-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "e1deb6cc-dc9c-4146-8150-2f9f18bceaf5" (UID: "e1deb6cc-dc9c-4146-8150-2f9f18bceaf5"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 17 00:32:30.763203 kubelet[2430]: I0117 00:32:30.762828 2430 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e1deb6cc-dc9c-4146-8150-2f9f18bceaf5-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "e1deb6cc-dc9c-4146-8150-2f9f18bceaf5" (UID: "e1deb6cc-dc9c-4146-8150-2f9f18bceaf5"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 17 00:32:30.763203 kubelet[2430]: I0117 00:32:30.762882 2430 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e1deb6cc-dc9c-4146-8150-2f9f18bceaf5-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "e1deb6cc-dc9c-4146-8150-2f9f18bceaf5" (UID: "e1deb6cc-dc9c-4146-8150-2f9f18bceaf5"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 17 00:32:30.763203 kubelet[2430]: I0117 00:32:30.762903 2430 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e1deb6cc-dc9c-4146-8150-2f9f18bceaf5-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "e1deb6cc-dc9c-4146-8150-2f9f18bceaf5" (UID: "e1deb6cc-dc9c-4146-8150-2f9f18bceaf5"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 17 00:32:30.763203 kubelet[2430]: I0117 00:32:30.762920 2430 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e1deb6cc-dc9c-4146-8150-2f9f18bceaf5-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "e1deb6cc-dc9c-4146-8150-2f9f18bceaf5" (UID: "e1deb6cc-dc9c-4146-8150-2f9f18bceaf5"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 17 00:32:30.765455 kubelet[2430]: I0117 00:32:30.765407 2430 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1deb6cc-dc9c-4146-8150-2f9f18bceaf5-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "e1deb6cc-dc9c-4146-8150-2f9f18bceaf5" (UID: "e1deb6cc-dc9c-4146-8150-2f9f18bceaf5"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 17 00:32:30.765529 kubelet[2430]: I0117 00:32:30.765468 2430 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e1deb6cc-dc9c-4146-8150-2f9f18bceaf5-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "e1deb6cc-dc9c-4146-8150-2f9f18bceaf5" (UID: "e1deb6cc-dc9c-4146-8150-2f9f18bceaf5"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 17 00:32:30.765529 kubelet[2430]: I0117 00:32:30.765487 2430 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e1deb6cc-dc9c-4146-8150-2f9f18bceaf5-cni-path" (OuterVolumeSpecName: "cni-path") pod "e1deb6cc-dc9c-4146-8150-2f9f18bceaf5" (UID: "e1deb6cc-dc9c-4146-8150-2f9f18bceaf5"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 17 00:32:30.767903 kubelet[2430]: I0117 00:32:30.767798 2430 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1deb6cc-dc9c-4146-8150-2f9f18bceaf5-kube-api-access-5stqx" (OuterVolumeSpecName: "kube-api-access-5stqx") pod "e1deb6cc-dc9c-4146-8150-2f9f18bceaf5" (UID: "e1deb6cc-dc9c-4146-8150-2f9f18bceaf5"). InnerVolumeSpecName "kube-api-access-5stqx". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 17 00:32:30.767903 kubelet[2430]: I0117 00:32:30.767815 2430 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1deb6cc-dc9c-4146-8150-2f9f18bceaf5-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "e1deb6cc-dc9c-4146-8150-2f9f18bceaf5" (UID: "e1deb6cc-dc9c-4146-8150-2f9f18bceaf5"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 17 00:32:30.768289 kubelet[2430]: I0117 00:32:30.768247 2430 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1deb6cc-dc9c-4146-8150-2f9f18bceaf5-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e1deb6cc-dc9c-4146-8150-2f9f18bceaf5" (UID: "e1deb6cc-dc9c-4146-8150-2f9f18bceaf5"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 17 00:32:30.861315 kubelet[2430]: I0117 00:32:30.861274 2430 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e1deb6cc-dc9c-4146-8150-2f9f18bceaf5-cni-path\") on node \"172.31.24.155\" DevicePath \"\"" Jan 17 00:32:30.861315 kubelet[2430]: I0117 00:32:30.861307 2430 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e1deb6cc-dc9c-4146-8150-2f9f18bceaf5-lib-modules\") on node \"172.31.24.155\" DevicePath \"\"" Jan 17 00:32:30.861315 kubelet[2430]: I0117 00:32:30.861316 2430 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e1deb6cc-dc9c-4146-8150-2f9f18bceaf5-cilium-cgroup\") on node \"172.31.24.155\" DevicePath \"\"" Jan 17 00:32:30.861315 kubelet[2430]: I0117 00:32:30.861324 2430 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e1deb6cc-dc9c-4146-8150-2f9f18bceaf5-hubble-tls\") on node \"172.31.24.155\" DevicePath \"\"" Jan 17 00:32:30.861315 kubelet[2430]: I0117 00:32:30.861335 2430 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e1deb6cc-dc9c-4146-8150-2f9f18bceaf5-host-proc-sys-net\") on node \"172.31.24.155\" DevicePath \"\"" Jan 17 00:32:30.861574 kubelet[2430]: I0117 00:32:30.861346 2430 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e1deb6cc-dc9c-4146-8150-2f9f18bceaf5-hostproc\") on node \"172.31.24.155\" DevicePath \"\"" Jan 17 00:32:30.861574 kubelet[2430]: I0117 00:32:30.861354 2430 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5stqx\" (UniqueName: \"kubernetes.io/projected/e1deb6cc-dc9c-4146-8150-2f9f18bceaf5-kube-api-access-5stqx\") on node \"172.31.24.155\" DevicePath \"\"" Jan 17 00:32:30.861574 kubelet[2430]: I0117 00:32:30.861362 2430 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e1deb6cc-dc9c-4146-8150-2f9f18bceaf5-xtables-lock\") on node \"172.31.24.155\" DevicePath \"\"" Jan 17 00:32:30.861574 kubelet[2430]: I0117 00:32:30.861371 2430 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e1deb6cc-dc9c-4146-8150-2f9f18bceaf5-host-proc-sys-kernel\") on node \"172.31.24.155\" DevicePath \"\"" Jan 17 00:32:30.861574 kubelet[2430]: I0117 00:32:30.861378 2430 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e1deb6cc-dc9c-4146-8150-2f9f18bceaf5-clustermesh-secrets\") on node \"172.31.24.155\" DevicePath \"\"" Jan 17 00:32:30.861574 kubelet[2430]: I0117 00:32:30.861386 2430 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e1deb6cc-dc9c-4146-8150-2f9f18bceaf5-bpf-maps\") on node \"172.31.24.155\" DevicePath \"\"" Jan 17 00:32:30.861574 kubelet[2430]: I0117 00:32:30.861393 2430 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e1deb6cc-dc9c-4146-8150-2f9f18bceaf5-cilium-config-path\") on node \"172.31.24.155\" DevicePath \"\"" Jan 17 00:32:30.861574 kubelet[2430]: I0117 00:32:30.861400 2430 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/e1deb6cc-dc9c-4146-8150-2f9f18bceaf5-etc-cni-netd\") on node \"172.31.24.155\" DevicePath \"\"" Jan 17 00:32:31.116275 kubelet[2430]: E0117 00:32:31.116136 2430 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:32:31.203259 kubelet[2430]: E0117 00:32:31.203223 2430 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 17 00:32:31.399820 kubelet[2430]: I0117 00:32:31.399647 2430 scope.go:117] "RemoveContainer" containerID="e17866f7fb39adea01765b50d068d151eab49e8d2d304e46351ac528e00ba2f9" Jan 17 00:32:31.406466 containerd[1973]: time="2026-01-17T00:32:31.403880494Z" level=info msg="RemoveContainer for \"e17866f7fb39adea01765b50d068d151eab49e8d2d304e46351ac528e00ba2f9\"" Jan 17 00:32:31.405238 systemd[1]: Removed slice kubepods-burstable-pode1deb6cc_dc9c_4146_8150_2f9f18bceaf5.slice - libcontainer container kubepods-burstable-pode1deb6cc_dc9c_4146_8150_2f9f18bceaf5.slice. Jan 17 00:32:31.405377 systemd[1]: kubepods-burstable-pode1deb6cc_dc9c_4146_8150_2f9f18bceaf5.slice: Consumed 7.893s CPU time. Jan 17 00:32:31.410162 containerd[1973]: time="2026-01-17T00:32:31.410120488Z" level=info msg="RemoveContainer for \"e17866f7fb39adea01765b50d068d151eab49e8d2d304e46351ac528e00ba2f9\" returns successfully" Jan 17 00:32:31.410490 kubelet[2430]: I0117 00:32:31.410461 2430 scope.go:117] "RemoveContainer" containerID="9cb85e2b35f3ccbfae6f4436022e05425ad551be0f44bcf0f32557ed7762ed77" Jan 17 00:32:31.412509 containerd[1973]: time="2026-01-17T00:32:31.412471029Z" level=info msg="RemoveContainer for \"9cb85e2b35f3ccbfae6f4436022e05425ad551be0f44bcf0f32557ed7762ed77\"" Jan 17 00:32:31.416962 containerd[1973]: time="2026-01-17T00:32:31.416911077Z" level=info msg="RemoveContainer for \"9cb85e2b35f3ccbfae6f4436022e05425ad551be0f44bcf0f32557ed7762ed77\" returns successfully" Jan 17 00:32:31.417307 kubelet[2430]: I0117 00:32:31.417169 2430 scope.go:117] "RemoveContainer" containerID="bf5f2a39120c428f4775449c063b87849c1d52ca79352e68359838bf6e46d420" Jan 17 00:32:31.418709 containerd[1973]: time="2026-01-17T00:32:31.418504146Z" level=info msg="RemoveContainer for \"bf5f2a39120c428f4775449c063b87849c1d52ca79352e68359838bf6e46d420\"" Jan 17 00:32:31.423645 containerd[1973]: time="2026-01-17T00:32:31.423572776Z" level=info msg="RemoveContainer for \"bf5f2a39120c428f4775449c063b87849c1d52ca79352e68359838bf6e46d420\" returns successfully" Jan 17 00:32:31.423820 kubelet[2430]: I0117 00:32:31.423785 2430 scope.go:117] "RemoveContainer" containerID="c2b66c2291d4c18b95bd381b5bf3cf95e5bc793d3c9048f557ba49348d022240" Jan 17 00:32:31.425116 containerd[1973]: time="2026-01-17T00:32:31.425090583Z" level=info msg="RemoveContainer for \"c2b66c2291d4c18b95bd381b5bf3cf95e5bc793d3c9048f557ba49348d022240\"" Jan 17 00:32:31.428923 containerd[1973]: time="2026-01-17T00:32:31.428878736Z" level=info msg="RemoveContainer for \"c2b66c2291d4c18b95bd381b5bf3cf95e5bc793d3c9048f557ba49348d022240\" returns successfully" Jan 17 00:32:31.429160 kubelet[2430]: I0117 00:32:31.429081 2430 scope.go:117] "RemoveContainer" containerID="6c50203996fc2649deb1758b8a1e5ec4414491289de625528cb5a02a5a5695c1" Jan 17 00:32:31.430263 containerd[1973]: time="2026-01-17T00:32:31.430232728Z" level=info msg="RemoveContainer for \"6c50203996fc2649deb1758b8a1e5ec4414491289de625528cb5a02a5a5695c1\"" Jan 17 00:32:31.434293 containerd[1973]: 
time="2026-01-17T00:32:31.434249253Z" level=info msg="RemoveContainer for \"6c50203996fc2649deb1758b8a1e5ec4414491289de625528cb5a02a5a5695c1\" returns successfully" Jan 17 00:32:31.434510 kubelet[2430]: I0117 00:32:31.434490 2430 scope.go:117] "RemoveContainer" containerID="e17866f7fb39adea01765b50d068d151eab49e8d2d304e46351ac528e00ba2f9" Jan 17 00:32:31.435069 kubelet[2430]: E0117 00:32:31.435039 2430 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e17866f7fb39adea01765b50d068d151eab49e8d2d304e46351ac528e00ba2f9\": not found" containerID="e17866f7fb39adea01765b50d068d151eab49e8d2d304e46351ac528e00ba2f9" Jan 17 00:32:31.435124 containerd[1973]: time="2026-01-17T00:32:31.434887718Z" level=error msg="ContainerStatus for \"e17866f7fb39adea01765b50d068d151eab49e8d2d304e46351ac528e00ba2f9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e17866f7fb39adea01765b50d068d151eab49e8d2d304e46351ac528e00ba2f9\": not found" Jan 17 00:32:31.435196 kubelet[2430]: I0117 00:32:31.435075 2430 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e17866f7fb39adea01765b50d068d151eab49e8d2d304e46351ac528e00ba2f9"} err="failed to get container status \"e17866f7fb39adea01765b50d068d151eab49e8d2d304e46351ac528e00ba2f9\": rpc error: code = NotFound desc = an error occurred when try to find container \"e17866f7fb39adea01765b50d068d151eab49e8d2d304e46351ac528e00ba2f9\": not found" Jan 17 00:32:31.435196 kubelet[2430]: I0117 00:32:31.435171 2430 scope.go:117] "RemoveContainer" containerID="9cb85e2b35f3ccbfae6f4436022e05425ad551be0f44bcf0f32557ed7762ed77" Jan 17 00:32:31.435442 containerd[1973]: time="2026-01-17T00:32:31.435409413Z" level=error msg="ContainerStatus for \"9cb85e2b35f3ccbfae6f4436022e05425ad551be0f44bcf0f32557ed7762ed77\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9cb85e2b35f3ccbfae6f4436022e05425ad551be0f44bcf0f32557ed7762ed77\": not found" Jan 17 00:32:31.435673 kubelet[2430]: E0117 00:32:31.435644 2430 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9cb85e2b35f3ccbfae6f4436022e05425ad551be0f44bcf0f32557ed7762ed77\": not found" containerID="9cb85e2b35f3ccbfae6f4436022e05425ad551be0f44bcf0f32557ed7762ed77" Jan 17 00:32:31.435750 kubelet[2430]: I0117 00:32:31.435695 2430 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9cb85e2b35f3ccbfae6f4436022e05425ad551be0f44bcf0f32557ed7762ed77"} err="failed to get container status \"9cb85e2b35f3ccbfae6f4436022e05425ad551be0f44bcf0f32557ed7762ed77\": rpc error: code = NotFound desc = an error occurred when try to find container \"9cb85e2b35f3ccbfae6f4436022e05425ad551be0f44bcf0f32557ed7762ed77\": not found" Jan 17 00:32:31.435750 kubelet[2430]: I0117 00:32:31.435718 2430 scope.go:117] "RemoveContainer" containerID="bf5f2a39120c428f4775449c063b87849c1d52ca79352e68359838bf6e46d420" Jan 17 00:32:31.435994 containerd[1973]: time="2026-01-17T00:32:31.435952146Z" level=error msg="ContainerStatus for \"bf5f2a39120c428f4775449c063b87849c1d52ca79352e68359838bf6e46d420\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bf5f2a39120c428f4775449c063b87849c1d52ca79352e68359838bf6e46d420\": not found" Jan 17 00:32:31.436245 kubelet[2430]: E0117 
00:32:31.436091 2430 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bf5f2a39120c428f4775449c063b87849c1d52ca79352e68359838bf6e46d420\": not found" containerID="bf5f2a39120c428f4775449c063b87849c1d52ca79352e68359838bf6e46d420" Jan 17 00:32:31.436245 kubelet[2430]: I0117 00:32:31.436112 2430 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bf5f2a39120c428f4775449c063b87849c1d52ca79352e68359838bf6e46d420"} err="failed to get container status \"bf5f2a39120c428f4775449c063b87849c1d52ca79352e68359838bf6e46d420\": rpc error: code = NotFound desc = an error occurred when try to find container \"bf5f2a39120c428f4775449c063b87849c1d52ca79352e68359838bf6e46d420\": not found" Jan 17 00:32:31.436245 kubelet[2430]: I0117 00:32:31.436127 2430 scope.go:117] "RemoveContainer" containerID="c2b66c2291d4c18b95bd381b5bf3cf95e5bc793d3c9048f557ba49348d022240" Jan 17 00:32:31.436441 containerd[1973]: time="2026-01-17T00:32:31.436374293Z" level=error msg="ContainerStatus for \"c2b66c2291d4c18b95bd381b5bf3cf95e5bc793d3c9048f557ba49348d022240\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c2b66c2291d4c18b95bd381b5bf3cf95e5bc793d3c9048f557ba49348d022240\": not found" Jan 17 00:32:31.436616 kubelet[2430]: E0117 00:32:31.436589 2430 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c2b66c2291d4c18b95bd381b5bf3cf95e5bc793d3c9048f557ba49348d022240\": not found" containerID="c2b66c2291d4c18b95bd381b5bf3cf95e5bc793d3c9048f557ba49348d022240" Jan 17 00:32:31.436692 kubelet[2430]: I0117 00:32:31.436619 2430 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c2b66c2291d4c18b95bd381b5bf3cf95e5bc793d3c9048f557ba49348d022240"} err="failed to get container status \"c2b66c2291d4c18b95bd381b5bf3cf95e5bc793d3c9048f557ba49348d022240\": rpc error: code = NotFound desc = an error occurred when try to find container \"c2b66c2291d4c18b95bd381b5bf3cf95e5bc793d3c9048f557ba49348d022240\": not found" Jan 17 00:32:31.436692 kubelet[2430]: I0117 00:32:31.436639 2430 scope.go:117] "RemoveContainer" containerID="6c50203996fc2649deb1758b8a1e5ec4414491289de625528cb5a02a5a5695c1" Jan 17 00:32:31.436918 containerd[1973]: time="2026-01-17T00:32:31.436883204Z" level=error msg="ContainerStatus for \"6c50203996fc2649deb1758b8a1e5ec4414491289de625528cb5a02a5a5695c1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6c50203996fc2649deb1758b8a1e5ec4414491289de625528cb5a02a5a5695c1\": not found" Jan 17 00:32:31.437042 kubelet[2430]: E0117 00:32:31.437012 2430 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6c50203996fc2649deb1758b8a1e5ec4414491289de625528cb5a02a5a5695c1\": not found" containerID="6c50203996fc2649deb1758b8a1e5ec4414491289de625528cb5a02a5a5695c1" Jan 17 00:32:31.437142 kubelet[2430]: I0117 00:32:31.437046 2430 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6c50203996fc2649deb1758b8a1e5ec4414491289de625528cb5a02a5a5695c1"} err="failed to get container status \"6c50203996fc2649deb1758b8a1e5ec4414491289de625528cb5a02a5a5695c1\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"6c50203996fc2649deb1758b8a1e5ec4414491289de625528cb5a02a5a5695c1\": not found" Jan 17 00:32:31.499383 systemd[1]: var-lib-kubelet-pods-e1deb6cc\x2ddc9c\x2d4146\x2d8150\x2d2f9f18bceaf5-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5stqx.mount: Deactivated successfully. Jan 17 00:32:31.499492 systemd[1]: var-lib-kubelet-pods-e1deb6cc\x2ddc9c\x2d4146\x2d8150\x2d2f9f18bceaf5-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 17 00:32:31.499554 systemd[1]: var-lib-kubelet-pods-e1deb6cc\x2ddc9c\x2d4146\x2d8150\x2d2f9f18bceaf5-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 17 00:32:32.116325 kubelet[2430]: E0117 00:32:32.116271 2430 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:32:32.206207 kubelet[2430]: I0117 00:32:32.206156 2430 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1deb6cc-dc9c-4146-8150-2f9f18bceaf5" path="/var/lib/kubelet/pods/e1deb6cc-dc9c-4146-8150-2f9f18bceaf5/volumes" Jan 17 00:32:33.116501 kubelet[2430]: E0117 00:32:33.116445 2430 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:32:33.272870 ntpd[1957]: Deleting interface #12 lxc_health, fe80::f0da:a0ff:fe52:36e5%7#123, interface stats: received=0, sent=0, dropped=0, active_time=36 secs Jan 17 00:32:33.273270 ntpd[1957]: 17 Jan 00:32:33 ntpd[1957]: Deleting interface #12 lxc_health, fe80::f0da:a0ff:fe52:36e5%7#123, interface stats: received=0, sent=0, dropped=0, active_time=36 secs Jan 17 00:32:34.117440 kubelet[2430]: E0117 00:32:34.117375 2430 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:32:34.137198 kubelet[2430]: I0117 00:32:34.137139 2430 memory_manager.go:355] "RemoveStaleState removing state" podUID="e1deb6cc-dc9c-4146-8150-2f9f18bceaf5" containerName="cilium-agent" Jan 17 00:32:34.148982 systemd[1]: Created slice kubepods-besteffort-pod64acf2ba_fec1_4c17_a9a7_1aa20a42f472.slice - libcontainer container kubepods-besteffort-pod64acf2ba_fec1_4c17_a9a7_1aa20a42f472.slice. 
Jan 17 00:32:34.163739 kubelet[2430]: W0117 00:32:34.163703 2430 reflector.go:569] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:172.31.24.155" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node '172.31.24.155' and this object Jan 17 00:32:34.164012 kubelet[2430]: E0117 00:32:34.163760 2430 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cilium-config\" is forbidden: User \"system:node:172.31.24.155\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node '172.31.24.155' and this object" logger="UnhandledError" Jan 17 00:32:34.164012 kubelet[2430]: I0117 00:32:34.163701 2430 status_manager.go:890] "Failed to get status for pod" podUID="64acf2ba-fec1-4c17-a9a7-1aa20a42f472" pod="kube-system/cilium-operator-6c4d7847fc-r7rp2" err="pods \"cilium-operator-6c4d7847fc-r7rp2\" is forbidden: User \"system:node:172.31.24.155\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node '172.31.24.155' and this object" Jan 17 00:32:34.212733 systemd[1]: Created slice kubepods-burstable-pod41952fa7_5cec_4d20_979e_2089332bfecd.slice - libcontainer container kubepods-burstable-pod41952fa7_5cec_4d20_979e_2089332bfecd.slice. Jan 17 00:32:34.283518 kubelet[2430]: I0117 00:32:34.283447 2430 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/41952fa7-5cec-4d20-979e-2089332bfecd-lib-modules\") pod \"cilium-k6tlc\" (UID: \"41952fa7-5cec-4d20-979e-2089332bfecd\") " pod="kube-system/cilium-k6tlc" Jan 17 00:32:34.283518 kubelet[2430]: I0117 00:32:34.283495 2430 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/41952fa7-5cec-4d20-979e-2089332bfecd-cilium-ipsec-secrets\") pod \"cilium-k6tlc\" (UID: \"41952fa7-5cec-4d20-979e-2089332bfecd\") " pod="kube-system/cilium-k6tlc" Jan 17 00:32:34.283518 kubelet[2430]: I0117 00:32:34.283523 2430 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/64acf2ba-fec1-4c17-a9a7-1aa20a42f472-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-r7rp2\" (UID: \"64acf2ba-fec1-4c17-a9a7-1aa20a42f472\") " pod="kube-system/cilium-operator-6c4d7847fc-r7rp2" Jan 17 00:32:34.283906 kubelet[2430]: I0117 00:32:34.283539 2430 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/41952fa7-5cec-4d20-979e-2089332bfecd-cilium-run\") pod \"cilium-k6tlc\" (UID: \"41952fa7-5cec-4d20-979e-2089332bfecd\") " pod="kube-system/cilium-k6tlc" Jan 17 00:32:34.283906 kubelet[2430]: I0117 00:32:34.283557 2430 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/41952fa7-5cec-4d20-979e-2089332bfecd-cilium-cgroup\") pod \"cilium-k6tlc\" (UID: \"41952fa7-5cec-4d20-979e-2089332bfecd\") " pod="kube-system/cilium-k6tlc" Jan 17 00:32:34.283906 kubelet[2430]: I0117 00:32:34.283574 2430 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgzct\" (UniqueName: \"kubernetes.io/projected/41952fa7-5cec-4d20-979e-2089332bfecd-kube-api-access-qgzct\") pod \"cilium-k6tlc\" (UID: \"41952fa7-5cec-4d20-979e-2089332bfecd\") " pod="kube-system/cilium-k6tlc" Jan 17 00:32:34.283906 kubelet[2430]: I0117 00:32:34.283591 2430 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7x7b\" (UniqueName: \"kubernetes.io/projected/64acf2ba-fec1-4c17-a9a7-1aa20a42f472-kube-api-access-k7x7b\") pod \"cilium-operator-6c4d7847fc-r7rp2\" (UID: \"64acf2ba-fec1-4c17-a9a7-1aa20a42f472\") " pod="kube-system/cilium-operator-6c4d7847fc-r7rp2" Jan 17 00:32:34.283906 kubelet[2430]: I0117 00:32:34.283606 2430 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/41952fa7-5cec-4d20-979e-2089332bfecd-hostproc\") pod \"cilium-k6tlc\" (UID: \"41952fa7-5cec-4d20-979e-2089332bfecd\") " pod="kube-system/cilium-k6tlc" Jan 17 00:32:34.284196 kubelet[2430]: I0117 00:32:34.283621 2430 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/41952fa7-5cec-4d20-979e-2089332bfecd-clustermesh-secrets\") pod \"cilium-k6tlc\" (UID: \"41952fa7-5cec-4d20-979e-2089332bfecd\") " pod="kube-system/cilium-k6tlc" Jan 17 00:32:34.284196 kubelet[2430]: I0117 00:32:34.283638 2430 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/41952fa7-5cec-4d20-979e-2089332bfecd-host-proc-sys-net\") pod \"cilium-k6tlc\" (UID: \"41952fa7-5cec-4d20-979e-2089332bfecd\") " pod="kube-system/cilium-k6tlc" Jan 17 00:32:34.284196 kubelet[2430]: I0117 00:32:34.283652 2430 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/41952fa7-5cec-4d20-979e-2089332bfecd-hubble-tls\") pod \"cilium-k6tlc\" (UID: \"41952fa7-5cec-4d20-979e-2089332bfecd\") " pod="kube-system/cilium-k6tlc" Jan 17 00:32:34.284196 kubelet[2430]: I0117 00:32:34.283667 2430 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/41952fa7-5cec-4d20-979e-2089332bfecd-host-proc-sys-kernel\") pod \"cilium-k6tlc\" (UID: \"41952fa7-5cec-4d20-979e-2089332bfecd\") " pod="kube-system/cilium-k6tlc" Jan 17 00:32:34.284196 kubelet[2430]: I0117 00:32:34.283682 2430 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/41952fa7-5cec-4d20-979e-2089332bfecd-etc-cni-netd\") pod \"cilium-k6tlc\" (UID: \"41952fa7-5cec-4d20-979e-2089332bfecd\") " pod="kube-system/cilium-k6tlc" Jan 17 00:32:34.284196 kubelet[2430]: I0117 00:32:34.283696 2430 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/41952fa7-5cec-4d20-979e-2089332bfecd-xtables-lock\") pod \"cilium-k6tlc\" (UID: \"41952fa7-5cec-4d20-979e-2089332bfecd\") " pod="kube-system/cilium-k6tlc" Jan 17 00:32:34.284350 kubelet[2430]: I0117 00:32:34.283713 2430 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/41952fa7-5cec-4d20-979e-2089332bfecd-cilium-config-path\") pod \"cilium-k6tlc\" (UID: \"41952fa7-5cec-4d20-979e-2089332bfecd\") " pod="kube-system/cilium-k6tlc" Jan 17 00:32:34.284350 kubelet[2430]: I0117 00:32:34.283729 2430 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/41952fa7-5cec-4d20-979e-2089332bfecd-bpf-maps\") pod \"cilium-k6tlc\" (UID: \"41952fa7-5cec-4d20-979e-2089332bfecd\") " pod="kube-system/cilium-k6tlc" Jan 17 00:32:34.284350 kubelet[2430]: I0117 00:32:34.283742 2430 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/41952fa7-5cec-4d20-979e-2089332bfecd-cni-path\") pod \"cilium-k6tlc\" (UID: \"41952fa7-5cec-4d20-979e-2089332bfecd\") " pod="kube-system/cilium-k6tlc" Jan 17 00:32:35.118202 kubelet[2430]: E0117 00:32:35.118143 2430 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:32:35.386092 kubelet[2430]: E0117 00:32:35.385967 2430 configmap.go:193] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Jan 17 00:32:35.386092 kubelet[2430]: E0117 00:32:35.386068 2430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41952fa7-5cec-4d20-979e-2089332bfecd-cilium-config-path podName:41952fa7-5cec-4d20-979e-2089332bfecd nodeName:}" failed. No retries permitted until 2026-01-17 00:32:35.886045762 +0000 UTC m=+60.296157194 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/41952fa7-5cec-4d20-979e-2089332bfecd-cilium-config-path") pod "cilium-k6tlc" (UID: "41952fa7-5cec-4d20-979e-2089332bfecd") : failed to sync configmap cache: timed out waiting for the condition Jan 17 00:32:35.391696 kubelet[2430]: E0117 00:32:35.391630 2430 configmap.go:193] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Jan 17 00:32:35.391696 kubelet[2430]: E0117 00:32:35.391713 2430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/64acf2ba-fec1-4c17-a9a7-1aa20a42f472-cilium-config-path podName:64acf2ba-fec1-4c17-a9a7-1aa20a42f472 nodeName:}" failed. No retries permitted until 2026-01-17 00:32:35.891696374 +0000 UTC m=+60.301807806 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/64acf2ba-fec1-4c17-a9a7-1aa20a42f472-cilium-config-path") pod "cilium-operator-6c4d7847fc-r7rp2" (UID: "64acf2ba-fec1-4c17-a9a7-1aa20a42f472") : failed to sync configmap cache: timed out waiting for the condition Jan 17 00:32:35.951830 containerd[1973]: time="2026-01-17T00:32:35.951788145Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-r7rp2,Uid:64acf2ba-fec1-4c17-a9a7-1aa20a42f472,Namespace:kube-system,Attempt:0,}" Jan 17 00:32:35.982486 containerd[1973]: time="2026-01-17T00:32:35.982346530Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:32:35.982486 containerd[1973]: time="2026-01-17T00:32:35.982413879Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:32:35.982486 containerd[1973]: time="2026-01-17T00:32:35.982443086Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:32:35.982815 containerd[1973]: time="2026-01-17T00:32:35.982557631Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:32:36.011427 systemd[1]: Started cri-containerd-9a49e812541f789d2e0386d670746c31e2242d9b7b787340837a383125449f21.scope - libcontainer container 9a49e812541f789d2e0386d670746c31e2242d9b7b787340837a383125449f21. Jan 17 00:32:36.022740 containerd[1973]: time="2026-01-17T00:32:36.022603355Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-k6tlc,Uid:41952fa7-5cec-4d20-979e-2089332bfecd,Namespace:kube-system,Attempt:0,}" Jan 17 00:32:36.059420 containerd[1973]: time="2026-01-17T00:32:36.059387259Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-r7rp2,Uid:64acf2ba-fec1-4c17-a9a7-1aa20a42f472,Namespace:kube-system,Attempt:0,} returns sandbox id \"9a49e812541f789d2e0386d670746c31e2242d9b7b787340837a383125449f21\"" Jan 17 00:32:36.061843 containerd[1973]: time="2026-01-17T00:32:36.061264085Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:32:36.061843 containerd[1973]: time="2026-01-17T00:32:36.061315450Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:32:36.061843 containerd[1973]: time="2026-01-17T00:32:36.061334622Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:32:36.061843 containerd[1973]: time="2026-01-17T00:32:36.061408936Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:32:36.063090 containerd[1973]: time="2026-01-17T00:32:36.063065747Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 17 00:32:36.074212 kubelet[2430]: E0117 00:32:36.074154 2430 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:32:36.094445 systemd[1]: Started cri-containerd-bf9c9c77fa74e6fd20661042afd6d6b44e1c491eb97608d02ab27de25ab20628.scope - libcontainer container bf9c9c77fa74e6fd20661042afd6d6b44e1c491eb97608d02ab27de25ab20628. 
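The kubelet timestamps in this log carry two readings, a wall-clock value and a Go monotonic-clock offset such as m=+60.296157194. Subtracting the offset from the wall-clock part of any one entry should land on the same reference instant (roughly when this kubelet's clock started), which is a convenient sanity check when correlating entries. A small Python sketch using the two MountVolume retry deadlines logged a little above; both land on about 00:31:35.589888 UTC:

from datetime import datetime, timedelta, timezone

def reference_instant(wall: str, mono_offset: float) -> datetime:
    # %f accepts at most six fractional digits, so trim the nanoseconds.
    base, frac = wall.split(".")
    t = datetime.strptime(f"{base}.{frac[:6]}", "%Y-%m-%d %H:%M:%S.%f")
    return t.replace(tzinfo=timezone.utc) - timedelta(seconds=mono_offset)

print(reference_instant("2026-01-17 00:32:35.886045762", 60.296157194))
print(reference_instant("2026-01-17 00:32:35.891696374", 60.301807806))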
Jan 17 00:32:36.117344 containerd[1973]: time="2026-01-17T00:32:36.117150713Z" level=info msg="StopPodSandbox for \"1c7ed507caee7d9454faeb161308cafb4d01b85b7fd3d2829dd728afe512510d\"" Jan 17 00:32:36.117344 containerd[1973]: time="2026-01-17T00:32:36.117291887Z" level=info msg="TearDown network for sandbox \"1c7ed507caee7d9454faeb161308cafb4d01b85b7fd3d2829dd728afe512510d\" successfully" Jan 17 00:32:36.117344 containerd[1973]: time="2026-01-17T00:32:36.117309745Z" level=info msg="StopPodSandbox for \"1c7ed507caee7d9454faeb161308cafb4d01b85b7fd3d2829dd728afe512510d\" returns successfully" Jan 17 00:32:36.118711 containerd[1973]: time="2026-01-17T00:32:36.118428979Z" level=info msg="RemovePodSandbox for \"1c7ed507caee7d9454faeb161308cafb4d01b85b7fd3d2829dd728afe512510d\"" Jan 17 00:32:36.118711 containerd[1973]: time="2026-01-17T00:32:36.118465067Z" level=info msg="Forcibly stopping sandbox \"1c7ed507caee7d9454faeb161308cafb4d01b85b7fd3d2829dd728afe512510d\"" Jan 17 00:32:36.118711 containerd[1973]: time="2026-01-17T00:32:36.118535246Z" level=info msg="TearDown network for sandbox \"1c7ed507caee7d9454faeb161308cafb4d01b85b7fd3d2829dd728afe512510d\" successfully" Jan 17 00:32:36.121393 kubelet[2430]: E0117 00:32:36.121165 2430 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:32:36.125156 containerd[1973]: time="2026-01-17T00:32:36.125001056Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1c7ed507caee7d9454faeb161308cafb4d01b85b7fd3d2829dd728afe512510d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:32:36.125349 containerd[1973]: time="2026-01-17T00:32:36.125175850Z" level=info msg="RemovePodSandbox \"1c7ed507caee7d9454faeb161308cafb4d01b85b7fd3d2829dd728afe512510d\" returns successfully" Jan 17 00:32:36.130022 containerd[1973]: time="2026-01-17T00:32:36.129974607Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-k6tlc,Uid:41952fa7-5cec-4d20-979e-2089332bfecd,Namespace:kube-system,Attempt:0,} returns sandbox id \"bf9c9c77fa74e6fd20661042afd6d6b44e1c491eb97608d02ab27de25ab20628\"" Jan 17 00:32:36.133623 containerd[1973]: time="2026-01-17T00:32:36.133495217Z" level=info msg="CreateContainer within sandbox \"bf9c9c77fa74e6fd20661042afd6d6b44e1c491eb97608d02ab27de25ab20628\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 17 00:32:36.148452 containerd[1973]: time="2026-01-17T00:32:36.148402078Z" level=info msg="CreateContainer within sandbox \"bf9c9c77fa74e6fd20661042afd6d6b44e1c491eb97608d02ab27de25ab20628\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"48f5842db8caa934c065b880e6b993daef05f191c5bf0aa40c2ceba3c276f37f\"" Jan 17 00:32:36.149780 containerd[1973]: time="2026-01-17T00:32:36.148998530Z" level=info msg="StartContainer for \"48f5842db8caa934c065b880e6b993daef05f191c5bf0aa40c2ceba3c276f37f\"" Jan 17 00:32:36.176566 systemd[1]: Started cri-containerd-48f5842db8caa934c065b880e6b993daef05f191c5bf0aa40c2ceba3c276f37f.scope - libcontainer container 48f5842db8caa934c065b880e6b993daef05f191c5bf0aa40c2ceba3c276f37f. 
Jan 17 00:32:36.209325 kubelet[2430]: E0117 00:32:36.207719 2430 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 17 00:32:36.221976 containerd[1973]: time="2026-01-17T00:32:36.221587491Z" level=info msg="StartContainer for \"48f5842db8caa934c065b880e6b993daef05f191c5bf0aa40c2ceba3c276f37f\" returns successfully" Jan 17 00:32:36.243548 systemd[1]: cri-containerd-48f5842db8caa934c065b880e6b993daef05f191c5bf0aa40c2ceba3c276f37f.scope: Deactivated successfully. Jan 17 00:32:36.286161 containerd[1973]: time="2026-01-17T00:32:36.286089551Z" level=info msg="shim disconnected" id=48f5842db8caa934c065b880e6b993daef05f191c5bf0aa40c2ceba3c276f37f namespace=k8s.io Jan 17 00:32:36.286161 containerd[1973]: time="2026-01-17T00:32:36.286157206Z" level=warning msg="cleaning up after shim disconnected" id=48f5842db8caa934c065b880e6b993daef05f191c5bf0aa40c2ceba3c276f37f namespace=k8s.io Jan 17 00:32:36.286161 containerd[1973]: time="2026-01-17T00:32:36.286166243Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:32:36.418077 containerd[1973]: time="2026-01-17T00:32:36.418033757Z" level=info msg="CreateContainer within sandbox \"bf9c9c77fa74e6fd20661042afd6d6b44e1c491eb97608d02ab27de25ab20628\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 17 00:32:36.433128 containerd[1973]: time="2026-01-17T00:32:36.432916277Z" level=info msg="CreateContainer within sandbox \"bf9c9c77fa74e6fd20661042afd6d6b44e1c491eb97608d02ab27de25ab20628\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"0c2ac44bc600b75d4164c9be4ae7e9e1c035946e601158a1b99bbf92df0a07b4\"" Jan 17 00:32:36.433717 containerd[1973]: time="2026-01-17T00:32:36.433684538Z" level=info msg="StartContainer for \"0c2ac44bc600b75d4164c9be4ae7e9e1c035946e601158a1b99bbf92df0a07b4\"" Jan 17 00:32:36.467392 systemd[1]: Started cri-containerd-0c2ac44bc600b75d4164c9be4ae7e9e1c035946e601158a1b99bbf92df0a07b4.scope - libcontainer container 0c2ac44bc600b75d4164c9be4ae7e9e1c035946e601158a1b99bbf92df0a07b4. Jan 17 00:32:36.495352 containerd[1973]: time="2026-01-17T00:32:36.495305526Z" level=info msg="StartContainer for \"0c2ac44bc600b75d4164c9be4ae7e9e1c035946e601158a1b99bbf92df0a07b4\" returns successfully" Jan 17 00:32:36.508468 systemd[1]: cri-containerd-0c2ac44bc600b75d4164c9be4ae7e9e1c035946e601158a1b99bbf92df0a07b4.scope: Deactivated successfully. Jan 17 00:32:36.537343 containerd[1973]: time="2026-01-17T00:32:36.537287978Z" level=info msg="shim disconnected" id=0c2ac44bc600b75d4164c9be4ae7e9e1c035946e601158a1b99bbf92df0a07b4 namespace=k8s.io Jan 17 00:32:36.537343 containerd[1973]: time="2026-01-17T00:32:36.537336595Z" level=warning msg="cleaning up after shim disconnected" id=0c2ac44bc600b75d4164c9be4ae7e9e1c035946e601158a1b99bbf92df0a07b4 namespace=k8s.io Jan 17 00:32:36.537343 containerd[1973]: time="2026-01-17T00:32:36.537345142Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:32:37.121720 kubelet[2430]: E0117 00:32:37.121625 2430 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:32:37.205908 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2001281995.mount: Deactivated successfully. 
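Each cilium init container above leaves the same trace in the journal: systemd starts a cri-containerd-<id>.scope, the container runs, the scope is deactivated, and containerd logs the shim disconnecting. Given the full journal with one entry per line (the excerpt here runs several entries together), a short sketch along these lines can pair each start with its deactivation and report how long every container scope lived; node.log is a placeholder filename:

import re
from datetime import datetime

TS = r"(\w{3} [ \d]\d \d{2}:\d{2}:\d{2}\.\d+)"
started = re.compile(TS + r" systemd\[\d+\]: Started cri-containerd-([0-9a-f]+)\.scope")
stopped = re.compile(TS + r" systemd\[\d+\]: cri-containerd-([0-9a-f]+)\.scope: Deactivated successfully")

def parse_ts(ts: str) -> datetime:
    # These journal timestamps carry no year; that is fine for computing deltas.
    return datetime.strptime(ts, "%b %d %H:%M:%S.%f")

def scope_lifetimes(lines):
    starts = {}
    for line in lines:
        if m := started.search(line):
            starts[m.group(2)] = parse_ts(m.group(1))
        elif (m := stopped.search(line)) and m.group(2) in starts:
            yield m.group(2)[:12], (parse_ts(m.group(1)) - starts[m.group(2)]).total_seconds()

with open("node.log") as f:          # placeholder path for the exported journal
    for cid, secs in scope_lifetimes(f):
        print(f"{cid}  {secs:.3f}s")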
Jan 17 00:32:37.425727 containerd[1973]: time="2026-01-17T00:32:37.425611560Z" level=info msg="CreateContainer within sandbox \"bf9c9c77fa74e6fd20661042afd6d6b44e1c491eb97608d02ab27de25ab20628\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 17 00:32:37.452383 containerd[1973]: time="2026-01-17T00:32:37.451718263Z" level=info msg="CreateContainer within sandbox \"bf9c9c77fa74e6fd20661042afd6d6b44e1c491eb97608d02ab27de25ab20628\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"60990f77b2d454d736fa43fa60e2a9b7495b51a24036d702ec10bf303ad42503\""
Jan 17 00:32:37.452932 containerd[1973]: time="2026-01-17T00:32:37.452852793Z" level=info msg="StartContainer for \"60990f77b2d454d736fa43fa60e2a9b7495b51a24036d702ec10bf303ad42503\""
Jan 17 00:32:37.494083 systemd[1]: Started cri-containerd-60990f77b2d454d736fa43fa60e2a9b7495b51a24036d702ec10bf303ad42503.scope - libcontainer container 60990f77b2d454d736fa43fa60e2a9b7495b51a24036d702ec10bf303ad42503.
Jan 17 00:32:37.552882 containerd[1973]: time="2026-01-17T00:32:37.552356102Z" level=info msg="StartContainer for \"60990f77b2d454d736fa43fa60e2a9b7495b51a24036d702ec10bf303ad42503\" returns successfully"
Jan 17 00:32:37.560889 systemd[1]: cri-containerd-60990f77b2d454d736fa43fa60e2a9b7495b51a24036d702ec10bf303ad42503.scope: Deactivated successfully.
Jan 17 00:32:37.608876 kubelet[2430]: I0117 00:32:37.608021 2430 setters.go:602] "Node became not ready" node="172.31.24.155" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-17T00:32:37Z","lastTransitionTime":"2026-01-17T00:32:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jan 17 00:32:37.647951 containerd[1973]: time="2026-01-17T00:32:37.647798552Z" level=info msg="shim disconnected" id=60990f77b2d454d736fa43fa60e2a9b7495b51a24036d702ec10bf303ad42503 namespace=k8s.io
Jan 17 00:32:37.647951 containerd[1973]: time="2026-01-17T00:32:37.647853330Z" level=warning msg="cleaning up after shim disconnected" id=60990f77b2d454d736fa43fa60e2a9b7495b51a24036d702ec10bf303ad42503 namespace=k8s.io
Jan 17 00:32:37.647951 containerd[1973]: time="2026-01-17T00:32:37.647865152Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 00:32:37.944531 containerd[1973]: time="2026-01-17T00:32:37.944461221Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:32:37.946166 containerd[1973]: time="2026-01-17T00:32:37.946082348Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
Jan 17 00:32:37.947628 containerd[1973]: time="2026-01-17T00:32:37.947426850Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:32:37.948755 containerd[1973]: time="2026-01-17T00:32:37.948721991Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 1.885346343s"
Jan 17 00:32:37.948832 containerd[1973]: time="2026-01-17T00:32:37.948759670Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Jan 17 00:32:37.951252 containerd[1973]: time="2026-01-17T00:32:37.951219579Z" level=info msg="CreateContainer within sandbox \"9a49e812541f789d2e0386d670746c31e2242d9b7b787340837a383125449f21\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Jan 17 00:32:37.973160 containerd[1973]: time="2026-01-17T00:32:37.973112960Z" level=info msg="CreateContainer within sandbox \"9a49e812541f789d2e0386d670746c31e2242d9b7b787340837a383125449f21\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"1a2bdd075ad8079a592eb9ca722a9ac951c9337394f2710d22f5b934a4a5edc6\""
Jan 17 00:32:37.974155 containerd[1973]: time="2026-01-17T00:32:37.974113060Z" level=info msg="StartContainer for \"1a2bdd075ad8079a592eb9ca722a9ac951c9337394f2710d22f5b934a4a5edc6\""
Jan 17 00:32:38.015415 systemd[1]: Started cri-containerd-1a2bdd075ad8079a592eb9ca722a9ac951c9337394f2710d22f5b934a4a5edc6.scope - libcontainer container 1a2bdd075ad8079a592eb9ca722a9ac951c9337394f2710d22f5b934a4a5edc6.
Jan 17 00:32:38.050283 containerd[1973]: time="2026-01-17T00:32:38.050001256Z" level=info msg="StartContainer for \"1a2bdd075ad8079a592eb9ca722a9ac951c9337394f2710d22f5b934a4a5edc6\" returns successfully"
Jan 17 00:32:38.122772 kubelet[2430]: E0117 00:32:38.122733 2430 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:32:38.437638 containerd[1973]: time="2026-01-17T00:32:38.437345951Z" level=info msg="CreateContainer within sandbox \"bf9c9c77fa74e6fd20661042afd6d6b44e1c491eb97608d02ab27de25ab20628\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 17 00:32:38.459844 containerd[1973]: time="2026-01-17T00:32:38.459781510Z" level=info msg="CreateContainer within sandbox \"bf9c9c77fa74e6fd20661042afd6d6b44e1c491eb97608d02ab27de25ab20628\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"49c644bdffc2b03ae659e0f31b4048067ec1d48b333fa5303a66c014a5e4c0bc\""
Jan 17 00:32:38.461320 containerd[1973]: time="2026-01-17T00:32:38.460492598Z" level=info msg="StartContainer for \"49c644bdffc2b03ae659e0f31b4048067ec1d48b333fa5303a66c014a5e4c0bc\""
Jan 17 00:32:38.472209 kubelet[2430]: I0117 00:32:38.472109 2430 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-r7rp2" podStartSLOduration=2.583740122 podStartE2EDuration="4.472088942s" podCreationTimestamp="2026-01-17 00:32:34 +0000 UTC" firstStartedPulling="2026-01-17 00:32:36.061620184 +0000 UTC m=+60.471731604" lastFinishedPulling="2026-01-17 00:32:37.949969005 +0000 UTC m=+62.360080424" observedRunningTime="2026-01-17 00:32:38.471774299 +0000 UTC m=+62.881885740" watchObservedRunningTime="2026-01-17 00:32:38.472088942 +0000 UTC m=+62.882200384"
Jan 17 00:32:38.493437 systemd[1]: Started cri-containerd-49c644bdffc2b03ae659e0f31b4048067ec1d48b333fa5303a66c014a5e4c0bc.scope - libcontainer container 49c644bdffc2b03ae659e0f31b4048067ec1d48b333fa5303a66c014a5e4c0bc.
Jan 17 00:32:38.527488 containerd[1973]: time="2026-01-17T00:32:38.525251433Z" level=info msg="StartContainer for \"49c644bdffc2b03ae659e0f31b4048067ec1d48b333fa5303a66c014a5e4c0bc\" returns successfully"
Jan 17 00:32:38.525598 systemd[1]: cri-containerd-49c644bdffc2b03ae659e0f31b4048067ec1d48b333fa5303a66c014a5e4c0bc.scope: Deactivated successfully.
Jan 17 00:32:38.568208 containerd[1973]: time="2026-01-17T00:32:38.568119163Z" level=info msg="shim disconnected" id=49c644bdffc2b03ae659e0f31b4048067ec1d48b333fa5303a66c014a5e4c0bc namespace=k8s.io
Jan 17 00:32:38.568208 containerd[1973]: time="2026-01-17T00:32:38.568174546Z" level=warning msg="cleaning up after shim disconnected" id=49c644bdffc2b03ae659e0f31b4048067ec1d48b333fa5303a66c014a5e4c0bc namespace=k8s.io
Jan 17 00:32:38.568208 containerd[1973]: time="2026-01-17T00:32:38.568203597Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 00:32:39.123219 kubelet[2430]: E0117 00:32:39.123008 2430 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:32:39.445174 containerd[1973]: time="2026-01-17T00:32:39.445070334Z" level=info msg="CreateContainer within sandbox \"bf9c9c77fa74e6fd20661042afd6d6b44e1c491eb97608d02ab27de25ab20628\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 17 00:32:39.478340 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount716619693.mount: Deactivated successfully.
Jan 17 00:32:39.488732 containerd[1973]: time="2026-01-17T00:32:39.488679991Z" level=info msg="CreateContainer within sandbox \"bf9c9c77fa74e6fd20661042afd6d6b44e1c491eb97608d02ab27de25ab20628\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"92de27fbb39eabfa6499d4ceb0a3d8b318224ac5e05ced86e2e189e170bfeeb4\""
Jan 17 00:32:39.489481 containerd[1973]: time="2026-01-17T00:32:39.489446346Z" level=info msg="StartContainer for \"92de27fbb39eabfa6499d4ceb0a3d8b318224ac5e05ced86e2e189e170bfeeb4\""
Jan 17 00:32:39.536787 systemd[1]: Started cri-containerd-92de27fbb39eabfa6499d4ceb0a3d8b318224ac5e05ced86e2e189e170bfeeb4.scope - libcontainer container 92de27fbb39eabfa6499d4ceb0a3d8b318224ac5e05ced86e2e189e170bfeeb4.
Jan 17 00:32:39.569146 containerd[1973]: time="2026-01-17T00:32:39.569096105Z" level=info msg="StartContainer for \"92de27fbb39eabfa6499d4ceb0a3d8b318224ac5e05ced86e2e189e170bfeeb4\" returns successfully"
Jan 17 00:32:40.123805 kubelet[2430]: E0117 00:32:40.123745 2430 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:32:40.323295 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Jan 17 00:32:40.485232 kubelet[2430]: I0117 00:32:40.484974 2430 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-k6tlc" podStartSLOduration=6.484954146 podStartE2EDuration="6.484954146s" podCreationTimestamp="2026-01-17 00:32:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:32:40.484327433 +0000 UTC m=+64.894438874" watchObservedRunningTime="2026-01-17 00:32:40.484954146 +0000 UTC m=+64.895065588"
Jan 17 00:32:41.124878 kubelet[2430]: E0117 00:32:41.124813 2430 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:32:42.125527 kubelet[2430]: E0117 00:32:42.125475 2430 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:32:43.126066 kubelet[2430]: E0117 00:32:43.125983 2430 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:32:43.158958 systemd[1]: run-containerd-runc-k8s.io-92de27fbb39eabfa6499d4ceb0a3d8b318224ac5e05ced86e2e189e170bfeeb4-runc.tfPyUh.mount: Deactivated successfully.
Jan 17 00:32:43.381711 (udev-worker)[5213]: Network interface NamePolicy= disabled on kernel command line.
Jan 17 00:32:43.384966 systemd-networkd[1900]: lxc_health: Link UP
Jan 17 00:32:43.395809 (udev-worker)[5210]: Network interface NamePolicy= disabled on kernel command line.
Jan 17 00:32:43.396278 systemd-networkd[1900]: lxc_health: Gained carrier
Jan 17 00:32:44.126330 kubelet[2430]: E0117 00:32:44.126278 2430 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:32:45.012538 systemd-networkd[1900]: lxc_health: Gained IPv6LL
Jan 17 00:32:45.126733 kubelet[2430]: E0117 00:32:45.126676 2430 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:32:45.418506 systemd[1]: run-containerd-runc-k8s.io-92de27fbb39eabfa6499d4ceb0a3d8b318224ac5e05ced86e2e189e170bfeeb4-runc.id9uCz.mount: Deactivated successfully.
Jan 17 00:32:46.127891 kubelet[2430]: E0117 00:32:46.127828 2430 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:32:47.128551 kubelet[2430]: E0117 00:32:47.128496 2430 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:32:47.272976 ntpd[1957]: Listen normally on 16 lxc_health [fe80::68c7:87ff:fe40:bcbe%15]:123
Jan 17 00:32:47.273618 ntpd[1957]: 17 Jan 00:32:47 ntpd[1957]: Listen normally on 16 lxc_health [fe80::68c7:87ff:fe40:bcbe%15]:123
Jan 17 00:32:47.602024 systemd[1]: run-containerd-runc-k8s.io-92de27fbb39eabfa6499d4ceb0a3d8b318224ac5e05ced86e2e189e170bfeeb4-runc.64fNGu.mount: Deactivated successfully.
Jan 17 00:32:48.129296 kubelet[2430]: E0117 00:32:48.129242 2430 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:32:49.129429 kubelet[2430]: E0117 00:32:49.129386 2430 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:32:49.782481 systemd[1]: run-containerd-runc-k8s.io-92de27fbb39eabfa6499d4ceb0a3d8b318224ac5e05ced86e2e189e170bfeeb4-runc.eS2QO1.mount: Deactivated successfully.
Jan 17 00:32:50.129975 kubelet[2430]: E0117 00:32:50.129917 2430 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:32:51.131097 kubelet[2430]: E0117 00:32:51.131039 2430 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:32:52.131659 kubelet[2430]: E0117 00:32:52.131612 2430 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:32:53.132206 kubelet[2430]: E0117 00:32:53.132145 2430 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:32:54.132865 kubelet[2430]: E0117 00:32:54.132820 2430 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:32:55.133203 kubelet[2430]: E0117 00:32:55.133134 2430 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:32:56.073265 kubelet[2430]: E0117 00:32:56.073203 2430 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:32:56.134064 kubelet[2430]: E0117 00:32:56.133999 2430 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:32:57.135032 kubelet[2430]: E0117 00:32:57.134954 2430 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:32:58.135200 kubelet[2430]: E0117 00:32:58.135120 2430 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:32:59.135446 kubelet[2430]: E0117 00:32:59.135391 2430 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:33:00.136361 kubelet[2430]: E0117 00:33:00.136300 2430 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:33:01.137021 kubelet[2430]: E0117 00:33:01.136965 2430 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:33:02.137611 kubelet[2430]: E0117 00:33:02.137550 2430 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:33:03.138473 kubelet[2430]: E0117 00:33:03.138057 2430 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:33:04.139787 kubelet[2430]: E0117 00:33:04.139684 2430 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:33:05.140146 kubelet[2430]: E0117 00:33:05.140086 2430 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:33:06.140319 kubelet[2430]: E0117 00:33:06.140246 2430 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:33:07.141050 kubelet[2430]: E0117 00:33:07.140982 2430 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:33:07.367400 kubelet[2430]: E0117 00:33:07.367335 2430 controller.go:195] "Failed to update lease" err="Put \"https://172.31.18.61:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.24.155?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 17 00:33:08.141722 kubelet[2430]: E0117 00:33:08.141640 2430 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:33:09.142033 kubelet[2430]: E0117 00:33:09.141949 2430 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:33:10.142202 kubelet[2430]: E0117 00:33:10.142138 2430 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:33:11.142752 kubelet[2430]: E0117 00:33:11.142694 2430 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:33:12.142923 kubelet[2430]: E0117 00:33:12.142868 2430 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:33:13.144057 kubelet[2430]: E0117 00:33:13.143998 2430 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:33:14.144840 kubelet[2430]: E0117 00:33:14.144783 2430 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:33:15.145170 kubelet[2430]: E0117 00:33:15.145062 2430 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:33:16.073345 kubelet[2430]: E0117 00:33:16.073265 2430 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:33:16.145542 kubelet[2430]: E0117 00:33:16.145501 2430 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:33:17.146614 kubelet[2430]: E0117 00:33:17.146557 2430 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:33:17.368136 kubelet[2430]: E0117 00:33:17.368072 2430 controller.go:195] "Failed to update lease" err="Put \"https://172.31.18.61:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.24.155?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 17 00:33:18.147398 kubelet[2430]: E0117 00:33:18.147338 2430 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:33:18.361459 kubelet[2430]: E0117 00:33:18.361377 2430 kubelet_node_status.go:548] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"NetworkUnavailable\\\"},{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-17T00:33:08Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-17T00:33:08Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-17T00:33:08Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-17T00:33:08Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\\\"],\\\"sizeBytes\\\":166719855},{\\\"names\\\":[\\\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\\\",\\\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\\\"],\\\"sizeBytes\\\":91036984},{\\\"names\\\":[\\\"ghcr.io/flatcar/nginx@sha256:892d1d54ab079b8cffa2317ccb45829886a0c3c3edbdf92bb286904b09797767\\\",\\\"ghcr.io/flatcar/nginx:latest\\\"],\\\"sizeBytes\\\":63840197},{\\\"names\\\":[\\\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\\\",\\\"registry.k8s.io/kube-proxy:v1.32.11\\\"],\\\"sizeBytes\\\":31160918},{\\\"names\\\":[\\\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\\\"],\\\"sizeBytes\\\":18897442},{\\\"names\\\":[\\\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\\\",\\\"registry.k8s.io/pause:3.8\\\"],\\\"sizeBytes\\\":311286}]}}\" for node \"172.31.24.155\": Patch \"https://172.31.18.61:6443/api/v1/nodes/172.31.24.155/status?timeout=10s\": context deadline exceeded"
Jan 17 00:33:19.148081 kubelet[2430]: E0117 00:33:19.147981 2430 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:33:20.148581 kubelet[2430]: E0117 00:33:20.148537 2430 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:33:21.148946 kubelet[2430]: E0117 00:33:21.148895 2430 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:33:22.150095 kubelet[2430]: E0117 00:33:22.149952 2430 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:33:23.151018 kubelet[2430]: E0117 00:33:23.150961 2430 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:33:24.151993 kubelet[2430]: E0117 00:33:24.151921 2430 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:33:25.152157 kubelet[2430]: E0117 00:33:25.152083 2430 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:33:26.152561 kubelet[2430]: E0117 00:33:26.152491 2430 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:33:27.152955 kubelet[2430]: E0117 00:33:27.152894 2430 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:33:27.383034 kubelet[2430]: E0117 00:33:27.382977 2430 controller.go:195] "Failed to update lease" err="Put \"https://172.31.18.61:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.24.155?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 17 00:33:28.153927 kubelet[2430]: E0117 00:33:28.153875 2430 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:33:28.361898 kubelet[2430]: E0117 00:33:28.361825 2430 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"172.31.24.155\": Get \"https://172.31.18.61:6443/api/v1/nodes/172.31.24.155?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 17 00:33:29.154868 kubelet[2430]: E0117 00:33:29.154824 2430 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:33:30.155744 kubelet[2430]: E0117 00:33:30.155663 2430 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:33:31.156443 kubelet[2430]: E0117 00:33:31.156384 2430 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:33:32.157591 kubelet[2430]: E0117 00:33:32.157546 2430 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:33:33.158285 kubelet[2430]: E0117 00:33:33.158207 2430 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:33:34.158818 kubelet[2430]: E0117 00:33:34.158771 2430 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:33:35.159315 kubelet[2430]: E0117 00:33:35.159271 2430 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:33:36.073389 kubelet[2430]: E0117 00:33:36.073350 2430 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:33:36.159937 kubelet[2430]: E0117 00:33:36.159869 2430 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:33:37.160070 kubelet[2430]: E0117 00:33:37.160013 2430 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:33:37.383377 kubelet[2430]: E0117 00:33:37.383320 2430 controller.go:195] "Failed to update lease" err="Put \"https://172.31.18.61:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.24.155?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 17 00:33:37.415790 kubelet[2430]: E0117 00:33:37.415408 2430 controller.go:195] "Failed to update lease" err="Put \"https://172.31.18.61:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.24.155?timeout=10s\": unexpected EOF"
Jan 17 00:33:37.415790 kubelet[2430]: I0117 00:33:37.415446 2430 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
Jan 17 00:33:38.160541 kubelet[2430]: E0117 00:33:38.160483 2430 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:33:38.416032 kubelet[2430]: E0117 00:33:38.415904 2430 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"172.31.24.155\": Get \"https://172.31.18.61:6443/api/v1/nodes/172.31.24.155?timeout=10s\": context deadline exceeded - error from a previous attempt: unexpected EOF"
Jan 17 00:33:38.420201 kubelet[2430]: E0117 00:33:38.417478 2430 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"172.31.24.155\": Get \"https://172.31.18.61:6443/api/v1/nodes/172.31.24.155?timeout=10s\": dial tcp 172.31.18.61:6443: connect: connection refused"
Jan 17 00:33:38.427221 kubelet[2430]: E0117 00:33:38.425783 2430 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.61:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.24.155?timeout=10s\": dial tcp 172.31.18.61:6443: connect: connection refused - error from a previous attempt: dial tcp 172.31.18.61:6443: connect: connection reset by peer" interval="200ms"
Jan 17 00:33:38.427221 kubelet[2430]: E0117 00:33:38.426758 2430 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"172.31.24.155\": Get \"https://172.31.18.61:6443/api/v1/nodes/172.31.24.155?timeout=10s\": dial tcp 172.31.18.61:6443: connect: connection refused"
Jan 17 00:33:38.427221 kubelet[2430]: E0117 00:33:38.426777 2430 kubelet_node_status.go:535] "Unable to update node status" err="update node status exceeds retry count"
Jan 17 00:33:39.161548 kubelet[2430]: E0117 00:33:39.161500 2430 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:33:40.161851 kubelet[2430]: E0117 00:33:40.161789 2430 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:33:41.162513 kubelet[2430]: E0117 00:33:41.162453 2430 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:33:42.163364 kubelet[2430]: E0117 00:33:42.163306 2430 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:33:43.163766 kubelet[2430]: E0117 00:33:43.163632 2430 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:33:44.164043 kubelet[2430]: E0117 00:33:44.164000 2430 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:33:45.165083 kubelet[2430]: E0117 00:33:45.165039 2430 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:33:46.165929 kubelet[2430]: E0117 00:33:46.165874 2430 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:33:47.166465 kubelet[2430]: E0117 00:33:47.166392 2430 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:33:48.167264 kubelet[2430]: E0117 00:33:48.167198 2430 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:33:48.626363 kubelet[2430]: E0117 00:33:48.626305 2430 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.61:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.24.155?timeout=10s\": context deadline exceeded" interval="400ms"
Jan 17 00:33:49.167775 kubelet[2430]: E0117 00:33:49.167703 2430 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:33:50.168921 kubelet[2430]: E0117 00:33:50.168844 2430 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:33:51.170043 kubelet[2430]: E0117 00:33:51.169997 2430 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:33:52.171201 kubelet[2430]: E0117 00:33:52.171106 2430 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:33:53.172203 kubelet[2430]: E0117 00:33:53.172147 2430 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:33:54.172382 kubelet[2430]: E0117 00:33:54.172329 2430 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:33:55.173416 kubelet[2430]: E0117 00:33:55.173336 2430 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:33:56.073137 kubelet[2430]: E0117 00:33:56.073078 2430 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:33:56.174199 kubelet[2430]: E0117 00:33:56.174127 2430 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:33:57.174814 kubelet[2430]: E0117 00:33:57.174748 2430 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:33:58.175810 kubelet[2430]: E0117 00:33:58.175745 2430 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:33:58.734007 kubelet[2430]: E0117 00:33:58.733937 2430 kubelet_node_status.go:548] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"NetworkUnavailable\\\"},{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-17T00:33:48Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-17T00:33:48Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-17T00:33:48Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-17T00:33:48Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\\\"],\\\"sizeBytes\\\":166719855},{\\\"names\\\":[\\\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\\\",\\\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\\\"],\\\"sizeBytes\\\":91036984},{\\\"names\\\":[\\\"ghcr.io/flatcar/nginx@sha256:892d1d54ab079b8cffa2317ccb45829886a0c3c3edbdf92bb286904b09797767\\\",\\\"ghcr.io/flatcar/nginx:latest\\\"],\\\"sizeBytes\\\":63840197},{\\\"names\\\":[\\\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\\\",\\\"registry.k8s.io/kube-proxy:v1.32.11\\\"],\\\"sizeBytes\\\":31160918},{\\\"names\\\":[\\\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\\\"],\\\"sizeBytes\\\":18897442},{\\\"names\\\":[\\\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\\\",\\\"registry.k8s.io/pause:3.8\\\"],\\\"sizeBytes\\\":311286}]}}\" for node \"172.31.24.155\": Patch \"https://172.31.18.61:6443/api/v1/nodes/172.31.24.155/status?timeout=10s\": context deadline exceeded"
Jan 17 00:33:59.028047 kubelet[2430]: E0117 00:33:59.027914 2430 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.61:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.24.155?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="800ms"