Jan 20 03:17:35.391891 kernel: Linux version 6.12.66-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Mon Jan 19 22:14:52 -00 2026 Jan 20 03:17:35.391927 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f1266f495940b87d8762edac6a2036329f4c1218cb3943862a5de7e7a0c377ea Jan 20 03:17:35.391942 kernel: BIOS-provided physical RAM map: Jan 20 03:17:35.391956 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Jan 20 03:17:35.391966 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Jan 20 03:17:35.391974 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Jan 20 03:17:35.391986 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Jan 20 03:17:35.391996 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Jan 20 03:17:35.392005 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable Jan 20 03:17:35.392016 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS Jan 20 03:17:35.392024 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable Jan 20 03:17:35.392034 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved Jan 20 03:17:35.392048 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable Jan 20 03:17:35.392059 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved Jan 20 03:17:35.392069 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data Jan 20 03:17:35.392079 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Jan 20 03:17:35.392089 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce90fff] usable Jan 20 03:17:35.392105 kernel: BIOS-e820: [mem 0x000000009ce91000-0x000000009ce94fff] reserved Jan 20 03:17:35.392114 kernel: BIOS-e820: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS Jan 20 03:17:35.392123 kernel: BIOS-e820: [mem 0x000000009ce97000-0x000000009cedbfff] usable Jan 20 03:17:35.392134 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved Jan 20 03:17:35.392144 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Jan 20 03:17:35.392155 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved Jan 20 03:17:35.392164 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jan 20 03:17:35.392175 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved Jan 20 03:17:35.392185 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Jan 20 03:17:35.392196 kernel: NX (Execute Disable) protection: active Jan 20 03:17:35.392205 kernel: APIC: Static calls initialized Jan 20 03:17:35.392362 kernel: e820: update [mem 0x9b320018-0x9b329c57] usable ==> usable Jan 20 03:17:35.392378 kernel: e820: update [mem 0x9b2e3018-0x9b31fe57] usable ==> usable Jan 20 03:17:35.392389 kernel: extended physical RAM map: Jan 20 03:17:35.392399 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable Jan 20 03:17:35.392410 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable Jan 20 03:17:35.392418 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Jan 20 03:17:35.392429 kernel: reserve 
setup_data: [mem 0x0000000000808000-0x000000000080afff] usable Jan 20 03:17:35.392439 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Jan 20 03:17:35.392449 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable Jan 20 03:17:35.392461 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS Jan 20 03:17:35.392470 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b2e3017] usable Jan 20 03:17:35.392487 kernel: reserve setup_data: [mem 0x000000009b2e3018-0x000000009b31fe57] usable Jan 20 03:17:35.392501 kernel: reserve setup_data: [mem 0x000000009b31fe58-0x000000009b320017] usable Jan 20 03:17:35.392513 kernel: reserve setup_data: [mem 0x000000009b320018-0x000000009b329c57] usable Jan 20 03:17:35.392523 kernel: reserve setup_data: [mem 0x000000009b329c58-0x000000009bd3efff] usable Jan 20 03:17:35.392535 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved Jan 20 03:17:35.392549 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable Jan 20 03:17:35.392560 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved Jan 20 03:17:35.392571 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data Jan 20 03:17:35.392583 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Jan 20 03:17:35.392592 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce90fff] usable Jan 20 03:17:35.392602 kernel: reserve setup_data: [mem 0x000000009ce91000-0x000000009ce94fff] reserved Jan 20 03:17:35.392614 kernel: reserve setup_data: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS Jan 20 03:17:35.392625 kernel: reserve setup_data: [mem 0x000000009ce97000-0x000000009cedbfff] usable Jan 20 03:17:35.392636 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved Jan 20 03:17:35.392646 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Jan 20 03:17:35.392657 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved Jan 20 03:17:35.392672 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jan 20 03:17:35.392683 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved Jan 20 03:17:35.392693 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Jan 20 03:17:35.392704 kernel: efi: EFI v2.7 by EDK II Jan 20 03:17:35.392715 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b9e4198 RNG=0x9cb73018 Jan 20 03:17:35.392726 kernel: random: crng init done Jan 20 03:17:35.392735 kernel: efi: Remove mem151: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map Jan 20 03:17:35.392748 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved Jan 20 03:17:35.392758 kernel: secureboot: Secure boot disabled Jan 20 03:17:35.392769 kernel: SMBIOS 2.8 present. 
Jan 20 03:17:35.392779 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022 Jan 20 03:17:35.392794 kernel: DMI: Memory slots populated: 1/1 Jan 20 03:17:35.392805 kernel: Hypervisor detected: KVM Jan 20 03:17:35.392816 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000 Jan 20 03:17:35.392825 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 20 03:17:35.392837 kernel: kvm-clock: using sched offset of 7501840483 cycles Jan 20 03:17:35.392849 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 20 03:17:35.392861 kernel: tsc: Detected 2445.426 MHz processor Jan 20 03:17:35.392871 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 20 03:17:35.392883 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 20 03:17:35.392893 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000 Jan 20 03:17:35.392905 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Jan 20 03:17:35.392919 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 20 03:17:35.392932 kernel: Using GB pages for direct mapping Jan 20 03:17:35.392943 kernel: ACPI: Early table checksum verification disabled Jan 20 03:17:35.392955 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Jan 20 03:17:35.392965 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Jan 20 03:17:35.392977 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 20 03:17:35.392988 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 20 03:17:35.393000 kernel: ACPI: FACS 0x000000009CBDD000 000040 Jan 20 03:17:35.393009 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 20 03:17:35.393025 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 20 03:17:35.393036 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 20 03:17:35.393048 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 20 03:17:35.393058 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) Jan 20 03:17:35.393070 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] Jan 20 03:17:35.393081 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9] Jan 20 03:17:35.393093 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Jan 20 03:17:35.393103 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] Jan 20 03:17:35.393118 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] Jan 20 03:17:35.393130 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] Jan 20 03:17:35.393140 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] Jan 20 03:17:35.393151 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] Jan 20 03:17:35.393162 kernel: No NUMA configuration found Jan 20 03:17:35.393173 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff] Jan 20 03:17:35.393184 kernel: NODE_DATA(0) allocated [mem 0x9ce36dc0-0x9ce3dfff] Jan 20 03:17:35.393194 kernel: Zone ranges: Jan 20 03:17:35.393206 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 20 03:17:35.393217 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff] Jan 20 03:17:35.393374 kernel: Normal empty Jan 20 03:17:35.393385 kernel: Device empty Jan 20 
03:17:35.393397 kernel: Movable zone start for each node Jan 20 03:17:35.393406 kernel: Early memory node ranges Jan 20 03:17:35.393416 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Jan 20 03:17:35.393428 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Jan 20 03:17:35.393439 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Jan 20 03:17:35.393451 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff] Jan 20 03:17:35.393460 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff] Jan 20 03:17:35.393476 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff] Jan 20 03:17:35.393487 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce90fff] Jan 20 03:17:35.393499 kernel: node 0: [mem 0x000000009ce97000-0x000000009cedbfff] Jan 20 03:17:35.393508 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff] Jan 20 03:17:35.393520 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 20 03:17:35.393543 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Jan 20 03:17:35.393556 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Jan 20 03:17:35.393568 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 20 03:17:35.393579 kernel: On node 0, zone DMA: 239 pages in unavailable ranges Jan 20 03:17:35.393592 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges Jan 20 03:17:35.393602 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Jan 20 03:17:35.393614 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges Jan 20 03:17:35.393626 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges Jan 20 03:17:35.393642 kernel: ACPI: PM-Timer IO Port: 0x608 Jan 20 03:17:35.393652 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 20 03:17:35.393665 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 20 03:17:35.393676 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 20 03:17:35.393688 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 20 03:17:35.393703 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 20 03:17:35.393715 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 20 03:17:35.393727 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 20 03:17:35.393738 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 20 03:17:35.393748 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jan 20 03:17:35.393760 kernel: TSC deadline timer available Jan 20 03:17:35.393772 kernel: CPU topo: Max. logical packages: 1 Jan 20 03:17:35.393783 kernel: CPU topo: Max. logical dies: 1 Jan 20 03:17:35.393793 kernel: CPU topo: Max. dies per package: 1 Jan 20 03:17:35.393809 kernel: CPU topo: Max. threads per core: 1 Jan 20 03:17:35.393821 kernel: CPU topo: Num. cores per package: 4 Jan 20 03:17:35.393832 kernel: CPU topo: Num. 
threads per package: 4 Jan 20 03:17:35.393842 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs Jan 20 03:17:35.393854 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 20 03:17:35.393865 kernel: kvm-guest: KVM setup pv remote TLB flush Jan 20 03:17:35.393877 kernel: kvm-guest: setup PV sched yield Jan 20 03:17:35.393886 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices Jan 20 03:17:35.393898 kernel: Booting paravirtualized kernel on KVM Jan 20 03:17:35.393910 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 20 03:17:35.393926 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Jan 20 03:17:35.393937 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288 Jan 20 03:17:35.393949 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152 Jan 20 03:17:35.393961 kernel: pcpu-alloc: [0] 0 1 2 3 Jan 20 03:17:35.393971 kernel: kvm-guest: PV spinlocks enabled Jan 20 03:17:35.393983 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 20 03:17:35.393996 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f1266f495940b87d8762edac6a2036329f4c1218cb3943862a5de7e7a0c377ea Jan 20 03:17:35.394009 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 20 03:17:35.394023 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 20 03:17:35.394036 kernel: Fallback order for Node 0: 0 Jan 20 03:17:35.394047 kernel: Built 1 zonelists, mobility grouping on. Total pages: 641450 Jan 20 03:17:35.394060 kernel: Policy zone: DMA32 Jan 20 03:17:35.394075 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 20 03:17:35.394089 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jan 20 03:17:35.394100 kernel: ftrace: allocating 40097 entries in 157 pages Jan 20 03:17:35.394111 kernel: ftrace: allocated 157 pages with 5 groups Jan 20 03:17:35.394123 kernel: Dynamic Preempt: voluntary Jan 20 03:17:35.394140 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 20 03:17:35.394151 kernel: rcu: RCU event tracing is enabled. Jan 20 03:17:35.394164 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jan 20 03:17:35.394175 kernel: Trampoline variant of Tasks RCU enabled. Jan 20 03:17:35.394188 kernel: Rude variant of Tasks RCU enabled. Jan 20 03:17:35.394198 kernel: Tracing variant of Tasks RCU enabled. Jan 20 03:17:35.394211 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 20 03:17:35.394367 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jan 20 03:17:35.394380 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 20 03:17:35.394398 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 20 03:17:35.394410 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 20 03:17:35.394422 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Jan 20 03:17:35.394433 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Jan 20 03:17:35.394445 kernel: Console: colour dummy device 80x25 Jan 20 03:17:35.394457 kernel: printk: legacy console [ttyS0] enabled Jan 20 03:17:35.394469 kernel: ACPI: Core revision 20240827 Jan 20 03:17:35.394480 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jan 20 03:17:35.394491 kernel: APIC: Switch to symmetric I/O mode setup Jan 20 03:17:35.394507 kernel: x2apic enabled Jan 20 03:17:35.394519 kernel: APIC: Switched APIC routing to: physical x2apic Jan 20 03:17:35.394529 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Jan 20 03:17:35.394542 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Jan 20 03:17:35.394553 kernel: kvm-guest: setup PV IPIs Jan 20 03:17:35.394566 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jan 20 03:17:35.394577 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns Jan 20 03:17:35.394590 kernel: Calibrating delay loop (skipped) preset value.. 4890.85 BogoMIPS (lpj=2445426) Jan 20 03:17:35.394601 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jan 20 03:17:35.394617 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Jan 20 03:17:35.394627 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Jan 20 03:17:35.394640 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 20 03:17:35.394651 kernel: Spectre V2 : Mitigation: Retpolines Jan 20 03:17:35.394663 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jan 20 03:17:35.394673 kernel: Speculative Store Bypass: Vulnerable Jan 20 03:17:35.394685 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Jan 20 03:17:35.394698 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Jan 20 03:17:35.394713 kernel: active return thunk: srso_alias_return_thunk Jan 20 03:17:35.394726 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Jan 20 03:17:35.394738 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM Jan 20 03:17:35.394750 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode Jan 20 03:17:35.394760 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 20 03:17:35.394773 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 20 03:17:35.394785 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 20 03:17:35.394797 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 20 03:17:35.394808 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Jan 20 03:17:35.394824 kernel: Freeing SMP alternatives memory: 32K Jan 20 03:17:35.394836 kernel: pid_max: default: 32768 minimum: 301 Jan 20 03:17:35.394848 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jan 20 03:17:35.394858 kernel: landlock: Up and running. Jan 20 03:17:35.394870 kernel: SELinux: Initializing. 
Jan 20 03:17:35.394881 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 20 03:17:35.394893 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 20 03:17:35.394903 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1) Jan 20 03:17:35.394917 kernel: Performance Events: PMU not available due to virtualization, using software events only. Jan 20 03:17:35.394931 kernel: signal: max sigframe size: 1776 Jan 20 03:17:35.394944 kernel: rcu: Hierarchical SRCU implementation. Jan 20 03:17:35.394955 kernel: rcu: Max phase no-delay instances is 400. Jan 20 03:17:35.394968 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Jan 20 03:17:35.394978 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 20 03:17:35.394990 kernel: smp: Bringing up secondary CPUs ... Jan 20 03:17:35.395001 kernel: smpboot: x86: Booting SMP configuration: Jan 20 03:17:35.395014 kernel: .... node #0, CPUs: #1 #2 #3 Jan 20 03:17:35.395024 kernel: smp: Brought up 1 node, 4 CPUs Jan 20 03:17:35.395041 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS) Jan 20 03:17:35.395053 kernel: Memory: 2414476K/2565800K available (14336K kernel code, 2445K rwdata, 26064K rodata, 46204K init, 2556K bss, 145388K reserved, 0K cma-reserved) Jan 20 03:17:35.395065 kernel: devtmpfs: initialized Jan 20 03:17:35.395076 kernel: x86/mm: Memory block size: 128MB Jan 20 03:17:35.395088 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Jan 20 03:17:35.395100 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Jan 20 03:17:35.395111 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes) Jan 20 03:17:35.395122 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Jan 20 03:17:35.395134 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce95000-0x9ce96fff] (8192 bytes) Jan 20 03:17:35.395150 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Jan 20 03:17:35.395161 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 20 03:17:35.395173 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jan 20 03:17:35.395184 kernel: pinctrl core: initialized pinctrl subsystem Jan 20 03:17:35.395197 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 20 03:17:35.395207 kernel: audit: initializing netlink subsys (disabled) Jan 20 03:17:35.395219 kernel: audit: type=2000 audit(1768879050.750:1): state=initialized audit_enabled=0 res=1 Jan 20 03:17:35.395367 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 20 03:17:35.395382 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 20 03:17:35.395394 kernel: cpuidle: using governor menu Jan 20 03:17:35.395404 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 20 03:17:35.395417 kernel: dca service started, version 1.12.1 Jan 20 03:17:35.395428 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff] Jan 20 03:17:35.395440 kernel: PCI: Using configuration type 1 for base access Jan 20 03:17:35.395451 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 20 03:17:35.395503 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 20 03:17:35.395514 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 20 03:17:35.395529 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 20 03:17:35.395797 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 20 03:17:35.395813 kernel: ACPI: Added _OSI(Module Device) Jan 20 03:17:35.395825 kernel: ACPI: Added _OSI(Processor Device) Jan 20 03:17:35.395838 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 20 03:17:35.395849 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 20 03:17:35.395859 kernel: ACPI: Interpreter enabled Jan 20 03:17:35.395871 kernel: ACPI: PM: (supports S0 S3 S5) Jan 20 03:17:35.395883 kernel: ACPI: Using IOAPIC for interrupt routing Jan 20 03:17:35.395901 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 20 03:17:35.395912 kernel: PCI: Using E820 reservations for host bridge windows Jan 20 03:17:35.395923 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Jan 20 03:17:35.395935 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 20 03:17:35.396195 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 20 03:17:35.396549 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Jan 20 03:17:35.396737 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Jan 20 03:17:35.396756 kernel: PCI host bridge to bus 0000:00 Jan 20 03:17:35.396951 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 20 03:17:35.397104 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 20 03:17:35.397381 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 20 03:17:35.397525 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window] Jan 20 03:17:35.397661 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window] Jan 20 03:17:35.397806 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window] Jan 20 03:17:35.397965 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 20 03:17:35.398135 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint Jan 20 03:17:35.398441 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint Jan 20 03:17:35.398597 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref] Jan 20 03:17:35.398745 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff] Jan 20 03:17:35.398904 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref] Jan 20 03:17:35.399057 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 20 03:17:35.399305 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Jan 20 03:17:35.399517 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f] Jan 20 03:17:35.399666 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff] Jan 20 03:17:35.399812 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref] Jan 20 03:17:35.399986 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint Jan 20 03:17:35.400139 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f] Jan 20 03:17:35.400449 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff] Jan 20 03:17:35.400611 kernel: pci 0000:00:03.0: BAR 4 [mem 
0x380000004000-0x380000007fff 64bit pref] Jan 20 03:17:35.400783 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Jan 20 03:17:35.400943 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff] Jan 20 03:17:35.401092 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff] Jan 20 03:17:35.401372 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref] Jan 20 03:17:35.401528 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref] Jan 20 03:17:35.401685 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint Jan 20 03:17:35.401946 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Jan 20 03:17:35.402592 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint Jan 20 03:17:35.402787 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df] Jan 20 03:17:35.402971 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff] Jan 20 03:17:35.403165 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint Jan 20 03:17:35.403558 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf] Jan 20 03:17:35.403584 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 20 03:17:35.403599 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 20 03:17:35.403609 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 20 03:17:35.403621 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 20 03:17:35.403633 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Jan 20 03:17:35.403645 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Jan 20 03:17:35.403656 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Jan 20 03:17:35.403667 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Jan 20 03:17:35.403679 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Jan 20 03:17:35.403696 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Jan 20 03:17:35.403706 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Jan 20 03:17:35.403718 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Jan 20 03:17:35.403730 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Jan 20 03:17:35.403742 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Jan 20 03:17:35.403753 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Jan 20 03:17:35.403764 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Jan 20 03:17:35.403776 kernel: iommu: Default domain type: Translated Jan 20 03:17:35.403789 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 20 03:17:35.403803 kernel: efivars: Registered efivars operations Jan 20 03:17:35.403815 kernel: PCI: Using ACPI for IRQ routing Jan 20 03:17:35.403827 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 20 03:17:35.403839 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Jan 20 03:17:35.403849 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff] Jan 20 03:17:35.403861 kernel: e820: reserve RAM buffer [mem 0x9b2e3018-0x9bffffff] Jan 20 03:17:35.403872 kernel: e820: reserve RAM buffer [mem 0x9b320018-0x9bffffff] Jan 20 03:17:35.403884 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff] Jan 20 03:17:35.403894 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff] Jan 20 03:17:35.403911 kernel: e820: reserve RAM buffer [mem 0x9ce91000-0x9fffffff] Jan 20 03:17:35.403923 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff] Jan 20 
03:17:35.404111 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Jan 20 03:17:35.404442 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Jan 20 03:17:35.404623 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 20 03:17:35.404640 kernel: vgaarb: loaded Jan 20 03:17:35.404653 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jan 20 03:17:35.404663 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jan 20 03:17:35.404682 kernel: clocksource: Switched to clocksource kvm-clock Jan 20 03:17:35.404694 kernel: VFS: Disk quotas dquot_6.6.0 Jan 20 03:17:35.404705 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 20 03:17:35.404716 kernel: pnp: PnP ACPI init Jan 20 03:17:35.404911 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved Jan 20 03:17:35.404931 kernel: pnp: PnP ACPI: found 6 devices Jan 20 03:17:35.404943 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 20 03:17:35.404956 kernel: NET: Registered PF_INET protocol family Jan 20 03:17:35.404972 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 20 03:17:35.404985 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 20 03:17:35.404997 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 20 03:17:35.405008 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 20 03:17:35.405042 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 20 03:17:35.405058 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 20 03:17:35.405069 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 20 03:17:35.405082 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 20 03:17:35.405098 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 20 03:17:35.405109 kernel: NET: Registered PF_XDP protocol family Jan 20 03:17:35.405434 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window Jan 20 03:17:35.405622 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned Jan 20 03:17:35.405793 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 20 03:17:35.405963 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 20 03:17:35.406133 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 20 03:17:35.406446 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window] Jan 20 03:17:35.406623 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window] Jan 20 03:17:35.406792 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window] Jan 20 03:17:35.406809 kernel: PCI: CLS 0 bytes, default 64 Jan 20 03:17:35.406823 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns Jan 20 03:17:35.406836 kernel: Initialise system trusted keyrings Jan 20 03:17:35.406848 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 20 03:17:35.406859 kernel: Key type asymmetric registered Jan 20 03:17:35.406871 kernel: Asymmetric key parser 'x509' registered Jan 20 03:17:35.406883 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jan 20 03:17:35.406899 kernel: io scheduler mq-deadline registered Jan 20 03:17:35.406911 kernel: io scheduler kyber registered Jan 20 
03:17:35.406923 kernel: io scheduler bfq registered Jan 20 03:17:35.406936 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 20 03:17:35.406947 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jan 20 03:17:35.406960 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jan 20 03:17:35.406972 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jan 20 03:17:35.406989 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 20 03:17:35.407000 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 20 03:17:35.407013 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 20 03:17:35.407025 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 20 03:17:35.407037 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 20 03:17:35.407374 kernel: rtc_cmos 00:04: RTC can wake from S4 Jan 20 03:17:35.407395 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 20 03:17:35.407582 kernel: rtc_cmos 00:04: registered as rtc0 Jan 20 03:17:35.407759 kernel: rtc_cmos 00:04: setting system clock to 2026-01-20T03:17:34 UTC (1768879054) Jan 20 03:17:35.407933 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram Jan 20 03:17:35.407952 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Jan 20 03:17:35.407965 kernel: efifb: probing for efifb Jan 20 03:17:35.407976 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k Jan 20 03:17:35.407989 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 Jan 20 03:17:35.408001 kernel: efifb: scrolling: redraw Jan 20 03:17:35.408014 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jan 20 03:17:35.408031 kernel: Console: switching to colour frame buffer device 160x50 Jan 20 03:17:35.408047 kernel: fb0: EFI VGA frame buffer device Jan 20 03:17:35.408059 kernel: pstore: Using crash dump compression: deflate Jan 20 03:17:35.408070 kernel: pstore: Registered efi_pstore as persistent store backend Jan 20 03:17:35.408083 kernel: NET: Registered PF_INET6 protocol family Jan 20 03:17:35.408095 kernel: Segment Routing with IPv6 Jan 20 03:17:35.408107 kernel: In-situ OAM (IOAM) with IPv6 Jan 20 03:17:35.408118 kernel: NET: Registered PF_PACKET protocol family Jan 20 03:17:35.408131 kernel: Key type dns_resolver registered Jan 20 03:17:35.408147 kernel: IPI shorthand broadcast: enabled Jan 20 03:17:35.408158 kernel: sched_clock: Marking stable (4059050553, 530778223)->(4869769359, -279940583) Jan 20 03:17:35.408170 kernel: registered taskstats version 1 Jan 20 03:17:35.408182 kernel: Loading compiled-in X.509 certificates Jan 20 03:17:35.408195 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.66-flatcar: 5eaf2083485884e476a8ac33c4b07b82eff139e9' Jan 20 03:17:35.408206 kernel: Demotion targets for Node 0: null Jan 20 03:17:35.408219 kernel: Key type .fscrypt registered Jan 20 03:17:35.408386 kernel: Key type fscrypt-provisioning registered Jan 20 03:17:35.408397 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 20 03:17:35.408414 kernel: ima: Allocated hash algorithm: sha1 Jan 20 03:17:35.408426 kernel: ima: No architecture policies found Jan 20 03:17:35.408438 kernel: clk: Disabling unused clocks Jan 20 03:17:35.408449 kernel: Warning: unable to open an initial console. 
Jan 20 03:17:35.408462 kernel: Freeing unused kernel image (initmem) memory: 46204K Jan 20 03:17:35.408474 kernel: Write protecting the kernel read-only data: 40960k Jan 20 03:17:35.408485 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K Jan 20 03:17:35.408497 kernel: Run /init as init process Jan 20 03:17:35.408509 kernel: with arguments: Jan 20 03:17:35.408525 kernel: /init Jan 20 03:17:35.408537 kernel: with environment: Jan 20 03:17:35.408548 kernel: HOME=/ Jan 20 03:17:35.408561 kernel: TERM=linux Jan 20 03:17:35.408573 systemd[1]: Successfully made /usr/ read-only. Jan 20 03:17:35.408589 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 20 03:17:35.408603 systemd[1]: Detected virtualization kvm. Jan 20 03:17:35.408620 systemd[1]: Detected architecture x86-64. Jan 20 03:17:35.408632 systemd[1]: Running in initrd. Jan 20 03:17:35.408643 systemd[1]: No hostname configured, using default hostname. Jan 20 03:17:35.408655 systemd[1]: Hostname set to <localhost>. Jan 20 03:17:35.408668 systemd[1]: Initializing machine ID from VM UUID. Jan 20 03:17:35.408681 systemd[1]: Queued start job for default target initrd.target. Jan 20 03:17:35.408693 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 20 03:17:35.408706 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 20 03:17:35.408723 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 20 03:17:35.408736 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 20 03:17:35.408750 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 20 03:17:35.408766 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 20 03:17:35.408779 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 20 03:17:35.408792 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 20 03:17:35.408806 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 20 03:17:35.408822 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 20 03:17:35.408836 systemd[1]: Reached target paths.target - Path Units. Jan 20 03:17:35.408849 systemd[1]: Reached target slices.target - Slice Units. Jan 20 03:17:35.408863 systemd[1]: Reached target swap.target - Swaps. Jan 20 03:17:35.408874 systemd[1]: Reached target timers.target - Timer Units. Jan 20 03:17:35.408887 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 20 03:17:35.408901 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 20 03:17:35.408914 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 20 03:17:35.408926 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jan 20 03:17:35.408944 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Jan 20 03:17:35.408957 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 20 03:17:35.408975 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 20 03:17:35.408986 systemd[1]: Reached target sockets.target - Socket Units. Jan 20 03:17:35.409000 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 20 03:17:35.409013 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 20 03:17:35.409026 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 20 03:17:35.409039 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jan 20 03:17:35.409057 systemd[1]: Starting systemd-fsck-usr.service... Jan 20 03:17:35.409071 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 20 03:17:35.409082 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 20 03:17:35.409096 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 03:17:35.409142 systemd-journald[203]: Collecting audit messages is disabled. Jan 20 03:17:35.409180 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 20 03:17:35.409193 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 20 03:17:35.409207 systemd-journald[203]: Journal started Jan 20 03:17:35.409398 systemd-journald[203]: Runtime Journal (/run/log/journal/327cea59a37f4116a676ed1005944dff) is 6M, max 48.1M, 42.1M free. Jan 20 03:17:35.389790 systemd-modules-load[205]: Inserted module 'overlay' Jan 20 03:17:35.420676 systemd[1]: Started systemd-journald.service - Journal Service. Jan 20 03:17:35.425903 systemd[1]: Finished systemd-fsck-usr.service. Jan 20 03:17:35.431725 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 20 03:17:35.453584 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 20 03:17:35.472873 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 03:17:35.491964 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 20 03:17:35.505615 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 20 03:17:35.497929 systemd-tmpfiles[217]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jan 20 03:17:35.512504 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 20 03:17:35.548477 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 20 03:17:35.564368 kernel: Bridge firewalling registered Jan 20 03:17:35.558367 systemd-modules-load[205]: Inserted module 'br_netfilter' Jan 20 03:17:35.572956 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 20 03:17:35.601951 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 20 03:17:35.618818 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 20 03:17:35.627207 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Jan 20 03:17:35.652122 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 20 03:17:35.656155 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 20 03:17:35.667058 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 20 03:17:35.700543 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 20 03:17:35.722043 dracut-cmdline[243]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f1266f495940b87d8762edac6a2036329f4c1218cb3943862a5de7e7a0c377ea Jan 20 03:17:35.753731 systemd-resolved[245]: Positive Trust Anchors: Jan 20 03:17:35.753778 systemd-resolved[245]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 20 03:17:35.753804 systemd-resolved[245]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 20 03:17:35.756582 systemd-resolved[245]: Defaulting to hostname 'linux'. Jan 20 03:17:35.758489 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 20 03:17:35.761209 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 20 03:17:35.953458 kernel: SCSI subsystem initialized Jan 20 03:17:35.965643 kernel: Loading iSCSI transport class v2.0-870. Jan 20 03:17:35.983364 kernel: iscsi: registered transport (tcp) Jan 20 03:17:36.018139 kernel: iscsi: registered transport (qla4xxx) Jan 20 03:17:36.018369 kernel: QLogic iSCSI HBA Driver Jan 20 03:17:36.059727 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 20 03:17:36.105019 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 20 03:17:36.108643 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 20 03:17:36.186191 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 20 03:17:36.190210 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 20 03:17:36.286447 kernel: raid6: avx2x4 gen() 22035 MB/s Jan 20 03:17:36.306441 kernel: raid6: avx2x2 gen() 23371 MB/s Jan 20 03:17:36.327774 kernel: raid6: avx2x1 gen() 16268 MB/s Jan 20 03:17:36.327826 kernel: raid6: using algorithm avx2x2 gen() 23371 MB/s Jan 20 03:17:36.350727 kernel: raid6: .... xor() 23997 MB/s, rmw enabled Jan 20 03:17:36.350823 kernel: raid6: using avx2x2 recovery algorithm Jan 20 03:17:36.378571 kernel: xor: automatically using best checksumming function avx Jan 20 03:17:36.595513 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 20 03:17:36.607719 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 20 03:17:36.612158 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Jan 20 03:17:36.673408 systemd-udevd[454]: Using default interface naming scheme 'v255'. Jan 20 03:17:36.682834 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 20 03:17:36.688445 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 20 03:17:36.742905 dracut-pre-trigger[456]: rd.md=0: removing MD RAID activation Jan 20 03:17:36.816210 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 20 03:17:36.824496 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 20 03:17:36.960133 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 20 03:17:36.972484 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 20 03:17:37.065643 kernel: libata version 3.00 loaded. Jan 20 03:17:37.073836 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jan 20 03:17:37.078821 kernel: cryptd: max_cpu_qlen set to 1000 Jan 20 03:17:37.100157 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jan 20 03:17:37.136045 kernel: AES CTR mode by8 optimization enabled Jan 20 03:17:37.136105 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 20 03:17:37.136123 kernel: GPT:9289727 != 19775487 Jan 20 03:17:37.136139 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 20 03:17:37.136160 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Jan 20 03:17:37.136175 kernel: GPT:9289727 != 19775487 Jan 20 03:17:37.136189 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 20 03:17:37.136203 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 20 03:17:37.152433 kernel: ahci 0000:00:1f.2: version 3.0 Jan 20 03:17:37.152704 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 20 03:17:37.154698 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 20 03:17:37.154890 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 03:17:37.189740 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Jan 20 03:17:37.190201 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Jan 20 03:17:37.190570 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 20 03:17:37.190138 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 03:17:37.198632 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 03:17:37.205928 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jan 20 03:17:37.227186 kernel: scsi host0: ahci Jan 20 03:17:37.227666 kernel: scsi host1: ahci Jan 20 03:17:37.231982 kernel: scsi host2: ahci Jan 20 03:17:37.236362 kernel: scsi host3: ahci Jan 20 03:17:37.240384 kernel: scsi host4: ahci Jan 20 03:17:37.252866 kernel: scsi host5: ahci Jan 20 03:17:37.253196 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 lpm-pol 1 Jan 20 03:17:37.253217 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 lpm-pol 1 Jan 20 03:17:37.253833 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. 
Jan 20 03:17:37.287525 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 lpm-pol 1 Jan 20 03:17:37.287554 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 lpm-pol 1 Jan 20 03:17:37.287578 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 lpm-pol 1 Jan 20 03:17:37.287600 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 lpm-pol 1 Jan 20 03:17:37.285621 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 03:17:37.299374 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 20 03:17:37.309910 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 20 03:17:37.313423 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 20 03:17:37.342125 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 20 03:17:37.354768 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 20 03:17:37.400308 disk-uuid[620]: Primary Header is updated. Jan 20 03:17:37.400308 disk-uuid[620]: Secondary Entries is updated. Jan 20 03:17:37.400308 disk-uuid[620]: Secondary Header is updated. Jan 20 03:17:37.415461 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 20 03:17:37.425500 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 20 03:17:37.598456 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jan 20 03:17:37.598528 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 20 03:17:37.604420 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 20 03:17:37.609488 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 20 03:17:37.615399 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 20 03:17:37.620482 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 20 03:17:37.620555 kernel: ata3.00: LPM support broken, forcing max_power Jan 20 03:17:37.628846 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jan 20 03:17:37.628904 kernel: ata3.00: applying bridge limits Jan 20 03:17:37.632479 kernel: ata3.00: LPM support broken, forcing max_power Jan 20 03:17:37.638569 kernel: ata3.00: configured for UDMA/100 Jan 20 03:17:37.660583 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 20 03:17:37.746114 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jan 20 03:17:37.746685 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 20 03:17:37.763451 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jan 20 03:17:38.207327 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 20 03:17:38.216528 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 20 03:17:38.226639 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 20 03:17:38.233402 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 20 03:17:38.266394 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 20 03:17:38.330585 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 20 03:17:38.430797 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 20 03:17:38.432519 disk-uuid[621]: The operation has completed successfully. Jan 20 03:17:38.485550 systemd[1]: disk-uuid.service: Deactivated successfully. 
Jan 20 03:17:38.485837 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 20 03:17:38.524675 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 20 03:17:38.561529 sh[650]: Success Jan 20 03:17:38.600475 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 20 03:17:38.600548 kernel: device-mapper: uevent: version 1.0.3 Jan 20 03:17:38.606512 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jan 20 03:17:38.629563 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Jan 20 03:17:38.680209 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 20 03:17:38.688687 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 20 03:17:38.708827 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 20 03:17:38.724436 kernel: BTRFS: device fsid 1cad4abe-82cb-4052-9906-9dfb1f3e3340 devid 1 transid 44 /dev/mapper/usr (253:0) scanned by mount (662) Jan 20 03:17:38.736482 kernel: BTRFS info (device dm-0): first mount of filesystem 1cad4abe-82cb-4052-9906-9dfb1f3e3340 Jan 20 03:17:38.736558 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 20 03:17:38.759585 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 20 03:17:38.759669 kernel: BTRFS info (device dm-0): enabling free space tree Jan 20 03:17:38.761977 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 20 03:17:38.766408 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jan 20 03:17:38.770382 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 20 03:17:38.771495 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 20 03:17:38.816741 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 20 03:17:38.867455 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (699) Jan 20 03:17:38.876445 kernel: BTRFS info (device vda6): first mount of filesystem 084bfd60-dd5e-4810-8f7b-6e24dbaec2b2 Jan 20 03:17:38.876510 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 20 03:17:38.901505 kernel: BTRFS info (device vda6): turning on async discard Jan 20 03:17:38.901584 kernel: BTRFS info (device vda6): enabling free space tree Jan 20 03:17:38.915507 kernel: BTRFS info (device vda6): last unmount of filesystem 084bfd60-dd5e-4810-8f7b-6e24dbaec2b2 Jan 20 03:17:38.920326 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 20 03:17:38.932949 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 20 03:17:39.072575 ignition[762]: Ignition 2.22.0 Jan 20 03:17:39.072658 ignition[762]: Stage: fetch-offline Jan 20 03:17:39.072700 ignition[762]: no configs at "/usr/lib/ignition/base.d" Jan 20 03:17:39.078813 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 20 03:17:39.072713 ignition[762]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 20 03:17:39.090193 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Jan 20 03:17:39.072822 ignition[762]: parsed url from cmdline: "" Jan 20 03:17:39.072829 ignition[762]: no config URL provided Jan 20 03:17:39.072838 ignition[762]: reading system config file "/usr/lib/ignition/user.ign" Jan 20 03:17:39.072856 ignition[762]: no config at "/usr/lib/ignition/user.ign" Jan 20 03:17:39.072893 ignition[762]: op(1): [started] loading QEMU firmware config module Jan 20 03:17:39.072901 ignition[762]: op(1): executing: "modprobe" "qemu_fw_cfg" Jan 20 03:17:39.133885 ignition[762]: op(1): [finished] loading QEMU firmware config module Jan 20 03:17:39.133937 ignition[762]: QEMU firmware config was not found. Ignoring... Jan 20 03:17:39.134903 ignition[762]: parsing config with SHA512: f04dfb3d84e624cd8a7b7c6cbcb75d087c6fc14318d8161f7e294366c031ad0de50a24bceb37fc5b485e83397401d281bbc631c271654da5bc28a462d942024a Jan 20 03:17:39.144756 unknown[762]: fetched base config from "system" Jan 20 03:17:39.144764 unknown[762]: fetched user config from "qemu" Jan 20 03:17:39.145017 ignition[762]: fetch-offline: fetch-offline passed Jan 20 03:17:39.148137 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 20 03:17:39.145078 ignition[762]: Ignition finished successfully Jan 20 03:17:39.219199 systemd-networkd[839]: lo: Link UP Jan 20 03:17:39.219392 systemd-networkd[839]: lo: Gained carrier Jan 20 03:17:39.221076 systemd-networkd[839]: Enumeration completed Jan 20 03:17:39.221417 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 20 03:17:39.222488 systemd-networkd[839]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 20 03:17:39.222494 systemd-networkd[839]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 20 03:17:39.224789 systemd-networkd[839]: eth0: Link UP Jan 20 03:17:39.225099 systemd-networkd[839]: eth0: Gained carrier Jan 20 03:17:39.225111 systemd-networkd[839]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 20 03:17:39.227581 systemd[1]: Reached target network.target - Network. Jan 20 03:17:39.236411 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jan 20 03:17:39.237471 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 20 03:17:39.307395 systemd-networkd[839]: eth0: DHCPv4 address 10.0.0.17/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 20 03:17:39.343823 ignition[844]: Ignition 2.22.0 Jan 20 03:17:39.343891 ignition[844]: Stage: kargs Jan 20 03:17:39.344079 ignition[844]: no configs at "/usr/lib/ignition/base.d" Jan 20 03:17:39.344097 ignition[844]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 20 03:17:39.345409 ignition[844]: kargs: kargs passed Jan 20 03:17:39.354086 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 20 03:17:39.345470 ignition[844]: Ignition finished successfully Jan 20 03:17:39.360067 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Jan 20 03:17:39.414635 ignition[853]: Ignition 2.22.0 Jan 20 03:17:39.414687 ignition[853]: Stage: disks Jan 20 03:17:39.414816 ignition[853]: no configs at "/usr/lib/ignition/base.d" Jan 20 03:17:39.414827 ignition[853]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 20 03:17:39.415433 ignition[853]: disks: disks passed Jan 20 03:17:39.415479 ignition[853]: Ignition finished successfully Jan 20 03:17:39.435820 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 20 03:17:39.448093 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 20 03:17:39.449955 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 20 03:17:39.462161 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 20 03:17:39.472469 systemd[1]: Reached target sysinit.target - System Initialization. Jan 20 03:17:39.481180 systemd[1]: Reached target basic.target - Basic System. Jan 20 03:17:39.485931 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 20 03:17:39.535185 systemd-fsck[863]: ROOT: clean, 15/553520 files, 52789/553472 blocks Jan 20 03:17:39.542030 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 20 03:17:39.561558 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 20 03:17:39.751383 kernel: EXT4-fs (vda9): mounted filesystem d87587c2-84ee-4a64-a55e-c6773c94f548 r/w with ordered data mode. Quota mode: none. Jan 20 03:17:39.751985 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 20 03:17:39.755817 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 20 03:17:39.771794 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 20 03:17:39.778395 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 20 03:17:39.783699 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 20 03:17:39.783743 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 20 03:17:39.783769 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 20 03:17:39.823019 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 20 03:17:39.839738 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (871) Jan 20 03:17:39.841411 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 20 03:17:39.868202 kernel: BTRFS info (device vda6): first mount of filesystem 084bfd60-dd5e-4810-8f7b-6e24dbaec2b2 Jan 20 03:17:39.868508 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 20 03:17:39.868532 kernel: BTRFS info (device vda6): turning on async discard Jan 20 03:17:39.868549 kernel: BTRFS info (device vda6): enabling free space tree Jan 20 03:17:39.863828 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 20 03:17:39.945177 initrd-setup-root[895]: cut: /sysroot/etc/passwd: No such file or directory Jan 20 03:17:39.959890 initrd-setup-root[902]: cut: /sysroot/etc/group: No such file or directory Jan 20 03:17:39.969016 initrd-setup-root[909]: cut: /sysroot/etc/shadow: No such file or directory Jan 20 03:17:39.982855 initrd-setup-root[916]: cut: /sysroot/etc/gshadow: No such file or directory Jan 20 03:17:40.155510 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. 
Jan 20 03:17:40.160119 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 20 03:17:40.187609 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 20 03:17:40.199091 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 20 03:17:40.208068 kernel: BTRFS info (device vda6): last unmount of filesystem 084bfd60-dd5e-4810-8f7b-6e24dbaec2b2 Jan 20 03:17:40.244540 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 20 03:17:40.260084 ignition[984]: INFO : Ignition 2.22.0 Jan 20 03:17:40.260084 ignition[984]: INFO : Stage: mount Jan 20 03:17:40.316606 ignition[984]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 20 03:17:40.316606 ignition[984]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 20 03:17:40.316606 ignition[984]: INFO : mount: mount passed Jan 20 03:17:40.316606 ignition[984]: INFO : Ignition finished successfully Jan 20 03:17:40.301666 systemd-networkd[839]: eth0: Gained IPv6LL Jan 20 03:17:40.358725 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 20 03:17:40.370133 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 20 03:17:40.754485 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 20 03:17:40.794340 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (997) Jan 20 03:17:40.805151 kernel: BTRFS info (device vda6): first mount of filesystem 084bfd60-dd5e-4810-8f7b-6e24dbaec2b2 Jan 20 03:17:40.805190 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 20 03:17:40.820414 kernel: BTRFS info (device vda6): turning on async discard Jan 20 03:17:40.820539 kernel: BTRFS info (device vda6): enabling free space tree Jan 20 03:17:40.823153 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 20 03:17:40.886673 ignition[1014]: INFO : Ignition 2.22.0 Jan 20 03:17:40.886673 ignition[1014]: INFO : Stage: files Jan 20 03:17:40.893993 ignition[1014]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 20 03:17:40.893993 ignition[1014]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 20 03:17:40.893993 ignition[1014]: DEBUG : files: compiled without relabeling support, skipping Jan 20 03:17:40.893993 ignition[1014]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 20 03:17:40.893993 ignition[1014]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 20 03:17:40.893993 ignition[1014]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 20 03:17:40.893993 ignition[1014]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 20 03:17:40.893993 ignition[1014]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 20 03:17:40.893911 unknown[1014]: wrote ssh authorized keys file for user: core Jan 20 03:17:40.951049 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" Jan 20 03:17:40.951049 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" Jan 20 03:17:40.951049 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 20 03:17:40.951049 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 20 03:17:40.951049 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 20 03:17:40.951049 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 20 03:17:40.951049 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 20 03:17:40.951049 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Jan 20 03:17:41.136145 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Jan 20 03:17:41.862020 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 20 03:17:41.862020 ignition[1014]: INFO : files: op(7): [started] processing unit "coreos-metadata.service" Jan 20 03:17:41.882503 ignition[1014]: INFO : files: op(7): op(8): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 20 03:17:41.895105 ignition[1014]: INFO : files: op(7): op(8): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 20 03:17:41.895105 ignition[1014]: INFO : files: op(7): [finished] processing unit "coreos-metadata.service" Jan 20 03:17:41.895105 ignition[1014]: INFO : files: op(9): [started] setting preset to disabled for "coreos-metadata.service" Jan 20 03:17:41.924903 ignition[1014]: 
INFO : files: op(9): op(a): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 20 03:17:41.934912 ignition[1014]: INFO : files: op(9): op(a): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 20 03:17:41.934912 ignition[1014]: INFO : files: op(9): [finished] setting preset to disabled for "coreos-metadata.service" Jan 20 03:17:41.934912 ignition[1014]: INFO : files: createResultFile: createFiles: op(b): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 20 03:17:41.934912 ignition[1014]: INFO : files: createResultFile: createFiles: op(b): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 20 03:17:41.934912 ignition[1014]: INFO : files: files passed Jan 20 03:17:41.934912 ignition[1014]: INFO : Ignition finished successfully Jan 20 03:17:41.981957 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 20 03:17:41.996423 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 20 03:17:42.010819 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 20 03:17:42.040467 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 20 03:17:42.040615 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 20 03:17:42.050884 initrd-setup-root-after-ignition[1042]: grep: /sysroot/oem/oem-release: No such file or directory Jan 20 03:17:42.061520 initrd-setup-root-after-ignition[1044]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 20 03:17:42.061520 initrd-setup-root-after-ignition[1044]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 20 03:17:42.056508 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 20 03:17:42.090186 initrd-setup-root-after-ignition[1048]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 20 03:17:42.081056 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 20 03:17:42.088417 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 20 03:17:42.194858 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 20 03:17:42.195064 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 20 03:17:42.198537 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 20 03:17:42.206600 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 20 03:17:42.218500 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 20 03:17:42.220006 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 20 03:17:42.280924 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 20 03:17:42.285436 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 20 03:17:42.319851 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 20 03:17:42.322564 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 20 03:17:42.335422 systemd[1]: Stopped target timers.target - Timer Units. Jan 20 03:17:42.344159 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 20 03:17:42.344457 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. 
Jan 20 03:17:42.352347 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 20 03:17:42.362123 systemd[1]: Stopped target basic.target - Basic System. Jan 20 03:17:42.366324 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 20 03:17:42.378769 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 20 03:17:42.390294 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 20 03:17:42.399124 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jan 20 03:17:42.403136 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 20 03:17:42.424453 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 20 03:17:42.434530 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 20 03:17:42.444856 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 20 03:17:42.454514 systemd[1]: Stopped target swap.target - Swaps. Jan 20 03:17:42.458165 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 20 03:17:42.458485 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 20 03:17:42.475004 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 20 03:17:42.483418 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 20 03:17:42.491326 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 20 03:17:42.503647 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 20 03:17:42.506208 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 20 03:17:42.506512 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 20 03:17:42.521197 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 20 03:17:42.521495 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 20 03:17:42.529926 systemd[1]: Stopped target paths.target - Path Units. Jan 20 03:17:42.542055 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 20 03:17:42.547474 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 20 03:17:42.553572 systemd[1]: Stopped target slices.target - Slice Units. Jan 20 03:17:42.562135 systemd[1]: Stopped target sockets.target - Socket Units. Jan 20 03:17:42.582079 systemd[1]: iscsid.socket: Deactivated successfully. Jan 20 03:17:42.582284 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 20 03:17:42.592559 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 20 03:17:42.592754 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 20 03:17:42.595123 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 20 03:17:42.595404 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 20 03:17:42.608217 systemd[1]: ignition-files.service: Deactivated successfully. Jan 20 03:17:42.608456 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 20 03:17:42.616216 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 20 03:17:42.657584 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 20 03:17:42.662179 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 20 03:17:42.662508 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. 
Jan 20 03:17:42.672120 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 20 03:17:42.697470 ignition[1069]: INFO : Ignition 2.22.0 Jan 20 03:17:42.697470 ignition[1069]: INFO : Stage: umount Jan 20 03:17:42.697470 ignition[1069]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 20 03:17:42.697470 ignition[1069]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 20 03:17:42.697470 ignition[1069]: INFO : umount: umount passed Jan 20 03:17:42.697470 ignition[1069]: INFO : Ignition finished successfully Jan 20 03:17:42.672462 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 20 03:17:42.698609 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 20 03:17:42.698755 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 20 03:17:42.708056 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 20 03:17:42.708197 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 20 03:17:42.717338 systemd[1]: Stopped target network.target - Network. Jan 20 03:17:42.723339 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 20 03:17:42.723463 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 20 03:17:42.736196 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 20 03:17:42.736332 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 20 03:17:42.743518 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 20 03:17:42.743580 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 20 03:17:42.752453 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 20 03:17:42.752520 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 20 03:17:42.758514 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 20 03:17:42.771589 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 20 03:17:42.779674 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 20 03:17:42.780616 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 20 03:17:42.800662 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jan 20 03:17:42.801979 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 20 03:17:42.802102 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 20 03:17:42.821657 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jan 20 03:17:42.822102 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 20 03:17:42.822442 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 20 03:17:42.837216 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jan 20 03:17:42.837998 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jan 20 03:17:42.849141 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 20 03:17:42.849218 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 20 03:17:42.864942 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 20 03:17:42.878942 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 20 03:17:42.879073 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 20 03:17:42.892439 systemd[1]: systemd-sysctl.service: Deactivated successfully. 
Jan 20 03:17:42.892549 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 20 03:17:42.923547 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 20 03:17:42.923650 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 20 03:17:42.927668 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 20 03:17:42.948115 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 20 03:17:42.948456 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jan 20 03:17:42.952788 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 20 03:17:42.952933 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 20 03:17:42.962166 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 20 03:17:42.962497 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 20 03:17:42.986656 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 20 03:17:42.986828 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 20 03:17:43.038994 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 20 03:17:43.039210 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 20 03:17:43.045007 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 20 03:17:43.045074 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 20 03:17:43.052452 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 20 03:17:43.052523 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 20 03:17:43.062038 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 20 03:17:43.062115 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 20 03:17:43.076854 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 20 03:17:43.076922 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 20 03:17:43.088908 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 20 03:17:43.089015 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 20 03:17:43.099530 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 20 03:17:43.118912 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jan 20 03:17:43.119038 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jan 20 03:17:43.135732 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 20 03:17:43.135835 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 20 03:17:43.153586 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 20 03:17:43.153697 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 20 03:17:43.168858 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 20 03:17:43.168956 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 20 03:17:43.178765 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 20 03:17:43.178881 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 03:17:43.200927 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. 
Jan 20 03:17:43.201034 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully. Jan 20 03:17:43.201184 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jan 20 03:17:43.201478 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jan 20 03:17:43.202003 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 20 03:17:43.202208 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 20 03:17:43.210704 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 20 03:17:43.222435 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 20 03:17:43.360015 systemd-journald[203]: Received SIGTERM from PID 1 (systemd). Jan 20 03:17:43.284568 systemd[1]: Switching root. Jan 20 03:17:43.364944 systemd-journald[203]: Journal stopped Jan 20 03:17:45.263361 kernel: SELinux: policy capability network_peer_controls=1 Jan 20 03:17:45.263491 kernel: SELinux: policy capability open_perms=1 Jan 20 03:17:45.263505 kernel: SELinux: policy capability extended_socket_class=1 Jan 20 03:17:45.263517 kernel: SELinux: policy capability always_check_network=0 Jan 20 03:17:45.263532 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 20 03:17:45.263548 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 20 03:17:45.263559 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 20 03:17:45.263571 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 20 03:17:45.263583 kernel: SELinux: policy capability userspace_initial_context=0 Jan 20 03:17:45.263594 kernel: audit: type=1403 audit(1768879063.601:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 20 03:17:45.263606 systemd[1]: Successfully loaded SELinux policy in 99.326ms. Jan 20 03:17:45.263625 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.488ms. Jan 20 03:17:45.263637 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 20 03:17:45.263648 systemd[1]: Detected virtualization kvm. Jan 20 03:17:45.263659 systemd[1]: Detected architecture x86-64. Jan 20 03:17:45.263670 systemd[1]: Detected first boot. Jan 20 03:17:45.263682 systemd[1]: Initializing machine ID from VM UUID. Jan 20 03:17:45.263695 zram_generator::config[1116]: No configuration found. Jan 20 03:17:45.263707 kernel: Guest personality initialized and is inactive Jan 20 03:17:45.263717 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Jan 20 03:17:45.263728 kernel: Initialized host personality Jan 20 03:17:45.263738 kernel: NET: Registered PF_VSOCK protocol family Jan 20 03:17:45.263754 systemd[1]: Populated /etc with preset unit settings. Jan 20 03:17:45.263765 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jan 20 03:17:45.263776 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 20 03:17:45.263789 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 20 03:17:45.263800 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. 
Jan 20 03:17:45.263811 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 20 03:17:45.263822 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 20 03:17:45.263834 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 20 03:17:45.263844 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 20 03:17:45.263855 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 20 03:17:45.263867 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 20 03:17:45.263878 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 20 03:17:45.263896 systemd[1]: Created slice user.slice - User and Session Slice. Jan 20 03:17:45.263907 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 20 03:17:45.263918 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 20 03:17:45.263930 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 20 03:17:45.263947 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 20 03:17:45.263958 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 20 03:17:45.263969 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 20 03:17:45.263982 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 20 03:17:45.263993 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 20 03:17:45.264004 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 20 03:17:45.264015 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 20 03:17:45.264026 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 20 03:17:45.264037 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 20 03:17:45.264048 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 20 03:17:45.264059 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 20 03:17:45.264070 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 20 03:17:45.264083 systemd[1]: Reached target slices.target - Slice Units. Jan 20 03:17:45.264094 systemd[1]: Reached target swap.target - Swaps. Jan 20 03:17:45.264105 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 20 03:17:45.264116 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 20 03:17:45.264127 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 20 03:17:45.264137 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 20 03:17:45.264148 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 20 03:17:45.264159 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 20 03:17:45.264171 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 20 03:17:45.264181 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 20 03:17:45.264195 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... 
Jan 20 03:17:45.264206 systemd[1]: Mounting media.mount - External Media Directory... Jan 20 03:17:45.264217 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 03:17:45.264326 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 20 03:17:45.264338 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 20 03:17:45.264349 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 20 03:17:45.264360 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 20 03:17:45.264432 systemd[1]: Reached target machines.target - Containers. Jan 20 03:17:45.264466 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 20 03:17:45.264489 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 20 03:17:45.264509 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 20 03:17:45.264526 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 20 03:17:45.264544 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 20 03:17:45.264563 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 20 03:17:45.264582 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 20 03:17:45.264601 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 20 03:17:45.264627 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 20 03:17:45.264648 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 20 03:17:45.264666 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 20 03:17:45.264684 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 20 03:17:45.264701 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 20 03:17:45.264720 systemd[1]: Stopped systemd-fsck-usr.service. Jan 20 03:17:45.264740 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 20 03:17:45.264759 kernel: ACPI: bus type drm_connector registered Jan 20 03:17:45.264778 kernel: fuse: init (API version 7.41) Jan 20 03:17:45.264800 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 20 03:17:45.264816 kernel: loop: module loaded Jan 20 03:17:45.264833 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 20 03:17:45.264852 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 20 03:17:45.264872 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 20 03:17:45.264892 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 20 03:17:45.264945 systemd-journald[1201]: Collecting audit messages is disabled. 
Jan 20 03:17:45.264991 systemd-journald[1201]: Journal started Jan 20 03:17:45.265025 systemd-journald[1201]: Runtime Journal (/run/log/journal/327cea59a37f4116a676ed1005944dff) is 6M, max 48.1M, 42.1M free. Jan 20 03:17:44.516830 systemd[1]: Queued start job for default target multi-user.target. Jan 20 03:17:44.538346 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 20 03:17:44.539489 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 20 03:17:44.540073 systemd[1]: systemd-journald.service: Consumed 1.941s CPU time. Jan 20 03:17:45.278648 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 20 03:17:45.291339 systemd[1]: verity-setup.service: Deactivated successfully. Jan 20 03:17:45.291466 systemd[1]: Stopped verity-setup.service. Jan 20 03:17:45.305609 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 03:17:45.315138 systemd[1]: Started systemd-journald.service - Journal Service. Jan 20 03:17:45.320113 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 20 03:17:45.325623 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 20 03:17:45.331697 systemd[1]: Mounted media.mount - External Media Directory. Jan 20 03:17:45.337094 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 20 03:17:45.343156 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 20 03:17:45.349740 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 20 03:17:45.356369 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 20 03:17:45.364804 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 20 03:17:45.372607 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 20 03:17:45.372998 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 20 03:17:45.381532 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 20 03:17:45.381882 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 20 03:17:45.390612 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 20 03:17:45.390976 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 20 03:17:45.398709 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 20 03:17:45.398959 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 20 03:17:45.406732 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 20 03:17:45.407020 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 20 03:17:45.413193 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 20 03:17:45.413608 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 20 03:17:45.419081 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 20 03:17:45.424936 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 20 03:17:45.430993 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 20 03:17:45.438649 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 20 03:17:45.445491 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. 
Jan 20 03:17:45.469998 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 20 03:17:45.477630 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 20 03:17:45.484486 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 20 03:17:45.491732 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 20 03:17:45.491828 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 20 03:17:45.499983 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 20 03:17:45.510674 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 20 03:17:45.516000 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 20 03:17:45.517722 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 20 03:17:45.524722 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 20 03:17:45.530842 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 20 03:17:45.532362 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 20 03:17:45.538664 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 20 03:17:45.541490 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 20 03:17:45.548308 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 20 03:17:45.555836 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 20 03:17:45.565173 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 20 03:17:45.574843 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 20 03:17:45.584608 systemd-journald[1201]: Time spent on flushing to /var/log/journal/327cea59a37f4116a676ed1005944dff is 34.653ms for 1051 entries. Jan 20 03:17:45.584608 systemd-journald[1201]: System Journal (/var/log/journal/327cea59a37f4116a676ed1005944dff) is 8M, max 195.6M, 187.6M free. Jan 20 03:17:45.637548 systemd-journald[1201]: Received client request to flush runtime journal. Jan 20 03:17:45.637632 kernel: loop0: detected capacity change from 0 to 229808 Jan 20 03:17:45.583421 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 20 03:17:45.614553 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 20 03:17:45.624526 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 20 03:17:45.652549 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 20 03:17:45.671205 systemd-tmpfiles[1237]: ACLs are not supported, ignoring. Jan 20 03:17:45.671337 systemd-tmpfiles[1237]: ACLs are not supported, ignoring. Jan 20 03:17:45.697905 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 20 03:17:45.707708 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 20 03:17:45.709885 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jan 20 03:17:45.716674 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 20 03:17:45.736581 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 20 03:17:45.766498 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 20 03:17:45.810963 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 20 03:17:45.821860 kernel: loop1: detected capacity change from 0 to 128560 Jan 20 03:17:45.826598 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 20 03:17:45.861996 systemd-tmpfiles[1257]: ACLs are not supported, ignoring. Jan 20 03:17:45.862028 systemd-tmpfiles[1257]: ACLs are not supported, ignoring. Jan 20 03:17:45.869217 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 20 03:17:45.879358 kernel: loop2: detected capacity change from 0 to 110984 Jan 20 03:17:45.931372 kernel: loop3: detected capacity change from 0 to 229808 Jan 20 03:17:45.958377 kernel: loop4: detected capacity change from 0 to 128560 Jan 20 03:17:45.987351 kernel: loop5: detected capacity change from 0 to 110984 Jan 20 03:17:46.013739 (sd-merge)[1262]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jan 20 03:17:46.014816 (sd-merge)[1262]: Merged extensions into '/usr'. Jan 20 03:17:46.023084 systemd[1]: Reload requested from client PID 1236 ('systemd-sysext') (unit systemd-sysext.service)... Jan 20 03:17:46.023131 systemd[1]: Reloading... Jan 20 03:17:46.106371 zram_generator::config[1287]: No configuration found. Jan 20 03:17:46.309737 ldconfig[1231]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 20 03:17:46.399162 systemd[1]: Reloading finished in 375 ms. Jan 20 03:17:46.429723 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 20 03:17:46.435611 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 20 03:17:46.441856 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 20 03:17:46.483436 systemd[1]: Starting ensure-sysext.service... Jan 20 03:17:46.488487 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 20 03:17:46.497778 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 20 03:17:46.534945 systemd[1]: Reload requested from client PID 1326 ('systemctl') (unit ensure-sysext.service)... Jan 20 03:17:46.534994 systemd[1]: Reloading... Jan 20 03:17:46.545460 systemd-tmpfiles[1327]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 20 03:17:46.545529 systemd-tmpfiles[1327]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 20 03:17:46.545856 systemd-tmpfiles[1327]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 20 03:17:46.546120 systemd-tmpfiles[1327]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 20 03:17:46.547555 systemd-tmpfiles[1327]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 20 03:17:46.547945 systemd-tmpfiles[1327]: ACLs are not supported, ignoring. Jan 20 03:17:46.548061 systemd-tmpfiles[1327]: ACLs are not supported, ignoring. 
Jan 20 03:17:46.553689 systemd-tmpfiles[1327]: Detected autofs mount point /boot during canonicalization of boot. Jan 20 03:17:46.553701 systemd-tmpfiles[1327]: Skipping /boot Jan 20 03:17:46.562586 systemd-udevd[1328]: Using default interface naming scheme 'v255'. Jan 20 03:17:46.567740 systemd-tmpfiles[1327]: Detected autofs mount point /boot during canonicalization of boot. Jan 20 03:17:46.567785 systemd-tmpfiles[1327]: Skipping /boot Jan 20 03:17:46.610309 zram_generator::config[1358]: No configuration found. Jan 20 03:17:46.786367 kernel: mousedev: PS/2 mouse device common for all mice Jan 20 03:17:46.793303 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 20 03:17:46.805489 kernel: ACPI: button: Power Button [PWRF] Jan 20 03:17:46.835463 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Jan 20 03:17:46.835843 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 20 03:17:46.846925 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 20 03:17:46.886318 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 20 03:17:46.886499 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 20 03:17:46.894143 systemd[1]: Reloading finished in 358 ms. Jan 20 03:17:46.908563 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 20 03:17:46.928365 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 20 03:17:47.000503 systemd[1]: Finished ensure-sysext.service. Jan 20 03:17:47.027371 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 03:17:47.028925 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 20 03:17:47.037918 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 20 03:17:47.044137 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 20 03:17:47.118783 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 20 03:17:47.128596 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 20 03:17:47.136514 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 20 03:17:47.169379 kernel: kvm_amd: TSC scaling supported Jan 20 03:17:47.169544 kernel: kvm_amd: Nested Virtualization enabled Jan 20 03:17:47.169570 kernel: kvm_amd: Nested Paging enabled Jan 20 03:17:47.173537 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jan 20 03:17:47.173603 kernel: kvm_amd: PMU virtualization is disabled Jan 20 03:17:47.177642 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 20 03:17:47.197324 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 20 03:17:47.200463 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 20 03:17:47.207569 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 20 03:17:47.223578 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... 
Jan 20 03:17:47.291124 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 20 03:17:47.302683 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 20 03:17:47.310885 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 20 03:17:47.320582 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 20 03:17:47.331901 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 03:17:47.335866 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 03:17:47.339656 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 20 03:17:47.344617 augenrules[1477]: No rules Jan 20 03:17:47.348097 systemd[1]: audit-rules.service: Deactivated successfully. Jan 20 03:17:47.348666 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 20 03:17:47.357163 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 20 03:17:47.363947 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 20 03:17:47.372025 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 20 03:17:47.372524 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 20 03:17:47.380060 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 20 03:17:47.380781 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 20 03:17:47.389067 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 20 03:17:47.389734 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 20 03:17:47.397446 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 20 03:17:47.399795 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 20 03:17:47.420330 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 20 03:17:47.434541 kernel: EDAC MC: Ver: 3.0.0 Jan 20 03:17:47.435364 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 20 03:17:47.435749 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 20 03:17:47.438170 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 20 03:17:47.445643 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 20 03:17:47.451025 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 20 03:17:47.467879 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 20 03:17:47.477022 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 03:17:47.520621 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 20 03:17:47.620470 systemd-networkd[1472]: lo: Link UP Jan 20 03:17:47.620480 systemd-networkd[1472]: lo: Gained carrier Jan 20 03:17:47.622451 systemd-networkd[1472]: Enumeration completed Jan 20 03:17:47.622620 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Jan 20 03:17:47.624202 systemd-networkd[1472]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 20 03:17:47.624211 systemd-networkd[1472]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 20 03:17:47.625546 systemd-networkd[1472]: eth0: Link UP Jan 20 03:17:47.625779 systemd-networkd[1472]: eth0: Gained carrier Jan 20 03:17:47.625846 systemd-networkd[1472]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 20 03:17:47.629082 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 20 03:17:47.634144 systemd[1]: Reached target time-set.target - System Time Set. Jan 20 03:17:47.640008 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 20 03:17:47.647380 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 20 03:17:47.647481 systemd-resolved[1474]: Positive Trust Anchors: Jan 20 03:17:47.647490 systemd-resolved[1474]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 20 03:17:47.647515 systemd-resolved[1474]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 20 03:17:47.651645 systemd-resolved[1474]: Defaulting to hostname 'linux'. Jan 20 03:17:47.654088 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 20 03:17:47.660661 systemd[1]: Reached target network.target - Network. Jan 20 03:17:47.664578 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 20 03:17:47.671385 systemd[1]: Reached target sysinit.target - System Initialization. Jan 20 03:17:47.677953 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 20 03:17:47.686081 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 20 03:17:47.694600 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jan 20 03:17:47.696355 systemd-networkd[1472]: eth0: DHCPv4 address 10.0.0.17/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 20 03:17:47.698548 systemd-timesyncd[1475]: Network configuration changed, trying to establish connection. Jan 20 03:17:47.701084 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 20 03:17:48.552412 systemd-timesyncd[1475]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 20 03:17:48.553364 systemd-timesyncd[1475]: Initial clock synchronization to Tue 2026-01-20 03:17:48.552326 UTC. Jan 20 03:17:48.555125 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 20 03:17:48.555956 systemd-resolved[1474]: Clock change detected. Flushing caches. Jan 20 03:17:48.561382 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. 
Jan 20 03:17:48.567778 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 20 03:17:48.567871 systemd[1]: Reached target paths.target - Path Units. Jan 20 03:17:48.571987 systemd[1]: Reached target timers.target - Timer Units. Jan 20 03:17:48.577217 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 20 03:17:48.584119 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 20 03:17:48.590053 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 20 03:17:48.596266 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 20 03:17:48.602253 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 20 03:17:48.609770 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 20 03:17:48.615249 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 20 03:17:48.622755 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 20 03:17:48.628774 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 20 03:17:48.635957 systemd[1]: Reached target sockets.target - Socket Units. Jan 20 03:17:48.640756 systemd[1]: Reached target basic.target - Basic System. Jan 20 03:17:48.645091 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 20 03:17:48.645128 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 20 03:17:48.646690 systemd[1]: Starting containerd.service - containerd container runtime... Jan 20 03:17:48.653114 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 20 03:17:48.677007 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 20 03:17:48.683680 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 20 03:17:48.689090 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 20 03:17:48.694139 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 20 03:17:48.697663 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jan 20 03:17:48.702366 jq[1519]: false Jan 20 03:17:48.703952 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 20 03:17:48.708237 extend-filesystems[1520]: Found /dev/vda6 Jan 20 03:17:48.712416 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 20 03:17:48.715313 extend-filesystems[1520]: Found /dev/vda9 Jan 20 03:17:48.720705 extend-filesystems[1520]: Checking size of /dev/vda9 Jan 20 03:17:48.732574 extend-filesystems[1520]: Resized partition /dev/vda9 Jan 20 03:17:48.735842 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 20 03:17:48.743707 google_oslogin_nss_cache[1521]: oslogin_cache_refresh[1521]: Refreshing passwd entry cache Jan 20 03:17:48.743688 oslogin_cache_refresh[1521]: Refreshing passwd entry cache Jan 20 03:17:48.747351 extend-filesystems[1534]: resize2fs 1.47.3 (8-Jul-2025) Jan 20 03:17:48.753423 systemd[1]: Starting systemd-logind.service - User Login Management... 
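The sshd-unix-local.socket and sshd-vsock.socket units listed above are created by systemd-ssh-generator, which exposes the local sshd over AF_UNIX and AF_VSOCK in addition to the normal TCP socket. On systems where the OpenSSH client also ships systemd's ssh-proxy drop-in, a local login over the generated AF_UNIX socket can be reached roughly like this (hedged sketch, not taken from this log):

    # Reaches the local sshd over the AF_UNIX socket via systemd-ssh-proxy,
    # assuming the matching ssh client configuration drop-in is installed.
    ssh .host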
Jan 20 03:17:48.760110 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 20 03:17:48.761008 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 20 03:17:48.763725 systemd[1]: Starting update-engine.service - Update Engine... Jan 20 03:17:48.765769 google_oslogin_nss_cache[1521]: oslogin_cache_refresh[1521]: Failure getting users, quitting Jan 20 03:17:48.765769 google_oslogin_nss_cache[1521]: oslogin_cache_refresh[1521]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 20 03:17:48.765732 oslogin_cache_refresh[1521]: Failure getting users, quitting Jan 20 03:17:48.765887 google_oslogin_nss_cache[1521]: oslogin_cache_refresh[1521]: Refreshing group entry cache Jan 20 03:17:48.765761 oslogin_cache_refresh[1521]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 20 03:17:48.765827 oslogin_cache_refresh[1521]: Refreshing group entry cache Jan 20 03:17:48.770656 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jan 20 03:17:48.778701 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 20 03:17:48.782410 google_oslogin_nss_cache[1521]: oslogin_cache_refresh[1521]: Failure getting groups, quitting Jan 20 03:17:48.782410 google_oslogin_nss_cache[1521]: oslogin_cache_refresh[1521]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 20 03:17:48.782392 oslogin_cache_refresh[1521]: Failure getting groups, quitting Jan 20 03:17:48.782409 oslogin_cache_refresh[1521]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 20 03:17:48.787224 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 20 03:17:48.794935 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 20 03:17:48.795326 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 20 03:17:48.797043 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jan 20 03:17:48.797352 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jan 20 03:17:48.802681 systemd[1]: motdgen.service: Deactivated successfully. Jan 20 03:17:48.807811 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 20 03:17:48.813313 jq[1545]: true Jan 20 03:17:48.847963 update_engine[1542]: I20260120 03:17:48.834882 1542 main.cc:92] Flatcar Update Engine starting Jan 20 03:17:48.814936 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 20 03:17:48.815288 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 20 03:17:48.836970 (ntainerd)[1550]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 20 03:17:48.850331 jq[1549]: true Jan 20 03:17:48.861726 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 20 03:17:48.887208 dbus-daemon[1517]: [system] SELinux support is enabled Jan 20 03:17:48.887734 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Jan 20 03:17:48.895801 extend-filesystems[1534]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 20 03:17:48.895801 extend-filesystems[1534]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 20 03:17:48.895801 extend-filesystems[1534]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jan 20 03:17:48.935324 extend-filesystems[1520]: Resized filesystem in /dev/vda9 Jan 20 03:17:48.930203 dbus-daemon[1517]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 20 03:17:48.939399 update_engine[1542]: I20260120 03:17:48.902729 1542 update_check_scheduler.cc:74] Next update check in 4m0s Jan 20 03:17:48.909875 systemd-logind[1537]: Watching system buttons on /dev/input/event2 (Power Button) Jan 20 03:17:48.909896 systemd-logind[1537]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 20 03:17:48.910110 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 20 03:17:48.910365 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 20 03:17:48.910377 systemd-logind[1537]: New seat seat0. Jan 20 03:17:48.917175 systemd[1]: Started systemd-logind.service - User Login Management. Jan 20 03:17:48.926856 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 20 03:17:48.926912 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 20 03:17:48.928716 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 20 03:17:48.928740 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 20 03:17:48.949244 systemd[1]: Started update-engine.service - Update Engine. Jan 20 03:17:48.961144 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 20 03:17:48.966023 bash[1576]: Updated "/home/core/.ssh/authorized_keys" Jan 20 03:17:48.969577 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 20 03:17:48.980098 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 20 03:17:49.019663 locksmithd[1577]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 20 03:17:49.055892 sshd_keygen[1544]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 20 03:17:49.089933 containerd[1550]: time="2026-01-20T03:17:49Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 20 03:17:49.092017 containerd[1550]: time="2026-01-20T03:17:49.091899654Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Jan 20 03:17:49.103864 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. 
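For reference, the extend-filesystems.service steps logged above (detecting /dev/vda9, then growing the mounted ext4 from 553472 to 1864699 blocks) amount to an on-line resize2fs run; done by hand it would look roughly like the following sketch (device name taken from the log, everything else assumed):

    # Grow the ext4 filesystem on the already-enlarged root partition while it is mounted.
    resize2fs /dev/vda9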
Jan 20 03:17:49.111675 containerd[1550]: time="2026-01-20T03:17:49.109743074Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="11.482µs" Jan 20 03:17:49.111675 containerd[1550]: time="2026-01-20T03:17:49.109836749Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 20 03:17:49.111675 containerd[1550]: time="2026-01-20T03:17:49.109862366Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 20 03:17:49.111675 containerd[1550]: time="2026-01-20T03:17:49.110124215Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 20 03:17:49.111675 containerd[1550]: time="2026-01-20T03:17:49.110146928Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 20 03:17:49.111675 containerd[1550]: time="2026-01-20T03:17:49.110181993Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 20 03:17:49.111675 containerd[1550]: time="2026-01-20T03:17:49.110274226Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 20 03:17:49.111675 containerd[1550]: time="2026-01-20T03:17:49.110293050Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 20 03:17:49.111675 containerd[1550]: time="2026-01-20T03:17:49.110827999Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 20 03:17:49.111675 containerd[1550]: time="2026-01-20T03:17:49.110848147Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 20 03:17:49.111675 containerd[1550]: time="2026-01-20T03:17:49.110863154Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 20 03:17:49.111675 containerd[1550]: time="2026-01-20T03:17:49.110874295Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 20 03:17:49.112072 containerd[1550]: time="2026-01-20T03:17:49.111007675Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 20 03:17:49.112072 containerd[1550]: time="2026-01-20T03:17:49.111304188Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 20 03:17:49.112072 containerd[1550]: time="2026-01-20T03:17:49.111346748Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 20 03:17:49.112072 containerd[1550]: time="2026-01-20T03:17:49.111365732Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 20 03:17:49.112072 containerd[1550]: time="2026-01-20T03:17:49.111577959Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 20 03:17:49.112369 containerd[1550]: 
time="2026-01-20T03:17:49.112261415Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 20 03:17:49.112664 containerd[1550]: time="2026-01-20T03:17:49.112400574Z" level=info msg="metadata content store policy set" policy=shared Jan 20 03:17:49.116809 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 20 03:17:49.126982 containerd[1550]: time="2026-01-20T03:17:49.126809148Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 20 03:17:49.126982 containerd[1550]: time="2026-01-20T03:17:49.126931547Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 20 03:17:49.126982 containerd[1550]: time="2026-01-20T03:17:49.126956965Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 20 03:17:49.126982 containerd[1550]: time="2026-01-20T03:17:49.126970510Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 20 03:17:49.126982 containerd[1550]: time="2026-01-20T03:17:49.126981580Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 20 03:17:49.126982 containerd[1550]: time="2026-01-20T03:17:49.126990537Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 20 03:17:49.127184 containerd[1550]: time="2026-01-20T03:17:49.127001597Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 20 03:17:49.127184 containerd[1550]: time="2026-01-20T03:17:49.127011556Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 20 03:17:49.127184 containerd[1550]: time="2026-01-20T03:17:49.127021474Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 20 03:17:49.127184 containerd[1550]: time="2026-01-20T03:17:49.127032445Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 20 03:17:49.127184 containerd[1550]: time="2026-01-20T03:17:49.127040771Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 20 03:17:49.127184 containerd[1550]: time="2026-01-20T03:17:49.127058213Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 20 03:17:49.127341 containerd[1550]: time="2026-01-20T03:17:49.127188998Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 20 03:17:49.127341 containerd[1550]: time="2026-01-20T03:17:49.127216358Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 20 03:17:49.127341 containerd[1550]: time="2026-01-20T03:17:49.127244731Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 20 03:17:49.127341 containerd[1550]: time="2026-01-20T03:17:49.127262043Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 20 03:17:49.127341 containerd[1550]: time="2026-01-20T03:17:49.127278204Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 20 03:17:49.127341 containerd[1550]: time="2026-01-20T03:17:49.127290627Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images 
type=io.containerd.grpc.v1 Jan 20 03:17:49.127341 containerd[1550]: time="2026-01-20T03:17:49.127304192Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 20 03:17:49.127341 containerd[1550]: time="2026-01-20T03:17:49.127315915Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 20 03:17:49.127341 containerd[1550]: time="2026-01-20T03:17:49.127339759Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 20 03:17:49.127784 containerd[1550]: time="2026-01-20T03:17:49.127353364Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 20 03:17:49.127784 containerd[1550]: time="2026-01-20T03:17:49.127365757Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 20 03:17:49.127784 containerd[1550]: time="2026-01-20T03:17:49.127419117Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 20 03:17:49.127871 containerd[1550]: time="2026-01-20T03:17:49.127790300Z" level=info msg="Start snapshots syncer" Jan 20 03:17:49.128139 containerd[1550]: time="2026-01-20T03:17:49.128103805Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 20 03:17:49.128824 containerd[1550]: time="2026-01-20T03:17:49.128578130Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 20 03:17:49.128824 containerd[1550]: time="2026-01-20T03:17:49.128768235Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 20 03:17:49.130660 containerd[1550]: time="2026-01-20T03:17:49.130400712Z" level=info msg="loading plugin" 
id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 20 03:17:49.130779 containerd[1550]: time="2026-01-20T03:17:49.130729496Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 20 03:17:49.130779 containerd[1550]: time="2026-01-20T03:17:49.130755545Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 20 03:17:49.130779 containerd[1550]: time="2026-01-20T03:17:49.130766124Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 20 03:17:49.130779 containerd[1550]: time="2026-01-20T03:17:49.130775922Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 20 03:17:49.130911 containerd[1550]: time="2026-01-20T03:17:49.130787544Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 20 03:17:49.130911 containerd[1550]: time="2026-01-20T03:17:49.130796852Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 20 03:17:49.130911 containerd[1550]: time="2026-01-20T03:17:49.130805598Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 20 03:17:49.130911 containerd[1550]: time="2026-01-20T03:17:49.130826587Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 20 03:17:49.130911 containerd[1550]: time="2026-01-20T03:17:49.130836175Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 20 03:17:49.130911 containerd[1550]: time="2026-01-20T03:17:49.130845652Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 20 03:17:49.131597 containerd[1550]: time="2026-01-20T03:17:49.131502959Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 20 03:17:49.131597 containerd[1550]: time="2026-01-20T03:17:49.131568401Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 20 03:17:49.134200 containerd[1550]: time="2026-01-20T03:17:49.133277572Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 20 03:17:49.134200 containerd[1550]: time="2026-01-20T03:17:49.133373190Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 20 03:17:49.134200 containerd[1550]: time="2026-01-20T03:17:49.133388980Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 20 03:17:49.134200 containerd[1550]: time="2026-01-20T03:17:49.133552825Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 20 03:17:49.134200 containerd[1550]: time="2026-01-20T03:17:49.133591588Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 20 03:17:49.134200 containerd[1550]: time="2026-01-20T03:17:49.133676797Z" level=info msg="runtime interface created" Jan 20 03:17:49.134200 containerd[1550]: time="2026-01-20T03:17:49.133686665Z" level=info msg="created NRI interface" Jan 20 03:17:49.134200 containerd[1550]: time="2026-01-20T03:17:49.133701343Z" 
level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 20 03:17:49.134200 containerd[1550]: time="2026-01-20T03:17:49.133719968Z" level=info msg="Connect containerd service" Jan 20 03:17:49.134200 containerd[1550]: time="2026-01-20T03:17:49.133749062Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 20 03:17:49.137567 containerd[1550]: time="2026-01-20T03:17:49.136678339Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 20 03:17:49.158706 systemd[1]: issuegen.service: Deactivated successfully. Jan 20 03:17:49.159078 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 20 03:17:49.168832 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 20 03:17:49.200365 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 20 03:17:49.210979 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 20 03:17:49.220206 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 20 03:17:49.228202 systemd[1]: Reached target getty.target - Login Prompts. Jan 20 03:17:49.267423 containerd[1550]: time="2026-01-20T03:17:49.267204921Z" level=info msg="Start subscribing containerd event" Jan 20 03:17:49.267423 containerd[1550]: time="2026-01-20T03:17:49.267311560Z" level=info msg="Start recovering state" Jan 20 03:17:49.267713 containerd[1550]: time="2026-01-20T03:17:49.267676241Z" level=info msg="Start event monitor" Jan 20 03:17:49.267713 containerd[1550]: time="2026-01-20T03:17:49.267697351Z" level=info msg="Start cni network conf syncer for default" Jan 20 03:17:49.267713 containerd[1550]: time="2026-01-20T03:17:49.267710535Z" level=info msg="Start streaming server" Jan 20 03:17:49.267812 containerd[1550]: time="2026-01-20T03:17:49.267722657Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 20 03:17:49.267812 containerd[1550]: time="2026-01-20T03:17:49.267733598Z" level=info msg="runtime interface starting up..." Jan 20 03:17:49.267812 containerd[1550]: time="2026-01-20T03:17:49.267742585Z" level=info msg="starting plugins..." Jan 20 03:17:49.267812 containerd[1550]: time="2026-01-20T03:17:49.267763173Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 20 03:17:49.268117 containerd[1550]: time="2026-01-20T03:17:49.267313013Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 20 03:17:49.268117 containerd[1550]: time="2026-01-20T03:17:49.268098158Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 20 03:17:49.268294 systemd[1]: Started containerd.service - containerd container runtime. Jan 20 03:17:49.271749 containerd[1550]: time="2026-01-20T03:17:49.271571662Z" level=info msg="containerd successfully booted in 0.182503s" Jan 20 03:17:49.853060 systemd-networkd[1472]: eth0: Gained IPv6LL Jan 20 03:17:49.857849 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 20 03:17:49.864043 systemd[1]: Reached target network-online.target - Network is Online. Jan 20 03:17:49.871325 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 20 03:17:49.879100 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
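The containerd error "no network config found in /etc/cni/net.d" above is expected this early in boot: that directory is only populated later, once a CNI plugin (here Cilium, deployed as a pod further down) or another component drops a network config into it. A minimal bridge-type conflist of the kind containerd looks for there might look like the following sketch for a hypothetical /etc/cni/net.d/10-example.conflist (the name and subnet are illustrative; the subnet simply echoes the 192.168.1.0/24 PodCIDR that appears later in this log):

    {
      "cniVersion": "1.0.0",
      "name": "example-net",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cni0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "subnet": "192.168.1.0/24",
            "routes": [ { "dst": "0.0.0.0/0" } ]
          }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }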
Jan 20 03:17:49.891668 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 20 03:17:49.944742 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 20 03:17:49.945192 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 20 03:17:49.951816 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 20 03:17:49.959825 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 20 03:17:50.469781 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 20 03:17:50.478281 systemd[1]: Started sshd@0-10.0.0.17:22-10.0.0.1:44060.service - OpenSSH per-connection server daemon (10.0.0.1:44060). Jan 20 03:17:50.583490 sshd[1641]: Accepted publickey for core from 10.0.0.1 port 44060 ssh2: RSA SHA256:PKC+rA93hG9WJXDyWbauwpnu2MKk0W9xGwncBra6yds Jan 20 03:17:50.586076 sshd-session[1641]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 03:17:50.595348 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 20 03:17:50.601763 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 20 03:17:50.621236 systemd-logind[1537]: New session 1 of user core. Jan 20 03:17:50.636847 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 20 03:17:50.650573 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 20 03:17:50.673973 (systemd)[1646]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 20 03:17:50.678392 systemd-logind[1537]: New session c1 of user core. Jan 20 03:17:50.831392 systemd[1646]: Queued start job for default target default.target. Jan 20 03:17:50.847222 systemd[1646]: Created slice app.slice - User Application Slice. Jan 20 03:17:50.847296 systemd[1646]: Reached target paths.target - Paths. Jan 20 03:17:50.847391 systemd[1646]: Reached target timers.target - Timers. Jan 20 03:17:50.849575 systemd[1646]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 20 03:17:50.867547 systemd[1646]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 20 03:17:50.867716 systemd[1646]: Reached target sockets.target - Sockets. Jan 20 03:17:50.867754 systemd[1646]: Reached target basic.target - Basic System. Jan 20 03:17:50.867796 systemd[1646]: Reached target default.target - Main User Target. Jan 20 03:17:50.867828 systemd[1646]: Startup finished in 178ms. Jan 20 03:17:50.868199 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 20 03:17:50.875860 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 20 03:17:50.952941 systemd[1]: Started sshd@1-10.0.0.17:22-10.0.0.1:44064.service - OpenSSH per-connection server daemon (10.0.0.1:44064). Jan 20 03:17:50.998163 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 03:17:51.004268 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 20 03:17:51.004813 (kubelet)[1665]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 03:17:51.010811 systemd[1]: Startup finished in 4.193s (kernel) + 8.743s (initrd) + 6.653s (userspace) = 19.590s. 
Jan 20 03:17:51.026261 sshd[1659]: Accepted publickey for core from 10.0.0.1 port 44064 ssh2: RSA SHA256:PKC+rA93hG9WJXDyWbauwpnu2MKk0W9xGwncBra6yds Jan 20 03:17:51.028945 sshd-session[1659]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 03:17:51.037835 systemd-logind[1537]: New session 2 of user core. Jan 20 03:17:51.044810 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 20 03:17:51.111884 sshd[1670]: Connection closed by 10.0.0.1 port 44064 Jan 20 03:17:51.113143 sshd-session[1659]: pam_unix(sshd:session): session closed for user core Jan 20 03:17:51.128029 systemd[1]: sshd@1-10.0.0.17:22-10.0.0.1:44064.service: Deactivated successfully. Jan 20 03:17:51.131963 systemd[1]: session-2.scope: Deactivated successfully. Jan 20 03:17:51.134238 systemd-logind[1537]: Session 2 logged out. Waiting for processes to exit. Jan 20 03:17:51.139598 systemd[1]: Started sshd@2-10.0.0.17:22-10.0.0.1:44072.service - OpenSSH per-connection server daemon (10.0.0.1:44072). Jan 20 03:17:51.143577 systemd-logind[1537]: Removed session 2. Jan 20 03:17:51.221087 sshd[1681]: Accepted publickey for core from 10.0.0.1 port 44072 ssh2: RSA SHA256:PKC+rA93hG9WJXDyWbauwpnu2MKk0W9xGwncBra6yds Jan 20 03:17:51.223407 sshd-session[1681]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 03:17:51.235331 systemd-logind[1537]: New session 3 of user core. Jan 20 03:17:51.242872 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 20 03:17:51.301126 sshd[1684]: Connection closed by 10.0.0.1 port 44072 Jan 20 03:17:51.303243 sshd-session[1681]: pam_unix(sshd:session): session closed for user core Jan 20 03:17:51.314748 systemd[1]: sshd@2-10.0.0.17:22-10.0.0.1:44072.service: Deactivated successfully. Jan 20 03:17:51.316914 systemd[1]: session-3.scope: Deactivated successfully. Jan 20 03:17:51.318418 systemd-logind[1537]: Session 3 logged out. Waiting for processes to exit. Jan 20 03:17:51.322302 systemd[1]: Started sshd@3-10.0.0.17:22-10.0.0.1:44078.service - OpenSSH per-connection server daemon (10.0.0.1:44078). Jan 20 03:17:51.324246 systemd-logind[1537]: Removed session 3. Jan 20 03:17:51.402785 sshd[1690]: Accepted publickey for core from 10.0.0.1 port 44078 ssh2: RSA SHA256:PKC+rA93hG9WJXDyWbauwpnu2MKk0W9xGwncBra6yds Jan 20 03:17:51.404824 sshd-session[1690]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 03:17:51.413922 systemd-logind[1537]: New session 4 of user core. Jan 20 03:17:51.417984 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 20 03:17:51.483937 sshd[1694]: Connection closed by 10.0.0.1 port 44078 Jan 20 03:17:51.484415 sshd-session[1690]: pam_unix(sshd:session): session closed for user core Jan 20 03:17:51.500426 systemd[1]: sshd@3-10.0.0.17:22-10.0.0.1:44078.service: Deactivated successfully. Jan 20 03:17:51.503189 systemd[1]: session-4.scope: Deactivated successfully. Jan 20 03:17:51.504893 systemd-logind[1537]: Session 4 logged out. Waiting for processes to exit. Jan 20 03:17:51.508178 systemd[1]: Started sshd@4-10.0.0.17:22-10.0.0.1:44092.service - OpenSSH per-connection server daemon (10.0.0.1:44092). Jan 20 03:17:51.510087 systemd-logind[1537]: Removed session 4. 
Jan 20 03:17:51.574117 sshd[1700]: Accepted publickey for core from 10.0.0.1 port 44092 ssh2: RSA SHA256:PKC+rA93hG9WJXDyWbauwpnu2MKk0W9xGwncBra6yds Jan 20 03:17:51.575939 sshd-session[1700]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 03:17:51.582572 systemd-logind[1537]: New session 5 of user core. Jan 20 03:17:51.599992 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 20 03:17:51.642724 kubelet[1665]: E0120 03:17:51.642393 1665 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 03:17:51.646590 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 03:17:51.646962 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 03:17:51.647881 systemd[1]: kubelet.service: Consumed 1.131s CPU time, 265.5M memory peak. Jan 20 03:17:51.674680 sudo[1705]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 20 03:17:51.675161 sudo[1705]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 20 03:17:51.694891 sudo[1705]: pam_unix(sudo:session): session closed for user root Jan 20 03:17:51.697036 sshd[1703]: Connection closed by 10.0.0.1 port 44092 Jan 20 03:17:51.698730 sshd-session[1700]: pam_unix(sshd:session): session closed for user core Jan 20 03:17:51.720198 systemd[1]: sshd@4-10.0.0.17:22-10.0.0.1:44092.service: Deactivated successfully. Jan 20 03:17:51.723789 systemd[1]: session-5.scope: Deactivated successfully. Jan 20 03:17:51.726082 systemd-logind[1537]: Session 5 logged out. Waiting for processes to exit. Jan 20 03:17:51.729856 systemd[1]: Started sshd@5-10.0.0.17:22-10.0.0.1:44106.service - OpenSSH per-connection server daemon (10.0.0.1:44106). Jan 20 03:17:51.731794 systemd-logind[1537]: Removed session 5. Jan 20 03:17:51.810838 sshd[1711]: Accepted publickey for core from 10.0.0.1 port 44106 ssh2: RSA SHA256:PKC+rA93hG9WJXDyWbauwpnu2MKk0W9xGwncBra6yds Jan 20 03:17:51.813245 sshd-session[1711]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 03:17:51.822165 systemd-logind[1537]: New session 6 of user core. Jan 20 03:17:51.840863 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 20 03:17:51.903732 sudo[1716]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 20 03:17:51.904117 sudo[1716]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 20 03:17:51.914901 sudo[1716]: pam_unix(sudo:session): session closed for user root Jan 20 03:17:51.926008 sudo[1715]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 20 03:17:51.927017 sudo[1715]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 20 03:17:51.945259 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 20 03:17:52.015356 augenrules[1738]: No rules Jan 20 03:17:52.017371 systemd[1]: audit-rules.service: Deactivated successfully. Jan 20 03:17:52.017939 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
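The kubelet exit above ("failed to load Kubelet config file /var/lib/kubelet/config.yaml: no such file or directory") is the usual symptom of kubelet.service starting before the node has been bootstrapped: that file is normally written during cluster join rather than shipped with the OS image. A minimal KubeletConfiguration of the kind that path holds might look like this sketch (every value is illustrative except cgroupDriver: systemd, which matches the CRI-reported setting later in this log):

    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    authentication:
      anonymous:
        enabled: false
    clusterDNS:
      - 10.96.0.10
    clusterDomain: cluster.local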
Jan 20 03:17:52.019615 sudo[1715]: pam_unix(sudo:session): session closed for user root Jan 20 03:17:52.021818 sshd[1714]: Connection closed by 10.0.0.1 port 44106 Jan 20 03:17:52.022260 sshd-session[1711]: pam_unix(sshd:session): session closed for user core Jan 20 03:17:52.038832 systemd[1]: sshd@5-10.0.0.17:22-10.0.0.1:44106.service: Deactivated successfully. Jan 20 03:17:52.040905 systemd[1]: session-6.scope: Deactivated successfully. Jan 20 03:17:52.042256 systemd-logind[1537]: Session 6 logged out. Waiting for processes to exit. Jan 20 03:17:52.045853 systemd[1]: Started sshd@6-10.0.0.17:22-10.0.0.1:44122.service - OpenSSH per-connection server daemon (10.0.0.1:44122). Jan 20 03:17:52.047858 systemd-logind[1537]: Removed session 6. Jan 20 03:17:52.150834 sshd[1747]: Accepted publickey for core from 10.0.0.1 port 44122 ssh2: RSA SHA256:PKC+rA93hG9WJXDyWbauwpnu2MKk0W9xGwncBra6yds Jan 20 03:17:52.153309 sshd-session[1747]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 03:17:52.161575 systemd-logind[1537]: New session 7 of user core. Jan 20 03:17:52.176015 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 20 03:17:52.236946 sudo[1751]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 20 03:17:52.237422 sudo[1751]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 20 03:17:52.258935 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 20 03:17:52.339986 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 20 03:17:52.340590 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 20 03:17:53.092279 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 03:17:53.092858 systemd[1]: kubelet.service: Consumed 1.131s CPU time, 265.5M memory peak. Jan 20 03:17:53.096802 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 03:17:53.141559 systemd[1]: Reload requested from client PID 1795 ('systemctl') (unit session-7.scope)... Jan 20 03:17:53.141614 systemd[1]: Reloading... Jan 20 03:17:53.268561 zram_generator::config[1841]: No configuration found. Jan 20 03:17:53.518504 systemd[1]: Reloading finished in 376 ms. Jan 20 03:17:53.617412 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 20 03:17:53.617832 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 20 03:17:53.618386 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 03:17:53.618627 systemd[1]: kubelet.service: Consumed 179ms CPU time, 98.2M memory peak. Jan 20 03:17:53.621604 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 03:17:53.906109 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 03:17:53.924115 (kubelet)[1885]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 20 03:17:54.014936 kubelet[1885]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 20 03:17:54.014936 kubelet[1885]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Jan 20 03:17:54.014936 kubelet[1885]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 20 03:17:54.014936 kubelet[1885]: I0120 03:17:54.014848 1885 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 20 03:17:54.646073 kubelet[1885]: I0120 03:17:54.645884 1885 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 20 03:17:54.646073 kubelet[1885]: I0120 03:17:54.645948 1885 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 20 03:17:54.646293 kubelet[1885]: I0120 03:17:54.646200 1885 server.go:956] "Client rotation is on, will bootstrap in background" Jan 20 03:17:54.703921 kubelet[1885]: I0120 03:17:54.703715 1885 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 20 03:17:54.717093 kubelet[1885]: I0120 03:17:54.716995 1885 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 20 03:17:54.727235 kubelet[1885]: I0120 03:17:54.727044 1885 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 20 03:17:54.727821 kubelet[1885]: I0120 03:17:54.727560 1885 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 20 03:17:54.727973 kubelet[1885]: I0120 03:17:54.727703 1885 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.0.0.17","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 20 03:17:54.727973 kubelet[1885]: I0120 03:17:54.727937 1885 topology_manager.go:138] "Creating topology manager with none policy" Jan 20 03:17:54.727973 kubelet[1885]: I0120 03:17:54.727950 1885 container_manager_linux.go:303] "Creating device plugin manager" Jan 20 03:17:54.729926 kubelet[1885]: I0120 03:17:54.729138 1885 state_mem.go:36] 
"Initialized new in-memory state store" Jan 20 03:17:54.733136 kubelet[1885]: I0120 03:17:54.733021 1885 kubelet.go:480] "Attempting to sync node with API server" Jan 20 03:17:54.733136 kubelet[1885]: I0120 03:17:54.733094 1885 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 20 03:17:54.734818 kubelet[1885]: I0120 03:17:54.734629 1885 kubelet.go:386] "Adding apiserver pod source" Jan 20 03:17:54.734818 kubelet[1885]: I0120 03:17:54.734759 1885 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 20 03:17:54.734894 kubelet[1885]: E0120 03:17:54.734857 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 03:17:54.734924 kubelet[1885]: E0120 03:17:54.734896 1885 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 03:17:54.741193 kubelet[1885]: I0120 03:17:54.741046 1885 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Jan 20 03:17:54.742155 kubelet[1885]: I0120 03:17:54.742008 1885 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 20 03:17:54.742967 kubelet[1885]: W0120 03:17:54.742848 1885 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 20 03:17:54.749425 kubelet[1885]: I0120 03:17:54.749336 1885 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 20 03:17:54.749868 kubelet[1885]: I0120 03:17:54.749806 1885 server.go:1289] "Started kubelet" Jan 20 03:17:54.750391 kubelet[1885]: I0120 03:17:54.750270 1885 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 20 03:17:54.753241 kubelet[1885]: I0120 03:17:54.753159 1885 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 20 03:17:54.753880 kubelet[1885]: I0120 03:17:54.753560 1885 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 20 03:17:54.753880 kubelet[1885]: I0120 03:17:54.753709 1885 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 20 03:17:54.765034 kubelet[1885]: I0120 03:17:54.764951 1885 server.go:317] "Adding debug handlers to kubelet server" Jan 20 03:17:54.765286 kubelet[1885]: E0120 03:17:54.765176 1885 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"10.0.0.17\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 20 03:17:54.768079 kubelet[1885]: E0120 03:17:54.767766 1885 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 20 03:17:54.769007 kubelet[1885]: I0120 03:17:54.768850 1885 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 20 03:17:54.770294 kubelet[1885]: I0120 03:17:54.769982 1885 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 20 03:17:54.770294 kubelet[1885]: I0120 
03:17:54.770117 1885 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 20 03:17:54.770695 kubelet[1885]: I0120 03:17:54.770620 1885 reconciler.go:26] "Reconciler: start to sync state" Jan 20 03:17:54.775017 kubelet[1885]: E0120 03:17:54.774759 1885 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.17\" not found" Jan 20 03:17:54.775070 kubelet[1885]: E0120 03:17:54.773225 1885 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.17.188c5225850f62dd default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.17,UID:10.0.0.17,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:10.0.0.17,},FirstTimestamp:2026-01-20 03:17:54.749407965 +0000 UTC m=+0.816986288,LastTimestamp:2026-01-20 03:17:54.749407965 +0000 UTC m=+0.816986288,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.17,}" Jan 20 03:17:54.780078 kubelet[1885]: E0120 03:17:54.779113 1885 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 20 03:17:54.780078 kubelet[1885]: I0120 03:17:54.779137 1885 factory.go:223] Registration of the systemd container factory successfully Jan 20 03:17:54.781945 kubelet[1885]: I0120 03:17:54.781046 1885 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 20 03:17:54.785685 kubelet[1885]: I0120 03:17:54.785282 1885 factory.go:223] Registration of the containerd container factory successfully Jan 20 03:17:54.799517 kubelet[1885]: E0120 03:17:54.799350 1885 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.17\" not found" node="10.0.0.17" Jan 20 03:17:54.806159 kubelet[1885]: I0120 03:17:54.804990 1885 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 20 03:17:54.806159 kubelet[1885]: I0120 03:17:54.805007 1885 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 20 03:17:54.806159 kubelet[1885]: I0120 03:17:54.805031 1885 state_mem.go:36] "Initialized new in-memory state store" Jan 20 03:17:54.876300 kubelet[1885]: E0120 03:17:54.876033 1885 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.17\" not found" Jan 20 03:17:54.879222 kubelet[1885]: I0120 03:17:54.878919 1885 policy_none.go:49] "None policy: Start" Jan 20 03:17:54.879222 kubelet[1885]: I0120 03:17:54.878995 1885 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 20 03:17:54.879222 kubelet[1885]: I0120 03:17:54.879019 1885 state_mem.go:35] "Initializing new in-memory state store" Jan 20 03:17:54.890120 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 20 03:17:54.906311 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 20 03:17:54.912763 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jan 20 03:17:54.924189 kubelet[1885]: E0120 03:17:54.924100 1885 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 20 03:17:54.924866 kubelet[1885]: I0120 03:17:54.924386 1885 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 20 03:17:54.925190 kubelet[1885]: I0120 03:17:54.924852 1885 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 20 03:17:54.925190 kubelet[1885]: I0120 03:17:54.925160 1885 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 20 03:17:54.929593 kubelet[1885]: E0120 03:17:54.929424 1885 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 20 03:17:54.929712 kubelet[1885]: E0120 03:17:54.929561 1885 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.17\" not found" Jan 20 03:17:54.960366 kubelet[1885]: I0120 03:17:54.960260 1885 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 20 03:17:54.963896 kubelet[1885]: I0120 03:17:54.963808 1885 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jan 20 03:17:54.963981 kubelet[1885]: I0120 03:17:54.963949 1885 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 20 03:17:54.963981 kubelet[1885]: I0120 03:17:54.963980 1885 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 20 03:17:54.964047 kubelet[1885]: I0120 03:17:54.963988 1885 kubelet.go:2436] "Starting kubelet main sync loop" Jan 20 03:17:54.964047 kubelet[1885]: E0120 03:17:54.964032 1885 kubelet.go:2460] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Jan 20 03:17:55.028030 kubelet[1885]: I0120 03:17:55.027872 1885 kubelet_node_status.go:75] "Attempting to register node" node="10.0.0.17" Jan 20 03:17:55.039131 kubelet[1885]: I0120 03:17:55.039020 1885 kubelet_node_status.go:78] "Successfully registered node" node="10.0.0.17" Jan 20 03:17:55.039131 kubelet[1885]: E0120 03:17:55.039080 1885 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"10.0.0.17\": node \"10.0.0.17\" not found" Jan 20 03:17:55.070008 kubelet[1885]: E0120 03:17:55.069947 1885 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.17\" not found" Jan 20 03:17:55.171327 kubelet[1885]: E0120 03:17:55.170946 1885 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.17\" not found" Jan 20 03:17:55.273531 kubelet[1885]: E0120 03:17:55.271339 1885 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.17\" not found" Jan 20 03:17:55.373585 kubelet[1885]: E0120 03:17:55.373299 1885 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.17\" not found" Jan 20 03:17:55.458106 sudo[1751]: pam_unix(sudo:session): session closed for user root Jan 20 03:17:55.460346 sshd[1750]: Connection closed by 10.0.0.1 port 44122 Jan 20 03:17:55.461114 sshd-session[1747]: pam_unix(sshd:session): session closed for user core Jan 20 03:17:55.470637 systemd[1]: sshd@6-10.0.0.17:22-10.0.0.1:44122.service: Deactivated successfully. 
Jan 20 03:17:55.476490 kubelet[1885]: E0120 03:17:55.474072 1885 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.17\" not found" Jan 20 03:17:55.479010 systemd[1]: session-7.scope: Deactivated successfully. Jan 20 03:17:55.479548 systemd[1]: session-7.scope: Consumed 697ms CPU time, 78.4M memory peak. Jan 20 03:17:55.481725 systemd-logind[1537]: Session 7 logged out. Waiting for processes to exit. Jan 20 03:17:55.485764 systemd-logind[1537]: Removed session 7. Jan 20 03:17:55.574538 kubelet[1885]: E0120 03:17:55.574196 1885 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.17\" not found" Jan 20 03:17:55.652155 kubelet[1885]: I0120 03:17:55.648187 1885 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 20 03:17:55.652155 kubelet[1885]: I0120 03:17:55.648710 1885 reflector.go:556] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" err="very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received" Jan 20 03:17:55.652155 kubelet[1885]: I0120 03:17:55.648719 1885 reflector.go:556] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" err="very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received" Jan 20 03:17:55.675922 kubelet[1885]: E0120 03:17:55.675701 1885 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.17\" not found" Jan 20 03:17:55.737549 kubelet[1885]: E0120 03:17:55.735074 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 03:17:55.776838 kubelet[1885]: E0120 03:17:55.776584 1885 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.17\" not found" Jan 20 03:17:55.878201 kubelet[1885]: E0120 03:17:55.878006 1885 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.17\" not found" Jan 20 03:17:55.980280 kubelet[1885]: E0120 03:17:55.979629 1885 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.17\" not found" Jan 20 03:17:56.080992 kubelet[1885]: E0120 03:17:56.080717 1885 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.17\" not found" Jan 20 03:17:56.182067 kubelet[1885]: E0120 03:17:56.181889 1885 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.17\" not found" Jan 20 03:17:56.283158 kubelet[1885]: E0120 03:17:56.282993 1885 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.17\" not found" Jan 20 03:17:56.385347 kubelet[1885]: I0120 03:17:56.385058 1885 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Jan 20 03:17:56.386075 containerd[1550]: time="2026-01-20T03:17:56.385797469Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jan 20 03:17:56.387586 kubelet[1885]: I0120 03:17:56.387240 1885 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Jan 20 03:17:56.736344 kubelet[1885]: E0120 03:17:56.735969 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 03:17:56.738779 kubelet[1885]: I0120 03:17:56.738547 1885 apiserver.go:52] "Watching apiserver" Jan 20 03:17:56.763125 systemd[1]: Created slice kubepods-besteffort-podeb83bc17_02e4_4089_addf_272f39ce72c3.slice - libcontainer container kubepods-besteffort-podeb83bc17_02e4_4089_addf_272f39ce72c3.slice. Jan 20 03:17:56.777553 kubelet[1885]: I0120 03:17:56.776853 1885 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 20 03:17:56.786590 kubelet[1885]: I0120 03:17:56.786557 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/eb7c50b2-127d-433e-8b60-fc5452e0e1e0-bpf-maps\") pod \"cilium-9hs9c\" (UID: \"eb7c50b2-127d-433e-8b60-fc5452e0e1e0\") " pod="kube-system/cilium-9hs9c" Jan 20 03:17:56.786813 systemd[1]: Created slice kubepods-burstable-podeb7c50b2_127d_433e_8b60_fc5452e0e1e0.slice - libcontainer container kubepods-burstable-podeb7c50b2_127d_433e_8b60_fc5452e0e1e0.slice. Jan 20 03:17:56.787216 kubelet[1885]: I0120 03:17:56.786926 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/eb7c50b2-127d-433e-8b60-fc5452e0e1e0-hostproc\") pod \"cilium-9hs9c\" (UID: \"eb7c50b2-127d-433e-8b60-fc5452e0e1e0\") " pod="kube-system/cilium-9hs9c" Jan 20 03:17:56.787993 kubelet[1885]: I0120 03:17:56.787269 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/eb7c50b2-127d-433e-8b60-fc5452e0e1e0-cni-path\") pod \"cilium-9hs9c\" (UID: \"eb7c50b2-127d-433e-8b60-fc5452e0e1e0\") " pod="kube-system/cilium-9hs9c" Jan 20 03:17:56.788073 kubelet[1885]: I0120 03:17:56.788027 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkkgp\" (UniqueName: \"kubernetes.io/projected/eb7c50b2-127d-433e-8b60-fc5452e0e1e0-kube-api-access-lkkgp\") pod \"cilium-9hs9c\" (UID: \"eb7c50b2-127d-433e-8b60-fc5452e0e1e0\") " pod="kube-system/cilium-9hs9c" Jan 20 03:17:56.788626 kubelet[1885]: I0120 03:17:56.788215 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/eb83bc17-02e4-4089-addf-272f39ce72c3-kube-proxy\") pod \"kube-proxy-rsp9k\" (UID: \"eb83bc17-02e4-4089-addf-272f39ce72c3\") " pod="kube-system/kube-proxy-rsp9k" Jan 20 03:17:56.789747 kubelet[1885]: I0120 03:17:56.788761 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/eb7c50b2-127d-433e-8b60-fc5452e0e1e0-cilium-run\") pod \"cilium-9hs9c\" (UID: \"eb7c50b2-127d-433e-8b60-fc5452e0e1e0\") " pod="kube-system/cilium-9hs9c" Jan 20 03:17:56.789747 kubelet[1885]: I0120 03:17:56.788933 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/eb7c50b2-127d-433e-8b60-fc5452e0e1e0-etc-cni-netd\") pod \"cilium-9hs9c\" (UID: 
\"eb7c50b2-127d-433e-8b60-fc5452e0e1e0\") " pod="kube-system/cilium-9hs9c" Jan 20 03:17:56.789747 kubelet[1885]: I0120 03:17:56.788959 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/eb7c50b2-127d-433e-8b60-fc5452e0e1e0-lib-modules\") pod \"cilium-9hs9c\" (UID: \"eb7c50b2-127d-433e-8b60-fc5452e0e1e0\") " pod="kube-system/cilium-9hs9c" Jan 20 03:17:56.789747 kubelet[1885]: I0120 03:17:56.788985 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/eb7c50b2-127d-433e-8b60-fc5452e0e1e0-cilium-config-path\") pod \"cilium-9hs9c\" (UID: \"eb7c50b2-127d-433e-8b60-fc5452e0e1e0\") " pod="kube-system/cilium-9hs9c" Jan 20 03:17:56.789747 kubelet[1885]: I0120 03:17:56.789009 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/eb7c50b2-127d-433e-8b60-fc5452e0e1e0-cilium-cgroup\") pod \"cilium-9hs9c\" (UID: \"eb7c50b2-127d-433e-8b60-fc5452e0e1e0\") " pod="kube-system/cilium-9hs9c" Jan 20 03:17:56.789747 kubelet[1885]: I0120 03:17:56.789028 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/eb7c50b2-127d-433e-8b60-fc5452e0e1e0-host-proc-sys-kernel\") pod \"cilium-9hs9c\" (UID: \"eb7c50b2-127d-433e-8b60-fc5452e0e1e0\") " pod="kube-system/cilium-9hs9c" Jan 20 03:17:56.789955 kubelet[1885]: I0120 03:17:56.789043 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/eb7c50b2-127d-433e-8b60-fc5452e0e1e0-hubble-tls\") pod \"cilium-9hs9c\" (UID: \"eb7c50b2-127d-433e-8b60-fc5452e0e1e0\") " pod="kube-system/cilium-9hs9c" Jan 20 03:17:56.789955 kubelet[1885]: I0120 03:17:56.789056 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/eb83bc17-02e4-4089-addf-272f39ce72c3-xtables-lock\") pod \"kube-proxy-rsp9k\" (UID: \"eb83bc17-02e4-4089-addf-272f39ce72c3\") " pod="kube-system/kube-proxy-rsp9k" Jan 20 03:17:56.789955 kubelet[1885]: I0120 03:17:56.789069 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/eb83bc17-02e4-4089-addf-272f39ce72c3-lib-modules\") pod \"kube-proxy-rsp9k\" (UID: \"eb83bc17-02e4-4089-addf-272f39ce72c3\") " pod="kube-system/kube-proxy-rsp9k" Jan 20 03:17:56.789955 kubelet[1885]: I0120 03:17:56.789083 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6k6t\" (UniqueName: \"kubernetes.io/projected/eb83bc17-02e4-4089-addf-272f39ce72c3-kube-api-access-c6k6t\") pod \"kube-proxy-rsp9k\" (UID: \"eb83bc17-02e4-4089-addf-272f39ce72c3\") " pod="kube-system/kube-proxy-rsp9k" Jan 20 03:17:56.789955 kubelet[1885]: I0120 03:17:56.789096 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/eb7c50b2-127d-433e-8b60-fc5452e0e1e0-xtables-lock\") pod \"cilium-9hs9c\" (UID: \"eb7c50b2-127d-433e-8b60-fc5452e0e1e0\") " pod="kube-system/cilium-9hs9c" Jan 20 03:17:56.790101 kubelet[1885]: I0120 03:17:56.789187 1885 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/eb7c50b2-127d-433e-8b60-fc5452e0e1e0-clustermesh-secrets\") pod \"cilium-9hs9c\" (UID: \"eb7c50b2-127d-433e-8b60-fc5452e0e1e0\") " pod="kube-system/cilium-9hs9c" Jan 20 03:17:56.790101 kubelet[1885]: I0120 03:17:56.789212 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/eb7c50b2-127d-433e-8b60-fc5452e0e1e0-host-proc-sys-net\") pod \"cilium-9hs9c\" (UID: \"eb7c50b2-127d-433e-8b60-fc5452e0e1e0\") " pod="kube-system/cilium-9hs9c" Jan 20 03:17:57.080573 kubelet[1885]: E0120 03:17:57.080183 1885 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:17:57.081895 containerd[1550]: time="2026-01-20T03:17:57.081766341Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rsp9k,Uid:eb83bc17-02e4-4089-addf-272f39ce72c3,Namespace:kube-system,Attempt:0,}" Jan 20 03:17:57.102966 kubelet[1885]: E0120 03:17:57.102615 1885 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:17:57.104167 containerd[1550]: time="2026-01-20T03:17:57.103833322Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9hs9c,Uid:eb7c50b2-127d-433e-8b60-fc5452e0e1e0,Namespace:kube-system,Attempt:0,}" Jan 20 03:17:57.736521 kubelet[1885]: E0120 03:17:57.736191 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 03:17:57.755013 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2582312530.mount: Deactivated successfully. 
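[Editorial aside: the reconciler records above attach a series of host-path, configmap, secret, and projected volumes (bpf-maps, hostproc, cni-path, cilium-run, clustermesh-secrets, and so on) to the cilium-9hs9c and kube-proxy-rsp9k pods before their sandboxes are created. One hedged way to confirm what was actually declared is to read the pod spec back from the API; this sketch assumes the same Python client and cluster access as the earlier aside:]

```python
# Sketch: print the volumes declared on the cilium pod whose attachment the
# kubelet reconciler logged above. Pod and namespace names come from the log.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

pod = v1.read_namespaced_pod("cilium-9hs9c", "kube-system")
for vol in pod.spec.volumes:
    if vol.host_path:
        print(f"{vol.name}: hostPath {vol.host_path.path}")
    elif vol.config_map:
        print(f"{vol.name}: configMap {vol.config_map.name}")
    elif vol.secret:
        print(f"{vol.name}: secret {vol.secret.secret_name}")
    elif vol.projected:
        print(f"{vol.name}: projected service-account token volume")
    else:
        print(f"{vol.name}: other volume type")
```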
Jan 20 03:17:57.767273 containerd[1550]: time="2026-01-20T03:17:57.767163548Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 03:17:57.773566 containerd[1550]: time="2026-01-20T03:17:57.773357669Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jan 20 03:17:57.775999 containerd[1550]: time="2026-01-20T03:17:57.775898156Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 03:17:57.778061 containerd[1550]: time="2026-01-20T03:17:57.777991148Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 20 03:17:57.779853 containerd[1550]: time="2026-01-20T03:17:57.779614420Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 03:17:57.783862 containerd[1550]: time="2026-01-20T03:17:57.783752877Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 03:17:57.784648 containerd[1550]: time="2026-01-20T03:17:57.784555139Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 690.648874ms" Jan 20 03:17:57.787803 containerd[1550]: time="2026-01-20T03:17:57.787657846Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 676.924375ms" Jan 20 03:17:57.808569 containerd[1550]: time="2026-01-20T03:17:57.808415358Z" level=info msg="connecting to shim 5cc4268ec7480a781c9e26f876980a2e684b3d8939d136d2e7517da3bc42cd34" address="unix:///run/containerd/s/ff799b5019f10702484a068145312395cb6c8ba88d24d68f180fe4f3caad5546" namespace=k8s.io protocol=ttrpc version=3 Jan 20 03:17:57.821272 containerd[1550]: time="2026-01-20T03:17:57.821160998Z" level=info msg="connecting to shim 88fc39901f8fe66268d55c6870dfce307c41b10719aeec3603b84bbca1183790" address="unix:///run/containerd/s/c69fbd1d81cb3d566365c5b9b8b7ecea664eebcdff105fe52e95b2e404970936" namespace=k8s.io protocol=ttrpc version=3 Jan 20 03:17:57.858932 systemd[1]: Started cri-containerd-5cc4268ec7480a781c9e26f876980a2e684b3d8939d136d2e7517da3bc42cd34.scope - libcontainer container 5cc4268ec7480a781c9e26f876980a2e684b3d8939d136d2e7517da3bc42cd34. Jan 20 03:17:57.864776 systemd[1]: Started cri-containerd-88fc39901f8fe66268d55c6870dfce307c41b10719aeec3603b84bbca1183790.scope - libcontainer container 88fc39901f8fe66268d55c6870dfce307c41b10719aeec3603b84bbca1183790. 
Jan 20 03:17:57.924094 containerd[1550]: time="2026-01-20T03:17:57.924001965Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rsp9k,Uid:eb83bc17-02e4-4089-addf-272f39ce72c3,Namespace:kube-system,Attempt:0,} returns sandbox id \"5cc4268ec7480a781c9e26f876980a2e684b3d8939d136d2e7517da3bc42cd34\"" Jan 20 03:17:57.926565 kubelet[1885]: E0120 03:17:57.926539 1885 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:17:57.928367 containerd[1550]: time="2026-01-20T03:17:57.928073425Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\"" Jan 20 03:17:57.928367 containerd[1550]: time="2026-01-20T03:17:57.928137404Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9hs9c,Uid:eb7c50b2-127d-433e-8b60-fc5452e0e1e0,Namespace:kube-system,Attempt:0,} returns sandbox id \"88fc39901f8fe66268d55c6870dfce307c41b10719aeec3603b84bbca1183790\"" Jan 20 03:17:57.931027 kubelet[1885]: E0120 03:17:57.930905 1885 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:17:58.737011 kubelet[1885]: E0120 03:17:58.736820 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 03:17:58.991102 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount37330419.mount: Deactivated successfully. Jan 20 03:17:59.488993 containerd[1550]: time="2026-01-20T03:17:59.488815278Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 03:17:59.490313 containerd[1550]: time="2026-01-20T03:17:59.490266346Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=31930096" Jan 20 03:17:59.491736 containerd[1550]: time="2026-01-20T03:17:59.491642134Z" level=info msg="ImageCreate event name:\"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 03:17:59.494031 containerd[1550]: time="2026-01-20T03:17:59.493922591Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 03:17:59.494642 containerd[1550]: time="2026-01-20T03:17:59.494552371Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"31929115\" in 1.566437949s" Jan 20 03:17:59.494642 containerd[1550]: time="2026-01-20T03:17:59.494614607Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\"" Jan 20 03:17:59.496854 containerd[1550]: time="2026-01-20T03:17:59.496782294Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 20 03:17:59.500863 containerd[1550]: time="2026-01-20T03:17:59.500798409Z" level=info msg="CreateContainer within sandbox 
\"5cc4268ec7480a781c9e26f876980a2e684b3d8939d136d2e7517da3bc42cd34\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 20 03:17:59.513263 containerd[1550]: time="2026-01-20T03:17:59.513104299Z" level=info msg="Container f543532c116373d2074801108dec4ef0a4f20ce822241dc6f0c653ca18a9e792: CDI devices from CRI Config.CDIDevices: []" Jan 20 03:17:59.525827 containerd[1550]: time="2026-01-20T03:17:59.525729404Z" level=info msg="CreateContainer within sandbox \"5cc4268ec7480a781c9e26f876980a2e684b3d8939d136d2e7517da3bc42cd34\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f543532c116373d2074801108dec4ef0a4f20ce822241dc6f0c653ca18a9e792\"" Jan 20 03:17:59.527168 containerd[1550]: time="2026-01-20T03:17:59.527140230Z" level=info msg="StartContainer for \"f543532c116373d2074801108dec4ef0a4f20ce822241dc6f0c653ca18a9e792\"" Jan 20 03:17:59.528885 containerd[1550]: time="2026-01-20T03:17:59.528717865Z" level=info msg="connecting to shim f543532c116373d2074801108dec4ef0a4f20ce822241dc6f0c653ca18a9e792" address="unix:///run/containerd/s/ff799b5019f10702484a068145312395cb6c8ba88d24d68f180fe4f3caad5546" protocol=ttrpc version=3 Jan 20 03:17:59.557741 systemd[1]: Started cri-containerd-f543532c116373d2074801108dec4ef0a4f20ce822241dc6f0c653ca18a9e792.scope - libcontainer container f543532c116373d2074801108dec4ef0a4f20ce822241dc6f0c653ca18a9e792. Jan 20 03:17:59.694871 containerd[1550]: time="2026-01-20T03:17:59.694793340Z" level=info msg="StartContainer for \"f543532c116373d2074801108dec4ef0a4f20ce822241dc6f0c653ca18a9e792\" returns successfully" Jan 20 03:17:59.737932 kubelet[1885]: E0120 03:17:59.737733 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 03:17:59.989551 kubelet[1885]: E0120 03:17:59.989158 1885 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:18:00.007743 kubelet[1885]: I0120 03:18:00.007407 1885 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-rsp9k" podStartSLOduration=3.439174294 podStartE2EDuration="5.0073893s" podCreationTimestamp="2026-01-20 03:17:55 +0000 UTC" firstStartedPulling="2026-01-20 03:17:57.927394108 +0000 UTC m=+3.994972431" lastFinishedPulling="2026-01-20 03:17:59.495609114 +0000 UTC m=+5.563187437" observedRunningTime="2026-01-20 03:18:00.007211137 +0000 UTC m=+6.074789470" watchObservedRunningTime="2026-01-20 03:18:00.0073893 +0000 UTC m=+6.074967633" Jan 20 03:18:00.738993 kubelet[1885]: E0120 03:18:00.738812 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 03:18:00.995179 kubelet[1885]: E0120 03:18:00.994174 1885 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:18:01.740417 kubelet[1885]: E0120 03:18:01.740232 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 03:18:02.741818 kubelet[1885]: E0120 03:18:02.741566 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 03:18:03.742406 kubelet[1885]: E0120 03:18:03.742343 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Jan 20 03:18:04.742654 kubelet[1885]: E0120 03:18:04.742586 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 03:18:05.743274 kubelet[1885]: E0120 03:18:05.743230 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 03:18:06.744909 kubelet[1885]: E0120 03:18:06.744849 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 03:18:07.043154 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2640538916.mount: Deactivated successfully. Jan 20 03:18:07.745836 kubelet[1885]: E0120 03:18:07.745787 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 03:18:08.747476 kubelet[1885]: E0120 03:18:08.747258 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 03:18:09.604028 containerd[1550]: time="2026-01-20T03:18:09.603888147Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 03:18:09.605349 containerd[1550]: time="2026-01-20T03:18:09.605218547Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jan 20 03:18:09.607165 containerd[1550]: time="2026-01-20T03:18:09.607013000Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 03:18:09.608706 containerd[1550]: time="2026-01-20T03:18:09.608542925Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 10.111688156s" Jan 20 03:18:09.608706 containerd[1550]: time="2026-01-20T03:18:09.608601695Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jan 20 03:18:09.613987 containerd[1550]: time="2026-01-20T03:18:09.613805773Z" level=info msg="CreateContainer within sandbox \"88fc39901f8fe66268d55c6870dfce307c41b10719aeec3603b84bbca1183790\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 20 03:18:09.624641 containerd[1550]: time="2026-01-20T03:18:09.623823462Z" level=info msg="Container 67962695c31bab1bb9b43efdb98092f23b08f6d49ae0a0c4320344b80891cb2b: CDI devices from CRI Config.CDIDevices: []" Jan 20 03:18:09.636557 containerd[1550]: time="2026-01-20T03:18:09.636250868Z" level=info msg="CreateContainer within sandbox \"88fc39901f8fe66268d55c6870dfce307c41b10719aeec3603b84bbca1183790\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"67962695c31bab1bb9b43efdb98092f23b08f6d49ae0a0c4320344b80891cb2b\"" Jan 20 03:18:09.637569 containerd[1550]: time="2026-01-20T03:18:09.636890958Z" level=info msg="StartContainer for 
\"67962695c31bab1bb9b43efdb98092f23b08f6d49ae0a0c4320344b80891cb2b\"" Jan 20 03:18:09.638787 containerd[1550]: time="2026-01-20T03:18:09.638614705Z" level=info msg="connecting to shim 67962695c31bab1bb9b43efdb98092f23b08f6d49ae0a0c4320344b80891cb2b" address="unix:///run/containerd/s/c69fbd1d81cb3d566365c5b9b8b7ecea664eebcdff105fe52e95b2e404970936" protocol=ttrpc version=3 Jan 20 03:18:09.671713 systemd[1]: Started cri-containerd-67962695c31bab1bb9b43efdb98092f23b08f6d49ae0a0c4320344b80891cb2b.scope - libcontainer container 67962695c31bab1bb9b43efdb98092f23b08f6d49ae0a0c4320344b80891cb2b. Jan 20 03:18:09.719304 containerd[1550]: time="2026-01-20T03:18:09.719110523Z" level=info msg="StartContainer for \"67962695c31bab1bb9b43efdb98092f23b08f6d49ae0a0c4320344b80891cb2b\" returns successfully" Jan 20 03:18:09.733306 systemd[1]: cri-containerd-67962695c31bab1bb9b43efdb98092f23b08f6d49ae0a0c4320344b80891cb2b.scope: Deactivated successfully. Jan 20 03:18:09.737702 containerd[1550]: time="2026-01-20T03:18:09.737611035Z" level=info msg="received container exit event container_id:\"67962695c31bab1bb9b43efdb98092f23b08f6d49ae0a0c4320344b80891cb2b\" id:\"67962695c31bab1bb9b43efdb98092f23b08f6d49ae0a0c4320344b80891cb2b\" pid:2251 exited_at:{seconds:1768879089 nanos:736918142}" Jan 20 03:18:09.748320 kubelet[1885]: E0120 03:18:09.748278 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 03:18:09.771090 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-67962695c31bab1bb9b43efdb98092f23b08f6d49ae0a0c4320344b80891cb2b-rootfs.mount: Deactivated successfully. Jan 20 03:18:10.020698 kubelet[1885]: E0120 03:18:10.020100 1885 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:18:10.029555 containerd[1550]: time="2026-01-20T03:18:10.029253737Z" level=info msg="CreateContainer within sandbox \"88fc39901f8fe66268d55c6870dfce307c41b10719aeec3603b84bbca1183790\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 20 03:18:10.043205 containerd[1550]: time="2026-01-20T03:18:10.043076796Z" level=info msg="Container cf01eb7bbbdfeaec72b0a78598363bd7a42927bdbd32935fb00d68dd4c7c6392: CDI devices from CRI Config.CDIDevices: []" Jan 20 03:18:10.054083 containerd[1550]: time="2026-01-20T03:18:10.053968355Z" level=info msg="CreateContainer within sandbox \"88fc39901f8fe66268d55c6870dfce307c41b10719aeec3603b84bbca1183790\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"cf01eb7bbbdfeaec72b0a78598363bd7a42927bdbd32935fb00d68dd4c7c6392\"" Jan 20 03:18:10.055425 containerd[1550]: time="2026-01-20T03:18:10.055218865Z" level=info msg="StartContainer for \"cf01eb7bbbdfeaec72b0a78598363bd7a42927bdbd32935fb00d68dd4c7c6392\"" Jan 20 03:18:10.056890 containerd[1550]: time="2026-01-20T03:18:10.056822828Z" level=info msg="connecting to shim cf01eb7bbbdfeaec72b0a78598363bd7a42927bdbd32935fb00d68dd4c7c6392" address="unix:///run/containerd/s/c69fbd1d81cb3d566365c5b9b8b7ecea664eebcdff105fe52e95b2e404970936" protocol=ttrpc version=3 Jan 20 03:18:10.090700 systemd[1]: Started cri-containerd-cf01eb7bbbdfeaec72b0a78598363bd7a42927bdbd32935fb00d68dd4c7c6392.scope - libcontainer container cf01eb7bbbdfeaec72b0a78598363bd7a42927bdbd32935fb00d68dd4c7c6392. 
Jan 20 03:18:10.139584 containerd[1550]: time="2026-01-20T03:18:10.139380079Z" level=info msg="StartContainer for \"cf01eb7bbbdfeaec72b0a78598363bd7a42927bdbd32935fb00d68dd4c7c6392\" returns successfully" Jan 20 03:18:10.157601 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 20 03:18:10.157857 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 20 03:18:10.158589 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 20 03:18:10.160722 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 20 03:18:10.162509 systemd[1]: cri-containerd-cf01eb7bbbdfeaec72b0a78598363bd7a42927bdbd32935fb00d68dd4c7c6392.scope: Deactivated successfully. Jan 20 03:18:10.163565 containerd[1550]: time="2026-01-20T03:18:10.162923956Z" level=info msg="received container exit event container_id:\"cf01eb7bbbdfeaec72b0a78598363bd7a42927bdbd32935fb00d68dd4c7c6392\" id:\"cf01eb7bbbdfeaec72b0a78598363bd7a42927bdbd32935fb00d68dd4c7c6392\" pid:2296 exited_at:{seconds:1768879090 nanos:162617273}" Jan 20 03:18:10.186372 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 20 03:18:10.749468 kubelet[1885]: E0120 03:18:10.749320 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 03:18:11.025621 kubelet[1885]: E0120 03:18:11.025176 1885 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:18:11.033554 containerd[1550]: time="2026-01-20T03:18:11.033217405Z" level=info msg="CreateContainer within sandbox \"88fc39901f8fe66268d55c6870dfce307c41b10719aeec3603b84bbca1183790\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 20 03:18:11.048351 containerd[1550]: time="2026-01-20T03:18:11.048273598Z" level=info msg="Container d0296f2eb3b7aa089312eab27e859f17a77fa14ff6e2437924e9ff7f7c2015de: CDI devices from CRI Config.CDIDevices: []" Jan 20 03:18:11.061906 containerd[1550]: time="2026-01-20T03:18:11.061834949Z" level=info msg="CreateContainer within sandbox \"88fc39901f8fe66268d55c6870dfce307c41b10719aeec3603b84bbca1183790\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"d0296f2eb3b7aa089312eab27e859f17a77fa14ff6e2437924e9ff7f7c2015de\"" Jan 20 03:18:11.062788 containerd[1550]: time="2026-01-20T03:18:11.062620215Z" level=info msg="StartContainer for \"d0296f2eb3b7aa089312eab27e859f17a77fa14ff6e2437924e9ff7f7c2015de\"" Jan 20 03:18:11.064521 containerd[1550]: time="2026-01-20T03:18:11.064152925Z" level=info msg="connecting to shim d0296f2eb3b7aa089312eab27e859f17a77fa14ff6e2437924e9ff7f7c2015de" address="unix:///run/containerd/s/c69fbd1d81cb3d566365c5b9b8b7ecea664eebcdff105fe52e95b2e404970936" protocol=ttrpc version=3 Jan 20 03:18:11.106687 systemd[1]: Started cri-containerd-d0296f2eb3b7aa089312eab27e859f17a77fa14ff6e2437924e9ff7f7c2015de.scope - libcontainer container d0296f2eb3b7aa089312eab27e859f17a77fa14ff6e2437924e9ff7f7c2015de. Jan 20 03:18:11.214189 containerd[1550]: time="2026-01-20T03:18:11.214120452Z" level=info msg="StartContainer for \"d0296f2eb3b7aa089312eab27e859f17a77fa14ff6e2437924e9ff7f7c2015de\" returns successfully" Jan 20 03:18:11.214618 systemd[1]: cri-containerd-d0296f2eb3b7aa089312eab27e859f17a77fa14ff6e2437924e9ff7f7c2015de.scope: Deactivated successfully. 
Jan 20 03:18:11.219057 containerd[1550]: time="2026-01-20T03:18:11.218537607Z" level=info msg="received container exit event container_id:\"d0296f2eb3b7aa089312eab27e859f17a77fa14ff6e2437924e9ff7f7c2015de\" id:\"d0296f2eb3b7aa089312eab27e859f17a77fa14ff6e2437924e9ff7f7c2015de\" pid:2343 exited_at:{seconds:1768879091 nanos:217993810}" Jan 20 03:18:11.256922 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d0296f2eb3b7aa089312eab27e859f17a77fa14ff6e2437924e9ff7f7c2015de-rootfs.mount: Deactivated successfully. Jan 20 03:18:11.750497 kubelet[1885]: E0120 03:18:11.750230 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 03:18:12.032139 kubelet[1885]: E0120 03:18:12.031908 1885 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:18:12.039625 containerd[1550]: time="2026-01-20T03:18:12.039520499Z" level=info msg="CreateContainer within sandbox \"88fc39901f8fe66268d55c6870dfce307c41b10719aeec3603b84bbca1183790\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 20 03:18:12.056199 containerd[1550]: time="2026-01-20T03:18:12.056121665Z" level=info msg="Container 42152ae6528e7a487af29cee25198c83c7c4e43f57916ac28617a73e8321173e: CDI devices from CRI Config.CDIDevices: []" Jan 20 03:18:12.066866 containerd[1550]: time="2026-01-20T03:18:12.066703627Z" level=info msg="CreateContainer within sandbox \"88fc39901f8fe66268d55c6870dfce307c41b10719aeec3603b84bbca1183790\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"42152ae6528e7a487af29cee25198c83c7c4e43f57916ac28617a73e8321173e\"" Jan 20 03:18:12.067651 containerd[1550]: time="2026-01-20T03:18:12.067555387Z" level=info msg="StartContainer for \"42152ae6528e7a487af29cee25198c83c7c4e43f57916ac28617a73e8321173e\"" Jan 20 03:18:12.068629 containerd[1550]: time="2026-01-20T03:18:12.068572345Z" level=info msg="connecting to shim 42152ae6528e7a487af29cee25198c83c7c4e43f57916ac28617a73e8321173e" address="unix:///run/containerd/s/c69fbd1d81cb3d566365c5b9b8b7ecea664eebcdff105fe52e95b2e404970936" protocol=ttrpc version=3 Jan 20 03:18:12.113888 systemd[1]: Started cri-containerd-42152ae6528e7a487af29cee25198c83c7c4e43f57916ac28617a73e8321173e.scope - libcontainer container 42152ae6528e7a487af29cee25198c83c7c4e43f57916ac28617a73e8321173e. Jan 20 03:18:12.159574 systemd[1]: cri-containerd-42152ae6528e7a487af29cee25198c83c7c4e43f57916ac28617a73e8321173e.scope: Deactivated successfully. Jan 20 03:18:12.162181 containerd[1550]: time="2026-01-20T03:18:12.162008822Z" level=info msg="received container exit event container_id:\"42152ae6528e7a487af29cee25198c83c7c4e43f57916ac28617a73e8321173e\" id:\"42152ae6528e7a487af29cee25198c83c7c4e43f57916ac28617a73e8321173e\" pid:2382 exited_at:{seconds:1768879092 nanos:160356152}" Jan 20 03:18:12.164406 containerd[1550]: time="2026-01-20T03:18:12.164351119Z" level=info msg="StartContainer for \"42152ae6528e7a487af29cee25198c83c7c4e43f57916ac28617a73e8321173e\" returns successfully" Jan 20 03:18:12.199251 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-42152ae6528e7a487af29cee25198c83c7c4e43f57916ac28617a73e8321173e-rootfs.mount: Deactivated successfully. 
Jan 20 03:18:12.751341 kubelet[1885]: E0120 03:18:12.751196 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 03:18:13.041219 kubelet[1885]: E0120 03:18:13.040842 1885 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:18:13.049657 containerd[1550]: time="2026-01-20T03:18:13.049186837Z" level=info msg="CreateContainer within sandbox \"88fc39901f8fe66268d55c6870dfce307c41b10719aeec3603b84bbca1183790\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 20 03:18:13.065108 containerd[1550]: time="2026-01-20T03:18:13.064916336Z" level=info msg="Container 0f5108679775f35b1ab3b60417d1094f7992ae6477ba139c62c49d5edf813e21: CDI devices from CRI Config.CDIDevices: []" Jan 20 03:18:13.078049 containerd[1550]: time="2026-01-20T03:18:13.077972292Z" level=info msg="CreateContainer within sandbox \"88fc39901f8fe66268d55c6870dfce307c41b10719aeec3603b84bbca1183790\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"0f5108679775f35b1ab3b60417d1094f7992ae6477ba139c62c49d5edf813e21\"" Jan 20 03:18:13.079131 containerd[1550]: time="2026-01-20T03:18:13.078986736Z" level=info msg="StartContainer for \"0f5108679775f35b1ab3b60417d1094f7992ae6477ba139c62c49d5edf813e21\"" Jan 20 03:18:13.080321 containerd[1550]: time="2026-01-20T03:18:13.080185223Z" level=info msg="connecting to shim 0f5108679775f35b1ab3b60417d1094f7992ae6477ba139c62c49d5edf813e21" address="unix:///run/containerd/s/c69fbd1d81cb3d566365c5b9b8b7ecea664eebcdff105fe52e95b2e404970936" protocol=ttrpc version=3 Jan 20 03:18:13.119857 systemd[1]: Started cri-containerd-0f5108679775f35b1ab3b60417d1094f7992ae6477ba139c62c49d5edf813e21.scope - libcontainer container 0f5108679775f35b1ab3b60417d1094f7992ae6477ba139c62c49d5edf813e21. 
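[Editorial aside: between the cilium image pull and the cilium-agent start, the log walks through a chain of short-lived containers in the same sandbox (mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state), each started, exited, and reaped via the .scope units above. These look like the cilium pod's init containers; a hedged sketch for confirming that ordering from the pod status, under the same client assumptions as before:]

```python
# Sketch: list the init containers of cilium-9hs9c and whether each exited 0,
# mirroring the mount-cgroup -> apply-sysctl-overwrites -> mount-bpf-fs ->
# clean-cilium-state sequence seen in the journal above.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

pod = v1.read_namespaced_pod("cilium-9hs9c", "kube-system")
for status in pod.status.init_container_statuses or []:
    terminated = status.state.terminated
    exit_code = terminated.exit_code if terminated else None
    print(f"{status.name}: restarts={status.restart_count} exit_code={exit_code}")
```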
Jan 20 03:18:13.188150 containerd[1550]: time="2026-01-20T03:18:13.187952332Z" level=info msg="StartContainer for \"0f5108679775f35b1ab3b60417d1094f7992ae6477ba139c62c49d5edf813e21\" returns successfully" Jan 20 03:18:13.304890 kubelet[1885]: I0120 03:18:13.304205 1885 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 20 03:18:13.752759 kubelet[1885]: E0120 03:18:13.752590 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 03:18:13.796595 kernel: Initializing XFRM netlink socket Jan 20 03:18:14.049676 kubelet[1885]: E0120 03:18:14.049379 1885 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:18:14.074175 kubelet[1885]: I0120 03:18:14.073984 1885 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-9hs9c" podStartSLOduration=7.397534259 podStartE2EDuration="19.073963964s" podCreationTimestamp="2026-01-20 03:17:55 +0000 UTC" firstStartedPulling="2026-01-20 03:17:57.932897219 +0000 UTC m=+4.000475543" lastFinishedPulling="2026-01-20 03:18:09.609326925 +0000 UTC m=+15.676905248" observedRunningTime="2026-01-20 03:18:14.073911807 +0000 UTC m=+20.141490150" watchObservedRunningTime="2026-01-20 03:18:14.073963964 +0000 UTC m=+20.141542287" Jan 20 03:18:14.735838 kubelet[1885]: E0120 03:18:14.735687 1885 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 03:18:14.753317 kubelet[1885]: E0120 03:18:14.753189 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 03:18:15.052422 kubelet[1885]: E0120 03:18:15.052095 1885 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:18:15.184216 systemd-networkd[1472]: cilium_host: Link UP Jan 20 03:18:15.184529 systemd-networkd[1472]: cilium_net: Link UP Jan 20 03:18:15.184968 systemd-networkd[1472]: cilium_net: Gained carrier Jan 20 03:18:15.185354 systemd-networkd[1472]: cilium_host: Gained carrier Jan 20 03:18:15.349417 systemd-networkd[1472]: cilium_vxlan: Link UP Jan 20 03:18:15.349589 systemd-networkd[1472]: cilium_vxlan: Gained carrier Jan 20 03:18:15.622616 kernel: NET: Registered PF_ALG protocol family Jan 20 03:18:15.660730 systemd-networkd[1472]: cilium_host: Gained IPv6LL Jan 20 03:18:15.753625 kubelet[1885]: E0120 03:18:15.753575 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 03:18:16.054188 kubelet[1885]: E0120 03:18:16.054008 1885 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:18:16.156730 systemd-networkd[1472]: cilium_net: Gained IPv6LL Jan 20 03:18:16.410600 systemd-networkd[1472]: lxc_health: Link UP Jan 20 03:18:16.412346 systemd-networkd[1472]: lxc_health: Gained carrier Jan 20 03:18:16.604847 systemd-networkd[1472]: cilium_vxlan: Gained IPv6LL Jan 20 03:18:16.755142 kubelet[1885]: E0120 03:18:16.754959 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 03:18:17.105210 kubelet[1885]: E0120 03:18:17.105134 1885 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:18:17.549570 systemd[1]: Created slice kubepods-besteffort-pod46bcae7d_1020_42a2_88da_a2752c77f7df.slice - libcontainer container kubepods-besteffort-pod46bcae7d_1020_42a2_88da_a2752c77f7df.slice. Jan 20 03:18:17.659091 kubelet[1885]: I0120 03:18:17.658923 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxdx6\" (UniqueName: \"kubernetes.io/projected/46bcae7d-1020-42a2-88da-a2752c77f7df-kube-api-access-mxdx6\") pod \"nginx-deployment-7fcdb87857-xtbmk\" (UID: \"46bcae7d-1020-42a2-88da-a2752c77f7df\") " pod="default/nginx-deployment-7fcdb87857-xtbmk" Jan 20 03:18:17.755485 kubelet[1885]: E0120 03:18:17.755392 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 03:18:17.855353 containerd[1550]: time="2026-01-20T03:18:17.855260707Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-xtbmk,Uid:46bcae7d-1020-42a2-88da-a2752c77f7df,Namespace:default,Attempt:0,}" Jan 20 03:18:17.896553 systemd-networkd[1472]: lxcdda3cfa19d30: Link UP Jan 20 03:18:17.900492 kernel: eth0: renamed from tmpd9d59 Jan 20 03:18:17.901937 systemd-networkd[1472]: lxcdda3cfa19d30: Gained carrier Jan 20 03:18:17.948838 systemd-networkd[1472]: lxc_health: Gained IPv6LL Jan 20 03:18:18.755652 kubelet[1885]: E0120 03:18:18.755575 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 03:18:19.100752 systemd-networkd[1472]: lxcdda3cfa19d30: Gained IPv6LL Jan 20 03:18:19.756663 kubelet[1885]: E0120 03:18:19.756506 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 03:18:20.757393 kubelet[1885]: E0120 03:18:20.757346 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 03:18:20.858133 containerd[1550]: time="2026-01-20T03:18:20.858080030Z" level=info msg="connecting to shim d9d59a9163e54965bef284954304d1aabb9e7a6b706458771e6a963b1b73d245" address="unix:///run/containerd/s/8f2830b7a414494bfb92ab18bc19e6831bd5efcc07ce34411390ea972b624376" namespace=k8s.io protocol=ttrpc version=3 Jan 20 03:18:20.892680 systemd[1]: Started cri-containerd-d9d59a9163e54965bef284954304d1aabb9e7a6b706458771e6a963b1b73d245.scope - libcontainer container d9d59a9163e54965bef284954304d1aabb9e7a6b706458771e6a963b1b73d245. 
Jan 20 03:18:20.907091 systemd-resolved[1474]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 20 03:18:20.943359 containerd[1550]: time="2026-01-20T03:18:20.943223075Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-xtbmk,Uid:46bcae7d-1020-42a2-88da-a2752c77f7df,Namespace:default,Attempt:0,} returns sandbox id \"d9d59a9163e54965bef284954304d1aabb9e7a6b706458771e6a963b1b73d245\"" Jan 20 03:18:20.944657 containerd[1550]: time="2026-01-20T03:18:20.944601405Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 20 03:18:21.758619 kubelet[1885]: E0120 03:18:21.758534 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 03:18:22.286297 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1429308393.mount: Deactivated successfully. Jan 20 03:18:22.759021 kubelet[1885]: E0120 03:18:22.758906 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 03:18:23.052027 containerd[1550]: time="2026-01-20T03:18:23.051831832Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 03:18:23.052734 containerd[1550]: time="2026-01-20T03:18:23.052675651Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=63836480" Jan 20 03:18:23.053927 containerd[1550]: time="2026-01-20T03:18:23.053872476Z" level=info msg="ImageCreate event name:\"sha256:94a30ed18e45363486a55379bd1fbb8c479f36524ae816779f3553afe6b787ed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 03:18:23.056777 containerd[1550]: time="2026-01-20T03:18:23.056683720Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:32c5137cb8c7cf61e75836f150e983b9be21fecc642ada89fd936c8cd6c0faa0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 03:18:23.057904 containerd[1550]: time="2026-01-20T03:18:23.057783566Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:94a30ed18e45363486a55379bd1fbb8c479f36524ae816779f3553afe6b787ed\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:32c5137cb8c7cf61e75836f150e983b9be21fecc642ada89fd936c8cd6c0faa0\", size \"63836358\" in 2.113127771s" Jan 20 03:18:23.057904 containerd[1550]: time="2026-01-20T03:18:23.057889853Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:94a30ed18e45363486a55379bd1fbb8c479f36524ae816779f3553afe6b787ed\"" Jan 20 03:18:23.063033 containerd[1550]: time="2026-01-20T03:18:23.062971721Z" level=info msg="CreateContainer within sandbox \"d9d59a9163e54965bef284954304d1aabb9e7a6b706458771e6a963b1b73d245\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Jan 20 03:18:23.073563 containerd[1550]: time="2026-01-20T03:18:23.073505522Z" level=info msg="Container aad24b85b28afa9bb6f87c3cbfeb1c3827d62ddd29a27ef0423868298f52e041: CDI devices from CRI Config.CDIDevices: []" Jan 20 03:18:23.080409 containerd[1550]: time="2026-01-20T03:18:23.080290385Z" level=info msg="CreateContainer within sandbox \"d9d59a9163e54965bef284954304d1aabb9e7a6b706458771e6a963b1b73d245\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"aad24b85b28afa9bb6f87c3cbfeb1c3827d62ddd29a27ef0423868298f52e041\"" Jan 20 03:18:23.083776 containerd[1550]: 
time="2026-01-20T03:18:23.083651156Z" level=info msg="StartContainer for \"aad24b85b28afa9bb6f87c3cbfeb1c3827d62ddd29a27ef0423868298f52e041\"" Jan 20 03:18:23.085110 containerd[1550]: time="2026-01-20T03:18:23.085007176Z" level=info msg="connecting to shim aad24b85b28afa9bb6f87c3cbfeb1c3827d62ddd29a27ef0423868298f52e041" address="unix:///run/containerd/s/8f2830b7a414494bfb92ab18bc19e6831bd5efcc07ce34411390ea972b624376" protocol=ttrpc version=3 Jan 20 03:18:23.125675 systemd[1]: Started cri-containerd-aad24b85b28afa9bb6f87c3cbfeb1c3827d62ddd29a27ef0423868298f52e041.scope - libcontainer container aad24b85b28afa9bb6f87c3cbfeb1c3827d62ddd29a27ef0423868298f52e041. Jan 20 03:18:23.168252 containerd[1550]: time="2026-01-20T03:18:23.168208900Z" level=info msg="StartContainer for \"aad24b85b28afa9bb6f87c3cbfeb1c3827d62ddd29a27ef0423868298f52e041\" returns successfully" Jan 20 03:18:23.760192 kubelet[1885]: E0120 03:18:23.760061 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 03:18:24.083294 kubelet[1885]: I0120 03:18:24.083174 1885 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-7fcdb87857-xtbmk" podStartSLOduration=4.968460777 podStartE2EDuration="7.083156307s" podCreationTimestamp="2026-01-20 03:18:17 +0000 UTC" firstStartedPulling="2026-01-20 03:18:20.944174258 +0000 UTC m=+27.011752581" lastFinishedPulling="2026-01-20 03:18:23.058869787 +0000 UTC m=+29.126448111" observedRunningTime="2026-01-20 03:18:24.083063366 +0000 UTC m=+30.150641689" watchObservedRunningTime="2026-01-20 03:18:24.083156307 +0000 UTC m=+30.150734630" Jan 20 03:18:24.761100 kubelet[1885]: E0120 03:18:24.760977 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 03:18:25.761649 kubelet[1885]: E0120 03:18:25.761544 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 03:18:25.794930 kubelet[1885]: I0120 03:18:25.794823 1885 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 20 03:18:25.795500 kubelet[1885]: E0120 03:18:25.795282 1885 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:18:26.075597 kubelet[1885]: E0120 03:18:26.075548 1885 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:18:26.762817 kubelet[1885]: E0120 03:18:26.762614 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 03:18:27.763252 kubelet[1885]: E0120 03:18:27.763167 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 03:18:28.764271 kubelet[1885]: E0120 03:18:28.764144 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 03:18:29.289332 systemd[1]: Created slice kubepods-besteffort-podd3092b98_4eed_432c_915e_9e8a17c4a962.slice - libcontainer container kubepods-besteffort-podd3092b98_4eed_432c_915e_9e8a17c4a962.slice. 
Jan 20 03:18:29.342010 kubelet[1885]: I0120 03:18:29.341905 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/d3092b98-4eed-432c-915e-9e8a17c4a962-data\") pod \"nfs-server-provisioner-0\" (UID: \"d3092b98-4eed-432c-915e-9e8a17c4a962\") " pod="default/nfs-server-provisioner-0" Jan 20 03:18:29.342010 kubelet[1885]: I0120 03:18:29.341995 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vffw7\" (UniqueName: \"kubernetes.io/projected/d3092b98-4eed-432c-915e-9e8a17c4a962-kube-api-access-vffw7\") pod \"nfs-server-provisioner-0\" (UID: \"d3092b98-4eed-432c-915e-9e8a17c4a962\") " pod="default/nfs-server-provisioner-0" Jan 20 03:18:29.593255 containerd[1550]: time="2026-01-20T03:18:29.593133905Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:d3092b98-4eed-432c-915e-9e8a17c4a962,Namespace:default,Attempt:0,}" Jan 20 03:18:29.617309 systemd-networkd[1472]: lxc6273d63130b6: Link UP Jan 20 03:18:29.628514 kernel: eth0: renamed from tmp580ec Jan 20 03:18:29.628560 systemd-networkd[1472]: lxc6273d63130b6: Gained carrier Jan 20 03:18:29.764536 kubelet[1885]: E0120 03:18:29.764498 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 03:18:29.770916 containerd[1550]: time="2026-01-20T03:18:29.770863035Z" level=info msg="connecting to shim 580ec186398e4b94fa99c7f1007b83bccfb5121929e2f2db11f7cbd35e51590c" address="unix:///run/containerd/s/d65ebe5bd8ef3b82333d37c8e7c7b1cea4ac77499db0cd592cd90f49a3b4279d" namespace=k8s.io protocol=ttrpc version=3 Jan 20 03:18:29.806658 systemd[1]: Started cri-containerd-580ec186398e4b94fa99c7f1007b83bccfb5121929e2f2db11f7cbd35e51590c.scope - libcontainer container 580ec186398e4b94fa99c7f1007b83bccfb5121929e2f2db11f7cbd35e51590c. Jan 20 03:18:29.826067 systemd-resolved[1474]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 20 03:18:29.870536 containerd[1550]: time="2026-01-20T03:18:29.870302408Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:d3092b98-4eed-432c-915e-9e8a17c4a962,Namespace:default,Attempt:0,} returns sandbox id \"580ec186398e4b94fa99c7f1007b83bccfb5121929e2f2db11f7cbd35e51590c\"" Jan 20 03:18:29.872124 containerd[1550]: time="2026-01-20T03:18:29.872054859Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Jan 20 03:18:30.765402 kubelet[1885]: E0120 03:18:30.765331 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 03:18:30.813113 systemd-networkd[1472]: lxc6273d63130b6: Gained IPv6LL Jan 20 03:18:31.598496 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount799435268.mount: Deactivated successfully. 
Jan 20 03:18:31.765957 kubelet[1885]: E0120 03:18:31.765893 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 03:18:32.766338 kubelet[1885]: E0120 03:18:32.766275 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 03:18:33.409399 containerd[1550]: time="2026-01-20T03:18:33.409327286Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 03:18:33.410390 containerd[1550]: time="2026-01-20T03:18:33.410365827Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039406" Jan 20 03:18:33.411825 containerd[1550]: time="2026-01-20T03:18:33.411794031Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 03:18:33.414499 containerd[1550]: time="2026-01-20T03:18:33.414477493Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 03:18:33.415145 containerd[1550]: time="2026-01-20T03:18:33.415097871Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 3.542989673s" Jan 20 03:18:33.415145 containerd[1550]: time="2026-01-20T03:18:33.415124450Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Jan 20 03:18:33.420302 containerd[1550]: time="2026-01-20T03:18:33.420276995Z" level=info msg="CreateContainer within sandbox \"580ec186398e4b94fa99c7f1007b83bccfb5121929e2f2db11f7cbd35e51590c\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Jan 20 03:18:33.428104 containerd[1550]: time="2026-01-20T03:18:33.428066957Z" level=info msg="Container 1f71516ec4ece71900d0412871fd6c0d016d40fef5813a9b4755699e9bf86b7e: CDI devices from CRI Config.CDIDevices: []" Jan 20 03:18:33.436269 containerd[1550]: time="2026-01-20T03:18:33.436191870Z" level=info msg="CreateContainer within sandbox \"580ec186398e4b94fa99c7f1007b83bccfb5121929e2f2db11f7cbd35e51590c\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"1f71516ec4ece71900d0412871fd6c0d016d40fef5813a9b4755699e9bf86b7e\"" Jan 20 03:18:33.436933 containerd[1550]: time="2026-01-20T03:18:33.436841835Z" level=info msg="StartContainer for \"1f71516ec4ece71900d0412871fd6c0d016d40fef5813a9b4755699e9bf86b7e\"" Jan 20 03:18:33.437987 containerd[1550]: time="2026-01-20T03:18:33.437934887Z" level=info msg="connecting to shim 1f71516ec4ece71900d0412871fd6c0d016d40fef5813a9b4755699e9bf86b7e" address="unix:///run/containerd/s/d65ebe5bd8ef3b82333d37c8e7c7b1cea4ac77499db0cd592cd90f49a3b4279d" protocol=ttrpc version=3 Jan 20 03:18:33.480660 systemd[1]: Started cri-containerd-1f71516ec4ece71900d0412871fd6c0d016d40fef5813a9b4755699e9bf86b7e.scope - 
libcontainer container 1f71516ec4ece71900d0412871fd6c0d016d40fef5813a9b4755699e9bf86b7e. Jan 20 03:18:33.515610 containerd[1550]: time="2026-01-20T03:18:33.515540599Z" level=info msg="StartContainer for \"1f71516ec4ece71900d0412871fd6c0d016d40fef5813a9b4755699e9bf86b7e\" returns successfully" Jan 20 03:18:33.766707 kubelet[1885]: E0120 03:18:33.766529 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 03:18:34.421716 update_engine[1542]: I20260120 03:18:34.421542 1542 update_attempter.cc:509] Updating boot flags... Jan 20 03:18:34.735425 kubelet[1885]: E0120 03:18:34.735201 1885 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 03:18:34.767940 kubelet[1885]: E0120 03:18:34.767828 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 03:18:35.768485 kubelet[1885]: E0120 03:18:35.768245 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 03:18:36.768823 kubelet[1885]: E0120 03:18:36.768668 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 03:18:37.769711 kubelet[1885]: E0120 03:18:37.769594 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 03:18:38.768827 kubelet[1885]: I0120 03:18:38.768680 1885 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=6.224522739 podStartE2EDuration="9.76865761s" podCreationTimestamp="2026-01-20 03:18:29 +0000 UTC" firstStartedPulling="2026-01-20 03:18:29.871776101 +0000 UTC m=+35.939354424" lastFinishedPulling="2026-01-20 03:18:33.415910973 +0000 UTC m=+39.483489295" observedRunningTime="2026-01-20 03:18:34.112654355 +0000 UTC m=+40.180232698" watchObservedRunningTime="2026-01-20 03:18:38.76865761 +0000 UTC m=+44.836235933" Jan 20 03:18:38.770840 kubelet[1885]: E0120 03:18:38.770692 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 03:18:38.780117 systemd[1]: Created slice kubepods-besteffort-pod3666c56a_c522_4a71_b3ac_8892a14cb034.slice - libcontainer container kubepods-besteffort-pod3666c56a_c522_4a71_b3ac_8892a14cb034.slice. Jan 20 03:18:38.813256 kubelet[1885]: I0120 03:18:38.813013 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6269\" (UniqueName: \"kubernetes.io/projected/3666c56a-c522-4a71-b3ac-8892a14cb034-kube-api-access-v6269\") pod \"test-pod-1\" (UID: \"3666c56a-c522-4a71-b3ac-8892a14cb034\") " pod="default/test-pod-1" Jan 20 03:18:38.813256 kubelet[1885]: I0120 03:18:38.813240 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-dc2a5cc6-eaa5-4cce-b154-11aed60cdf1f\" (UniqueName: \"kubernetes.io/nfs/3666c56a-c522-4a71-b3ac-8892a14cb034-pvc-dc2a5cc6-eaa5-4cce-b154-11aed60cdf1f\") pod \"test-pod-1\" (UID: \"3666c56a-c522-4a71-b3ac-8892a14cb034\") " pod="default/test-pod-1" Jan 20 03:18:38.978535 kernel: netfs: FS-Cache loaded Jan 20 03:18:39.065485 kernel: RPC: Registered named UNIX socket transport module. Jan 20 03:18:39.065601 kernel: RPC: Registered udp transport module. 
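[Editorial aside: by this point the nfs-server-provisioner pod is running and test-pod-1 has a volume named pvc-dc2a5cc6-eaa5-4cce-b154-11aed60cdf1f attached, which is why the kernel begins registering the NFS client modules in the records that follow. That name looks like a dynamically provisioned PersistentVolume; a hedged sketch for checking from the API side that it exists and is bound, under the same client assumptions as the earlier asides:]

```python
# Sketch: look up the NFS-backed PersistentVolume whose name appears in the
# kubelet volume-attach records above and print where it is bound.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

pv = v1.read_persistent_volume("pvc-dc2a5cc6-eaa5-4cce-b154-11aed60cdf1f")
print(f"phase: {pv.status.phase}")                      # expect "Bound"
if pv.spec.nfs:
    print(f"nfs server: {pv.spec.nfs.server}, path: {pv.spec.nfs.path}")
if pv.spec.claim_ref:
    print(f"claimed by: {pv.spec.claim_ref.namespace}/{pv.spec.claim_ref.name}")
```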
Jan 20 03:18:39.065650 kernel: RPC: Registered tcp transport module. Jan 20 03:18:39.067374 kernel: RPC: Registered tcp-with-tls transport module. Jan 20 03:18:39.069388 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Jan 20 03:18:39.309075 kernel: NFS: Registering the id_resolver key type Jan 20 03:18:39.309224 kernel: Key type id_resolver registered Jan 20 03:18:39.309257 kernel: Key type id_legacy registered Jan 20 03:18:39.355649 nfsidmap[3239]: libnfsidmap: Unable to determine the NFSv4 domain; Using 'localdomain' as the NFSv4 domain which means UIDs will be mapped to the 'Nobody-User' user defined in /etc/idmapd.conf Jan 20 03:18:39.357260 nfsidmap[3239]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Jan 20 03:18:39.367703 nfsidmap[3242]: libnfsidmap: Unable to determine the NFSv4 domain; Using 'localdomain' as the NFSv4 domain which means UIDs will be mapped to the 'Nobody-User' user defined in /etc/idmapd.conf Jan 20 03:18:39.367972 nfsidmap[3242]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Jan 20 03:18:39.381385 nfsrahead[3246]: setting /var/lib/kubelet/pods/3666c56a-c522-4a71-b3ac-8892a14cb034/volumes/kubernetes.io~nfs/pvc-dc2a5cc6-eaa5-4cce-b154-11aed60cdf1f readahead to 128 Jan 20 03:18:39.686806 containerd[1550]: time="2026-01-20T03:18:39.686606566Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:3666c56a-c522-4a71-b3ac-8892a14cb034,Namespace:default,Attempt:0,}" Jan 20 03:18:39.732561 kernel: eth0: renamed from tmpdb0f7 Jan 20 03:18:39.733723 systemd-networkd[1472]: lxc30ea0dc3635e: Link UP Jan 20 03:18:39.734199 systemd-networkd[1472]: lxc30ea0dc3635e: Gained carrier Jan 20 03:18:39.771666 kubelet[1885]: E0120 03:18:39.771542 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 03:18:39.946659 containerd[1550]: time="2026-01-20T03:18:39.946171383Z" level=info msg="connecting to shim db0f7e60a462e6ef0efb984a4617b310b19cfc8d85fd35b2c8dcf4e4b5699fcd" address="unix:///run/containerd/s/a7d38b884888e54c61e6537fa85301658e741a16f585747c77e2708cf849034c" namespace=k8s.io protocol=ttrpc version=3 Jan 20 03:18:39.991631 systemd[1]: Started cri-containerd-db0f7e60a462e6ef0efb984a4617b310b19cfc8d85fd35b2c8dcf4e4b5699fcd.scope - libcontainer container db0f7e60a462e6ef0efb984a4617b310b19cfc8d85fd35b2c8dcf4e4b5699fcd. 
Jan 20 03:18:40.016244 systemd-resolved[1474]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 20 03:18:40.088903 containerd[1550]: time="2026-01-20T03:18:40.088682710Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:3666c56a-c522-4a71-b3ac-8892a14cb034,Namespace:default,Attempt:0,} returns sandbox id \"db0f7e60a462e6ef0efb984a4617b310b19cfc8d85fd35b2c8dcf4e4b5699fcd\"" Jan 20 03:18:40.090404 containerd[1550]: time="2026-01-20T03:18:40.090340136Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 20 03:18:40.197631 containerd[1550]: time="2026-01-20T03:18:40.196809564Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 03:18:40.198151 containerd[1550]: time="2026-01-20T03:18:40.197986550Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Jan 20 03:18:40.202477 containerd[1550]: time="2026-01-20T03:18:40.202406903Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:94a30ed18e45363486a55379bd1fbb8c479f36524ae816779f3553afe6b787ed\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:32c5137cb8c7cf61e75836f150e983b9be21fecc642ada89fd936c8cd6c0faa0\", size \"63836358\" in 112.022825ms" Jan 20 03:18:40.202545 containerd[1550]: time="2026-01-20T03:18:40.202529812Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:94a30ed18e45363486a55379bd1fbb8c479f36524ae816779f3553afe6b787ed\"" Jan 20 03:18:40.207662 containerd[1550]: time="2026-01-20T03:18:40.207622516Z" level=info msg="CreateContainer within sandbox \"db0f7e60a462e6ef0efb984a4617b310b19cfc8d85fd35b2c8dcf4e4b5699fcd\" for container &ContainerMetadata{Name:test,Attempt:0,}" Jan 20 03:18:40.218226 containerd[1550]: time="2026-01-20T03:18:40.218138699Z" level=info msg="Container 9cddfbf072ede261209765b1504e4dbf6ab1a0999dbbfa1b190b1c13dcb730c2: CDI devices from CRI Config.CDIDevices: []" Jan 20 03:18:40.232166 containerd[1550]: time="2026-01-20T03:18:40.232063626Z" level=info msg="CreateContainer within sandbox \"db0f7e60a462e6ef0efb984a4617b310b19cfc8d85fd35b2c8dcf4e4b5699fcd\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"9cddfbf072ede261209765b1504e4dbf6ab1a0999dbbfa1b190b1c13dcb730c2\"" Jan 20 03:18:40.232912 containerd[1550]: time="2026-01-20T03:18:40.232867863Z" level=info msg="StartContainer for \"9cddfbf072ede261209765b1504e4dbf6ab1a0999dbbfa1b190b1c13dcb730c2\"" Jan 20 03:18:40.234209 containerd[1550]: time="2026-01-20T03:18:40.234163706Z" level=info msg="connecting to shim 9cddfbf072ede261209765b1504e4dbf6ab1a0999dbbfa1b190b1c13dcb730c2" address="unix:///run/containerd/s/a7d38b884888e54c61e6537fa85301658e741a16f585747c77e2708cf849034c" protocol=ttrpc version=3 Jan 20 03:18:40.264810 systemd[1]: Started cri-containerd-9cddfbf072ede261209765b1504e4dbf6ab1a0999dbbfa1b190b1c13dcb730c2.scope - libcontainer container 9cddfbf072ede261209765b1504e4dbf6ab1a0999dbbfa1b190b1c13dcb730c2. 
Jan 20 03:18:40.312541 containerd[1550]: time="2026-01-20T03:18:40.312393080Z" level=info msg="StartContainer for \"9cddfbf072ede261209765b1504e4dbf6ab1a0999dbbfa1b190b1c13dcb730c2\" returns successfully" Jan 20 03:18:40.772184 kubelet[1885]: E0120 03:18:40.772005 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 03:18:40.860907 systemd-networkd[1472]: lxc30ea0dc3635e: Gained IPv6LL Jan 20 03:18:41.131520 kubelet[1885]: I0120 03:18:41.130917 1885 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=12.017578698 podStartE2EDuration="12.130902987s" podCreationTimestamp="2026-01-20 03:18:29 +0000 UTC" firstStartedPulling="2026-01-20 03:18:40.089909975 +0000 UTC m=+46.157488297" lastFinishedPulling="2026-01-20 03:18:40.203234263 +0000 UTC m=+46.270812586" observedRunningTime="2026-01-20 03:18:41.130682657 +0000 UTC m=+47.198260980" watchObservedRunningTime="2026-01-20 03:18:41.130902987 +0000 UTC m=+47.198481311" Jan 20 03:18:41.773062 kubelet[1885]: E0120 03:18:41.772929 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 03:18:42.774072 kubelet[1885]: E0120 03:18:42.773953 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 03:18:43.775133 kubelet[1885]: E0120 03:18:43.774971 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 03:18:44.776070 kubelet[1885]: E0120 03:18:44.775902 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 03:18:45.777177 kubelet[1885]: E0120 03:18:45.777068 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 03:18:46.777976 kubelet[1885]: E0120 03:18:46.777858 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 03:18:47.170149 containerd[1550]: time="2026-01-20T03:18:47.170080455Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 20 03:18:47.179149 containerd[1550]: time="2026-01-20T03:18:47.179091835Z" level=info msg="StopContainer for \"0f5108679775f35b1ab3b60417d1094f7992ae6477ba139c62c49d5edf813e21\" with timeout 2 (s)" Jan 20 03:18:47.179611 containerd[1550]: time="2026-01-20T03:18:47.179547493Z" level=info msg="Stop container \"0f5108679775f35b1ab3b60417d1094f7992ae6477ba139c62c49d5edf813e21\" with signal terminated" Jan 20 03:18:47.190896 systemd-networkd[1472]: lxc_health: Link DOWN Jan 20 03:18:47.190908 systemd-networkd[1472]: lxc_health: Lost carrier Jan 20 03:18:47.208634 systemd[1]: cri-containerd-0f5108679775f35b1ab3b60417d1094f7992ae6477ba139c62c49d5edf813e21.scope: Deactivated successfully. Jan 20 03:18:47.209031 systemd[1]: cri-containerd-0f5108679775f35b1ab3b60417d1094f7992ae6477ba139c62c49d5edf813e21.scope: Consumed 7.444s CPU time, 123.9M memory peak, 112K read from disk, 13.3M written to disk. 
Jan 20 03:18:47.211303 containerd[1550]: time="2026-01-20T03:18:47.211184972Z" level=info msg="received container exit event container_id:\"0f5108679775f35b1ab3b60417d1094f7992ae6477ba139c62c49d5edf813e21\" id:\"0f5108679775f35b1ab3b60417d1094f7992ae6477ba139c62c49d5edf813e21\" pid:2420 exited_at:{seconds:1768879127 nanos:209671542}" Jan 20 03:18:47.241142 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0f5108679775f35b1ab3b60417d1094f7992ae6477ba139c62c49d5edf813e21-rootfs.mount: Deactivated successfully. Jan 20 03:18:47.286539 containerd[1550]: time="2026-01-20T03:18:47.285169165Z" level=info msg="StopContainer for \"0f5108679775f35b1ab3b60417d1094f7992ae6477ba139c62c49d5edf813e21\" returns successfully" Jan 20 03:18:47.289206 containerd[1550]: time="2026-01-20T03:18:47.288693171Z" level=info msg="StopPodSandbox for \"88fc39901f8fe66268d55c6870dfce307c41b10719aeec3603b84bbca1183790\"" Jan 20 03:18:47.289317 containerd[1550]: time="2026-01-20T03:18:47.289210474Z" level=info msg="Container to stop \"67962695c31bab1bb9b43efdb98092f23b08f6d49ae0a0c4320344b80891cb2b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 20 03:18:47.289317 containerd[1550]: time="2026-01-20T03:18:47.289230592Z" level=info msg="Container to stop \"d0296f2eb3b7aa089312eab27e859f17a77fa14ff6e2437924e9ff7f7c2015de\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 20 03:18:47.289317 containerd[1550]: time="2026-01-20T03:18:47.289244358Z" level=info msg="Container to stop \"cf01eb7bbbdfeaec72b0a78598363bd7a42927bdbd32935fb00d68dd4c7c6392\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 20 03:18:47.289317 containerd[1550]: time="2026-01-20T03:18:47.289257142Z" level=info msg="Container to stop \"42152ae6528e7a487af29cee25198c83c7c4e43f57916ac28617a73e8321173e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 20 03:18:47.289317 containerd[1550]: time="2026-01-20T03:18:47.289269285Z" level=info msg="Container to stop \"0f5108679775f35b1ab3b60417d1094f7992ae6477ba139c62c49d5edf813e21\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 20 03:18:47.301630 systemd[1]: cri-containerd-88fc39901f8fe66268d55c6870dfce307c41b10719aeec3603b84bbca1183790.scope: Deactivated successfully. Jan 20 03:18:47.326734 containerd[1550]: time="2026-01-20T03:18:47.326615953Z" level=info msg="received sandbox exit event container_id:\"88fc39901f8fe66268d55c6870dfce307c41b10719aeec3603b84bbca1183790\" id:\"88fc39901f8fe66268d55c6870dfce307c41b10719aeec3603b84bbca1183790\" exit_status:137 exited_at:{seconds:1768879127 nanos:326316538}" monitor_name=podsandbox Jan 20 03:18:47.399924 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-88fc39901f8fe66268d55c6870dfce307c41b10719aeec3603b84bbca1183790-rootfs.mount: Deactivated successfully. 
Jan 20 03:18:47.408209 containerd[1550]: time="2026-01-20T03:18:47.408100146Z" level=info msg="shim disconnected" id=88fc39901f8fe66268d55c6870dfce307c41b10719aeec3603b84bbca1183790 namespace=k8s.io Jan 20 03:18:47.408209 containerd[1550]: time="2026-01-20T03:18:47.408156692Z" level=warning msg="cleaning up after shim disconnected" id=88fc39901f8fe66268d55c6870dfce307c41b10719aeec3603b84bbca1183790 namespace=k8s.io Jan 20 03:18:47.408209 containerd[1550]: time="2026-01-20T03:18:47.408165497Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 20 03:18:47.430576 containerd[1550]: time="2026-01-20T03:18:47.428913498Z" level=info msg="received sandbox container exit event sandbox_id:\"88fc39901f8fe66268d55c6870dfce307c41b10719aeec3603b84bbca1183790\" exit_status:137 exited_at:{seconds:1768879127 nanos:326316538}" monitor_name=criService Jan 20 03:18:47.430576 containerd[1550]: time="2026-01-20T03:18:47.429909919Z" level=info msg="TearDown network for sandbox \"88fc39901f8fe66268d55c6870dfce307c41b10719aeec3603b84bbca1183790\" successfully" Jan 20 03:18:47.430576 containerd[1550]: time="2026-01-20T03:18:47.429947618Z" level=info msg="StopPodSandbox for \"88fc39901f8fe66268d55c6870dfce307c41b10719aeec3603b84bbca1183790\" returns successfully" Jan 20 03:18:47.432079 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-88fc39901f8fe66268d55c6870dfce307c41b10719aeec3603b84bbca1183790-shm.mount: Deactivated successfully. Jan 20 03:18:47.485668 kubelet[1885]: I0120 03:18:47.485569 1885 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/eb7c50b2-127d-433e-8b60-fc5452e0e1e0-clustermesh-secrets\") pod \"eb7c50b2-127d-433e-8b60-fc5452e0e1e0\" (UID: \"eb7c50b2-127d-433e-8b60-fc5452e0e1e0\") " Jan 20 03:18:47.485668 kubelet[1885]: I0120 03:18:47.485650 1885 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/eb7c50b2-127d-433e-8b60-fc5452e0e1e0-hostproc\") pod \"eb7c50b2-127d-433e-8b60-fc5452e0e1e0\" (UID: \"eb7c50b2-127d-433e-8b60-fc5452e0e1e0\") " Jan 20 03:18:47.485910 kubelet[1885]: I0120 03:18:47.485679 1885 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/eb7c50b2-127d-433e-8b60-fc5452e0e1e0-host-proc-sys-kernel\") pod \"eb7c50b2-127d-433e-8b60-fc5452e0e1e0\" (UID: \"eb7c50b2-127d-433e-8b60-fc5452e0e1e0\") " Jan 20 03:18:47.485910 kubelet[1885]: I0120 03:18:47.485712 1885 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/eb7c50b2-127d-433e-8b60-fc5452e0e1e0-hubble-tls\") pod \"eb7c50b2-127d-433e-8b60-fc5452e0e1e0\" (UID: \"eb7c50b2-127d-433e-8b60-fc5452e0e1e0\") " Jan 20 03:18:47.485910 kubelet[1885]: I0120 03:18:47.485734 1885 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/eb7c50b2-127d-433e-8b60-fc5452e0e1e0-host-proc-sys-net\") pod \"eb7c50b2-127d-433e-8b60-fc5452e0e1e0\" (UID: \"eb7c50b2-127d-433e-8b60-fc5452e0e1e0\") " Jan 20 03:18:47.485910 kubelet[1885]: I0120 03:18:47.485891 1885 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eb7c50b2-127d-433e-8b60-fc5452e0e1e0-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "eb7c50b2-127d-433e-8b60-fc5452e0e1e0" (UID: 
"eb7c50b2-127d-433e-8b60-fc5452e0e1e0"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 20 03:18:47.486018 kubelet[1885]: I0120 03:18:47.485986 1885 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eb7c50b2-127d-433e-8b60-fc5452e0e1e0-hostproc" (OuterVolumeSpecName: "hostproc") pod "eb7c50b2-127d-433e-8b60-fc5452e0e1e0" (UID: "eb7c50b2-127d-433e-8b60-fc5452e0e1e0"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 20 03:18:47.486539 kubelet[1885]: I0120 03:18:47.486407 1885 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/eb7c50b2-127d-433e-8b60-fc5452e0e1e0-cni-path\") pod \"eb7c50b2-127d-433e-8b60-fc5452e0e1e0\" (UID: \"eb7c50b2-127d-433e-8b60-fc5452e0e1e0\") " Jan 20 03:18:47.486592 kubelet[1885]: I0120 03:18:47.486544 1885 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lkkgp\" (UniqueName: \"kubernetes.io/projected/eb7c50b2-127d-433e-8b60-fc5452e0e1e0-kube-api-access-lkkgp\") pod \"eb7c50b2-127d-433e-8b60-fc5452e0e1e0\" (UID: \"eb7c50b2-127d-433e-8b60-fc5452e0e1e0\") " Jan 20 03:18:47.486592 kubelet[1885]: I0120 03:18:47.486573 1885 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/eb7c50b2-127d-433e-8b60-fc5452e0e1e0-cilium-config-path\") pod \"eb7c50b2-127d-433e-8b60-fc5452e0e1e0\" (UID: \"eb7c50b2-127d-433e-8b60-fc5452e0e1e0\") " Jan 20 03:18:47.486652 kubelet[1885]: I0120 03:18:47.486595 1885 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/eb7c50b2-127d-433e-8b60-fc5452e0e1e0-cilium-cgroup\") pod \"eb7c50b2-127d-433e-8b60-fc5452e0e1e0\" (UID: \"eb7c50b2-127d-433e-8b60-fc5452e0e1e0\") " Jan 20 03:18:47.486652 kubelet[1885]: I0120 03:18:47.486617 1885 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/eb7c50b2-127d-433e-8b60-fc5452e0e1e0-xtables-lock\") pod \"eb7c50b2-127d-433e-8b60-fc5452e0e1e0\" (UID: \"eb7c50b2-127d-433e-8b60-fc5452e0e1e0\") " Jan 20 03:18:47.486652 kubelet[1885]: I0120 03:18:47.486641 1885 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/eb7c50b2-127d-433e-8b60-fc5452e0e1e0-bpf-maps\") pod \"eb7c50b2-127d-433e-8b60-fc5452e0e1e0\" (UID: \"eb7c50b2-127d-433e-8b60-fc5452e0e1e0\") " Jan 20 03:18:47.486713 kubelet[1885]: I0120 03:18:47.486664 1885 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/eb7c50b2-127d-433e-8b60-fc5452e0e1e0-cilium-run\") pod \"eb7c50b2-127d-433e-8b60-fc5452e0e1e0\" (UID: \"eb7c50b2-127d-433e-8b60-fc5452e0e1e0\") " Jan 20 03:18:47.486713 kubelet[1885]: I0120 03:18:47.486685 1885 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/eb7c50b2-127d-433e-8b60-fc5452e0e1e0-etc-cni-netd\") pod \"eb7c50b2-127d-433e-8b60-fc5452e0e1e0\" (UID: \"eb7c50b2-127d-433e-8b60-fc5452e0e1e0\") " Jan 20 03:18:47.486713 kubelet[1885]: I0120 03:18:47.486704 1885 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/eb7c50b2-127d-433e-8b60-fc5452e0e1e0-lib-modules\") pod \"eb7c50b2-127d-433e-8b60-fc5452e0e1e0\" (UID: \"eb7c50b2-127d-433e-8b60-fc5452e0e1e0\") " Jan 20 03:18:47.486809 kubelet[1885]: I0120 03:18:47.486743 1885 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/eb7c50b2-127d-433e-8b60-fc5452e0e1e0-hostproc\") on node \"10.0.0.17\" DevicePath \"\"" Jan 20 03:18:47.486831 kubelet[1885]: I0120 03:18:47.486806 1885 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/eb7c50b2-127d-433e-8b60-fc5452e0e1e0-host-proc-sys-kernel\") on node \"10.0.0.17\" DevicePath \"\"" Jan 20 03:18:47.488485 kubelet[1885]: I0120 03:18:47.486848 1885 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eb7c50b2-127d-433e-8b60-fc5452e0e1e0-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "eb7c50b2-127d-433e-8b60-fc5452e0e1e0" (UID: "eb7c50b2-127d-433e-8b60-fc5452e0e1e0"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 20 03:18:47.488485 kubelet[1885]: I0120 03:18:47.486891 1885 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eb7c50b2-127d-433e-8b60-fc5452e0e1e0-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "eb7c50b2-127d-433e-8b60-fc5452e0e1e0" (UID: "eb7c50b2-127d-433e-8b60-fc5452e0e1e0"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 20 03:18:47.488485 kubelet[1885]: I0120 03:18:47.486919 1885 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eb7c50b2-127d-433e-8b60-fc5452e0e1e0-cni-path" (OuterVolumeSpecName: "cni-path") pod "eb7c50b2-127d-433e-8b60-fc5452e0e1e0" (UID: "eb7c50b2-127d-433e-8b60-fc5452e0e1e0"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 20 03:18:47.488582 kubelet[1885]: I0120 03:18:47.488559 1885 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eb7c50b2-127d-433e-8b60-fc5452e0e1e0-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "eb7c50b2-127d-433e-8b60-fc5452e0e1e0" (UID: "eb7c50b2-127d-433e-8b60-fc5452e0e1e0"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 20 03:18:47.490629 kubelet[1885]: I0120 03:18:47.490603 1885 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eb7c50b2-127d-433e-8b60-fc5452e0e1e0-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "eb7c50b2-127d-433e-8b60-fc5452e0e1e0" (UID: "eb7c50b2-127d-433e-8b60-fc5452e0e1e0"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 20 03:18:47.490746 kubelet[1885]: I0120 03:18:47.490727 1885 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eb7c50b2-127d-433e-8b60-fc5452e0e1e0-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "eb7c50b2-127d-433e-8b60-fc5452e0e1e0" (UID: "eb7c50b2-127d-433e-8b60-fc5452e0e1e0"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 20 03:18:47.490862 kubelet[1885]: I0120 03:18:47.490849 1885 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eb7c50b2-127d-433e-8b60-fc5452e0e1e0-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "eb7c50b2-127d-433e-8b60-fc5452e0e1e0" (UID: "eb7c50b2-127d-433e-8b60-fc5452e0e1e0"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 20 03:18:47.490927 kubelet[1885]: I0120 03:18:47.490916 1885 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eb7c50b2-127d-433e-8b60-fc5452e0e1e0-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "eb7c50b2-127d-433e-8b60-fc5452e0e1e0" (UID: "eb7c50b2-127d-433e-8b60-fc5452e0e1e0"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 20 03:18:47.492038 systemd[1]: var-lib-kubelet-pods-eb7c50b2\x2d127d\x2d433e\x2d8b60\x2dfc5452e0e1e0-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 20 03:18:47.492196 systemd[1]: var-lib-kubelet-pods-eb7c50b2\x2d127d\x2d433e\x2d8b60\x2dfc5452e0e1e0-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 20 03:18:47.495670 kubelet[1885]: I0120 03:18:47.495580 1885 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb7c50b2-127d-433e-8b60-fc5452e0e1e0-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "eb7c50b2-127d-433e-8b60-fc5452e0e1e0" (UID: "eb7c50b2-127d-433e-8b60-fc5452e0e1e0"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 03:18:47.495728 systemd[1]: var-lib-kubelet-pods-eb7c50b2\x2d127d\x2d433e\x2d8b60\x2dfc5452e0e1e0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dlkkgp.mount: Deactivated successfully. Jan 20 03:18:47.496273 kubelet[1885]: I0120 03:18:47.496052 1885 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb7c50b2-127d-433e-8b60-fc5452e0e1e0-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "eb7c50b2-127d-433e-8b60-fc5452e0e1e0" (UID: "eb7c50b2-127d-433e-8b60-fc5452e0e1e0"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 03:18:47.497201 kubelet[1885]: I0120 03:18:47.497153 1885 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb7c50b2-127d-433e-8b60-fc5452e0e1e0-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "eb7c50b2-127d-433e-8b60-fc5452e0e1e0" (UID: "eb7c50b2-127d-433e-8b60-fc5452e0e1e0"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 03:18:47.497377 kubelet[1885]: I0120 03:18:47.497349 1885 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb7c50b2-127d-433e-8b60-fc5452e0e1e0-kube-api-access-lkkgp" (OuterVolumeSpecName: "kube-api-access-lkkgp") pod "eb7c50b2-127d-433e-8b60-fc5452e0e1e0" (UID: "eb7c50b2-127d-433e-8b60-fc5452e0e1e0"). InnerVolumeSpecName "kube-api-access-lkkgp". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 03:18:47.587387 kubelet[1885]: I0120 03:18:47.587205 1885 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/eb7c50b2-127d-433e-8b60-fc5452e0e1e0-hubble-tls\") on node \"10.0.0.17\" DevicePath \"\"" Jan 20 03:18:47.587387 kubelet[1885]: I0120 03:18:47.587319 1885 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/eb7c50b2-127d-433e-8b60-fc5452e0e1e0-host-proc-sys-net\") on node \"10.0.0.17\" DevicePath \"\"" Jan 20 03:18:47.587387 kubelet[1885]: I0120 03:18:47.587333 1885 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/eb7c50b2-127d-433e-8b60-fc5452e0e1e0-cni-path\") on node \"10.0.0.17\" DevicePath \"\"" Jan 20 03:18:47.587387 kubelet[1885]: I0120 03:18:47.587342 1885 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lkkgp\" (UniqueName: \"kubernetes.io/projected/eb7c50b2-127d-433e-8b60-fc5452e0e1e0-kube-api-access-lkkgp\") on node \"10.0.0.17\" DevicePath \"\"" Jan 20 03:18:47.587387 kubelet[1885]: I0120 03:18:47.587350 1885 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/eb7c50b2-127d-433e-8b60-fc5452e0e1e0-cilium-config-path\") on node \"10.0.0.17\" DevicePath \"\"" Jan 20 03:18:47.587387 kubelet[1885]: I0120 03:18:47.587358 1885 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/eb7c50b2-127d-433e-8b60-fc5452e0e1e0-cilium-cgroup\") on node \"10.0.0.17\" DevicePath \"\"" Jan 20 03:18:47.587387 kubelet[1885]: I0120 03:18:47.587366 1885 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/eb7c50b2-127d-433e-8b60-fc5452e0e1e0-xtables-lock\") on node \"10.0.0.17\" DevicePath \"\"" Jan 20 03:18:47.587387 kubelet[1885]: I0120 03:18:47.587372 1885 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/eb7c50b2-127d-433e-8b60-fc5452e0e1e0-bpf-maps\") on node \"10.0.0.17\" DevicePath \"\"" Jan 20 03:18:47.588012 kubelet[1885]: I0120 03:18:47.587380 1885 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/eb7c50b2-127d-433e-8b60-fc5452e0e1e0-cilium-run\") on node \"10.0.0.17\" DevicePath \"\"" Jan 20 03:18:47.588012 kubelet[1885]: I0120 03:18:47.587387 1885 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/eb7c50b2-127d-433e-8b60-fc5452e0e1e0-etc-cni-netd\") on node \"10.0.0.17\" DevicePath \"\"" Jan 20 03:18:47.588012 kubelet[1885]: I0120 03:18:47.587393 1885 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/eb7c50b2-127d-433e-8b60-fc5452e0e1e0-lib-modules\") on node \"10.0.0.17\" DevicePath \"\"" Jan 20 03:18:47.588012 kubelet[1885]: I0120 03:18:47.587400 1885 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/eb7c50b2-127d-433e-8b60-fc5452e0e1e0-clustermesh-secrets\") on node \"10.0.0.17\" DevicePath \"\"" Jan 20 03:18:47.778397 kubelet[1885]: E0120 03:18:47.778132 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 03:18:48.135424 kubelet[1885]: I0120 03:18:48.135349 1885 scope.go:117] 
"RemoveContainer" containerID="0f5108679775f35b1ab3b60417d1094f7992ae6477ba139c62c49d5edf813e21" Jan 20 03:18:48.137180 containerd[1550]: time="2026-01-20T03:18:48.137139111Z" level=info msg="RemoveContainer for \"0f5108679775f35b1ab3b60417d1094f7992ae6477ba139c62c49d5edf813e21\"" Jan 20 03:18:48.143592 systemd[1]: Removed slice kubepods-burstable-podeb7c50b2_127d_433e_8b60_fc5452e0e1e0.slice - libcontainer container kubepods-burstable-podeb7c50b2_127d_433e_8b60_fc5452e0e1e0.slice. Jan 20 03:18:48.143751 systemd[1]: kubepods-burstable-podeb7c50b2_127d_433e_8b60_fc5452e0e1e0.slice: Consumed 7.615s CPU time, 124.4M memory peak, 112K read from disk, 13.3M written to disk. Jan 20 03:18:48.144859 containerd[1550]: time="2026-01-20T03:18:48.144585983Z" level=info msg="RemoveContainer for \"0f5108679775f35b1ab3b60417d1094f7992ae6477ba139c62c49d5edf813e21\" returns successfully" Jan 20 03:18:48.145052 kubelet[1885]: I0120 03:18:48.144993 1885 scope.go:117] "RemoveContainer" containerID="42152ae6528e7a487af29cee25198c83c7c4e43f57916ac28617a73e8321173e" Jan 20 03:18:48.147151 containerd[1550]: time="2026-01-20T03:18:48.147114257Z" level=info msg="RemoveContainer for \"42152ae6528e7a487af29cee25198c83c7c4e43f57916ac28617a73e8321173e\"" Jan 20 03:18:48.152158 containerd[1550]: time="2026-01-20T03:18:48.152123973Z" level=info msg="RemoveContainer for \"42152ae6528e7a487af29cee25198c83c7c4e43f57916ac28617a73e8321173e\" returns successfully" Jan 20 03:18:48.152285 kubelet[1885]: I0120 03:18:48.152254 1885 scope.go:117] "RemoveContainer" containerID="d0296f2eb3b7aa089312eab27e859f17a77fa14ff6e2437924e9ff7f7c2015de" Jan 20 03:18:48.155627 containerd[1550]: time="2026-01-20T03:18:48.155540066Z" level=info msg="RemoveContainer for \"d0296f2eb3b7aa089312eab27e859f17a77fa14ff6e2437924e9ff7f7c2015de\"" Jan 20 03:18:48.161153 containerd[1550]: time="2026-01-20T03:18:48.161008276Z" level=info msg="RemoveContainer for \"d0296f2eb3b7aa089312eab27e859f17a77fa14ff6e2437924e9ff7f7c2015de\" returns successfully" Jan 20 03:18:48.161309 kubelet[1885]: I0120 03:18:48.161260 1885 scope.go:117] "RemoveContainer" containerID="cf01eb7bbbdfeaec72b0a78598363bd7a42927bdbd32935fb00d68dd4c7c6392" Jan 20 03:18:48.163632 containerd[1550]: time="2026-01-20T03:18:48.163202305Z" level=info msg="RemoveContainer for \"cf01eb7bbbdfeaec72b0a78598363bd7a42927bdbd32935fb00d68dd4c7c6392\"" Jan 20 03:18:48.167543 containerd[1550]: time="2026-01-20T03:18:48.167377044Z" level=info msg="RemoveContainer for \"cf01eb7bbbdfeaec72b0a78598363bd7a42927bdbd32935fb00d68dd4c7c6392\" returns successfully" Jan 20 03:18:48.169583 kubelet[1885]: I0120 03:18:48.169551 1885 scope.go:117] "RemoveContainer" containerID="67962695c31bab1bb9b43efdb98092f23b08f6d49ae0a0c4320344b80891cb2b" Jan 20 03:18:48.172346 containerd[1550]: time="2026-01-20T03:18:48.172278049Z" level=info msg="RemoveContainer for \"67962695c31bab1bb9b43efdb98092f23b08f6d49ae0a0c4320344b80891cb2b\"" Jan 20 03:18:48.176305 containerd[1550]: time="2026-01-20T03:18:48.176249340Z" level=info msg="RemoveContainer for \"67962695c31bab1bb9b43efdb98092f23b08f6d49ae0a0c4320344b80891cb2b\" returns successfully" Jan 20 03:18:48.176538 kubelet[1885]: I0120 03:18:48.176492 1885 scope.go:117] "RemoveContainer" containerID="0f5108679775f35b1ab3b60417d1094f7992ae6477ba139c62c49d5edf813e21" Jan 20 03:18:48.176714 containerd[1550]: time="2026-01-20T03:18:48.176646781Z" level=error msg="ContainerStatus for \"0f5108679775f35b1ab3b60417d1094f7992ae6477ba139c62c49d5edf813e21\" failed" error="rpc error: code = NotFound 
desc = an error occurred when try to find container \"0f5108679775f35b1ab3b60417d1094f7992ae6477ba139c62c49d5edf813e21\": not found" Jan 20 03:18:48.176982 kubelet[1885]: E0120 03:18:48.176873 1885 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0f5108679775f35b1ab3b60417d1094f7992ae6477ba139c62c49d5edf813e21\": not found" containerID="0f5108679775f35b1ab3b60417d1094f7992ae6477ba139c62c49d5edf813e21" Jan 20 03:18:48.176982 kubelet[1885]: I0120 03:18:48.176915 1885 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0f5108679775f35b1ab3b60417d1094f7992ae6477ba139c62c49d5edf813e21"} err="failed to get container status \"0f5108679775f35b1ab3b60417d1094f7992ae6477ba139c62c49d5edf813e21\": rpc error: code = NotFound desc = an error occurred when try to find container \"0f5108679775f35b1ab3b60417d1094f7992ae6477ba139c62c49d5edf813e21\": not found" Jan 20 03:18:48.176982 kubelet[1885]: I0120 03:18:48.176946 1885 scope.go:117] "RemoveContainer" containerID="42152ae6528e7a487af29cee25198c83c7c4e43f57916ac28617a73e8321173e" Jan 20 03:18:48.177320 containerd[1550]: time="2026-01-20T03:18:48.177155150Z" level=error msg="ContainerStatus for \"42152ae6528e7a487af29cee25198c83c7c4e43f57916ac28617a73e8321173e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"42152ae6528e7a487af29cee25198c83c7c4e43f57916ac28617a73e8321173e\": not found" Jan 20 03:18:48.177493 kubelet[1885]: E0120 03:18:48.177297 1885 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"42152ae6528e7a487af29cee25198c83c7c4e43f57916ac28617a73e8321173e\": not found" containerID="42152ae6528e7a487af29cee25198c83c7c4e43f57916ac28617a73e8321173e" Jan 20 03:18:48.177493 kubelet[1885]: I0120 03:18:48.177411 1885 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"42152ae6528e7a487af29cee25198c83c7c4e43f57916ac28617a73e8321173e"} err="failed to get container status \"42152ae6528e7a487af29cee25198c83c7c4e43f57916ac28617a73e8321173e\": rpc error: code = NotFound desc = an error occurred when try to find container \"42152ae6528e7a487af29cee25198c83c7c4e43f57916ac28617a73e8321173e\": not found" Jan 20 03:18:48.177493 kubelet[1885]: I0120 03:18:48.177423 1885 scope.go:117] "RemoveContainer" containerID="d0296f2eb3b7aa089312eab27e859f17a77fa14ff6e2437924e9ff7f7c2015de" Jan 20 03:18:48.177844 containerd[1550]: time="2026-01-20T03:18:48.177670005Z" level=error msg="ContainerStatus for \"d0296f2eb3b7aa089312eab27e859f17a77fa14ff6e2437924e9ff7f7c2015de\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d0296f2eb3b7aa089312eab27e859f17a77fa14ff6e2437924e9ff7f7c2015de\": not found" Jan 20 03:18:48.177895 kubelet[1885]: E0120 03:18:48.177836 1885 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d0296f2eb3b7aa089312eab27e859f17a77fa14ff6e2437924e9ff7f7c2015de\": not found" containerID="d0296f2eb3b7aa089312eab27e859f17a77fa14ff6e2437924e9ff7f7c2015de" Jan 20 03:18:48.177895 kubelet[1885]: I0120 03:18:48.177865 1885 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d0296f2eb3b7aa089312eab27e859f17a77fa14ff6e2437924e9ff7f7c2015de"} err="failed to get 
container status \"d0296f2eb3b7aa089312eab27e859f17a77fa14ff6e2437924e9ff7f7c2015de\": rpc error: code = NotFound desc = an error occurred when try to find container \"d0296f2eb3b7aa089312eab27e859f17a77fa14ff6e2437924e9ff7f7c2015de\": not found" Jan 20 03:18:48.177895 kubelet[1885]: I0120 03:18:48.177885 1885 scope.go:117] "RemoveContainer" containerID="cf01eb7bbbdfeaec72b0a78598363bd7a42927bdbd32935fb00d68dd4c7c6392" Jan 20 03:18:48.178221 containerd[1550]: time="2026-01-20T03:18:48.178059965Z" level=error msg="ContainerStatus for \"cf01eb7bbbdfeaec72b0a78598363bd7a42927bdbd32935fb00d68dd4c7c6392\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cf01eb7bbbdfeaec72b0a78598363bd7a42927bdbd32935fb00d68dd4c7c6392\": not found" Jan 20 03:18:48.178502 kubelet[1885]: E0120 03:18:48.178392 1885 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cf01eb7bbbdfeaec72b0a78598363bd7a42927bdbd32935fb00d68dd4c7c6392\": not found" containerID="cf01eb7bbbdfeaec72b0a78598363bd7a42927bdbd32935fb00d68dd4c7c6392" Jan 20 03:18:48.178553 kubelet[1885]: I0120 03:18:48.178516 1885 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cf01eb7bbbdfeaec72b0a78598363bd7a42927bdbd32935fb00d68dd4c7c6392"} err="failed to get container status \"cf01eb7bbbdfeaec72b0a78598363bd7a42927bdbd32935fb00d68dd4c7c6392\": rpc error: code = NotFound desc = an error occurred when try to find container \"cf01eb7bbbdfeaec72b0a78598363bd7a42927bdbd32935fb00d68dd4c7c6392\": not found" Jan 20 03:18:48.178553 kubelet[1885]: I0120 03:18:48.178538 1885 scope.go:117] "RemoveContainer" containerID="67962695c31bab1bb9b43efdb98092f23b08f6d49ae0a0c4320344b80891cb2b" Jan 20 03:18:48.178850 containerd[1550]: time="2026-01-20T03:18:48.178792922Z" level=error msg="ContainerStatus for \"67962695c31bab1bb9b43efdb98092f23b08f6d49ae0a0c4320344b80891cb2b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"67962695c31bab1bb9b43efdb98092f23b08f6d49ae0a0c4320344b80891cb2b\": not found" Jan 20 03:18:48.178991 kubelet[1885]: E0120 03:18:48.178958 1885 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"67962695c31bab1bb9b43efdb98092f23b08f6d49ae0a0c4320344b80891cb2b\": not found" containerID="67962695c31bab1bb9b43efdb98092f23b08f6d49ae0a0c4320344b80891cb2b" Jan 20 03:18:48.179086 kubelet[1885]: I0120 03:18:48.178995 1885 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"67962695c31bab1bb9b43efdb98092f23b08f6d49ae0a0c4320344b80891cb2b"} err="failed to get container status \"67962695c31bab1bb9b43efdb98092f23b08f6d49ae0a0c4320344b80891cb2b\": rpc error: code = NotFound desc = an error occurred when try to find container \"67962695c31bab1bb9b43efdb98092f23b08f6d49ae0a0c4320344b80891cb2b\": not found" Jan 20 03:18:48.779563 kubelet[1885]: E0120 03:18:48.779409 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 03:18:48.967242 kubelet[1885]: I0120 03:18:48.967126 1885 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb7c50b2-127d-433e-8b60-fc5452e0e1e0" path="/var/lib/kubelet/pods/eb7c50b2-127d-433e-8b60-fc5452e0e1e0/volumes" Jan 20 03:18:49.780236 kubelet[1885]: E0120 03:18:49.780093 1885 
file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 03:18:49.943805 kubelet[1885]: E0120 03:18:49.943672 1885 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 03:18:49.965108 kubelet[1885]: E0120 03:18:49.965079 1885 reflector.go:200] "Failed to watch" err="failed to list *v1.Secret: secrets \"hubble-server-certs\" is forbidden: User \"system:node:10.0.0.17\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node '10.0.0.17' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"hubble-server-certs\"" type="*v1.Secret" Jan 20 03:18:49.965406 kubelet[1885]: E0120 03:18:49.965076 1885 reflector.go:200] "Failed to watch" err="failed to list *v1.Secret: secrets \"cilium-ipsec-keys\" is forbidden: User \"system:node:10.0.0.17\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node '10.0.0.17' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"cilium-ipsec-keys\"" type="*v1.Secret" Jan 20 03:18:49.965406 kubelet[1885]: E0120 03:18:49.965163 1885 reflector.go:200] "Failed to watch" err="failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:10.0.0.17\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node '10.0.0.17' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"cilium-clustermesh\"" type="*v1.Secret" Jan 20 03:18:49.967346 systemd[1]: Created slice kubepods-besteffort-pod75a7b5e2_3273_47df_8e34_b8da04f9e5e6.slice - libcontainer container kubepods-besteffort-pod75a7b5e2_3273_47df_8e34_b8da04f9e5e6.slice. Jan 20 03:18:49.974533 systemd[1]: Created slice kubepods-burstable-pod3889783b_ecff_4c38_a10d_c15d11d7e004.slice - libcontainer container kubepods-burstable-pod3889783b_ecff_4c38_a10d_c15d11d7e004.slice. 
Jan 20 03:18:50.032670 kubelet[1885]: I0120 03:18:50.032348 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/75a7b5e2-3273-47df-8e34-b8da04f9e5e6-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-r9x28\" (UID: \"75a7b5e2-3273-47df-8e34-b8da04f9e5e6\") " pod="kube-system/cilium-operator-6c4d7847fc-r9x28" Jan 20 03:18:50.032670 kubelet[1885]: I0120 03:18:50.032400 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3889783b-ecff-4c38-a10d-c15d11d7e004-cilium-run\") pod \"cilium-szfbw\" (UID: \"3889783b-ecff-4c38-a10d-c15d11d7e004\") " pod="kube-system/cilium-szfbw" Jan 20 03:18:50.032670 kubelet[1885]: I0120 03:18:50.032417 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3889783b-ecff-4c38-a10d-c15d11d7e004-cilium-cgroup\") pod \"cilium-szfbw\" (UID: \"3889783b-ecff-4c38-a10d-c15d11d7e004\") " pod="kube-system/cilium-szfbw" Jan 20 03:18:50.032670 kubelet[1885]: I0120 03:18:50.032489 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3889783b-ecff-4c38-a10d-c15d11d7e004-host-proc-sys-kernel\") pod \"cilium-szfbw\" (UID: \"3889783b-ecff-4c38-a10d-c15d11d7e004\") " pod="kube-system/cilium-szfbw" Jan 20 03:18:50.032670 kubelet[1885]: I0120 03:18:50.032504 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-285x5\" (UniqueName: \"kubernetes.io/projected/75a7b5e2-3273-47df-8e34-b8da04f9e5e6-kube-api-access-285x5\") pod \"cilium-operator-6c4d7847fc-r9x28\" (UID: \"75a7b5e2-3273-47df-8e34-b8da04f9e5e6\") " pod="kube-system/cilium-operator-6c4d7847fc-r9x28" Jan 20 03:18:50.033028 kubelet[1885]: I0120 03:18:50.032517 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3889783b-ecff-4c38-a10d-c15d11d7e004-cni-path\") pod \"cilium-szfbw\" (UID: \"3889783b-ecff-4c38-a10d-c15d11d7e004\") " pod="kube-system/cilium-szfbw" Jan 20 03:18:50.033028 kubelet[1885]: I0120 03:18:50.032529 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3889783b-ecff-4c38-a10d-c15d11d7e004-xtables-lock\") pod \"cilium-szfbw\" (UID: \"3889783b-ecff-4c38-a10d-c15d11d7e004\") " pod="kube-system/cilium-szfbw" Jan 20 03:18:50.033028 kubelet[1885]: I0120 03:18:50.032551 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/3889783b-ecff-4c38-a10d-c15d11d7e004-cilium-ipsec-secrets\") pod \"cilium-szfbw\" (UID: \"3889783b-ecff-4c38-a10d-c15d11d7e004\") " pod="kube-system/cilium-szfbw" Jan 20 03:18:50.033028 kubelet[1885]: I0120 03:18:50.032575 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3889783b-ecff-4c38-a10d-c15d11d7e004-hubble-tls\") pod \"cilium-szfbw\" (UID: \"3889783b-ecff-4c38-a10d-c15d11d7e004\") " pod="kube-system/cilium-szfbw" Jan 20 03:18:50.033028 kubelet[1885]: I0120 03:18:50.032591 1885 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68wrv\" (UniqueName: \"kubernetes.io/projected/3889783b-ecff-4c38-a10d-c15d11d7e004-kube-api-access-68wrv\") pod \"cilium-szfbw\" (UID: \"3889783b-ecff-4c38-a10d-c15d11d7e004\") " pod="kube-system/cilium-szfbw" Jan 20 03:18:50.033028 kubelet[1885]: I0120 03:18:50.032605 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3889783b-ecff-4c38-a10d-c15d11d7e004-bpf-maps\") pod \"cilium-szfbw\" (UID: \"3889783b-ecff-4c38-a10d-c15d11d7e004\") " pod="kube-system/cilium-szfbw" Jan 20 03:18:50.033307 kubelet[1885]: I0120 03:18:50.032649 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3889783b-ecff-4c38-a10d-c15d11d7e004-hostproc\") pod \"cilium-szfbw\" (UID: \"3889783b-ecff-4c38-a10d-c15d11d7e004\") " pod="kube-system/cilium-szfbw" Jan 20 03:18:50.033307 kubelet[1885]: I0120 03:18:50.032665 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3889783b-ecff-4c38-a10d-c15d11d7e004-etc-cni-netd\") pod \"cilium-szfbw\" (UID: \"3889783b-ecff-4c38-a10d-c15d11d7e004\") " pod="kube-system/cilium-szfbw" Jan 20 03:18:50.033307 kubelet[1885]: I0120 03:18:50.032682 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3889783b-ecff-4c38-a10d-c15d11d7e004-cilium-config-path\") pod \"cilium-szfbw\" (UID: \"3889783b-ecff-4c38-a10d-c15d11d7e004\") " pod="kube-system/cilium-szfbw" Jan 20 03:18:50.033307 kubelet[1885]: I0120 03:18:50.032733 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3889783b-ecff-4c38-a10d-c15d11d7e004-host-proc-sys-net\") pod \"cilium-szfbw\" (UID: \"3889783b-ecff-4c38-a10d-c15d11d7e004\") " pod="kube-system/cilium-szfbw" Jan 20 03:18:50.033307 kubelet[1885]: I0120 03:18:50.032815 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3889783b-ecff-4c38-a10d-c15d11d7e004-lib-modules\") pod \"cilium-szfbw\" (UID: \"3889783b-ecff-4c38-a10d-c15d11d7e004\") " pod="kube-system/cilium-szfbw" Jan 20 03:18:50.033307 kubelet[1885]: I0120 03:18:50.032847 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3889783b-ecff-4c38-a10d-c15d11d7e004-clustermesh-secrets\") pod \"cilium-szfbw\" (UID: \"3889783b-ecff-4c38-a10d-c15d11d7e004\") " pod="kube-system/cilium-szfbw" Jan 20 03:18:50.271847 kubelet[1885]: E0120 03:18:50.271674 1885 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:18:50.273125 containerd[1550]: time="2026-01-20T03:18:50.272424918Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-r9x28,Uid:75a7b5e2-3273-47df-8e34-b8da04f9e5e6,Namespace:kube-system,Attempt:0,}" Jan 20 03:18:50.294322 containerd[1550]: time="2026-01-20T03:18:50.294069154Z" level=info msg="connecting to shim 
8ed3146590f13e35de47da68db3bdd8a780c37979aa0e0ad77def3a2156d41f2" address="unix:///run/containerd/s/99be908d7406bbaf207639c080733caddb4226a3adcfd7a90b0df8bd4fd602a9" namespace=k8s.io protocol=ttrpc version=3 Jan 20 03:18:50.338759 systemd[1]: Started cri-containerd-8ed3146590f13e35de47da68db3bdd8a780c37979aa0e0ad77def3a2156d41f2.scope - libcontainer container 8ed3146590f13e35de47da68db3bdd8a780c37979aa0e0ad77def3a2156d41f2. Jan 20 03:18:50.396190 containerd[1550]: time="2026-01-20T03:18:50.396067155Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-r9x28,Uid:75a7b5e2-3273-47df-8e34-b8da04f9e5e6,Namespace:kube-system,Attempt:0,} returns sandbox id \"8ed3146590f13e35de47da68db3bdd8a780c37979aa0e0ad77def3a2156d41f2\"" Jan 20 03:18:50.397224 kubelet[1885]: E0120 03:18:50.397183 1885 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:18:50.398314 containerd[1550]: time="2026-01-20T03:18:50.398124846Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 20 03:18:50.780474 kubelet[1885]: E0120 03:18:50.780367 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 03:18:51.135314 kubelet[1885]: E0120 03:18:51.135223 1885 secret.go:189] Couldn't get secret kube-system/cilium-ipsec-keys: failed to sync secret cache: timed out waiting for the condition Jan 20 03:18:51.135560 kubelet[1885]: E0120 03:18:51.135393 1885 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3889783b-ecff-4c38-a10d-c15d11d7e004-cilium-ipsec-secrets podName:3889783b-ecff-4c38-a10d-c15d11d7e004 nodeName:}" failed. No retries permitted until 2026-01-20 03:18:51.63537176 +0000 UTC m=+57.702950083 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cilium-ipsec-secrets" (UniqueName: "kubernetes.io/secret/3889783b-ecff-4c38-a10d-c15d11d7e004-cilium-ipsec-secrets") pod "cilium-szfbw" (UID: "3889783b-ecff-4c38-a10d-c15d11d7e004") : failed to sync secret cache: timed out waiting for the condition Jan 20 03:18:51.589221 containerd[1550]: time="2026-01-20T03:18:51.589113735Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 03:18:51.590240 containerd[1550]: time="2026-01-20T03:18:51.590142048Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jan 20 03:18:51.591666 containerd[1550]: time="2026-01-20T03:18:51.591543234Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 03:18:51.593172 containerd[1550]: time="2026-01-20T03:18:51.593111171Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 1.194959836s" Jan 20 03:18:51.593172 containerd[1550]: time="2026-01-20T03:18:51.593156115Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jan 20 03:18:51.598031 containerd[1550]: time="2026-01-20T03:18:51.597977416Z" level=info msg="CreateContainer within sandbox \"8ed3146590f13e35de47da68db3bdd8a780c37979aa0e0ad77def3a2156d41f2\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 20 03:18:51.607193 containerd[1550]: time="2026-01-20T03:18:51.607137311Z" level=info msg="Container 7325fb2a5c5ee2c5fc1c6ae86e611dbe5ea1114e2d8a0902577acecf80088b08: CDI devices from CRI Config.CDIDevices: []" Jan 20 03:18:51.614645 containerd[1550]: time="2026-01-20T03:18:51.614535834Z" level=info msg="CreateContainer within sandbox \"8ed3146590f13e35de47da68db3bdd8a780c37979aa0e0ad77def3a2156d41f2\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"7325fb2a5c5ee2c5fc1c6ae86e611dbe5ea1114e2d8a0902577acecf80088b08\"" Jan 20 03:18:51.615387 containerd[1550]: time="2026-01-20T03:18:51.615347385Z" level=info msg="StartContainer for \"7325fb2a5c5ee2c5fc1c6ae86e611dbe5ea1114e2d8a0902577acecf80088b08\"" Jan 20 03:18:51.616420 containerd[1550]: time="2026-01-20T03:18:51.616364350Z" level=info msg="connecting to shim 7325fb2a5c5ee2c5fc1c6ae86e611dbe5ea1114e2d8a0902577acecf80088b08" address="unix:///run/containerd/s/99be908d7406bbaf207639c080733caddb4226a3adcfd7a90b0df8bd4fd602a9" protocol=ttrpc version=3 Jan 20 03:18:51.634678 systemd[1]: Started cri-containerd-7325fb2a5c5ee2c5fc1c6ae86e611dbe5ea1114e2d8a0902577acecf80088b08.scope - libcontainer container 7325fb2a5c5ee2c5fc1c6ae86e611dbe5ea1114e2d8a0902577acecf80088b08. 
Jan 20 03:18:51.721513 containerd[1550]: time="2026-01-20T03:18:51.721332855Z" level=info msg="StartContainer for \"7325fb2a5c5ee2c5fc1c6ae86e611dbe5ea1114e2d8a0902577acecf80088b08\" returns successfully" Jan 20 03:18:51.781081 kubelet[1885]: E0120 03:18:51.781025 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 03:18:51.797719 kubelet[1885]: E0120 03:18:51.797699 1885 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:18:51.798306 containerd[1550]: time="2026-01-20T03:18:51.798270377Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-szfbw,Uid:3889783b-ecff-4c38-a10d-c15d11d7e004,Namespace:kube-system,Attempt:0,}" Jan 20 03:18:51.818852 containerd[1550]: time="2026-01-20T03:18:51.818697183Z" level=info msg="connecting to shim 1f930a820c47f915bffcc735bb33e5d84bbea2bb00f96d416cbb76772d98fd6a" address="unix:///run/containerd/s/c867b669737cc6b30af905584b2454affc2606a9f372d5176ba2c458a7963782" namespace=k8s.io protocol=ttrpc version=3 Jan 20 03:18:51.856663 systemd[1]: Started cri-containerd-1f930a820c47f915bffcc735bb33e5d84bbea2bb00f96d416cbb76772d98fd6a.scope - libcontainer container 1f930a820c47f915bffcc735bb33e5d84bbea2bb00f96d416cbb76772d98fd6a. Jan 20 03:18:51.887668 containerd[1550]: time="2026-01-20T03:18:51.887562229Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-szfbw,Uid:3889783b-ecff-4c38-a10d-c15d11d7e004,Namespace:kube-system,Attempt:0,} returns sandbox id \"1f930a820c47f915bffcc735bb33e5d84bbea2bb00f96d416cbb76772d98fd6a\"" Jan 20 03:18:51.889063 kubelet[1885]: E0120 03:18:51.888822 1885 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:18:51.895266 containerd[1550]: time="2026-01-20T03:18:51.895216876Z" level=info msg="CreateContainer within sandbox \"1f930a820c47f915bffcc735bb33e5d84bbea2bb00f96d416cbb76772d98fd6a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 20 03:18:51.906333 containerd[1550]: time="2026-01-20T03:18:51.906239359Z" level=info msg="Container 6f65c9ae272a3c45fa819d45d5ba41c6c3b46f2ac44cb83fe55a4d73031a824e: CDI devices from CRI Config.CDIDevices: []" Jan 20 03:18:51.912668 containerd[1550]: time="2026-01-20T03:18:51.912599147Z" level=info msg="CreateContainer within sandbox \"1f930a820c47f915bffcc735bb33e5d84bbea2bb00f96d416cbb76772d98fd6a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"6f65c9ae272a3c45fa819d45d5ba41c6c3b46f2ac44cb83fe55a4d73031a824e\"" Jan 20 03:18:51.913601 containerd[1550]: time="2026-01-20T03:18:51.913548396Z" level=info msg="StartContainer for \"6f65c9ae272a3c45fa819d45d5ba41c6c3b46f2ac44cb83fe55a4d73031a824e\"" Jan 20 03:18:51.915060 containerd[1550]: time="2026-01-20T03:18:51.915012295Z" level=info msg="connecting to shim 6f65c9ae272a3c45fa819d45d5ba41c6c3b46f2ac44cb83fe55a4d73031a824e" address="unix:///run/containerd/s/c867b669737cc6b30af905584b2454affc2606a9f372d5176ba2c458a7963782" protocol=ttrpc version=3 Jan 20 03:18:51.941657 systemd[1]: Started cri-containerd-6f65c9ae272a3c45fa819d45d5ba41c6c3b46f2ac44cb83fe55a4d73031a824e.scope - libcontainer container 6f65c9ae272a3c45fa819d45d5ba41c6c3b46f2ac44cb83fe55a4d73031a824e. 
Jan 20 03:18:51.982317 containerd[1550]: time="2026-01-20T03:18:51.982254256Z" level=info msg="StartContainer for \"6f65c9ae272a3c45fa819d45d5ba41c6c3b46f2ac44cb83fe55a4d73031a824e\" returns successfully" Jan 20 03:18:51.992563 systemd[1]: cri-containerd-6f65c9ae272a3c45fa819d45d5ba41c6c3b46f2ac44cb83fe55a4d73031a824e.scope: Deactivated successfully. Jan 20 03:18:51.993614 systemd[1]: cri-containerd-6f65c9ae272a3c45fa819d45d5ba41c6c3b46f2ac44cb83fe55a4d73031a824e.scope: Consumed 31ms CPU time, 6.9M memory peak, 3.2M written to disk. Jan 20 03:18:51.996609 containerd[1550]: time="2026-01-20T03:18:51.996524251Z" level=info msg="received container exit event container_id:\"6f65c9ae272a3c45fa819d45d5ba41c6c3b46f2ac44cb83fe55a4d73031a824e\" id:\"6f65c9ae272a3c45fa819d45d5ba41c6c3b46f2ac44cb83fe55a4d73031a824e\" pid:3614 exited_at:{seconds:1768879131 nanos:996003321}" Jan 20 03:18:52.151027 kubelet[1885]: E0120 03:18:52.150685 1885 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:18:52.154043 kubelet[1885]: E0120 03:18:52.153976 1885 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:18:52.158895 containerd[1550]: time="2026-01-20T03:18:52.158746987Z" level=info msg="CreateContainer within sandbox \"1f930a820c47f915bffcc735bb33e5d84bbea2bb00f96d416cbb76772d98fd6a\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 20 03:18:52.169876 containerd[1550]: time="2026-01-20T03:18:52.169758127Z" level=info msg="Container 7c83be7507f3b2c8de4e59d8e1d7c85bbaae7441005250277c6309ba95a6f10d: CDI devices from CRI Config.CDIDevices: []" Jan 20 03:18:52.177736 containerd[1550]: time="2026-01-20T03:18:52.177664528Z" level=info msg="CreateContainer within sandbox \"1f930a820c47f915bffcc735bb33e5d84bbea2bb00f96d416cbb76772d98fd6a\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"7c83be7507f3b2c8de4e59d8e1d7c85bbaae7441005250277c6309ba95a6f10d\"" Jan 20 03:18:52.178541 containerd[1550]: time="2026-01-20T03:18:52.178488824Z" level=info msg="StartContainer for \"7c83be7507f3b2c8de4e59d8e1d7c85bbaae7441005250277c6309ba95a6f10d\"" Jan 20 03:18:52.179397 containerd[1550]: time="2026-01-20T03:18:52.179308502Z" level=info msg="connecting to shim 7c83be7507f3b2c8de4e59d8e1d7c85bbaae7441005250277c6309ba95a6f10d" address="unix:///run/containerd/s/c867b669737cc6b30af905584b2454affc2606a9f372d5176ba2c458a7963782" protocol=ttrpc version=3 Jan 20 03:18:52.185346 kubelet[1885]: I0120 03:18:52.185230 1885 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-r9x28" podStartSLOduration=1.98919005 podStartE2EDuration="3.185209846s" podCreationTimestamp="2026-01-20 03:18:49 +0000 UTC" firstStartedPulling="2026-01-20 03:18:50.397960505 +0000 UTC m=+56.465538827" lastFinishedPulling="2026-01-20 03:18:51.593980301 +0000 UTC m=+57.661558623" observedRunningTime="2026-01-20 03:18:52.165352539 +0000 UTC m=+58.232930892" watchObservedRunningTime="2026-01-20 03:18:52.185209846 +0000 UTC m=+58.252788179" Jan 20 03:18:52.205593 systemd[1]: Started cri-containerd-7c83be7507f3b2c8de4e59d8e1d7c85bbaae7441005250277c6309ba95a6f10d.scope - libcontainer container 7c83be7507f3b2c8de4e59d8e1d7c85bbaae7441005250277c6309ba95a6f10d. 
Jan 20 03:18:52.240412 containerd[1550]: time="2026-01-20T03:18:52.240303688Z" level=info msg="StartContainer for \"7c83be7507f3b2c8de4e59d8e1d7c85bbaae7441005250277c6309ba95a6f10d\" returns successfully" Jan 20 03:18:52.248990 systemd[1]: cri-containerd-7c83be7507f3b2c8de4e59d8e1d7c85bbaae7441005250277c6309ba95a6f10d.scope: Deactivated successfully. Jan 20 03:18:52.250508 containerd[1550]: time="2026-01-20T03:18:52.250049015Z" level=info msg="received container exit event container_id:\"7c83be7507f3b2c8de4e59d8e1d7c85bbaae7441005250277c6309ba95a6f10d\" id:\"7c83be7507f3b2c8de4e59d8e1d7c85bbaae7441005250277c6309ba95a6f10d\" pid:3659 exited_at:{seconds:1768879132 nanos:249176989}" Jan 20 03:18:52.278359 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7c83be7507f3b2c8de4e59d8e1d7c85bbaae7441005250277c6309ba95a6f10d-rootfs.mount: Deactivated successfully. Jan 20 03:18:52.781710 kubelet[1885]: E0120 03:18:52.781556 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 03:18:53.159239 kubelet[1885]: E0120 03:18:53.159168 1885 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:18:53.159406 kubelet[1885]: E0120 03:18:53.159369 1885 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:18:53.165997 containerd[1550]: time="2026-01-20T03:18:53.165712679Z" level=info msg="CreateContainer within sandbox \"1f930a820c47f915bffcc735bb33e5d84bbea2bb00f96d416cbb76772d98fd6a\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 20 03:18:53.179836 containerd[1550]: time="2026-01-20T03:18:53.179745235Z" level=info msg="Container 5c2daca135f37804528cc673b450fc75517d40b8a7c9670cad2c41bc67e366a6: CDI devices from CRI Config.CDIDevices: []" Jan 20 03:18:53.190061 containerd[1550]: time="2026-01-20T03:18:53.189957243Z" level=info msg="CreateContainer within sandbox \"1f930a820c47f915bffcc735bb33e5d84bbea2bb00f96d416cbb76772d98fd6a\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"5c2daca135f37804528cc673b450fc75517d40b8a7c9670cad2c41bc67e366a6\"" Jan 20 03:18:53.190817 containerd[1550]: time="2026-01-20T03:18:53.190732458Z" level=info msg="StartContainer for \"5c2daca135f37804528cc673b450fc75517d40b8a7c9670cad2c41bc67e366a6\"" Jan 20 03:18:53.192066 containerd[1550]: time="2026-01-20T03:18:53.192034025Z" level=info msg="connecting to shim 5c2daca135f37804528cc673b450fc75517d40b8a7c9670cad2c41bc67e366a6" address="unix:///run/containerd/s/c867b669737cc6b30af905584b2454affc2606a9f372d5176ba2c458a7963782" protocol=ttrpc version=3 Jan 20 03:18:53.227649 systemd[1]: Started cri-containerd-5c2daca135f37804528cc673b450fc75517d40b8a7c9670cad2c41bc67e366a6.scope - libcontainer container 5c2daca135f37804528cc673b450fc75517d40b8a7c9670cad2c41bc67e366a6. Jan 20 03:18:53.324829 containerd[1550]: time="2026-01-20T03:18:53.324647682Z" level=info msg="StartContainer for \"5c2daca135f37804528cc673b450fc75517d40b8a7c9670cad2c41bc67e366a6\" returns successfully" Jan 20 03:18:53.328193 systemd[1]: cri-containerd-5c2daca135f37804528cc673b450fc75517d40b8a7c9670cad2c41bc67e366a6.scope: Deactivated successfully. 
Jan 20 03:18:53.331302 containerd[1550]: time="2026-01-20T03:18:53.331130508Z" level=info msg="received container exit event container_id:\"5c2daca135f37804528cc673b450fc75517d40b8a7c9670cad2c41bc67e366a6\" id:\"5c2daca135f37804528cc673b450fc75517d40b8a7c9670cad2c41bc67e366a6\" pid:3703 exited_at:{seconds:1768879133 nanos:330719689}" Jan 20 03:18:53.362594 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5c2daca135f37804528cc673b450fc75517d40b8a7c9670cad2c41bc67e366a6-rootfs.mount: Deactivated successfully. Jan 20 03:18:53.782031 kubelet[1885]: E0120 03:18:53.781759 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 03:18:54.165656 kubelet[1885]: E0120 03:18:54.165542 1885 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:18:54.171736 containerd[1550]: time="2026-01-20T03:18:54.171690155Z" level=info msg="CreateContainer within sandbox \"1f930a820c47f915bffcc735bb33e5d84bbea2bb00f96d416cbb76772d98fd6a\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 20 03:18:54.184141 containerd[1550]: time="2026-01-20T03:18:54.183957665Z" level=info msg="Container f64745663421ff80ccaed95c72e6338f6a744d54162bfee2dcda22a4c4c68cfb: CDI devices from CRI Config.CDIDevices: []" Jan 20 03:18:54.191545 containerd[1550]: time="2026-01-20T03:18:54.191497432Z" level=info msg="CreateContainer within sandbox \"1f930a820c47f915bffcc735bb33e5d84bbea2bb00f96d416cbb76772d98fd6a\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f64745663421ff80ccaed95c72e6338f6a744d54162bfee2dcda22a4c4c68cfb\"" Jan 20 03:18:54.192473 containerd[1550]: time="2026-01-20T03:18:54.192395692Z" level=info msg="StartContainer for \"f64745663421ff80ccaed95c72e6338f6a744d54162bfee2dcda22a4c4c68cfb\"" Jan 20 03:18:54.193916 containerd[1550]: time="2026-01-20T03:18:54.193774783Z" level=info msg="connecting to shim f64745663421ff80ccaed95c72e6338f6a744d54162bfee2dcda22a4c4c68cfb" address="unix:///run/containerd/s/c867b669737cc6b30af905584b2454affc2606a9f372d5176ba2c458a7963782" protocol=ttrpc version=3 Jan 20 03:18:54.221599 systemd[1]: Started cri-containerd-f64745663421ff80ccaed95c72e6338f6a744d54162bfee2dcda22a4c4c68cfb.scope - libcontainer container f64745663421ff80ccaed95c72e6338f6a744d54162bfee2dcda22a4c4c68cfb. Jan 20 03:18:54.257659 systemd[1]: cri-containerd-f64745663421ff80ccaed95c72e6338f6a744d54162bfee2dcda22a4c4c68cfb.scope: Deactivated successfully. Jan 20 03:18:54.259324 containerd[1550]: time="2026-01-20T03:18:54.259274449Z" level=info msg="received container exit event container_id:\"f64745663421ff80ccaed95c72e6338f6a744d54162bfee2dcda22a4c4c68cfb\" id:\"f64745663421ff80ccaed95c72e6338f6a744d54162bfee2dcda22a4c4c68cfb\" pid:3744 exited_at:{seconds:1768879134 nanos:257743894}" Jan 20 03:18:54.270689 containerd[1550]: time="2026-01-20T03:18:54.270570421Z" level=info msg="StartContainer for \"f64745663421ff80ccaed95c72e6338f6a744d54162bfee2dcda22a4c4c68cfb\" returns successfully" Jan 20 03:18:54.287415 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f64745663421ff80ccaed95c72e6338f6a744d54162bfee2dcda22a4c4c68cfb-rootfs.mount: Deactivated successfully. 
Jan 20 03:18:54.735725 kubelet[1885]: E0120 03:18:54.735619 1885 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 03:18:54.776113 containerd[1550]: time="2026-01-20T03:18:54.776054546Z" level=info msg="StopPodSandbox for \"88fc39901f8fe66268d55c6870dfce307c41b10719aeec3603b84bbca1183790\"" Jan 20 03:18:54.776247 containerd[1550]: time="2026-01-20T03:18:54.776220204Z" level=info msg="TearDown network for sandbox \"88fc39901f8fe66268d55c6870dfce307c41b10719aeec3603b84bbca1183790\" successfully" Jan 20 03:18:54.776275 containerd[1550]: time="2026-01-20T03:18:54.776238467Z" level=info msg="StopPodSandbox for \"88fc39901f8fe66268d55c6870dfce307c41b10719aeec3603b84bbca1183790\" returns successfully" Jan 20 03:18:54.776763 containerd[1550]: time="2026-01-20T03:18:54.776741716Z" level=info msg="RemovePodSandbox for \"88fc39901f8fe66268d55c6870dfce307c41b10719aeec3603b84bbca1183790\"" Jan 20 03:18:54.776856 containerd[1550]: time="2026-01-20T03:18:54.776765320Z" level=info msg="Forcibly stopping sandbox \"88fc39901f8fe66268d55c6870dfce307c41b10719aeec3603b84bbca1183790\"" Jan 20 03:18:54.776927 containerd[1550]: time="2026-01-20T03:18:54.776878001Z" level=info msg="TearDown network for sandbox \"88fc39901f8fe66268d55c6870dfce307c41b10719aeec3603b84bbca1183790\" successfully" Jan 20 03:18:54.778491 containerd[1550]: time="2026-01-20T03:18:54.778368840Z" level=info msg="Ensure that sandbox 88fc39901f8fe66268d55c6870dfce307c41b10719aeec3603b84bbca1183790 in task-service has been cleanup successfully" Jan 20 03:18:54.783056 kubelet[1885]: E0120 03:18:54.782994 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 03:18:54.785733 containerd[1550]: time="2026-01-20T03:18:54.784865333Z" level=info msg="RemovePodSandbox \"88fc39901f8fe66268d55c6870dfce307c41b10719aeec3603b84bbca1183790\" returns successfully" Jan 20 03:18:54.944760 kubelet[1885]: E0120 03:18:54.944728 1885 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 03:18:55.170222 kubelet[1885]: E0120 03:18:55.170152 1885 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:18:55.177141 containerd[1550]: time="2026-01-20T03:18:55.177035194Z" level=info msg="CreateContainer within sandbox \"1f930a820c47f915bffcc735bb33e5d84bbea2bb00f96d416cbb76772d98fd6a\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 20 03:18:55.186632 containerd[1550]: time="2026-01-20T03:18:55.186562732Z" level=info msg="Container b507f88d2cbea3876d398fb0a7d635862451346e71d3c3365af63512b542b5bd: CDI devices from CRI Config.CDIDevices: []" Jan 20 03:18:55.194752 containerd[1550]: time="2026-01-20T03:18:55.194674871Z" level=info msg="CreateContainer within sandbox \"1f930a820c47f915bffcc735bb33e5d84bbea2bb00f96d416cbb76772d98fd6a\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b507f88d2cbea3876d398fb0a7d635862451346e71d3c3365af63512b542b5bd\"" Jan 20 03:18:55.195174 containerd[1550]: time="2026-01-20T03:18:55.195135760Z" level=info msg="StartContainer for \"b507f88d2cbea3876d398fb0a7d635862451346e71d3c3365af63512b542b5bd\"" Jan 20 03:18:55.196264 containerd[1550]: time="2026-01-20T03:18:55.196145142Z" level=info msg="connecting 
to shim b507f88d2cbea3876d398fb0a7d635862451346e71d3c3365af63512b542b5bd" address="unix:///run/containerd/s/c867b669737cc6b30af905584b2454affc2606a9f372d5176ba2c458a7963782" protocol=ttrpc version=3 Jan 20 03:18:55.220640 systemd[1]: Started cri-containerd-b507f88d2cbea3876d398fb0a7d635862451346e71d3c3365af63512b542b5bd.scope - libcontainer container b507f88d2cbea3876d398fb0a7d635862451346e71d3c3365af63512b542b5bd. Jan 20 03:18:55.277252 containerd[1550]: time="2026-01-20T03:18:55.277187944Z" level=info msg="StartContainer for \"b507f88d2cbea3876d398fb0a7d635862451346e71d3c3365af63512b542b5bd\" returns successfully" Jan 20 03:18:55.690509 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx)) Jan 20 03:18:55.784142 kubelet[1885]: E0120 03:18:55.784044 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 03:18:56.176238 kubelet[1885]: E0120 03:18:56.176144 1885 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:18:56.193854 kubelet[1885]: I0120 03:18:56.193740 1885 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-szfbw" podStartSLOduration=7.193727384 podStartE2EDuration="7.193727384s" podCreationTimestamp="2026-01-20 03:18:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 03:18:56.192979267 +0000 UTC m=+62.260557589" watchObservedRunningTime="2026-01-20 03:18:56.193727384 +0000 UTC m=+62.261305707" Jan 20 03:18:56.637376 kubelet[1885]: I0120 03:18:56.637294 1885 setters.go:618] "Node became not ready" node="10.0.0.17" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:18:56Z","lastTransitionTime":"2026-01-20T03:18:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jan 20 03:18:56.784674 kubelet[1885]: E0120 03:18:56.784532 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 03:18:57.785517 kubelet[1885]: E0120 03:18:57.785345 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 03:18:57.799612 kubelet[1885]: E0120 03:18:57.799561 1885 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:18:58.786624 kubelet[1885]: E0120 03:18:58.786577 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 03:18:59.052399 systemd-networkd[1472]: lxc_health: Link UP Jan 20 03:18:59.061189 systemd-networkd[1472]: lxc_health: Gained carrier Jan 20 03:18:59.786744 kubelet[1885]: E0120 03:18:59.786696 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 03:18:59.800288 kubelet[1885]: E0120 03:18:59.800248 1885 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:19:00.186340 kubelet[1885]: E0120 03:19:00.185987 1885 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:19:00.508771 systemd-networkd[1472]: lxc_health: Gained IPv6LL Jan 20 03:19:00.787810 kubelet[1885]: E0120 03:19:00.787586 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 03:19:01.187895 kubelet[1885]: E0120 03:19:01.187768 1885 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:19:01.787895 kubelet[1885]: E0120 03:19:01.787790 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 03:19:02.788071 kubelet[1885]: E0120 03:19:02.787953 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 03:19:03.788810 kubelet[1885]: E0120 03:19:03.788693 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 03:19:04.790045 kubelet[1885]: E0120 03:19:04.789935 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 03:19:05.790245 kubelet[1885]: E0120 03:19:05.790176 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 03:19:06.791185 kubelet[1885]: E0120 03:19:06.791084 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 03:19:07.791372 kubelet[1885]: E0120 03:19:07.791294 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"