Apr 21 02:46:03.160366 kernel: Linux version 6.12.81-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Mon Apr 20 22:35:05 -00 2026
Apr 21 02:46:03.160387 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=bff44a95b1e301b8c626c31d9593bbb30c469579bd546b0b84b6f8eaed8c72f7
Apr 21 02:46:03.160394 kernel: BIOS-provided physical RAM map:
Apr 21 02:46:03.160401 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Apr 21 02:46:03.160405 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Apr 21 02:46:03.160409 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Apr 21 02:46:03.160415 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Apr 21 02:46:03.160419 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Apr 21 02:46:03.160424 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
Apr 21 02:46:03.160428 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Apr 21 02:46:03.160433 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
Apr 21 02:46:03.160437 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Apr 21 02:46:03.160443 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Apr 21 02:46:03.160447 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Apr 21 02:46:03.160453 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Apr 21 02:46:03.160458 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Apr 21 02:46:03.160463 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Apr 21 02:46:03.160468 kernel: BIOS-e820: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Apr 21 02:46:03.160473 kernel: BIOS-e820: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Apr 21 02:46:03.160478 kernel: BIOS-e820: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Apr 21 02:46:03.160482 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Apr 21 02:46:03.160487 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Apr 21 02:46:03.160492 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Apr 21 02:46:03.160496 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Apr 21 02:46:03.160501 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Apr 21 02:46:03.160506 kernel: NX (Execute Disable) protection: active
Apr 21 02:46:03.160510 kernel: APIC: Static calls initialized
Apr 21 02:46:03.160515 kernel: e820: update [mem 0x9b31e018-0x9b327c57] usable ==> usable
Apr 21 02:46:03.160521 kernel: e820: update [mem 0x9b2e1018-0x9b31de57] usable ==> usable
Apr 21 02:46:03.160526 kernel: extended physical RAM map:
Apr 21 02:46:03.160531 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Apr 21 02:46:03.160535 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Apr 21 02:46:03.160540 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Apr 21 02:46:03.160545 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Apr 21 02:46:03.160550 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Apr 21 02:46:03.160555 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
Apr 21 02:46:03.160559 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Apr 21 02:46:03.160564 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b2e1017] usable
Apr 21 02:46:03.160569 kernel: reserve setup_data: [mem 0x000000009b2e1018-0x000000009b31de57] usable
Apr 21 02:46:03.160575 kernel: reserve setup_data: [mem 0x000000009b31de58-0x000000009b31e017] usable
Apr 21 02:46:03.160582 kernel: reserve setup_data: [mem 0x000000009b31e018-0x000000009b327c57] usable
Apr 21 02:46:03.160587 kernel: reserve setup_data: [mem 0x000000009b327c58-0x000000009bd3efff] usable
Apr 21 02:46:03.160592 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Apr 21 02:46:03.160597 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Apr 21 02:46:03.160604 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Apr 21 02:46:03.160609 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Apr 21 02:46:03.160614 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Apr 21 02:46:03.160619 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Apr 21 02:46:03.160624 kernel: reserve setup_data: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Apr 21 02:46:03.160628 kernel: reserve setup_data: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Apr 21 02:46:03.160633 kernel: reserve setup_data: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Apr 21 02:46:03.160638 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Apr 21 02:46:03.160643 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Apr 21 02:46:03.160648 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Apr 21 02:46:03.160653 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Apr 21 02:46:03.160659 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Apr 21 02:46:03.160664 kernel: efi: EFI v2.7 by EDK II
Apr 21 02:46:03.160669 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b9e4198 RNG=0x9cb73018
Apr 21 02:46:03.160675 kernel: random: crng init done
Apr 21 02:46:03.160680 kernel: efi: Remove mem151: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Apr 21 02:46:03.160685 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Apr 21 02:46:03.160690 kernel: secureboot: Secure boot disabled
Apr 21 02:46:03.160694 kernel: SMBIOS 2.8 present.
Apr 21 02:46:03.160700 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Apr 21 02:46:03.160704 kernel: DMI: Memory slots populated: 1/1
Apr 21 02:46:03.160709 kernel: Hypervisor detected: KVM
Apr 21 02:46:03.160714 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x10000000000
Apr 21 02:46:03.160720 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 21 02:46:03.160725 kernel: kvm-clock: using sched offset of 4980012522 cycles
Apr 21 02:46:03.160731 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 21 02:46:03.160736 kernel: tsc: Detected 2793.438 MHz processor
Apr 21 02:46:03.160742 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 21 02:46:03.160747 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 21 02:46:03.160752 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x10000000000
Apr 21 02:46:03.160757 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Apr 21 02:46:03.160762 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 21 02:46:03.160769 kernel: Using GB pages for direct mapping
Apr 21 02:46:03.160774 kernel: ACPI: Early table checksum verification disabled
Apr 21 02:46:03.160779 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Apr 21 02:46:03.160784 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Apr 21 02:46:03.160789 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 02:46:03.160795 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 02:46:03.160800 kernel: ACPI: FACS 0x000000009CBDD000 000040
Apr 21 02:46:03.160805 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 02:46:03.160810 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 02:46:03.160816 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 02:46:03.160821 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 02:46:03.160826 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Apr 21 02:46:03.160831 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Apr 21 02:46:03.160837 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Apr 21 02:46:03.160842 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Apr 21 02:46:03.160847 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Apr 21 02:46:03.160852 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Apr 21 02:46:03.160857 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Apr 21 02:46:03.160863 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Apr 21 02:46:03.160868 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Apr 21 02:46:03.160873 kernel: No NUMA configuration found
Apr 21 02:46:03.160878 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
Apr 21 02:46:03.160883 kernel: NODE_DATA(0) allocated [mem 0x9ce36dc0-0x9ce3dfff]
Apr 21 02:46:03.160889 kernel: Zone ranges:
Apr 21 02:46:03.160894 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 21 02:46:03.160899 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
Apr 21 02:46:03.160904 kernel: Normal empty
Apr 21 02:46:03.160909 kernel: Device empty
Apr 21 02:46:03.160915 kernel: Movable zone start for each node
Apr 21 02:46:03.160920 kernel: Early memory node ranges
Apr 21 02:46:03.160925 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Apr 21 02:46:03.160931 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Apr 21 02:46:03.160936 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Apr 21 02:46:03.160940 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
Apr 21 02:46:03.160946 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
Apr 21 02:46:03.160950 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
Apr 21 02:46:03.160956 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce90fff]
Apr 21 02:46:03.160962 kernel: node 0: [mem 0x000000009ce97000-0x000000009cedbfff]
Apr 21 02:46:03.160967 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
Apr 21 02:46:03.160972 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 21 02:46:03.160977 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Apr 21 02:46:03.160982 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Apr 21 02:46:03.160992 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 21 02:46:03.160998 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
Apr 21 02:46:03.161004 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Apr 21 02:46:03.161010 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Apr 21 02:46:03.161015 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Apr 21 02:46:03.161021 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
Apr 21 02:46:03.161026 kernel: ACPI: PM-Timer IO Port: 0x608
Apr 21 02:46:03.161033 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 21 02:46:03.161039 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Apr 21 02:46:03.161045 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Apr 21 02:46:03.161050 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 21 02:46:03.161056 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 21 02:46:03.161063 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 21 02:46:03.161069 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 21 02:46:03.161074 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 21 02:46:03.161080 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Apr 21 02:46:03.161086 kernel: TSC deadline timer available
Apr 21 02:46:03.161091 kernel: CPU topo: Max. logical packages: 1
Apr 21 02:46:03.161097 kernel: CPU topo: Max. logical dies: 1
Apr 21 02:46:03.161102 kernel: CPU topo: Max. dies per package: 1
Apr 21 02:46:03.161108 kernel: CPU topo: Max. threads per core: 1
Apr 21 02:46:03.161115 kernel: CPU topo: Num. cores per package: 4
Apr 21 02:46:03.161120 kernel: CPU topo: Num. threads per package: 4
Apr 21 02:46:03.161127 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Apr 21 02:46:03.161136 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Apr 21 02:46:03.161145 kernel: kvm-guest: KVM setup pv remote TLB flush
Apr 21 02:46:03.161153 kernel: kvm-guest: setup PV sched yield
Apr 21 02:46:03.161161 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Apr 21 02:46:03.161169 kernel: Booting paravirtualized kernel on KVM
Apr 21 02:46:03.161177 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 21 02:46:03.161185 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Apr 21 02:46:03.161196 kernel: percpu: Embedded 60 pages/cpu s207448 r8192 d30120 u524288
Apr 21 02:46:03.161349 kernel: pcpu-alloc: s207448 r8192 d30120 u524288 alloc=1*2097152
Apr 21 02:46:03.161359 kernel: pcpu-alloc: [0] 0 1 2 3
Apr 21 02:46:03.161368 kernel: kvm-guest: PV spinlocks enabled
Apr 21 02:46:03.161376 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Apr 21 02:46:03.161385 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=bff44a95b1e301b8c626c31d9593bbb30c469579bd546b0b84b6f8eaed8c72f7
Apr 21 02:46:03.161393 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 21 02:46:03.161401 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 21 02:46:03.161413 kernel: Fallback order for Node 0: 0
Apr 21 02:46:03.161423 kernel: Built 1 zonelists, mobility grouping on. Total pages: 641450
Apr 21 02:46:03.161432 kernel: Policy zone: DMA32
Apr 21 02:46:03.161441 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 21 02:46:03.161448 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Apr 21 02:46:03.161456 kernel: ftrace: allocating 40126 entries in 157 pages
Apr 21 02:46:03.161464 kernel: ftrace: allocated 157 pages with 5 groups
Apr 21 02:46:03.161473 kernel: Dynamic Preempt: voluntary
Apr 21 02:46:03.161482 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 21 02:46:03.161498 kernel: rcu: RCU event tracing is enabled.
Apr 21 02:46:03.161508 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Apr 21 02:46:03.161518 kernel: Trampoline variant of Tasks RCU enabled.
Apr 21 02:46:03.161528 kernel: Rude variant of Tasks RCU enabled.
Apr 21 02:46:03.161537 kernel: Tracing variant of Tasks RCU enabled.
Apr 21 02:46:03.161546 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 21 02:46:03.161555 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Apr 21 02:46:03.161563 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 21 02:46:03.161571 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 21 02:46:03.161581 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 21 02:46:03.161590 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Apr 21 02:46:03.161598 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 21 02:46:03.161606 kernel: Console: colour dummy device 80x25
Apr 21 02:46:03.161614 kernel: printk: legacy console [ttyS0] enabled
Apr 21 02:46:03.161622 kernel: ACPI: Core revision 20240827
Apr 21 02:46:03.161632 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Apr 21 02:46:03.161641 kernel: APIC: Switch to symmetric I/O mode setup
Apr 21 02:46:03.161650 kernel: x2apic enabled
Apr 21 02:46:03.161659 kernel: APIC: Switched APIC routing to: physical x2apic
Apr 21 02:46:03.161665 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Apr 21 02:46:03.161671 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Apr 21 02:46:03.161676 kernel: kvm-guest: setup PV IPIs
Apr 21 02:46:03.161682 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Apr 21 02:46:03.161688 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns
Apr 21 02:46:03.161693 kernel: Calibrating delay loop (skipped) preset value.. 5586.87 BogoMIPS (lpj=2793438)
Apr 21 02:46:03.161699 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Apr 21 02:46:03.161705 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Apr 21 02:46:03.161712 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Apr 21 02:46:03.161717 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 21 02:46:03.161723 kernel: Spectre V2 : Mitigation: Retpolines
Apr 21 02:46:03.161729 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Apr 21 02:46:03.161735 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Apr 21 02:46:03.161743 kernel: RETBleed: Vulnerable
Apr 21 02:46:03.161752 kernel: Speculative Store Bypass: Vulnerable
Apr 21 02:46:03.161760 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 21 02:46:03.161768 kernel: GDS: Unknown: Dependent on hypervisor status
Apr 21 02:46:03.161778 kernel: active return thunk: its_return_thunk
Apr 21 02:46:03.161786 kernel: ITS: Mitigation: Aligned branch/return thunks
Apr 21 02:46:03.161794 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 21 02:46:03.161802 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 21 02:46:03.161810 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 21 02:46:03.161819 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Apr 21 02:46:03.161828 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Apr 21 02:46:03.161836 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Apr 21 02:46:03.161845 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 21 02:46:03.161856 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Apr 21 02:46:03.161865 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Apr 21 02:46:03.161874 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Apr 21 02:46:03.161884 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Apr 21 02:46:03.161894 kernel: Freeing SMP alternatives memory: 32K
Apr 21 02:46:03.161903 kernel: pid_max: default: 32768 minimum: 301
Apr 21 02:46:03.161912 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Apr 21 02:46:03.161919 kernel: landlock: Up and running.
Apr 21 02:46:03.161927 kernel: SELinux: Initializing.
Apr 21 02:46:03.161937 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 21 02:46:03.161946 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 21 02:46:03.161955 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8370C CPU @ 2.80GHz (family: 0x6, model: 0x6a, stepping: 0x6)
Apr 21 02:46:03.161963 kernel: Performance Events: unsupported p6 CPU model 106 no PMU driver, software events only.
Apr 21 02:46:03.161971 kernel: signal: max sigframe size: 3632
Apr 21 02:46:03.161979 kernel: rcu: Hierarchical SRCU implementation.
Apr 21 02:46:03.161988 kernel: rcu: Max phase no-delay instances is 400.
Apr 21 02:46:03.161997 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Apr 21 02:46:03.162006 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Apr 21 02:46:03.162014 kernel: smp: Bringing up secondary CPUs ...
Apr 21 02:46:03.162022 kernel: smpboot: x86: Booting SMP configuration:
Apr 21 02:46:03.162031 kernel: .... node #0, CPUs: #1 #2 #3
Apr 21 02:46:03.162040 kernel: smp: Brought up 1 node, 4 CPUs
Apr 21 02:46:03.162051 kernel: smpboot: Total of 4 processors activated (22347.50 BogoMIPS)
Apr 21 02:46:03.162061 kernel: Memory: 2374696K/2565800K available (14336K kernel code, 2453K rwdata, 26076K rodata, 46228K init, 2520K bss, 185216K reserved, 0K cma-reserved)
Apr 21 02:46:03.162070 kernel: devtmpfs: initialized
Apr 21 02:46:03.162078 kernel: x86/mm: Memory block size: 128MB
Apr 21 02:46:03.162089 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Apr 21 02:46:03.162098 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Apr 21 02:46:03.162107 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
Apr 21 02:46:03.162117 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Apr 21 02:46:03.162126 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce95000-0x9ce96fff] (8192 bytes)
Apr 21 02:46:03.162135 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Apr 21 02:46:03.162144 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 21 02:46:03.162151 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Apr 21 02:46:03.162159 kernel: pinctrl core: initialized pinctrl subsystem
Apr 21 02:46:03.162169 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 21 02:46:03.162177 kernel: audit: initializing netlink subsys (disabled)
Apr 21 02:46:03.162186 kernel: audit: type=2000 audit(1776739559.456:1): state=initialized audit_enabled=0 res=1
Apr 21 02:46:03.162194 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 21 02:46:03.162202 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 21 02:46:03.162322 kernel: cpuidle: using governor menu
Apr 21 02:46:03.162332 kernel: efi: Freeing EFI boot services memory: 38812K
Apr 21 02:46:03.162341 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 21 02:46:03.162350 kernel: dca service started, version 1.12.1
Apr 21 02:46:03.162363 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
Apr 21 02:46:03.162372 kernel: PCI: Using configuration type 1 for base access
Apr 21 02:46:03.162381 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 21 02:46:03.162389 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 21 02:46:03.162397 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 21 02:46:03.162404 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 21 02:46:03.162413 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 21 02:46:03.162422 kernel: ACPI: Added _OSI(Module Device)
Apr 21 02:46:03.162432 kernel: ACPI: Added _OSI(Processor Device)
Apr 21 02:46:03.162442 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 21 02:46:03.162449 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 21 02:46:03.162457 kernel: ACPI: Interpreter enabled
Apr 21 02:46:03.162466 kernel: ACPI: PM: (supports S0 S3 S5)
Apr 21 02:46:03.162475 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 21 02:46:03.162483 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 21 02:46:03.162491 kernel: PCI: Using E820 reservations for host bridge windows
Apr 21 02:46:03.162499 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Apr 21 02:46:03.162508 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 21 02:46:03.162651 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 21 02:46:03.162742 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Apr 21 02:46:03.162808 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Apr 21 02:46:03.162816 kernel: PCI host bridge to bus 0000:00
Apr 21 02:46:03.162879 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 21 02:46:03.162928 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 21 02:46:03.162978 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 21 02:46:03.163023 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Apr 21 02:46:03.163068 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Apr 21 02:46:03.163114 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Apr 21 02:46:03.163174 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 21 02:46:03.163378 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Apr 21 02:46:03.163444 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Apr 21 02:46:03.163525 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref]
Apr 21 02:46:03.163604 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff]
Apr 21 02:46:03.163659 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
Apr 21 02:46:03.163712 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Apr 21 02:46:03.163795 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Apr 21 02:46:03.163873 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f]
Apr 21 02:46:03.163950 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff]
Apr 21 02:46:03.164023 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref]
Apr 21 02:46:03.164085 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Apr 21 02:46:03.164147 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f]
Apr 21 02:46:03.164369 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff]
Apr 21 02:46:03.164457 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref]
Apr 21 02:46:03.164519 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Apr 21 02:46:03.164576 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff]
Apr 21 02:46:03.164628 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff]
Apr 21 02:46:03.164680 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref]
Apr 21 02:46:03.164733 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]
Apr 21 02:46:03.164789 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Apr 21 02:46:03.164842 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Apr 21 02:46:03.164902 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Apr 21 02:46:03.164956 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df]
Apr 21 02:46:03.165009 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff]
Apr 21 02:46:03.165066 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Apr 21 02:46:03.165118 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf]
Apr 21 02:46:03.165125 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 21 02:46:03.165135 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 21 02:46:03.165144 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 21 02:46:03.165154 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 21 02:46:03.165162 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Apr 21 02:46:03.165170 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Apr 21 02:46:03.165178 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Apr 21 02:46:03.165185 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Apr 21 02:46:03.165194 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Apr 21 02:46:03.165336 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Apr 21 02:46:03.165347 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Apr 21 02:46:03.165355 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Apr 21 02:46:03.165367 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Apr 21 02:46:03.165376 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Apr 21 02:46:03.165386 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Apr 21 02:46:03.165395 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Apr 21 02:46:03.165404 kernel: iommu: Default domain type: Translated
Apr 21 02:46:03.165414 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 21 02:46:03.165421 kernel: efivars: Registered efivars operations
Apr 21 02:46:03.165427 kernel: PCI: Using ACPI for IRQ routing
Apr 21 02:46:03.165433 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 21 02:46:03.165440 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Apr 21 02:46:03.165450 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
Apr 21 02:46:03.165458 kernel: e820: reserve RAM buffer [mem 0x9b2e1018-0x9bffffff]
Apr 21 02:46:03.165466 kernel: e820: reserve RAM buffer [mem 0x9b31e018-0x9bffffff]
Apr 21 02:46:03.165474 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
Apr 21 02:46:03.165481 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
Apr 21 02:46:03.165490 kernel: e820: reserve RAM buffer [mem 0x9ce91000-0x9fffffff]
Apr 21 02:46:03.165499 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
Apr 21 02:46:03.165585 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Apr 21 02:46:03.165672 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Apr 21 02:46:03.165749 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Apr 21 02:46:03.165762 kernel: vgaarb: loaded
Apr 21 02:46:03.165771 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Apr 21 02:46:03.165781 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Apr 21 02:46:03.165790 kernel: clocksource: Switched to clocksource kvm-clock
Apr 21 02:46:03.165800 kernel: VFS: Disk quotas dquot_6.6.0
Apr 21 02:46:03.165809 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 21 02:46:03.165821 kernel: pnp: PnP ACPI init
Apr 21 02:46:03.165885 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Apr 21 02:46:03.165894 kernel: pnp: PnP ACPI: found 6 devices
Apr 21 02:46:03.165900 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 21 02:46:03.165916 kernel: NET: Registered PF_INET protocol family
Apr 21 02:46:03.165923 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 21 02:46:03.165929 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 21 02:46:03.165935 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 21 02:46:03.165941 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 21 02:46:03.165948 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Apr 21 02:46:03.165954 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 21 02:46:03.165960 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 21 02:46:03.165966 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 21 02:46:03.165972 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 21 02:46:03.165978 kernel: NET: Registered PF_XDP protocol family
Apr 21 02:46:03.166031 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
Apr 21 02:46:03.166085 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
Apr 21 02:46:03.166143 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Apr 21 02:46:03.166322 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Apr 21 02:46:03.166396 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Apr 21 02:46:03.166465 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Apr 21 02:46:03.166536 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Apr 21 02:46:03.166609 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Apr 21 02:46:03.166620 kernel: PCI: CLS 0 bytes, default 64
Apr 21 02:46:03.166630 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Apr 21 02:46:03.166643 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns
Apr 21 02:46:03.166652 kernel: Initialise system trusted keyrings
Apr 21 02:46:03.166664 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Apr 21 02:46:03.166673 kernel: Key type asymmetric registered
Apr 21 02:46:03.166681 kernel: Asymmetric key parser 'x509' registered
Apr 21 02:46:03.166691 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Apr 21 02:46:03.166700 kernel: io scheduler mq-deadline registered
Apr 21 02:46:03.166709 kernel: io scheduler kyber registered
Apr 21 02:46:03.166718 kernel: io scheduler bfq registered
Apr 21 02:46:03.166728 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 21 02:46:03.166738 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Apr 21 02:46:03.166747 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Apr 21 02:46:03.166757 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Apr 21 02:46:03.166767 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 21 02:46:03.166777 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 21 02:46:03.166790 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Apr 21 02:46:03.166801 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Apr 21 02:46:03.166812 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Apr 21 02:46:03.166822 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Apr 21 02:46:03.166918 kernel: rtc_cmos 00:04: RTC can wake from S4
Apr 21 02:46:03.167004 kernel: rtc_cmos 00:04: registered as rtc0
Apr 21 02:46:03.167087 kernel: rtc_cmos 00:04: setting system clock to 2026-04-21T02:46:02 UTC (1776739562)
Apr 21 02:46:03.167164 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Apr 21 02:46:03.167176 kernel: intel_pstate: CPU model not supported
Apr 21 02:46:03.167184 kernel: efifb: probing for efifb
Apr 21 02:46:03.167194 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Apr 21 02:46:03.167323 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Apr 21 02:46:03.167333 kernel: efifb: scrolling: redraw
Apr 21 02:46:03.167342 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Apr 21 02:46:03.167352 kernel: Console: switching to colour frame buffer device 160x50
Apr 21 02:46:03.167362 kernel: fb0: EFI VGA frame buffer device
Apr 21 02:46:03.167374 kernel: pstore: Using crash dump compression: deflate
Apr 21 02:46:03.167382 kernel: pstore: Registered efi_pstore as persistent store backend
Apr 21 02:46:03.167390 kernel: NET: Registered PF_INET6 protocol family
Apr 21 02:46:03.167398 kernel: Segment Routing with IPv6
Apr 21 02:46:03.167409 kernel: In-situ OAM (IOAM) with IPv6
Apr 21 02:46:03.167419 kernel: NET: Registered PF_PACKET protocol family
Apr 21 02:46:03.167427 kernel: Key type dns_resolver registered
Apr 21 02:46:03.167435 kernel: IPI shorthand broadcast: enabled
Apr 21 02:46:03.167444 kernel: sched_clock: Marking stable (3581029172, 353846741)->(4040266414, -105390501)
Apr 21 02:46:03.167453 kernel: registered taskstats version 1
Apr 21 02:46:03.167465 kernel: Loading compiled-in X.509 certificates
Apr 21 02:46:03.167474 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.81-flatcar: bc6d78cd9d700d9d34e2c2c5bd3cbf2a73898336'
Apr
21 02:46:03.167482 kernel: Demotion targets for Node 0: null Apr 21 02:46:03.167490 kernel: Key type .fscrypt registered Apr 21 02:46:03.167498 kernel: Key type fscrypt-provisioning registered Apr 21 02:46:03.167508 kernel: ima: No TPM chip found, activating TPM-bypass! Apr 21 02:46:03.167518 kernel: ima: Allocated hash algorithm: sha1 Apr 21 02:46:03.167526 kernel: ima: No architecture policies found Apr 21 02:46:03.167534 kernel: clk: Disabling unused clocks Apr 21 02:46:03.167544 kernel: Warning: unable to open an initial console. Apr 21 02:46:03.167554 kernel: Freeing unused kernel image (initmem) memory: 46228K Apr 21 02:46:03.167564 kernel: Write protecting the kernel read-only data: 40960k Apr 21 02:46:03.167573 kernel: Freeing unused kernel image (rodata/data gap) memory: 548K Apr 21 02:46:03.167581 kernel: Run /init as init process Apr 21 02:46:03.167589 kernel: with arguments: Apr 21 02:46:03.167598 kernel: /init Apr 21 02:46:03.167608 kernel: with environment: Apr 21 02:46:03.167617 kernel: HOME=/ Apr 21 02:46:03.167627 kernel: TERM=linux Apr 21 02:46:03.167636 systemd[1]: Successfully made /usr/ read-only. Apr 21 02:46:03.167649 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Apr 21 02:46:03.167661 systemd[1]: Detected virtualization kvm. Apr 21 02:46:03.167669 systemd[1]: Detected architecture x86-64. Apr 21 02:46:03.167678 systemd[1]: Running in initrd. Apr 21 02:46:03.167686 systemd[1]: No hostname configured, using default hostname. Apr 21 02:46:03.167699 systemd[1]: Hostname set to . Apr 21 02:46:03.167710 systemd[1]: Initializing machine ID from VM UUID. Apr 21 02:46:03.167718 systemd[1]: Queued start job for default target initrd.target. 
Apr 21 02:46:03.167727 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 21 02:46:03.167736 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 21 02:46:03.167747 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Apr 21 02:46:03.167758 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 21 02:46:03.167767 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Apr 21 02:46:03.167779 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Apr 21 02:46:03.167789 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Apr 21 02:46:03.167801 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Apr 21 02:46:03.167811 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 21 02:46:03.167819 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 21 02:46:03.167828 systemd[1]: Reached target paths.target - Path Units. Apr 21 02:46:03.167837 systemd[1]: Reached target slices.target - Slice Units. Apr 21 02:46:03.167851 systemd[1]: Reached target swap.target - Swaps. Apr 21 02:46:03.167860 systemd[1]: Reached target timers.target - Timer Units. Apr 21 02:46:03.167868 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Apr 21 02:46:03.167877 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 21 02:46:03.167889 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Apr 21 02:46:03.167900 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. 
Apr 21 02:46:03.167909 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 21 02:46:03.167918 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 21 02:46:03.167926 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 21 02:46:03.167939 systemd[1]: Reached target sockets.target - Socket Units. Apr 21 02:46:03.167950 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Apr 21 02:46:03.167959 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 21 02:46:03.167967 systemd[1]: Finished network-cleanup.service - Network Cleanup. Apr 21 02:46:03.167976 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Apr 21 02:46:03.167987 systemd[1]: Starting systemd-fsck-usr.service... Apr 21 02:46:03.167998 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 21 02:46:03.168006 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 21 02:46:03.168017 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 21 02:46:03.168026 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Apr 21 02:46:03.168059 systemd-journald[205]: Collecting audit messages is disabled. Apr 21 02:46:03.168084 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 21 02:46:03.168094 systemd[1]: Finished systemd-fsck-usr.service. Apr 21 02:46:03.168104 systemd-journald[205]: Journal started Apr 21 02:46:03.168126 systemd-journald[205]: Runtime Journal (/run/log/journal/bac0659afdd74e8a9e392612032df7d2) is 6M, max 48.1M, 42.1M free. Apr 21 02:46:03.176804 systemd[1]: Started systemd-journald.service - Journal Service. 
Apr 21 02:46:03.179777 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 21 02:46:03.183395 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 21 02:46:03.202107 systemd-modules-load[206]: Inserted module 'overlay' Apr 21 02:46:03.217885 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 21 02:46:03.225547 systemd-tmpfiles[213]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Apr 21 02:46:03.236406 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 21 02:46:03.259344 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Apr 21 02:46:03.262937 systemd-modules-load[206]: Inserted module 'br_netfilter' Apr 21 02:46:03.266527 kernel: Bridge firewalling registered Apr 21 02:46:03.274531 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 21 02:46:03.274916 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 21 02:46:03.283452 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 21 02:46:03.292882 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 21 02:46:03.310815 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 21 02:46:03.319517 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 21 02:46:03.337047 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 21 02:46:03.338410 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 21 02:46:03.350985 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Apr 21 02:46:03.361544 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Apr 21 02:46:03.391874 systemd-resolved[235]: Positive Trust Anchors: Apr 21 02:46:03.391922 systemd-resolved[235]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 21 02:46:03.391946 systemd-resolved[235]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 21 02:46:03.393923 systemd-resolved[235]: Defaulting to hostname 'linux'. Apr 21 02:46:03.394818 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 21 02:46:03.395108 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 21 02:46:03.447112 dracut-cmdline[246]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=bff44a95b1e301b8c626c31d9593bbb30c469579bd546b0b84b6f8eaed8c72f7 Apr 21 02:46:03.584344 kernel: SCSI subsystem initialized Apr 21 02:46:03.593370 kernel: Loading iSCSI transport class v2.0-870. 
Apr 21 02:46:03.607383 kernel: iscsi: registered transport (tcp) Apr 21 02:46:03.630700 kernel: iscsi: registered transport (qla4xxx) Apr 21 02:46:03.630780 kernel: QLogic iSCSI HBA Driver Apr 21 02:46:03.656933 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Apr 21 02:46:03.682062 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Apr 21 02:46:03.688627 systemd[1]: Reached target network-pre.target - Preparation for Network. Apr 21 02:46:03.735658 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Apr 21 02:46:03.736917 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Apr 21 02:46:03.811433 kernel: raid6: avx512x4 gen() 41529 MB/s Apr 21 02:46:03.830429 kernel: raid6: avx512x2 gen() 39745 MB/s Apr 21 02:46:03.849391 kernel: raid6: avx512x1 gen() 40439 MB/s Apr 21 02:46:03.868392 kernel: raid6: avx2x4 gen() 34277 MB/s Apr 21 02:46:03.887403 kernel: raid6: avx2x2 gen() 33900 MB/s Apr 21 02:46:03.908182 kernel: raid6: avx2x1 gen() 26071 MB/s Apr 21 02:46:03.908347 kernel: raid6: using algorithm avx512x4 gen() 41529 MB/s Apr 21 02:46:03.929356 kernel: raid6: .... xor() 9373 MB/s, rmw enabled Apr 21 02:46:03.929399 kernel: raid6: using avx512x2 recovery algorithm Apr 21 02:46:03.963977 kernel: xor: automatically using best checksumming function avx Apr 21 02:46:04.267478 kernel: Btrfs loaded, zoned=no, fsverity=no Apr 21 02:46:04.279984 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Apr 21 02:46:04.286777 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 21 02:46:04.330191 systemd-udevd[456]: Using default interface naming scheme 'v255'. Apr 21 02:46:04.333948 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 21 02:46:04.347073 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... 
Apr 21 02:46:04.387547 dracut-pre-trigger[467]: rd.md=0: removing MD RAID activation Apr 21 02:46:04.435752 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Apr 21 02:46:04.444557 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 21 02:46:04.522465 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 21 02:46:04.539096 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Apr 21 02:46:04.599362 kernel: cryptd: max_cpu_qlen set to 1000 Apr 21 02:46:04.599418 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Apr 21 02:46:04.621339 kernel: libata version 3.00 loaded. Apr 21 02:46:04.633351 kernel: ahci 0000:00:1f.2: version 3.0 Apr 21 02:46:04.633509 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Apr 21 02:46:04.635592 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 21 02:46:04.656117 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Apr 21 02:46:04.656682 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Apr 21 02:46:04.656769 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Apr 21 02:46:04.635707 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 21 02:46:04.670031 kernel: scsi host0: ahci Apr 21 02:46:04.670171 kernel: scsi host1: ahci Apr 21 02:46:04.656496 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Apr 21 02:46:04.721804 kernel: scsi host2: ahci Apr 21 02:46:04.721959 kernel: scsi host3: ahci Apr 21 02:46:04.722054 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Apr 21 02:46:04.722129 kernel: scsi host4: ahci Apr 21 02:46:04.722364 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. 
Apr 21 02:46:04.722379 kernel: scsi host5: ahci Apr 21 02:46:04.722464 kernel: GPT:9289727 != 19775487 Apr 21 02:46:04.722474 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 lpm-pol 1 Apr 21 02:46:04.722483 kernel: GPT:Alternate GPT header not at the end of the disk. Apr 21 02:46:04.722495 kernel: GPT:9289727 != 19775487 Apr 21 02:46:04.722503 kernel: GPT: Use GNU Parted to correct GPT errors. Apr 21 02:46:04.722511 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 21 02:46:04.722520 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 lpm-pol 1 Apr 21 02:46:04.722528 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 lpm-pol 1 Apr 21 02:46:04.722537 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 lpm-pol 1 Apr 21 02:46:04.722546 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 lpm-pol 1 Apr 21 02:46:04.722554 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 lpm-pol 1 Apr 21 02:46:04.680962 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 21 02:46:04.744347 kernel: AES CTR mode by8 optimization enabled Apr 21 02:46:04.783203 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Apr 21 02:46:04.795455 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Apr 21 02:46:04.807686 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Apr 21 02:46:04.815171 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Apr 21 02:46:04.823657 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Apr 21 02:46:04.833630 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. 
Apr 21 02:46:04.834628 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Apr 21 02:46:04.844116 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 21 02:46:04.844167 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 21 02:46:04.862682 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Apr 21 02:46:04.881876 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 21 02:46:04.900871 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 21 02:46:04.900895 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 21 02:46:04.900907 disk-uuid[642]: Primary Header is updated. Apr 21 02:46:04.900907 disk-uuid[642]: Secondary Entries is updated. Apr 21 02:46:04.900907 disk-uuid[642]: Secondary Header is updated. Apr 21 02:46:04.893729 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Apr 21 02:46:04.949573 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Apr 21 02:46:05.048347 kernel: ata2: SATA link down (SStatus 0 SControl 300) Apr 21 02:46:05.052405 kernel: ata5: SATA link down (SStatus 0 SControl 300) Apr 21 02:46:05.056424 kernel: ata1: SATA link down (SStatus 0 SControl 300) Apr 21 02:46:05.059405 kernel: ata4: SATA link down (SStatus 0 SControl 300) Apr 21 02:46:05.064387 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Apr 21 02:46:05.064417 kernel: ata3.00: LPM support broken, forcing max_power Apr 21 02:46:05.069975 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Apr 21 02:46:05.069994 kernel: ata3.00: applying bridge limits Apr 21 02:46:05.076365 kernel: ata6: SATA link down (SStatus 0 SControl 300) Apr 21 02:46:05.081584 kernel: ata3.00: LPM support broken, forcing max_power Apr 21 02:46:05.081610 kernel: ata3.00: configured for UDMA/100 Apr 21 02:46:05.087406 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Apr 21 02:46:05.145537 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Apr 21 02:46:05.145768 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Apr 21 02:46:05.159409 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Apr 21 02:46:05.490322 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Apr 21 02:46:05.495422 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Apr 21 02:46:05.504999 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 21 02:46:05.509939 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 21 02:46:05.529358 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Apr 21 02:46:05.560050 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Apr 21 02:46:05.902378 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 21 02:46:05.903097 disk-uuid[643]: The operation has completed successfully. Apr 21 02:46:05.930046 systemd[1]: disk-uuid.service: Deactivated successfully. 
Apr 21 02:46:05.930433 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Apr 21 02:46:05.961558 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Apr 21 02:46:05.984141 sh[688]: Success Apr 21 02:46:06.011142 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Apr 21 02:46:06.011185 kernel: device-mapper: uevent: version 1.0.3 Apr 21 02:46:06.016385 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Apr 21 02:46:06.031398 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 21 02:46:06.063746 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Apr 21 02:46:06.074735 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Apr 21 02:46:06.104583 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Apr 21 02:46:06.129414 kernel: BTRFS: device fsid f0ffb5f7-32a8-4c02-8f56-14d7d8f0dab5 devid 1 transid 34 /dev/mapper/usr (253:0) scanned by mount (701) Apr 21 02:46:06.129446 kernel: BTRFS info (device dm-0): first mount of filesystem f0ffb5f7-32a8-4c02-8f56-14d7d8f0dab5 Apr 21 02:46:06.130368 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Apr 21 02:46:06.147394 kernel: BTRFS info (device dm-0 state E): disabling log replay at mount time Apr 21 02:46:06.147418 kernel: BTRFS info (device dm-0 state E): enabling free space tree Apr 21 02:46:06.148847 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Apr 21 02:46:06.149470 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Apr 21 02:46:06.155944 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 21 02:46:06.156732 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
Apr 21 02:46:06.190381 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Apr 21 02:46:06.222373 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (724) Apr 21 02:46:06.231114 kernel: BTRFS info (device vda6): first mount of filesystem c8a1e0fd-1038-473a-a82d-f70d62b109dc Apr 21 02:46:06.231138 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 21 02:46:06.242356 kernel: BTRFS info (device vda6): turning on async discard Apr 21 02:46:06.242380 kernel: BTRFS info (device vda6): enabling free space tree Apr 21 02:46:06.252372 kernel: BTRFS info (device vda6): last unmount of filesystem c8a1e0fd-1038-473a-a82d-f70d62b109dc Apr 21 02:46:06.254811 systemd[1]: Finished ignition-setup.service - Ignition (setup). Apr 21 02:46:06.260171 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Apr 21 02:46:06.356194 ignition[773]: Ignition 2.22.0 Apr 21 02:46:06.356396 ignition[773]: Stage: fetch-offline Apr 21 02:46:06.356424 ignition[773]: no configs at "/usr/lib/ignition/base.d" Apr 21 02:46:06.356430 ignition[773]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 21 02:46:06.356487 ignition[773]: parsed url from cmdline: "" Apr 21 02:46:06.356489 ignition[773]: no config URL provided Apr 21 02:46:06.356492 ignition[773]: reading system config file "/usr/lib/ignition/user.ign" Apr 21 02:46:06.356497 ignition[773]: no config at "/usr/lib/ignition/user.ign" Apr 21 02:46:06.356516 ignition[773]: op(1): [started] loading QEMU firmware config module Apr 21 02:46:06.356519 ignition[773]: op(1): executing: "modprobe" "qemu_fw_cfg" Apr 21 02:46:06.395122 ignition[773]: op(1): [finished] loading QEMU firmware config module Apr 21 02:46:06.412267 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
Apr 21 02:46:06.418993 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 21 02:46:06.475597 systemd-networkd[878]: lo: Link UP Apr 21 02:46:06.475638 systemd-networkd[878]: lo: Gained carrier Apr 21 02:46:06.476522 systemd-networkd[878]: Enumeration completed Apr 21 02:46:06.477351 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 21 02:46:06.478618 systemd-networkd[878]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 21 02:46:06.478620 systemd-networkd[878]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 21 02:46:06.479648 systemd-networkd[878]: eth0: Link UP Apr 21 02:46:06.479735 systemd-networkd[878]: eth0: Gained carrier Apr 21 02:46:06.479742 systemd-networkd[878]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 21 02:46:06.481934 systemd[1]: Reached target network.target - Network. Apr 21 02:46:06.535380 systemd-networkd[878]: eth0: DHCPv4 address 10.0.0.33/16, gateway 10.0.0.1 acquired from 10.0.0.1 Apr 21 02:46:06.845495 ignition[773]: parsing config with SHA512: 81a6adbc285dbc5336e164bfefe41fdb0c61434e228b9eaa65ca97b6febc0f7314ff1b72e292088916b5c939954916c541a50b9e0d3f4d675bc0997a4860f21d Apr 21 02:46:06.896995 unknown[773]: fetched base config from "system" Apr 21 02:46:06.897046 unknown[773]: fetched user config from "qemu" Apr 21 02:46:06.905343 ignition[773]: fetch-offline: fetch-offline passed Apr 21 02:46:06.905441 ignition[773]: Ignition finished successfully Apr 21 02:46:06.914181 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Apr 21 02:46:06.914753 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). 
Apr 21 02:46:06.915948 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Apr 21 02:46:07.007065 ignition[883]: Ignition 2.22.0 Apr 21 02:46:07.007152 ignition[883]: Stage: kargs Apr 21 02:46:07.007515 ignition[883]: no configs at "/usr/lib/ignition/base.d" Apr 21 02:46:07.007526 ignition[883]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 21 02:46:07.008660 ignition[883]: kargs: kargs passed Apr 21 02:46:07.008716 ignition[883]: Ignition finished successfully Apr 21 02:46:07.035473 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Apr 21 02:46:07.050043 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Apr 21 02:46:07.113075 ignition[891]: Ignition 2.22.0 Apr 21 02:46:07.113161 ignition[891]: Stage: disks Apr 21 02:46:07.113537 ignition[891]: no configs at "/usr/lib/ignition/base.d" Apr 21 02:46:07.113549 ignition[891]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 21 02:46:07.114641 ignition[891]: disks: disks passed Apr 21 02:46:07.114692 ignition[891]: Ignition finished successfully Apr 21 02:46:07.133454 systemd[1]: Finished ignition-disks.service - Ignition (disks). Apr 21 02:46:07.140740 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Apr 21 02:46:07.152820 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 21 02:46:07.165582 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 21 02:46:07.177518 systemd[1]: Reached target sysinit.target - System Initialization. Apr 21 02:46:07.190359 systemd[1]: Reached target basic.target - Basic System. Apr 21 02:46:07.206116 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Apr 21 02:46:07.260059 systemd-fsck[901]: ROOT: clean, 15/553520 files, 52789/553472 blocks Apr 21 02:46:07.269184 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. 
Apr 21 02:46:07.271442 systemd[1]: Mounting sysroot.mount - /sysroot... Apr 21 02:46:07.644614 kernel: EXT4-fs (vda9): mounted filesystem 146ef5ea-4935-456e-a7a6-cf0210fee567 r/w with ordered data mode. Quota mode: none. Apr 21 02:46:07.646516 systemd[1]: Mounted sysroot.mount - /sysroot. Apr 21 02:46:07.658562 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Apr 21 02:46:07.675562 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 21 02:46:07.700593 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Apr 21 02:46:07.737101 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (909) Apr 21 02:46:07.737135 kernel: BTRFS info (device vda6): first mount of filesystem c8a1e0fd-1038-473a-a82d-f70d62b109dc Apr 21 02:46:07.737148 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 21 02:46:07.737806 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Apr 21 02:46:07.737893 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Apr 21 02:46:07.778350 kernel: BTRFS info (device vda6): turning on async discard Apr 21 02:46:07.778398 kernel: BTRFS info (device vda6): enabling free space tree Apr 21 02:46:07.737932 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Apr 21 02:46:07.766515 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Apr 21 02:46:07.782365 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Apr 21 02:46:07.803015 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Apr 21 02:46:07.903045 initrd-setup-root[933]: cut: /sysroot/etc/passwd: No such file or directory Apr 21 02:46:07.913739 initrd-setup-root[940]: cut: /sysroot/etc/group: No such file or directory Apr 21 02:46:07.923385 initrd-setup-root[947]: cut: /sysroot/etc/shadow: No such file or directory Apr 21 02:46:07.937919 initrd-setup-root[954]: cut: /sysroot/etc/gshadow: No such file or directory Apr 21 02:46:08.128607 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Apr 21 02:46:08.130879 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Apr 21 02:46:08.160556 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Apr 21 02:46:08.176940 systemd[1]: sysroot-oem.mount: Deactivated successfully. Apr 21 02:46:08.188119 kernel: BTRFS info (device vda6): last unmount of filesystem c8a1e0fd-1038-473a-a82d-f70d62b109dc Apr 21 02:46:08.181937 systemd-networkd[878]: eth0: Gained IPv6LL Apr 21 02:46:08.207692 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Apr 21 02:46:08.236637 ignition[1022]: INFO : Ignition 2.22.0 Apr 21 02:46:08.236637 ignition[1022]: INFO : Stage: mount Apr 21 02:46:08.242201 ignition[1022]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 21 02:46:08.242201 ignition[1022]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 21 02:46:08.242201 ignition[1022]: INFO : mount: mount passed Apr 21 02:46:08.242201 ignition[1022]: INFO : Ignition finished successfully Apr 21 02:46:08.259478 systemd[1]: Finished ignition-mount.service - Ignition (mount). Apr 21 02:46:08.268620 systemd[1]: Starting ignition-files.service - Ignition (files)... Apr 21 02:46:08.644796 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Apr 21 02:46:08.675451 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1035) Apr 21 02:46:08.675516 kernel: BTRFS info (device vda6): first mount of filesystem c8a1e0fd-1038-473a-a82d-f70d62b109dc Apr 21 02:46:08.675532 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 21 02:46:08.690504 kernel: BTRFS info (device vda6): turning on async discard Apr 21 02:46:08.690576 kernel: BTRFS info (device vda6): enabling free space tree Apr 21 02:46:08.692535 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Apr 21 02:46:08.739729 ignition[1052]: INFO : Ignition 2.22.0 Apr 21 02:46:08.739729 ignition[1052]: INFO : Stage: files Apr 21 02:46:08.739729 ignition[1052]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 21 02:46:08.739729 ignition[1052]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 21 02:46:08.762765 ignition[1052]: DEBUG : files: compiled without relabeling support, skipping Apr 21 02:46:08.762765 ignition[1052]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Apr 21 02:46:08.762765 ignition[1052]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Apr 21 02:46:08.762765 ignition[1052]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Apr 21 02:46:08.762765 ignition[1052]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Apr 21 02:46:08.762765 ignition[1052]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Apr 21 02:46:08.762765 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Apr 21 02:46:08.762765 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Apr 21 02:46:08.746074 unknown[1052]: wrote ssh authorized keys file for user: core Apr 
21 02:46:08.855766 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Apr 21 02:46:08.992445 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Apr 21 02:46:08.992445 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Apr 21 02:46:09.008171 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Apr 21 02:46:09.233680 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Apr 21 02:46:09.303880 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Apr 21 02:46:09.303880 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Apr 21 02:46:09.318866 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Apr 21 02:46:09.318866 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Apr 21 02:46:09.333966 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Apr 21 02:46:09.333966 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 21 02:46:09.333966 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 21 02:46:09.333966 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 21 02:46:09.333966 
ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 21 02:46:09.333966 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Apr 21 02:46:09.333966 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Apr 21 02:46:09.333966 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw" Apr 21 02:46:09.333966 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw" Apr 21 02:46:09.333966 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw" Apr 21 02:46:09.333966 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.4-x86-64.raw: attempt #1 Apr 21 02:46:09.565543 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Apr 21 02:46:09.820743 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw" Apr 21 02:46:09.820743 ignition[1052]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Apr 21 02:46:09.836342 ignition[1052]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 21 02:46:09.844560 ignition[1052]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at 
"/sysroot/etc/systemd/system/prepare-helm.service" Apr 21 02:46:09.844560 ignition[1052]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Apr 21 02:46:09.844560 ignition[1052]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Apr 21 02:46:09.844560 ignition[1052]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Apr 21 02:46:09.844560 ignition[1052]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Apr 21 02:46:09.844560 ignition[1052]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Apr 21 02:46:09.844560 ignition[1052]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Apr 21 02:46:09.895533 ignition[1052]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Apr 21 02:46:09.895533 ignition[1052]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Apr 21 02:46:09.895533 ignition[1052]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Apr 21 02:46:09.895533 ignition[1052]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Apr 21 02:46:09.895533 ignition[1052]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Apr 21 02:46:09.895533 ignition[1052]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Apr 21 02:46:09.895533 ignition[1052]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Apr 21 02:46:09.895533 ignition[1052]: INFO : files: files passed Apr 21 02:46:09.895533 ignition[1052]: INFO : Ignition finished successfully Apr 21 02:46:09.867874 
systemd[1]: Finished ignition-files.service - Ignition (files). Apr 21 02:46:09.948034 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Apr 21 02:46:09.952756 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Apr 21 02:46:09.989187 systemd[1]: ignition-quench.service: Deactivated successfully. Apr 21 02:46:09.989458 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Apr 21 02:46:10.002789 initrd-setup-root-after-ignition[1080]: grep: /sysroot/oem/oem-release: No such file or directory Apr 21 02:46:10.008462 initrd-setup-root-after-ignition[1083]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 21 02:46:10.008462 initrd-setup-root-after-ignition[1083]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Apr 21 02:46:10.021200 initrd-setup-root-after-ignition[1087]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 21 02:46:10.021619 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 21 02:46:10.027971 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Apr 21 02:46:10.038598 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Apr 21 02:46:10.118065 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Apr 21 02:46:10.118407 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Apr 21 02:46:10.122867 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Apr 21 02:46:10.132871 systemd[1]: Reached target initrd.target - Initrd Default Target. Apr 21 02:46:10.141066 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Apr 21 02:46:10.141845 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... 
Apr 21 02:46:10.185693 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 21 02:46:10.196289 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Apr 21 02:46:10.235974 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Apr 21 02:46:10.241388 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 21 02:46:10.246625 systemd[1]: Stopped target timers.target - Timer Units. Apr 21 02:46:10.251454 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Apr 21 02:46:10.251607 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 21 02:46:10.268609 systemd[1]: Stopped target initrd.target - Initrd Default Target. Apr 21 02:46:10.276717 systemd[1]: Stopped target basic.target - Basic System. Apr 21 02:46:10.285353 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Apr 21 02:46:10.298037 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Apr 21 02:46:10.308331 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Apr 21 02:46:10.318775 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Apr 21 02:46:10.332915 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Apr 21 02:46:10.344882 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Apr 21 02:46:10.351806 systemd[1]: Stopped target sysinit.target - System Initialization. Apr 21 02:46:10.360376 systemd[1]: Stopped target local-fs.target - Local File Systems. Apr 21 02:46:10.361334 systemd[1]: Stopped target swap.target - Swaps. Apr 21 02:46:10.371971 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Apr 21 02:46:10.372130 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Apr 21 02:46:10.383049 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. 
Apr 21 02:46:10.386791 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 21 02:46:10.394768 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Apr 21 02:46:10.394986 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 21 02:46:10.412729 systemd[1]: dracut-initqueue.service: Deactivated successfully. Apr 21 02:46:10.412837 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Apr 21 02:46:10.426757 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Apr 21 02:46:10.427111 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Apr 21 02:46:10.445923 systemd[1]: Stopped target paths.target - Path Units. Apr 21 02:46:10.457105 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Apr 21 02:46:10.465726 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 21 02:46:10.467868 systemd[1]: Stopped target slices.target - Slice Units. Apr 21 02:46:10.480495 systemd[1]: Stopped target sockets.target - Socket Units. Apr 21 02:46:10.493755 systemd[1]: iscsid.socket: Deactivated successfully. Apr 21 02:46:10.493926 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Apr 21 02:46:10.516034 systemd[1]: iscsiuio.socket: Deactivated successfully. Apr 21 02:46:10.516675 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 21 02:46:10.527049 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Apr 21 02:46:10.527394 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 21 02:46:10.538819 systemd[1]: ignition-files.service: Deactivated successfully. Apr 21 02:46:10.538976 systemd[1]: Stopped ignition-files.service - Ignition (files). Apr 21 02:46:10.545681 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... 
Apr 21 02:46:10.587654 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Apr 21 02:46:10.594677 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Apr 21 02:46:10.594993 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Apr 21 02:46:10.608545 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Apr 21 02:46:10.636795 ignition[1108]: INFO : Ignition 2.22.0 Apr 21 02:46:10.636795 ignition[1108]: INFO : Stage: umount Apr 21 02:46:10.636795 ignition[1108]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 21 02:46:10.636795 ignition[1108]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 21 02:46:10.636795 ignition[1108]: INFO : umount: umount passed Apr 21 02:46:10.636795 ignition[1108]: INFO : Ignition finished successfully Apr 21 02:46:10.608682 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Apr 21 02:46:10.639779 systemd[1]: ignition-mount.service: Deactivated successfully. Apr 21 02:46:10.639925 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Apr 21 02:46:10.650901 systemd[1]: Stopped target network.target - Network. Apr 21 02:46:10.668439 systemd[1]: ignition-disks.service: Deactivated successfully. Apr 21 02:46:10.668605 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Apr 21 02:46:10.673608 systemd[1]: ignition-kargs.service: Deactivated successfully. Apr 21 02:46:10.673742 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Apr 21 02:46:10.688843 systemd[1]: ignition-setup.service: Deactivated successfully. Apr 21 02:46:10.688926 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Apr 21 02:46:10.700731 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Apr 21 02:46:10.700807 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Apr 21 02:46:10.713939 systemd[1]: Stopping systemd-networkd.service - Network Configuration... 
Apr 21 02:46:10.732385 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Apr 21 02:46:10.744773 systemd[1]: sysroot-boot.mount: Deactivated successfully. Apr 21 02:46:10.748801 systemd[1]: systemd-resolved.service: Deactivated successfully. Apr 21 02:46:10.748978 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Apr 21 02:46:10.782120 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Apr 21 02:46:10.782979 systemd[1]: systemd-networkd.service: Deactivated successfully. Apr 21 02:46:10.783167 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Apr 21 02:46:10.798909 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Apr 21 02:46:10.799557 systemd[1]: initrd-cleanup.service: Deactivated successfully. Apr 21 02:46:10.799727 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Apr 21 02:46:10.811030 systemd[1]: sysroot-boot.service: Deactivated successfully. Apr 21 02:46:10.811295 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Apr 21 02:46:10.822563 systemd[1]: Stopped target network-pre.target - Preparation for Network. Apr 21 02:46:10.837643 systemd[1]: systemd-networkd.socket: Deactivated successfully. Apr 21 02:46:10.837708 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Apr 21 02:46:10.838578 systemd[1]: initrd-setup-root.service: Deactivated successfully. Apr 21 02:46:10.838746 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Apr 21 02:46:10.854536 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Apr 21 02:46:10.863290 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Apr 21 02:46:10.863432 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 21 02:46:10.874666 systemd[1]: systemd-sysctl.service: Deactivated successfully. 
Apr 21 02:46:10.874741 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 21 02:46:10.899429 systemd[1]: systemd-modules-load.service: Deactivated successfully. Apr 21 02:46:10.899515 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Apr 21 02:46:10.906388 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Apr 21 02:46:10.906474 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 21 02:46:10.941062 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 21 02:46:10.948848 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Apr 21 02:46:10.948932 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Apr 21 02:46:10.978662 systemd[1]: network-cleanup.service: Deactivated successfully. Apr 21 02:46:10.978835 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Apr 21 02:46:11.061481 systemd[1]: systemd-udevd.service: Deactivated successfully. Apr 21 02:46:11.061844 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 21 02:46:11.067601 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Apr 21 02:46:11.067649 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Apr 21 02:46:11.081083 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Apr 21 02:46:11.081137 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Apr 21 02:46:11.092404 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Apr 21 02:46:11.092490 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Apr 21 02:46:11.114535 systemd[1]: dracut-cmdline.service: Deactivated successfully. Apr 21 02:46:11.114621 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. 
Apr 21 02:46:11.131938 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 21 02:46:11.132022 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 21 02:46:11.169435 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Apr 21 02:46:11.169581 systemd[1]: systemd-network-generator.service: Deactivated successfully. Apr 21 02:46:11.169636 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Apr 21 02:46:11.202537 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Apr 21 02:46:11.202634 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 21 02:46:11.212756 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Apr 21 02:46:11.212798 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 21 02:46:11.233994 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Apr 21 02:46:11.234081 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Apr 21 02:46:11.243093 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 21 02:46:11.243128 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 21 02:46:11.254008 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Apr 21 02:46:11.254046 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully. Apr 21 02:46:11.254067 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Apr 21 02:46:11.254091 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Apr 21 02:46:11.254452 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. 
Apr 21 02:46:11.254554 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Apr 21 02:46:11.257180 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Apr 21 02:46:11.271743 systemd[1]: Starting initrd-switch-root.service - Switch Root... Apr 21 02:46:11.293793 systemd[1]: Switching root. Apr 21 02:46:11.350511 systemd-journald[205]: Journal stopped Apr 21 02:46:12.677447 systemd-journald[205]: Received SIGTERM from PID 1 (systemd). Apr 21 02:46:12.677534 kernel: SELinux: policy capability network_peer_controls=1 Apr 21 02:46:12.677551 kernel: SELinux: policy capability open_perms=1 Apr 21 02:46:12.677572 kernel: SELinux: policy capability extended_socket_class=1 Apr 21 02:46:12.677585 kernel: SELinux: policy capability always_check_network=0 Apr 21 02:46:12.677598 kernel: SELinux: policy capability cgroup_seclabel=1 Apr 21 02:46:12.677612 kernel: SELinux: policy capability nnp_nosuid_transition=1 Apr 21 02:46:12.677625 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Apr 21 02:46:12.677636 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Apr 21 02:46:12.677650 kernel: SELinux: policy capability userspace_initial_context=0 Apr 21 02:46:12.677663 kernel: audit: type=1403 audit(1776739571.540:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Apr 21 02:46:12.677681 systemd[1]: Successfully loaded SELinux policy in 61.655ms. Apr 21 02:46:12.677701 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 4.255ms. Apr 21 02:46:12.677716 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Apr 21 02:46:12.677732 systemd[1]: Detected virtualization kvm. Apr 21 02:46:12.677746 systemd[1]: Detected architecture x86-64. 
Apr 21 02:46:12.677760 systemd[1]: Detected first boot. Apr 21 02:46:12.677773 systemd[1]: Initializing machine ID from VM UUID. Apr 21 02:46:12.677789 zram_generator::config[1152]: No configuration found. Apr 21 02:46:12.677807 kernel: Guest personality initialized and is inactive Apr 21 02:46:12.677821 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Apr 21 02:46:12.677833 kernel: Initialized host personality Apr 21 02:46:12.677846 kernel: NET: Registered PF_VSOCK protocol family Apr 21 02:46:12.677858 systemd[1]: Populated /etc with preset unit settings. Apr 21 02:46:12.677873 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Apr 21 02:46:12.677887 systemd[1]: initrd-switch-root.service: Deactivated successfully. Apr 21 02:46:12.677903 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Apr 21 02:46:12.677917 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Apr 21 02:46:12.677931 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Apr 21 02:46:12.677944 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Apr 21 02:46:12.677958 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Apr 21 02:46:12.677970 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Apr 21 02:46:12.677984 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Apr 21 02:46:12.677997 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Apr 21 02:46:12.678012 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Apr 21 02:46:12.678027 systemd[1]: Created slice user.slice - User and Session Slice. Apr 21 02:46:12.678042 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Apr 21 02:46:12.678055 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 21 02:46:12.678068 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Apr 21 02:46:12.678081 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Apr 21 02:46:12.678097 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Apr 21 02:46:12.678111 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 21 02:46:12.678126 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Apr 21 02:46:12.678139 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 21 02:46:12.678153 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 21 02:46:12.678166 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Apr 21 02:46:12.678179 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Apr 21 02:46:12.678192 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Apr 21 02:46:12.678404 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Apr 21 02:46:12.678446 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 21 02:46:12.678459 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 21 02:46:12.678473 systemd[1]: Reached target slices.target - Slice Units. Apr 21 02:46:12.678490 systemd[1]: Reached target swap.target - Swaps. Apr 21 02:46:12.678505 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Apr 21 02:46:12.678520 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Apr 21 02:46:12.678534 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. 
Apr 21 02:46:12.678548 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 21 02:46:12.678562 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 21 02:46:12.678576 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 21 02:46:12.678589 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Apr 21 02:46:12.678604 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Apr 21 02:46:12.678621 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Apr 21 02:46:12.678636 systemd[1]: Mounting media.mount - External Media Directory... Apr 21 02:46:12.678650 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 21 02:46:12.678665 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Apr 21 02:46:12.678680 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Apr 21 02:46:12.678694 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Apr 21 02:46:12.678708 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Apr 21 02:46:12.678723 systemd[1]: Reached target machines.target - Containers. Apr 21 02:46:12.678739 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Apr 21 02:46:12.678753 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 21 02:46:12.678768 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 21 02:46:12.678783 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Apr 21 02:46:12.678797 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
Apr 21 02:46:12.678811 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 21 02:46:12.678825 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 21 02:46:12.678838 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 21 02:46:12.678853 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 21 02:46:12.678870 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 21 02:46:12.678885 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Apr 21 02:46:12.678898 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Apr 21 02:46:12.678911 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Apr 21 02:46:12.678933 systemd[1]: Stopped systemd-fsck-usr.service.
Apr 21 02:46:12.678948 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Apr 21 02:46:12.678962 kernel: ACPI: bus type drm_connector registered
Apr 21 02:46:12.678977 kernel: loop: module loaded
Apr 21 02:46:12.678992 kernel: fuse: init (API version 7.41)
Apr 21 02:46:12.679005 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 21 02:46:12.679019 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 21 02:46:12.679032 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 21 02:46:12.679047 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 21 02:46:12.679091 systemd-journald[1237]: Collecting audit messages is disabled.
Apr 21 02:46:12.679132 systemd-journald[1237]: Journal started
Apr 21 02:46:12.679161 systemd-journald[1237]: Runtime Journal (/run/log/journal/bac0659afdd74e8a9e392612032df7d2) is 6M, max 48.1M, 42.1M free.
Apr 21 02:46:12.027497 systemd[1]: Queued start job for default target multi-user.target.
Apr 21 02:46:12.041822 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Apr 21 02:46:12.042598 systemd[1]: systemd-journald.service: Deactivated successfully.
Apr 21 02:46:12.043030 systemd[1]: systemd-journald.service: Consumed 1.095s CPU time.
Apr 21 02:46:12.704426 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Apr 21 02:46:12.716408 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 21 02:46:12.728067 systemd[1]: verity-setup.service: Deactivated successfully.
Apr 21 02:46:12.728118 systemd[1]: Stopped verity-setup.service.
Apr 21 02:46:12.741384 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 21 02:46:12.753618 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 21 02:46:12.754177 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 21 02:46:12.758615 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 21 02:46:12.762991 systemd[1]: Mounted media.mount - External Media Directory.
Apr 21 02:46:12.766961 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 21 02:46:12.771517 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 21 02:46:12.775923 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 21 02:46:12.780389 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 21 02:46:12.785732 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 21 02:46:12.790993 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 21 02:46:12.791934 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 21 02:46:12.796812 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 21 02:46:12.797685 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 21 02:46:12.802709 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 21 02:46:12.803480 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 21 02:46:12.808831 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 21 02:46:12.809770 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 21 02:46:12.815654 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 21 02:46:12.815964 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 21 02:46:12.822402 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 21 02:46:12.822750 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 21 02:46:12.828049 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 21 02:46:12.833046 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 21 02:46:12.838459 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 21 02:46:12.844163 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Apr 21 02:46:12.850132 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 21 02:46:12.864539 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 21 02:46:12.870107 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 21 02:46:12.876046 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 21 02:46:12.880676 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 21 02:46:12.880744 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 21 02:46:12.886036 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Apr 21 02:46:12.893827 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 21 02:46:12.898663 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 21 02:46:12.899686 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 21 02:46:12.906821 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 21 02:46:12.911764 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 21 02:46:12.913358 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 21 02:46:12.918517 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 21 02:46:12.920411 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 21 02:46:12.920653 systemd-journald[1237]: Time spent on flushing to /var/log/journal/bac0659afdd74e8a9e392612032df7d2 is 34.516ms for 1079 entries.
Apr 21 02:46:12.920653 systemd-journald[1237]: System Journal (/var/log/journal/bac0659afdd74e8a9e392612032df7d2) is 8M, max 195.6M, 187.6M free.
Apr 21 02:46:12.971008 systemd-journald[1237]: Received client request to flush runtime journal.
Apr 21 02:46:12.971045 kernel: loop0: detected capacity change from 0 to 219192
Apr 21 02:46:12.933073 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 21 02:46:12.943558 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 21 02:46:12.949961 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 21 02:46:12.955440 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 21 02:46:12.962445 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 21 02:46:12.970115 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 21 02:46:12.977007 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Apr 21 02:46:12.982108 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 21 02:46:12.989873 systemd-tmpfiles[1273]: ACLs are not supported, ignoring.
Apr 21 02:46:12.989919 systemd-tmpfiles[1273]: ACLs are not supported, ignoring.
Apr 21 02:46:12.991811 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 21 02:46:13.004816 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 21 02:46:13.010304 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 21 02:46:13.015708 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 21 02:46:13.031041 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Apr 21 02:46:13.044115 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 21 02:46:13.049279 kernel: loop1: detected capacity change from 0 to 110984
Apr 21 02:46:13.061012 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 21 02:46:13.067422 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 21 02:46:13.096201 systemd-tmpfiles[1294]: ACLs are not supported, ignoring.
Apr 21 02:46:13.096634 systemd-tmpfiles[1294]: ACLs are not supported, ignoring.
Apr 21 02:46:13.099494 kernel: loop2: detected capacity change from 0 to 128560
Apr 21 02:46:13.100007 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 21 02:46:13.134439 kernel: loop3: detected capacity change from 0 to 219192
Apr 21 02:46:13.155290 kernel: loop4: detected capacity change from 0 to 110984
Apr 21 02:46:13.175368 kernel: loop5: detected capacity change from 0 to 128560
Apr 21 02:46:13.193632 (sd-merge)[1298]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Apr 21 02:46:13.193971 (sd-merge)[1298]: Merged extensions into '/usr'.
Apr 21 02:46:13.198975 systemd[1]: Reload requested from client PID 1272 ('systemd-sysext') (unit systemd-sysext.service)...
Apr 21 02:46:13.199071 systemd[1]: Reloading...
Apr 21 02:46:13.254396 zram_generator::config[1321]: No configuration found.
Apr 21 02:46:13.321119 ldconfig[1267]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 21 02:46:13.407830 systemd[1]: Reloading finished in 207 ms.
Apr 21 02:46:13.427600 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Apr 21 02:46:13.432687 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 21 02:46:13.437983 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 21 02:46:13.462509 systemd[1]: Starting ensure-sysext.service...
Apr 21 02:46:13.467692 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 21 02:46:13.474594 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 21 02:46:13.488676 systemd[1]: Reload requested from client PID 1363 ('systemctl') (unit ensure-sysext.service)...
Apr 21 02:46:13.488721 systemd[1]: Reloading...
Apr 21 02:46:13.495799 systemd-tmpfiles[1364]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Apr 21 02:46:13.495881 systemd-tmpfiles[1364]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Apr 21 02:46:13.496128 systemd-tmpfiles[1364]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 21 02:46:13.496568 systemd-tmpfiles[1364]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 21 02:46:13.497506 systemd-tmpfiles[1364]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 21 02:46:13.497817 systemd-tmpfiles[1364]: ACLs are not supported, ignoring.
Apr 21 02:46:13.497923 systemd-tmpfiles[1364]: ACLs are not supported, ignoring.
Apr 21 02:46:13.501180 systemd-tmpfiles[1364]: Detected autofs mount point /boot during canonicalization of boot.
Apr 21 02:46:13.501189 systemd-tmpfiles[1364]: Skipping /boot
Apr 21 02:46:13.501617 systemd-udevd[1365]: Using default interface naming scheme 'v255'.
Apr 21 02:46:13.508724 systemd-tmpfiles[1364]: Detected autofs mount point /boot during canonicalization of boot.
Apr 21 02:46:13.508796 systemd-tmpfiles[1364]: Skipping /boot
Apr 21 02:46:13.545363 zram_generator::config[1388]: No configuration found.
Apr 21 02:46:13.652400 kernel: mousedev: PS/2 mouse device common for all mice
Apr 21 02:46:13.667303 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Apr 21 02:46:13.685592 kernel: ACPI: button: Power Button [PWRF]
Apr 21 02:46:13.693459 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Apr 21 02:46:13.693653 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Apr 21 02:46:13.700348 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Apr 21 02:46:13.753783 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Apr 21 02:46:13.754088 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Apr 21 02:46:13.759526 systemd[1]: Reloading finished in 270 ms.
Apr 21 02:46:13.767766 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 21 02:46:13.774692 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 21 02:46:13.892639 systemd[1]: Finished ensure-sysext.service.
Apr 21 02:46:13.975022 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 21 02:46:13.982905 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Apr 21 02:46:13.990815 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 21 02:46:13.995618 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 21 02:46:14.025644 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 21 02:46:14.033386 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 21 02:46:14.038854 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 21 02:46:14.045103 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 21 02:46:14.049833 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 21 02:46:14.052383 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Apr 21 02:46:14.057292 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Apr 21 02:46:14.058128 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Apr 21 02:46:14.066597 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 21 02:46:14.075952 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 21 02:46:14.082715 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Apr 21 02:46:14.093160 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Apr 21 02:46:14.096525 augenrules[1516]: No rules
Apr 21 02:46:14.110835 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 21 02:46:14.115600 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 21 02:46:14.116371 systemd[1]: audit-rules.service: Deactivated successfully.
Apr 21 02:46:14.116573 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Apr 21 02:46:14.120820 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Apr 21 02:46:14.127179 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 21 02:46:14.127727 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 21 02:46:14.132472 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 21 02:46:14.132746 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 21 02:46:14.137100 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 21 02:46:14.137477 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 21 02:46:14.137781 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 21 02:46:14.138048 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 21 02:46:14.139302 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Apr 21 02:46:14.141348 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Apr 21 02:46:14.148819 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Apr 21 02:46:14.150575 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 21 02:46:14.150664 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 21 02:46:14.151661 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Apr 21 02:46:14.153069 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Apr 21 02:46:14.153192 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 21 02:46:14.176818 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Apr 21 02:46:14.203670 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Apr 21 02:46:14.213033 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 21 02:46:14.263856 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Apr 21 02:46:14.266482 systemd-networkd[1505]: lo: Link UP
Apr 21 02:46:14.266488 systemd-networkd[1505]: lo: Gained carrier
Apr 21 02:46:14.267619 systemd-networkd[1505]: Enumeration completed
Apr 21 02:46:14.268633 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 21 02:46:14.268897 systemd-networkd[1505]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 21 02:46:14.268942 systemd-networkd[1505]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 21 02:46:14.269880 systemd-networkd[1505]: eth0: Link UP
Apr 21 02:46:14.270405 systemd-networkd[1505]: eth0: Gained carrier
Apr 21 02:46:14.270503 systemd-networkd[1505]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 21 02:46:14.273016 systemd[1]: Reached target time-set.target - System Time Set.
Apr 21 02:46:14.275753 systemd-resolved[1510]: Positive Trust Anchors:
Apr 21 02:46:14.275813 systemd-resolved[1510]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 21 02:46:14.275838 systemd-resolved[1510]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 21 02:46:14.278181 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Apr 21 02:46:14.279142 systemd-resolved[1510]: Defaulting to hostname 'linux'.
Apr 21 02:46:14.283664 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Apr 21 02:46:14.288943 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 21 02:46:14.293459 systemd[1]: Reached target network.target - Network.
Apr 21 02:46:14.297002 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 21 02:46:14.301667 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 21 02:46:14.305779 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Apr 21 02:46:14.306398 systemd-networkd[1505]: eth0: DHCPv4 address 10.0.0.33/16, gateway 10.0.0.1 acquired from 10.0.0.1
Apr 21 02:46:14.306931 systemd-timesyncd[1512]: Network configuration changed, trying to establish connection.
Apr 21 02:46:15.069517 systemd-resolved[1510]: Clock change detected. Flushing caches.
Apr 21 02:46:15.069547 systemd-timesyncd[1512]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Apr 21 02:46:15.069581 systemd-timesyncd[1512]: Initial clock synchronization to Tue 2026-04-21 02:46:15.069453 UTC.
Apr 21 02:46:15.072401 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Apr 21 02:46:15.077319 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Apr 21 02:46:15.082293 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Apr 21 02:46:15.086628 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Apr 21 02:46:15.091669 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Apr 21 02:46:15.096531 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Apr 21 02:46:15.096593 systemd[1]: Reached target paths.target - Path Units.
Apr 21 02:46:15.100207 systemd[1]: Reached target timers.target - Timer Units.
Apr 21 02:46:15.104791 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Apr 21 02:46:15.110560 systemd[1]: Starting docker.socket - Docker Socket for the API...
Apr 21 02:46:15.115955 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Apr 21 02:46:15.121283 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Apr 21 02:46:15.126242 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Apr 21 02:46:15.132655 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Apr 21 02:46:15.136953 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Apr 21 02:46:15.142771 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Apr 21 02:46:15.148295 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Apr 21 02:46:15.154661 systemd[1]: Reached target sockets.target - Socket Units.
Apr 21 02:46:15.158468 systemd[1]: Reached target basic.target - Basic System.
Apr 21 02:46:15.162449 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Apr 21 02:46:15.162506 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Apr 21 02:46:15.163434 systemd[1]: Starting containerd.service - containerd container runtime...
Apr 21 02:46:15.178498 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Apr 21 02:46:15.182971 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 21 02:46:15.188523 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Apr 21 02:46:15.193459 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Apr 21 02:46:15.197400 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Apr 21 02:46:15.204170 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Apr 21 02:46:15.205753 jq[1557]: false
Apr 21 02:46:15.209171 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Apr 21 02:46:15.213842 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Apr 21 02:46:15.215390 google_oslogin_nss_cache[1559]: oslogin_cache_refresh[1559]: Refreshing passwd entry cache
Apr 21 02:46:15.215392 oslogin_cache_refresh[1559]: Refreshing passwd entry cache
Apr 21 02:46:15.218662 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Apr 21 02:46:15.224137 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Apr 21 02:46:15.227058 extend-filesystems[1558]: Found /dev/vda6
Apr 21 02:46:15.230163 systemd[1]: Starting systemd-logind.service - User Login Management...
Apr 21 02:46:15.229672 oslogin_cache_refresh[1559]: Failure getting users, quitting
Apr 21 02:46:15.231379 google_oslogin_nss_cache[1559]: oslogin_cache_refresh[1559]: Failure getting users, quitting
Apr 21 02:46:15.231379 google_oslogin_nss_cache[1559]: oslogin_cache_refresh[1559]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Apr 21 02:46:15.231379 google_oslogin_nss_cache[1559]: oslogin_cache_refresh[1559]: Refreshing group entry cache
Apr 21 02:46:15.229683 oslogin_cache_refresh[1559]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Apr 21 02:46:15.229711 oslogin_cache_refresh[1559]: Refreshing group entry cache
Apr 21 02:46:15.233679 extend-filesystems[1558]: Found /dev/vda9
Apr 21 02:46:15.239254 extend-filesystems[1558]: Checking size of /dev/vda9
Apr 21 02:46:15.236304 oslogin_cache_refresh[1559]: Failure getting groups, quitting
Apr 21 02:46:15.247155 google_oslogin_nss_cache[1559]: oslogin_cache_refresh[1559]: Failure getting groups, quitting
Apr 21 02:46:15.247155 google_oslogin_nss_cache[1559]: oslogin_cache_refresh[1559]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Apr 21 02:46:15.235834 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Apr 21 02:46:15.236311 oslogin_cache_refresh[1559]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Apr 21 02:46:15.237119 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Apr 21 02:46:15.237582 systemd[1]: Starting update-engine.service - Update Engine...
Apr 21 02:46:15.244250 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Apr 21 02:46:15.254332 extend-filesystems[1558]: Resized partition /dev/vda9
Apr 21 02:46:15.260653 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 21 02:46:15.263805 extend-filesystems[1585]: resize2fs 1.47.3 (8-Jul-2025)
Apr 21 02:46:15.266296 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Apr 21 02:46:15.275142 update_engine[1575]: I20260421 02:46:15.272320 1575 main.cc:92] Flatcar Update Engine starting
Apr 21 02:46:15.275269 jq[1578]: true
Apr 21 02:46:15.266463 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Apr 21 02:46:15.266631 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Apr 21 02:46:15.266778 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Apr 21 02:46:15.271364 systemd[1]: motdgen.service: Deactivated successfully.
Apr 21 02:46:15.271529 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Apr 21 02:46:15.280202 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Apr 21 02:46:15.284786 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Apr 21 02:46:15.285493 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Apr 21 02:46:15.311142 jq[1589]: true
Apr 21 02:46:15.317252 (ntainerd)[1598]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Apr 21 02:46:15.335807 dbus-daemon[1555]: [system] SELinux support is enabled
Apr 21 02:46:15.336868 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Apr 21 02:46:15.341436 update_engine[1575]: I20260421 02:46:15.341351 1575 update_check_scheduler.cc:74] Next update check in 2m0s
Apr 21 02:46:15.345393 tar[1588]: linux-amd64/LICENSE
Apr 21 02:46:15.345379 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Apr 21 02:46:15.345609 tar[1588]: linux-amd64/helm
Apr 21 02:46:15.345399 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Apr 21 02:46:15.348171 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Apr 21 02:46:15.352802 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Apr 21 02:46:15.376527 extend-filesystems[1585]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Apr 21 02:46:15.376527 extend-filesystems[1585]: old_desc_blocks = 1, new_desc_blocks = 1
Apr 21 02:46:15.376527 extend-filesystems[1585]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Apr 21 02:46:15.352819 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Apr 21 02:46:15.406145 sshd_keygen[1581]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Apr 21 02:46:15.406216 extend-filesystems[1558]: Resized filesystem in /dev/vda9
Apr 21 02:46:15.420790 bash[1617]: Updated "/home/core/.ssh/authorized_keys"
Apr 21 02:46:15.358482 systemd[1]: Started update-engine.service - Update Engine.
Apr 21 02:46:15.364253 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Apr 21 02:46:15.379331 systemd[1]: extend-filesystems.service: Deactivated successfully.
Apr 21 02:46:15.380472 systemd-logind[1573]: Watching system buttons on /dev/input/event2 (Power Button)
Apr 21 02:46:15.380486 systemd-logind[1573]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Apr 21 02:46:15.381801 systemd-logind[1573]: New seat seat0.
Apr 21 02:46:15.385338 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Apr 21 02:46:15.395522 systemd[1]: Started systemd-logind.service - User Login Management.
Apr 21 02:46:15.405319 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Apr 21 02:46:15.419570 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Apr 21 02:46:15.432342 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Apr 21 02:46:15.433969 locksmithd[1616]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Apr 21 02:46:15.438855 systemd[1]: Starting issuegen.service - Generate /run/issue...
Apr 21 02:46:15.463630 systemd[1]: issuegen.service: Deactivated successfully.
Apr 21 02:46:15.463910 systemd[1]: Finished issuegen.service - Generate /run/issue.
Apr 21 02:46:15.469931 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Apr 21 02:46:15.490686 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Apr 21 02:46:15.496753 systemd[1]: Started getty@tty1.service - Getty on tty1.
Apr 21 02:46:15.503293 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Apr 21 02:46:15.507962 systemd[1]: Reached target getty.target - Login Prompts.
Apr 21 02:46:15.522549 containerd[1598]: time="2026-04-21T02:46:15Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Apr 21 02:46:15.523956 containerd[1598]: time="2026-04-21T02:46:15.523904649Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7
Apr 21 02:46:15.533202 containerd[1598]: time="2026-04-21T02:46:15.533073993Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="8.2µs"
Apr 21 02:46:15.533202 containerd[1598]: time="2026-04-21T02:46:15.533138628Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Apr 21 02:46:15.533202 containerd[1598]: time="2026-04-21T02:46:15.533151532Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Apr 21 02:46:15.533322 containerd[1598]: time="2026-04-21T02:46:15.533268313Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Apr 21 02:46:15.533322 containerd[1598]: time="2026-04-21T02:46:15.533281056Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Apr 21 02:46:15.533322 containerd[1598]: time="2026-04-21T02:46:15.533300623Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Apr 21 02:46:15.533360 containerd[1598]: time="2026-04-21T02:46:15.533333260Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Apr 21 02:46:15.533360 containerd[1598]: time="2026-04-21T02:46:15.533341561Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Apr 21 02:46:15.533558 containerd[1598]: time="2026-04-21T02:46:15.533494020Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Apr 21 02:46:15.533558 containerd[1598]: time="2026-04-21T02:46:15.533549562Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Apr 21 02:46:15.533596 containerd[1598]: time="2026-04-21T02:46:15.533557711Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Apr 21 02:46:15.533596 containerd[1598]: time="2026-04-21T02:46:15.533563306Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Apr 21 02:46:15.533634 containerd[1598]: time="2026-04-21T02:46:15.533612329Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Apr 21 02:46:15.533791 containerd[1598]: time="2026-04-21T02:46:15.533732536Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Apr 21 02:46:15.533813 containerd[1598]: time="2026-04-21T02:46:15.533790928Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Apr 21 02:46:15.533813 containerd[1598]: time="2026-04-21T02:46:15.533798274Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Apr 21 02:46:15.533888 containerd[1598]: time="2026-04-21T02:46:15.533850115Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Apr 21 02:46:15.534207 containerd[1598]: time="2026-04-21T02:46:15.534079786Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Apr 21 02:46:15.534207 containerd[1598]: time="2026-04-21T02:46:15.534161019Z" level=info msg="metadata content store policy set" policy=shared
Apr 21 02:46:15.538836 containerd[1598]: time="2026-04-21T02:46:15.538697982Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Apr 21 02:46:15.538836 containerd[1598]: time="2026-04-21T02:46:15.538788212Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Apr 21 02:46:15.538836 containerd[1598]: time="2026-04-21T02:46:15.538798580Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Apr 21 02:46:15.538836 containerd[1598]: time="2026-04-21T02:46:15.538806246Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Apr 21 02:46:15.538836 containerd[1598]: time="2026-04-21T02:46:15.538814856Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Apr 21 02:46:15.538836 containerd[1598]: time="2026-04-21T02:46:15.538822064Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Apr 21 02:46:15.538836 containerd[1598]: time="2026-04-21T02:46:15.538830486Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Apr 21 02:46:15.538836 containerd[1598]: time="2026-04-21T02:46:15.538838822Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Apr 21 02:46:15.538836 containerd[1598]: time="2026-04-21T02:46:15.538846069Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Apr 21 02:46:15.539137 containerd[1598]: time="2026-04-21T02:46:15.538853282Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Apr 21 02:46:15.539137 containerd[1598]: time="2026-04-21T02:46:15.538860405Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Apr 21 02:46:15.539137 containerd[1598]: time="2026-04-21T02:46:15.538869239Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Apr 21 02:46:15.539137 containerd[1598]: time="2026-04-21T02:46:15.538945524Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Apr 21 02:46:15.539137 containerd[1598]: time="2026-04-21T02:46:15.538957365Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Apr 21 02:46:15.539137 containerd[1598]: time="2026-04-21T02:46:15.538967204Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Apr 21 02:46:15.539137 containerd[1598]: time="2026-04-21T02:46:15.539061967Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Apr 21 02:46:15.539137 containerd[1598]: time="2026-04-21T02:46:15.539074986Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Apr 21 02:46:15.539137 containerd[1598]: time="2026-04-21T02:46:15.539120513Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Apr 21 02:46:15.539137 containerd[1598]: time="2026-04-21T02:46:15.539129417Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Apr 21 02:46:15.539137 containerd[1598]: time="2026-04-21T02:46:15.539136303Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases
type=io.containerd.grpc.v1 Apr 21 02:46:15.539276 containerd[1598]: time="2026-04-21T02:46:15.539144788Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Apr 21 02:46:15.539276 containerd[1598]: time="2026-04-21T02:46:15.539151981Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Apr 21 02:46:15.539276 containerd[1598]: time="2026-04-21T02:46:15.539159766Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Apr 21 02:46:15.539276 containerd[1598]: time="2026-04-21T02:46:15.539190676Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Apr 21 02:46:15.539276 containerd[1598]: time="2026-04-21T02:46:15.539199327Z" level=info msg="Start snapshots syncer" Apr 21 02:46:15.539276 containerd[1598]: time="2026-04-21T02:46:15.539216081Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Apr 21 02:46:15.540214 containerd[1598]: time="2026-04-21T02:46:15.540048844Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Apr 21 02:46:15.540365 containerd[1598]: time="2026-04-21T02:46:15.540270225Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Apr 21 02:46:15.540365 containerd[1598]: time="2026-04-21T02:46:15.540313055Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Apr 21 02:46:15.540395 containerd[1598]: time="2026-04-21T02:46:15.540384021Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Apr 21 02:46:15.540409 containerd[1598]: time="2026-04-21T02:46:15.540402374Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Apr 21 02:46:15.540430 containerd[1598]: time="2026-04-21T02:46:15.540413679Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Apr 21 02:46:15.540430 containerd[1598]: time="2026-04-21T02:46:15.540425119Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Apr 21 02:46:15.540456 containerd[1598]: time="2026-04-21T02:46:15.540436190Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Apr 21 02:46:15.540456 containerd[1598]: time="2026-04-21T02:46:15.540447464Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Apr 21 02:46:15.540484 containerd[1598]: time="2026-04-21T02:46:15.540458381Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Apr 21 02:46:15.540484 containerd[1598]: time="2026-04-21T02:46:15.540478047Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Apr 21 02:46:15.540512 containerd[1598]: time="2026-04-21T02:46:15.540489025Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Apr 21 02:46:15.540512 containerd[1598]: time="2026-04-21T02:46:15.540499139Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Apr 21 02:46:15.540538 containerd[1598]: time="2026-04-21T02:46:15.540528269Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Apr 21 02:46:15.540551 containerd[1598]: time="2026-04-21T02:46:15.540541822Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Apr 21 02:46:15.540565 containerd[1598]: time="2026-04-21T02:46:15.540548654Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Apr 21 02:46:15.540565 containerd[1598]: time="2026-04-21T02:46:15.540557986Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Apr 21 02:46:15.540591 containerd[1598]: time="2026-04-21T02:46:15.540565693Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Apr 21 02:46:15.540591 containerd[1598]: time="2026-04-21T02:46:15.540574688Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Apr 21 02:46:15.540591 containerd[1598]: time="2026-04-21T02:46:15.540586145Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Apr 21 02:46:15.540630 containerd[1598]: time="2026-04-21T02:46:15.540599334Z" level=info msg="runtime interface created" Apr 21 02:46:15.540630 containerd[1598]: time="2026-04-21T02:46:15.540603817Z" level=info msg="created NRI interface" Apr 21 02:46:15.540630 containerd[1598]: time="2026-04-21T02:46:15.540613641Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Apr 21 02:46:15.540630 containerd[1598]: time="2026-04-21T02:46:15.540622448Z" level=info msg="Connect containerd service" Apr 21 02:46:15.540683 containerd[1598]: time="2026-04-21T02:46:15.540638809Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 21 02:46:15.542077 
containerd[1598]: time="2026-04-21T02:46:15.541828661Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 21 02:46:15.615116 containerd[1598]: time="2026-04-21T02:46:15.614683769Z" level=info msg="Start subscribing containerd event" Apr 21 02:46:15.615116 containerd[1598]: time="2026-04-21T02:46:15.614766072Z" level=info msg="Start recovering state" Apr 21 02:46:15.615116 containerd[1598]: time="2026-04-21T02:46:15.614774061Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 21 02:46:15.615116 containerd[1598]: time="2026-04-21T02:46:15.614854386Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 21 02:46:15.615116 containerd[1598]: time="2026-04-21T02:46:15.614874950Z" level=info msg="Start event monitor" Apr 21 02:46:15.615116 containerd[1598]: time="2026-04-21T02:46:15.614885439Z" level=info msg="Start cni network conf syncer for default" Apr 21 02:46:15.615116 containerd[1598]: time="2026-04-21T02:46:15.614893079Z" level=info msg="Start streaming server" Apr 21 02:46:15.615116 containerd[1598]: time="2026-04-21T02:46:15.614902505Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Apr 21 02:46:15.615116 containerd[1598]: time="2026-04-21T02:46:15.614907640Z" level=info msg="runtime interface starting up..." Apr 21 02:46:15.615116 containerd[1598]: time="2026-04-21T02:46:15.614913405Z" level=info msg="starting plugins..." Apr 21 02:46:15.615116 containerd[1598]: time="2026-04-21T02:46:15.614924079Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Apr 21 02:46:15.619712 containerd[1598]: time="2026-04-21T02:46:15.615590835Z" level=info msg="containerd successfully booted in 0.093357s" Apr 21 02:46:15.615268 systemd[1]: Started containerd.service - containerd container runtime. 
Apr 21 02:46:15.706402 tar[1588]: linux-amd64/README.md Apr 21 02:46:15.727496 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Apr 21 02:46:16.173511 systemd-networkd[1505]: eth0: Gained IPv6LL Apr 21 02:46:16.176296 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Apr 21 02:46:16.183431 systemd[1]: Reached target network-online.target - Network is Online. Apr 21 02:46:16.190355 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Apr 21 02:46:16.205543 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 21 02:46:16.210630 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Apr 21 02:46:16.236949 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Apr 21 02:46:16.241550 systemd[1]: coreos-metadata.service: Deactivated successfully. Apr 21 02:46:16.241676 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Apr 21 02:46:16.246782 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Apr 21 02:46:16.992226 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 21 02:46:16.997146 (kubelet)[1689]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 21 02:46:16.997750 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 21 02:46:17.003069 systemd[1]: Startup finished in 3.688s (kernel) + 8.763s (initrd) + 4.761s (userspace) = 17.214s. 
Apr 21 02:46:17.465403 kubelet[1689]: E0421 02:46:17.465360 1689 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 21 02:46:17.467667 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 21 02:46:17.467811 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 21 02:46:17.468219 systemd[1]: kubelet.service: Consumed 864ms CPU time, 258.4M memory peak. Apr 21 02:46:17.623657 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Apr 21 02:46:17.624843 systemd[1]: Started sshd@0-10.0.0.33:22-10.0.0.1:57046.service - OpenSSH per-connection server daemon (10.0.0.1:57046). Apr 21 02:46:17.711192 sshd[1702]: Accepted publickey for core from 10.0.0.1 port 57046 ssh2: RSA SHA256:9KUHGlOeyJC7ThH1mkUqpKUL96f7+Wj12eT7o4yHGwc Apr 21 02:46:17.712528 sshd-session[1702]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 02:46:17.719320 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 21 02:46:17.720199 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 21 02:46:17.726334 systemd-logind[1573]: New session 1 of user core. Apr 21 02:46:17.748230 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 21 02:46:17.750203 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 21 02:46:17.766947 (systemd)[1707]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 21 02:46:17.770130 systemd-logind[1573]: New session c1 of user core. Apr 21 02:46:17.877764 systemd[1707]: Queued start job for default target default.target. 
Apr 21 02:46:17.891958 systemd[1707]: Created slice app.slice - User Application Slice. Apr 21 02:46:17.892158 systemd[1707]: Reached target paths.target - Paths. Apr 21 02:46:17.892224 systemd[1707]: Reached target timers.target - Timers. Apr 21 02:46:17.893255 systemd[1707]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 21 02:46:17.905318 systemd[1707]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 21 02:46:17.905443 systemd[1707]: Reached target sockets.target - Sockets. Apr 21 02:46:17.905541 systemd[1707]: Reached target basic.target - Basic System. Apr 21 02:46:17.905596 systemd[1707]: Reached target default.target - Main User Target. Apr 21 02:46:17.905613 systemd[1707]: Startup finished in 128ms. Apr 21 02:46:17.905645 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 21 02:46:17.907358 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 21 02:46:17.916720 systemd[1]: Started sshd@1-10.0.0.33:22-10.0.0.1:57054.service - OpenSSH per-connection server daemon (10.0.0.1:57054). Apr 21 02:46:17.975887 sshd[1719]: Accepted publickey for core from 10.0.0.1 port 57054 ssh2: RSA SHA256:9KUHGlOeyJC7ThH1mkUqpKUL96f7+Wj12eT7o4yHGwc Apr 21 02:46:17.976753 sshd-session[1719]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 02:46:17.983062 systemd-logind[1573]: New session 2 of user core. Apr 21 02:46:17.999458 systemd[1]: Started session-2.scope - Session 2 of User core. Apr 21 02:46:18.016578 sshd[1722]: Connection closed by 10.0.0.1 port 57054 Apr 21 02:46:18.016949 sshd-session[1719]: pam_unix(sshd:session): session closed for user core Apr 21 02:46:18.055589 systemd[1]: sshd@1-10.0.0.33:22-10.0.0.1:57054.service: Deactivated successfully. Apr 21 02:46:18.057279 systemd[1]: session-2.scope: Deactivated successfully. Apr 21 02:46:18.058400 systemd-logind[1573]: Session 2 logged out. Waiting for processes to exit. 
Apr 21 02:46:18.060627 systemd[1]: Started sshd@2-10.0.0.33:22-10.0.0.1:57064.service - OpenSSH per-connection server daemon (10.0.0.1:57064). Apr 21 02:46:18.061652 systemd-logind[1573]: Removed session 2. Apr 21 02:46:18.121077 sshd[1728]: Accepted publickey for core from 10.0.0.1 port 57064 ssh2: RSA SHA256:9KUHGlOeyJC7ThH1mkUqpKUL96f7+Wj12eT7o4yHGwc Apr 21 02:46:18.122339 sshd-session[1728]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 02:46:18.127689 systemd-logind[1573]: New session 3 of user core. Apr 21 02:46:18.137448 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 21 02:46:18.146314 sshd[1731]: Connection closed by 10.0.0.1 port 57064 Apr 21 02:46:18.146423 sshd-session[1728]: pam_unix(sshd:session): session closed for user core Apr 21 02:46:18.162814 systemd[1]: sshd@2-10.0.0.33:22-10.0.0.1:57064.service: Deactivated successfully. Apr 21 02:46:18.164310 systemd[1]: session-3.scope: Deactivated successfully. Apr 21 02:46:18.165264 systemd-logind[1573]: Session 3 logged out. Waiting for processes to exit. Apr 21 02:46:18.166951 systemd[1]: Started sshd@3-10.0.0.33:22-10.0.0.1:57066.service - OpenSSH per-connection server daemon (10.0.0.1:57066). Apr 21 02:46:18.167908 systemd-logind[1573]: Removed session 3. Apr 21 02:46:18.220373 sshd[1737]: Accepted publickey for core from 10.0.0.1 port 57066 ssh2: RSA SHA256:9KUHGlOeyJC7ThH1mkUqpKUL96f7+Wj12eT7o4yHGwc Apr 21 02:46:18.221377 sshd-session[1737]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 02:46:18.226296 systemd-logind[1573]: New session 4 of user core. Apr 21 02:46:18.232337 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 21 02:46:18.245293 sshd[1740]: Connection closed by 10.0.0.1 port 57066 Apr 21 02:46:18.245569 sshd-session[1737]: pam_unix(sshd:session): session closed for user core Apr 21 02:46:18.260595 systemd[1]: sshd@3-10.0.0.33:22-10.0.0.1:57066.service: Deactivated successfully. 
Apr 21 02:46:18.261876 systemd[1]: session-4.scope: Deactivated successfully. Apr 21 02:46:18.262820 systemd-logind[1573]: Session 4 logged out. Waiting for processes to exit. Apr 21 02:46:18.264627 systemd[1]: Started sshd@4-10.0.0.33:22-10.0.0.1:57070.service - OpenSSH per-connection server daemon (10.0.0.1:57070). Apr 21 02:46:18.265941 systemd-logind[1573]: Removed session 4. Apr 21 02:46:18.318738 sshd[1746]: Accepted publickey for core from 10.0.0.1 port 57070 ssh2: RSA SHA256:9KUHGlOeyJC7ThH1mkUqpKUL96f7+Wj12eT7o4yHGwc Apr 21 02:46:18.319710 sshd-session[1746]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 02:46:18.324617 systemd-logind[1573]: New session 5 of user core. Apr 21 02:46:18.334259 systemd[1]: Started session-5.scope - Session 5 of User core. Apr 21 02:46:18.352598 sudo[1750]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 21 02:46:18.352823 sudo[1750]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 21 02:46:18.367190 sudo[1750]: pam_unix(sudo:session): session closed for user root Apr 21 02:46:18.368226 sshd[1749]: Connection closed by 10.0.0.1 port 57070 Apr 21 02:46:18.368876 sshd-session[1746]: pam_unix(sshd:session): session closed for user core Apr 21 02:46:18.386889 systemd[1]: sshd@4-10.0.0.33:22-10.0.0.1:57070.service: Deactivated successfully. Apr 21 02:46:18.388383 systemd[1]: session-5.scope: Deactivated successfully. Apr 21 02:46:18.389314 systemd-logind[1573]: Session 5 logged out. Waiting for processes to exit. Apr 21 02:46:18.391487 systemd[1]: Started sshd@5-10.0.0.33:22-10.0.0.1:57074.service - OpenSSH per-connection server daemon (10.0.0.1:57074). Apr 21 02:46:18.392688 systemd-logind[1573]: Removed session 5. 
Apr 21 02:46:18.448436 sshd[1756]: Accepted publickey for core from 10.0.0.1 port 57074 ssh2: RSA SHA256:9KUHGlOeyJC7ThH1mkUqpKUL96f7+Wj12eT7o4yHGwc Apr 21 02:46:18.449726 sshd-session[1756]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 02:46:18.454467 systemd-logind[1573]: New session 6 of user core. Apr 21 02:46:18.464250 systemd[1]: Started session-6.scope - Session 6 of User core. Apr 21 02:46:18.476427 sudo[1761]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 21 02:46:18.476651 sudo[1761]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 21 02:46:18.481448 sudo[1761]: pam_unix(sudo:session): session closed for user root Apr 21 02:46:18.486690 sudo[1760]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Apr 21 02:46:18.486898 sudo[1760]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 21 02:46:18.496725 systemd[1]: Starting audit-rules.service - Load Audit Rules... Apr 21 02:46:18.545743 augenrules[1783]: No rules Apr 21 02:46:18.546817 systemd[1]: audit-rules.service: Deactivated successfully. Apr 21 02:46:18.547151 systemd[1]: Finished audit-rules.service - Load Audit Rules. Apr 21 02:46:18.547944 sudo[1760]: pam_unix(sudo:session): session closed for user root Apr 21 02:46:18.549558 sshd[1759]: Connection closed by 10.0.0.1 port 57074 Apr 21 02:46:18.549714 sshd-session[1756]: pam_unix(sshd:session): session closed for user core Apr 21 02:46:18.556626 systemd[1]: sshd@5-10.0.0.33:22-10.0.0.1:57074.service: Deactivated successfully. Apr 21 02:46:18.557893 systemd[1]: session-6.scope: Deactivated successfully. Apr 21 02:46:18.558730 systemd-logind[1573]: Session 6 logged out. Waiting for processes to exit. 
Apr 21 02:46:18.560638 systemd[1]: Started sshd@6-10.0.0.33:22-10.0.0.1:57084.service - OpenSSH per-connection server daemon (10.0.0.1:57084). Apr 21 02:46:18.561517 systemd-logind[1573]: Removed session 6. Apr 21 02:46:18.614411 sshd[1792]: Accepted publickey for core from 10.0.0.1 port 57084 ssh2: RSA SHA256:9KUHGlOeyJC7ThH1mkUqpKUL96f7+Wj12eT7o4yHGwc Apr 21 02:46:18.615345 sshd-session[1792]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 02:46:18.619832 systemd-logind[1573]: New session 7 of user core. Apr 21 02:46:18.627271 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 21 02:46:18.638338 sudo[1796]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 21 02:46:18.638545 sudo[1796]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 21 02:46:18.931693 systemd[1]: Starting docker.service - Docker Application Container Engine... Apr 21 02:46:18.945321 (dockerd)[1816]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 21 02:46:19.175158 dockerd[1816]: time="2026-04-21T02:46:19.174920265Z" level=info msg="Starting up" Apr 21 02:46:19.176360 dockerd[1816]: time="2026-04-21T02:46:19.176187331Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Apr 21 02:46:19.192347 dockerd[1816]: time="2026-04-21T02:46:19.192058010Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Apr 21 02:46:19.371808 dockerd[1816]: time="2026-04-21T02:46:19.371668405Z" level=info msg="Loading containers: start." Apr 21 02:46:19.386143 kernel: Initializing XFRM netlink socket Apr 21 02:46:19.805211 systemd-networkd[1505]: docker0: Link UP Apr 21 02:46:19.811682 dockerd[1816]: time="2026-04-21T02:46:19.811582461Z" level=info msg="Loading containers: done." 
Apr 21 02:46:19.832262 dockerd[1816]: time="2026-04-21T02:46:19.832191276Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 21 02:46:19.832357 dockerd[1816]: time="2026-04-21T02:46:19.832280555Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Apr 21 02:46:19.832357 dockerd[1816]: time="2026-04-21T02:46:19.832332808Z" level=info msg="Initializing buildkit" Apr 21 02:46:19.872624 dockerd[1816]: time="2026-04-21T02:46:19.872530453Z" level=info msg="Completed buildkit initialization" Apr 21 02:46:19.886463 dockerd[1816]: time="2026-04-21T02:46:19.886306352Z" level=info msg="Daemon has completed initialization" Apr 21 02:46:19.886565 dockerd[1816]: time="2026-04-21T02:46:19.886506853Z" level=info msg="API listen on /run/docker.sock" Apr 21 02:46:19.886716 systemd[1]: Started docker.service - Docker Application Container Engine. Apr 21 02:46:20.335889 containerd[1598]: time="2026-04-21T02:46:20.335768172Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.7\"" Apr 21 02:46:20.820330 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3064465586.mount: Deactivated successfully. 
Apr 21 02:46:21.729714 containerd[1598]: time="2026-04-21T02:46:21.729608574Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 02:46:21.731796 containerd[1598]: time="2026-04-21T02:46:21.731647264Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.7: active requests=0, bytes read=27099952" Apr 21 02:46:21.734260 containerd[1598]: time="2026-04-21T02:46:21.733969259Z" level=info msg="ImageCreate event name:\"sha256:c15709457ff55a861a7259eb631c447f9bf906267615f9d8dcc820635a0bfb95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 02:46:21.739444 containerd[1598]: time="2026-04-21T02:46:21.739256155Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b96b8464d152a24c81d7f0435fd2198f8486970cd26a9e0e9c20826c73d1441c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 02:46:21.740836 containerd[1598]: time="2026-04-21T02:46:21.740741067Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.7\" with image id \"sha256:c15709457ff55a861a7259eb631c447f9bf906267615f9d8dcc820635a0bfb95\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b96b8464d152a24c81d7f0435fd2198f8486970cd26a9e0e9c20826c73d1441c\", size \"27097113\" in 1.404945802s" Apr 21 02:46:21.740836 containerd[1598]: time="2026-04-21T02:46:21.740802306Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.7\" returns image reference \"sha256:c15709457ff55a861a7259eb631c447f9bf906267615f9d8dcc820635a0bfb95\"" Apr 21 02:46:21.742090 containerd[1598]: time="2026-04-21T02:46:21.741888540Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.7\"" Apr 21 02:46:22.691354 containerd[1598]: time="2026-04-21T02:46:22.691281609Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.7\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 02:46:22.692280 containerd[1598]: time="2026-04-21T02:46:22.692185985Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.7: active requests=0, bytes read=21252670" Apr 21 02:46:22.693193 containerd[1598]: time="2026-04-21T02:46:22.692971633Z" level=info msg="ImageCreate event name:\"sha256:23986a24c803336f2a2dfbcaaf0547ee8bcf6638f23bec8967e210909d00a97a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 02:46:22.695429 containerd[1598]: time="2026-04-21T02:46:22.695323838Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7d759bdc4fef10a3fc1ad60ce9439d58e1a4df7ebb22751f7cc0201ce55f280b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 02:46:22.696388 containerd[1598]: time="2026-04-21T02:46:22.696339049Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.7\" with image id \"sha256:23986a24c803336f2a2dfbcaaf0547ee8bcf6638f23bec8967e210909d00a97a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7d759bdc4fef10a3fc1ad60ce9439d58e1a4df7ebb22751f7cc0201ce55f280b\", size \"22819085\" in 954.107687ms" Apr 21 02:46:22.696435 containerd[1598]: time="2026-04-21T02:46:22.696394217Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.7\" returns image reference \"sha256:23986a24c803336f2a2dfbcaaf0547ee8bcf6638f23bec8967e210909d00a97a\"" Apr 21 02:46:22.697397 containerd[1598]: time="2026-04-21T02:46:22.697275083Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.7\"" Apr 21 02:46:23.437326 containerd[1598]: time="2026-04-21T02:46:23.437224425Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 02:46:23.437819 containerd[1598]: time="2026-04-21T02:46:23.437750194Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.7: active requests=0, bytes read=15810823" Apr 21 02:46:23.439615 containerd[1598]: time="2026-04-21T02:46:23.439494978Z" level=info msg="ImageCreate event name:\"sha256:568f1856b0e1c464b0b50ab2879ebd535623c1a620b1d2530ba5dd594237dc82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 02:46:23.443228 containerd[1598]: time="2026-04-21T02:46:23.443157298Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:4ab32f707ff84beaac431797999707757b885196b0b9a52d29cb67f95efce7c1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 02:46:23.444593 containerd[1598]: time="2026-04-21T02:46:23.444527588Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.7\" with image id \"sha256:568f1856b0e1c464b0b50ab2879ebd535623c1a620b1d2530ba5dd594237dc82\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:4ab32f707ff84beaac431797999707757b885196b0b9a52d29cb67f95efce7c1\", size \"17377256\" in 747.188624ms" Apr 21 02:46:23.444593 containerd[1598]: time="2026-04-21T02:46:23.444588582Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.7\" returns image reference \"sha256:568f1856b0e1c464b0b50ab2879ebd535623c1a620b1d2530ba5dd594237dc82\"" Apr 21 02:46:23.445736 containerd[1598]: time="2026-04-21T02:46:23.445645589Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.7\"" Apr 21 02:46:24.691860 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3453477069.mount: Deactivated successfully. 
Apr 21 02:46:25.180791 containerd[1598]: time="2026-04-21T02:46:25.180634270Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 02:46:25.182277 containerd[1598]: time="2026-04-21T02:46:25.182064494Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.7: active requests=0, bytes read=25972848" Apr 21 02:46:25.184626 containerd[1598]: time="2026-04-21T02:46:25.184370939Z" level=info msg="ImageCreate event name:\"sha256:345c2b8919907fbb425a843da24d86a16708ee53a49ad3fa2e6dc229c7b34643\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 02:46:25.188452 containerd[1598]: time="2026-04-21T02:46:25.188343478Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:062519bc0a14769e2f98c6bdff7816a17e6252de3f3c9cb102e6be33fe38d9e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 02:46:25.189359 containerd[1598]: time="2026-04-21T02:46:25.189284298Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.7\" with image id \"sha256:345c2b8919907fbb425a843da24d86a16708ee53a49ad3fa2e6dc229c7b34643\", repo tag \"registry.k8s.io/kube-proxy:v1.34.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:062519bc0a14769e2f98c6bdff7816a17e6252de3f3c9cb102e6be33fe38d9e2\", size \"25971973\" in 1.743558508s" Apr 21 02:46:25.189359 containerd[1598]: time="2026-04-21T02:46:25.189319478Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.7\" returns image reference \"sha256:345c2b8919907fbb425a843da24d86a16708ee53a49ad3fa2e6dc229c7b34643\"" Apr 21 02:46:25.190346 containerd[1598]: time="2026-04-21T02:46:25.190267487Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Apr 21 02:46:25.619488 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1517782483.mount: Deactivated successfully. 
Apr 21 02:46:26.718270 containerd[1598]: time="2026-04-21T02:46:26.717932218Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 02:46:26.719333 containerd[1598]: time="2026-04-21T02:46:26.719262922Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22387483" Apr 21 02:46:26.721963 containerd[1598]: time="2026-04-21T02:46:26.721802378Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 02:46:26.727118 containerd[1598]: time="2026-04-21T02:46:26.726559425Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 02:46:26.732534 containerd[1598]: time="2026-04-21T02:46:26.732447382Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 1.542082807s" Apr 21 02:46:26.732656 containerd[1598]: time="2026-04-21T02:46:26.732542329Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Apr 21 02:46:26.734435 containerd[1598]: time="2026-04-21T02:46:26.734117930Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Apr 21 02:46:27.196733 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount21491881.mount: Deactivated successfully. 
Apr 21 02:46:27.206353 containerd[1598]: time="2026-04-21T02:46:27.206234944Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 02:46:27.207373 containerd[1598]: time="2026-04-21T02:46:27.207293858Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321150" Apr 21 02:46:27.209300 containerd[1598]: time="2026-04-21T02:46:27.209194676Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 02:46:27.212944 containerd[1598]: time="2026-04-21T02:46:27.212835917Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 02:46:27.213562 containerd[1598]: time="2026-04-21T02:46:27.213486413Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 479.247202ms" Apr 21 02:46:27.213602 containerd[1598]: time="2026-04-21T02:46:27.213574058Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Apr 21 02:46:27.214550 containerd[1598]: time="2026-04-21T02:46:27.214474855Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\"" Apr 21 02:46:27.610782 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 21 02:46:27.613447 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Apr 21 02:46:27.815324 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3233480485.mount: Deactivated successfully. Apr 21 02:46:27.829757 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 21 02:46:27.839422 (kubelet)[2179]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 21 02:46:27.939951 kubelet[2179]: E0421 02:46:27.939714 2179 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 21 02:46:27.944318 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 21 02:46:27.944513 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 21 02:46:27.945271 systemd[1]: kubelet.service: Consumed 276ms CPU time, 110.9M memory peak. 
Apr 21 02:46:28.975103 containerd[1598]: time="2026-04-21T02:46:28.974826510Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 02:46:28.976199 containerd[1598]: time="2026-04-21T02:46:28.976036406Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.5-0: active requests=0, bytes read=22874255" Apr 21 02:46:28.977780 containerd[1598]: time="2026-04-21T02:46:28.977656833Z" level=info msg="ImageCreate event name:\"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 02:46:28.982471 containerd[1598]: time="2026-04-21T02:46:28.982318168Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 02:46:28.983834 containerd[1598]: time="2026-04-21T02:46:28.983747838Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.5-0\" with image id \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\", repo tag \"registry.k8s.io/etcd:3.6.5-0\", repo digest \"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\", size \"22871747\" in 1.769193802s" Apr 21 02:46:28.983834 containerd[1598]: time="2026-04-21T02:46:28.983831084Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\" returns image reference \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\"" Apr 21 02:46:32.442233 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 21 02:46:32.442726 systemd[1]: kubelet.service: Consumed 276ms CPU time, 110.9M memory peak. Apr 21 02:46:32.444727 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 21 02:46:32.470226 systemd[1]: Reload requested from client PID 2276 ('systemctl') (unit session-7.scope)... 
Apr 21 02:46:32.470272 systemd[1]: Reloading... Apr 21 02:46:32.552258 zram_generator::config[2319]: No configuration found. Apr 21 02:46:32.722901 systemd[1]: Reloading finished in 252 ms. Apr 21 02:46:32.776478 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Apr 21 02:46:32.776565 systemd[1]: kubelet.service: Failed with result 'signal'. Apr 21 02:46:32.776817 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 21 02:46:32.776850 systemd[1]: kubelet.service: Consumed 102ms CPU time, 98.3M memory peak. Apr 21 02:46:32.778499 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 21 02:46:32.946207 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 21 02:46:32.959444 (kubelet)[2366]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 21 02:46:33.030135 kubelet[2366]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 21 02:46:33.030135 kubelet[2366]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Apr 21 02:46:33.030135 kubelet[2366]: I0421 02:46:33.030101 2366 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 21 02:46:33.379714 kubelet[2366]: I0421 02:46:33.379661 2366 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Apr 21 02:46:33.379714 kubelet[2366]: I0421 02:46:33.379712 2366 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 21 02:46:33.379826 kubelet[2366]: I0421 02:46:33.379729 2366 watchdog_linux.go:95] "Systemd watchdog is not enabled" Apr 21 02:46:33.379826 kubelet[2366]: I0421 02:46:33.379737 2366 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 21 02:46:33.379942 kubelet[2366]: I0421 02:46:33.379899 2366 server.go:956] "Client rotation is on, will bootstrap in background" Apr 21 02:46:33.449768 kubelet[2366]: E0421 02:46:33.449694 2366 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.33:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.33:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 21 02:46:33.450227 kubelet[2366]: I0421 02:46:33.450100 2366 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 21 02:46:33.456580 kubelet[2366]: I0421 02:46:33.456498 2366 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Apr 21 02:46:33.464533 kubelet[2366]: I0421 02:46:33.464487 2366 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Apr 21 02:46:33.466694 kubelet[2366]: I0421 02:46:33.466546 2366 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 21 02:46:33.466803 kubelet[2366]: I0421 02:46:33.466621 2366 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 21 02:46:33.466803 kubelet[2366]: I0421 02:46:33.466734 2366 topology_manager.go:138] "Creating topology manager with none policy" Apr 21 02:46:33.466803 
kubelet[2366]: I0421 02:46:33.466741 2366 container_manager_linux.go:306] "Creating device plugin manager" Apr 21 02:46:33.467095 kubelet[2366]: I0421 02:46:33.466817 2366 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Apr 21 02:46:33.470309 kubelet[2366]: I0421 02:46:33.470254 2366 state_mem.go:36] "Initialized new in-memory state store" Apr 21 02:46:33.470636 kubelet[2366]: I0421 02:46:33.470578 2366 kubelet.go:475] "Attempting to sync node with API server" Apr 21 02:46:33.470704 kubelet[2366]: I0421 02:46:33.470664 2366 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 21 02:46:33.470724 kubelet[2366]: I0421 02:46:33.470715 2366 kubelet.go:387] "Adding apiserver pod source" Apr 21 02:46:33.470739 kubelet[2366]: I0421 02:46:33.470725 2366 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 21 02:46:33.471611 kubelet[2366]: E0421 02:46:33.471539 2366 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.33:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.33:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 21 02:46:33.471707 kubelet[2366]: E0421 02:46:33.471661 2366 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.33:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.33:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 21 02:46:33.475124 kubelet[2366]: I0421 02:46:33.474257 2366 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Apr 21 02:46:33.475124 kubelet[2366]: I0421 02:46:33.474706 2366 kubelet.go:940] "Not starting ClusterTrustBundle informer because 
we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 21 02:46:33.475124 kubelet[2366]: I0421 02:46:33.474727 2366 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Apr 21 02:46:33.475124 kubelet[2366]: W0421 02:46:33.474768 2366 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Apr 21 02:46:33.481411 kubelet[2366]: I0421 02:46:33.481349 2366 server.go:1262] "Started kubelet" Apr 21 02:46:33.481846 kubelet[2366]: I0421 02:46:33.481531 2366 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 21 02:46:33.482817 kubelet[2366]: I0421 02:46:33.482738 2366 server.go:310] "Adding debug handlers to kubelet server" Apr 21 02:46:33.484725 kubelet[2366]: I0421 02:46:33.484681 2366 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 21 02:46:33.485293 kubelet[2366]: I0421 02:46:33.485219 2366 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 21 02:46:33.485293 kubelet[2366]: I0421 02:46:33.485277 2366 server_v1.go:49] "podresources" method="list" useActivePods=true Apr 21 02:46:33.485593 kubelet[2366]: I0421 02:46:33.485429 2366 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 21 02:46:33.485867 kubelet[2366]: I0421 02:46:33.485758 2366 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 21 02:46:33.486451 kubelet[2366]: I0421 02:46:33.486439 2366 volume_manager.go:313] "Starting Kubelet Volume Manager" Apr 21 02:46:33.487935 kubelet[2366]: I0421 02:46:33.487917 2366 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 21 02:46:33.488658 kubelet[2366]: E0421 
02:46:33.488465 2366 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.33:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.33:6443: connect: connection refused" interval="200ms" Apr 21 02:46:33.488658 kubelet[2366]: I0421 02:46:33.488654 2366 factory.go:223] Registration of the systemd container factory successfully Apr 21 02:46:33.488794 kubelet[2366]: I0421 02:46:33.488726 2366 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 21 02:46:33.489033 kubelet[2366]: E0421 02:46:33.486646 2366 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 21 02:46:33.489033 kubelet[2366]: I0421 02:46:33.488963 2366 reconciler.go:29] "Reconciler: start to sync state" Apr 21 02:46:33.489452 kubelet[2366]: E0421 02:46:33.489321 2366 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.33:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.33:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 21 02:46:33.489452 kubelet[2366]: E0421 02:46:33.487926 2366 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.33:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.33:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a83f3f27d30010 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-21 02:46:33.481289744 +0000 UTC 
m=+0.516543813,LastTimestamp:2026-04-21 02:46:33.481289744 +0000 UTC m=+0.516543813,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 21 02:46:33.490647 kubelet[2366]: E0421 02:46:33.490622 2366 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 21 02:46:33.491433 kubelet[2366]: I0421 02:46:33.490974 2366 factory.go:223] Registration of the containerd container factory successfully Apr 21 02:46:33.504623 kubelet[2366]: I0421 02:46:33.504587 2366 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 21 02:46:33.504623 kubelet[2366]: I0421 02:46:33.504597 2366 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 21 02:46:33.504623 kubelet[2366]: I0421 02:46:33.504607 2366 state_mem.go:36] "Initialized new in-memory state store" Apr 21 02:46:33.507592 kubelet[2366]: I0421 02:46:33.507546 2366 policy_none.go:49] "None policy: Start" Apr 21 02:46:33.507636 kubelet[2366]: I0421 02:46:33.507597 2366 memory_manager.go:187] "Starting memorymanager" policy="None" Apr 21 02:46:33.507636 kubelet[2366]: I0421 02:46:33.507606 2366 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Apr 21 02:46:33.508966 kubelet[2366]: I0421 02:46:33.508887 2366 policy_none.go:47] "Start" Apr 21 02:46:33.515377 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Apr 21 02:46:33.526494 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Apr 21 02:46:33.528645 kubelet[2366]: I0421 02:46:33.528516 2366 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Apr 21 02:46:33.531102 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Apr 21 02:46:33.532320 kubelet[2366]: I0421 02:46:33.532267 2366 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Apr 21 02:46:33.532372 kubelet[2366]: I0421 02:46:33.532367 2366 status_manager.go:244] "Starting to sync pod status with apiserver" Apr 21 02:46:33.532416 kubelet[2366]: I0421 02:46:33.532412 2366 kubelet.go:2428] "Starting kubelet main sync loop" Apr 21 02:46:33.532484 kubelet[2366]: E0421 02:46:33.532473 2366 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 21 02:46:33.533564 kubelet[2366]: E0421 02:46:33.533548 2366 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.33:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.33:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 21 02:46:33.544807 kubelet[2366]: E0421 02:46:33.544754 2366 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 21 02:46:33.544953 kubelet[2366]: I0421 02:46:33.544912 2366 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 21 02:46:33.545044 kubelet[2366]: I0421 02:46:33.544957 2366 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 21 02:46:33.547130 kubelet[2366]: I0421 02:46:33.546765 2366 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 21 02:46:33.550844 kubelet[2366]: E0421 02:46:33.550659 2366 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Apr 21 02:46:33.550965 kubelet[2366]: E0421 02:46:33.550922 2366 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 21 02:46:33.649108 kubelet[2366]: I0421 02:46:33.648387 2366 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 21 02:46:33.649108 kubelet[2366]: E0421 02:46:33.649077 2366 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.33:6443/api/v1/nodes\": dial tcp 10.0.0.33:6443: connect: connection refused" node="localhost" Apr 21 02:46:33.654241 systemd[1]: Created slice kubepods-burstable-pod87a74ef0489b4b574a728fa47c7d2a61.slice - libcontainer container kubepods-burstable-pod87a74ef0489b4b574a728fa47c7d2a61.slice. Apr 21 02:46:33.664829 kubelet[2366]: E0421 02:46:33.664765 2366 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 21 02:46:33.668551 systemd[1]: Created slice kubepods-burstable-podc6bb8708a026256e82ca4c5631a78b5a.slice - libcontainer container kubepods-burstable-podc6bb8708a026256e82ca4c5631a78b5a.slice. Apr 21 02:46:33.670563 kubelet[2366]: E0421 02:46:33.670509 2366 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 21 02:46:33.672686 systemd[1]: Created slice kubepods-burstable-pod824fd89300514e351ed3b68d82c665c6.slice - libcontainer container kubepods-burstable-pod824fd89300514e351ed3b68d82c665c6.slice. 
Apr 21 02:46:33.674379 kubelet[2366]: E0421 02:46:33.674320 2366 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 21 02:46:33.690095 kubelet[2366]: I0421 02:46:33.689913 2366 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 21 02:46:33.690095 kubelet[2366]: I0421 02:46:33.690086 2366 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/87a74ef0489b4b574a728fa47c7d2a61-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"87a74ef0489b4b574a728fa47c7d2a61\") " pod="kube-system/kube-apiserver-localhost" Apr 21 02:46:33.690265 kubelet[2366]: I0421 02:46:33.690243 2366 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 21 02:46:33.690265 kubelet[2366]: E0421 02:46:33.690255 2366 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.33:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.33:6443: connect: connection refused" interval="400ms" Apr 21 02:46:33.690371 kubelet[2366]: I0421 02:46:33.690264 2366 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 21 02:46:33.690782 kubelet[2366]: I0421 02:46:33.690722 2366 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/824fd89300514e351ed3b68d82c665c6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"824fd89300514e351ed3b68d82c665c6\") " pod="kube-system/kube-scheduler-localhost" Apr 21 02:46:33.690782 kubelet[2366]: I0421 02:46:33.690777 2366 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/87a74ef0489b4b574a728fa47c7d2a61-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"87a74ef0489b4b574a728fa47c7d2a61\") " pod="kube-system/kube-apiserver-localhost" Apr 21 02:46:33.690824 kubelet[2366]: I0421 02:46:33.690794 2366 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/87a74ef0489b4b574a728fa47c7d2a61-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"87a74ef0489b4b574a728fa47c7d2a61\") " pod="kube-system/kube-apiserver-localhost" Apr 21 02:46:33.690824 kubelet[2366]: I0421 02:46:33.690811 2366 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 21 02:46:33.690854 kubelet[2366]: I0421 02:46:33.690823 2366 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 21 02:46:33.852733 kubelet[2366]: I0421 02:46:33.852343 2366 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 21 02:46:33.853231 kubelet[2366]: E0421 02:46:33.852963 2366 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.33:6443/api/v1/nodes\": dial tcp 10.0.0.33:6443: connect: connection refused" node="localhost" Apr 21 02:46:33.969295 kubelet[2366]: E0421 02:46:33.968909 2366 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 02:46:33.970205 containerd[1598]: time="2026-04-21T02:46:33.970114207Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:87a74ef0489b4b574a728fa47c7d2a61,Namespace:kube-system,Attempt:0,}" Apr 21 02:46:33.973474 kubelet[2366]: E0421 02:46:33.973281 2366 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 02:46:33.974111 containerd[1598]: time="2026-04-21T02:46:33.973940944Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:c6bb8708a026256e82ca4c5631a78b5a,Namespace:kube-system,Attempt:0,}" Apr 21 02:46:33.977285 kubelet[2366]: E0421 02:46:33.977210 2366 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 02:46:33.977745 containerd[1598]: time="2026-04-21T02:46:33.977691089Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:824fd89300514e351ed3b68d82c665c6,Namespace:kube-system,Attempt:0,}" Apr 21 02:46:34.091965 kubelet[2366]: E0421 02:46:34.091884 2366 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.33:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.33:6443: connect: connection refused" interval="800ms" Apr 21 02:46:34.255540 kubelet[2366]: I0421 02:46:34.255366 2366 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 21 02:46:34.255766 kubelet[2366]: E0421 02:46:34.255698 2366 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.33:6443/api/v1/nodes\": dial tcp 10.0.0.33:6443: connect: connection refused" node="localhost" Apr 21 02:46:34.373501 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3759550697.mount: Deactivated successfully. Apr 21 02:46:34.380258 containerd[1598]: time="2026-04-21T02:46:34.380145528Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 21 02:46:34.383413 containerd[1598]: time="2026-04-21T02:46:34.383349377Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 21 02:46:34.384109 containerd[1598]: time="2026-04-21T02:46:34.383904394Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321070" Apr 21 02:46:34.385602 containerd[1598]: time="2026-04-21T02:46:34.385538000Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Apr 21 02:46:34.386618 containerd[1598]: time="2026-04-21T02:46:34.386565221Z" level=info msg="ImageCreate event 
name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 21 02:46:34.389372 containerd[1598]: time="2026-04-21T02:46:34.389252509Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 21 02:46:34.390359 containerd[1598]: time="2026-04-21T02:46:34.390332738Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Apr 21 02:46:34.391347 containerd[1598]: time="2026-04-21T02:46:34.391328186Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 21 02:46:34.391498 kubelet[2366]: E0421 02:46:34.391434 2366 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.33:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.33:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 21 02:46:34.392510 containerd[1598]: time="2026-04-21T02:46:34.392484554Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 415.770697ms" Apr 21 02:46:34.393115 containerd[1598]: time="2026-04-21T02:46:34.392974868Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id 
\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 413.281315ms" Apr 21 02:46:34.397107 containerd[1598]: time="2026-04-21T02:46:34.396898087Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 424.267743ms" Apr 21 02:46:34.416117 containerd[1598]: time="2026-04-21T02:46:34.415295230Z" level=info msg="connecting to shim 2e67cbb97f049004e3b348bb29787d0f8583773e179f2b022926ae617d66ddb7" address="unix:///run/containerd/s/7efb00ac8f1edbcda86f0b34176e011bddc09cddcf3f3d41cbfa7e44c0372ac3" namespace=k8s.io protocol=ttrpc version=3 Apr 21 02:46:34.429127 containerd[1598]: time="2026-04-21T02:46:34.428935766Z" level=info msg="connecting to shim 21f62a319a4173f27803b690556e7a73e79f1e935edf3ccd8b7478736ce4097e" address="unix:///run/containerd/s/11ff4baf5361fbb86d8b8a4fec965369eb92f993c590956d8b16042e52317729" namespace=k8s.io protocol=ttrpc version=3 Apr 21 02:46:34.431729 containerd[1598]: time="2026-04-21T02:46:34.431688849Z" level=info msg="connecting to shim 0590e7278758cbefa4664329d228dfadb5ea42e085ba3003091199979153c81c" address="unix:///run/containerd/s/6f04b1199c61ba7b406845ddfd76b50a4f97611b47ba80e091c7b1c092951424" namespace=k8s.io protocol=ttrpc version=3 Apr 21 02:46:34.448566 systemd[1]: Started cri-containerd-2e67cbb97f049004e3b348bb29787d0f8583773e179f2b022926ae617d66ddb7.scope - libcontainer container 2e67cbb97f049004e3b348bb29787d0f8583773e179f2b022926ae617d66ddb7. 
Apr 21 02:46:34.476289 systemd[1]: Started cri-containerd-0590e7278758cbefa4664329d228dfadb5ea42e085ba3003091199979153c81c.scope - libcontainer container 0590e7278758cbefa4664329d228dfadb5ea42e085ba3003091199979153c81c. Apr 21 02:46:34.477498 systemd[1]: Started cri-containerd-21f62a319a4173f27803b690556e7a73e79f1e935edf3ccd8b7478736ce4097e.scope - libcontainer container 21f62a319a4173f27803b690556e7a73e79f1e935edf3ccd8b7478736ce4097e. Apr 21 02:46:34.534879 containerd[1598]: time="2026-04-21T02:46:34.534498675Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:c6bb8708a026256e82ca4c5631a78b5a,Namespace:kube-system,Attempt:0,} returns sandbox id \"2e67cbb97f049004e3b348bb29787d0f8583773e179f2b022926ae617d66ddb7\"" Apr 21 02:46:34.537802 containerd[1598]: time="2026-04-21T02:46:34.536813159Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:87a74ef0489b4b574a728fa47c7d2a61,Namespace:kube-system,Attempt:0,} returns sandbox id \"21f62a319a4173f27803b690556e7a73e79f1e935edf3ccd8b7478736ce4097e\"" Apr 21 02:46:34.541505 kubelet[2366]: E0421 02:46:34.541464 2366 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 02:46:34.541789 kubelet[2366]: E0421 02:46:34.541777 2366 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 02:46:34.550840 containerd[1598]: time="2026-04-21T02:46:34.550821887Z" level=info msg="CreateContainer within sandbox \"2e67cbb97f049004e3b348bb29787d0f8583773e179f2b022926ae617d66ddb7\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 21 02:46:34.551784 containerd[1598]: time="2026-04-21T02:46:34.551721188Z" level=info msg="CreateContainer within sandbox 
\"21f62a319a4173f27803b690556e7a73e79f1e935edf3ccd8b7478736ce4097e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 21 02:46:34.562919 containerd[1598]: time="2026-04-21T02:46:34.562658110Z" level=info msg="Container 174a2fcf9d2bca9d3a687c53eeff93796acd2d578dfb77eb1b4dc6fffa49c93d: CDI devices from CRI Config.CDIDevices: []" Apr 21 02:46:34.565650 containerd[1598]: time="2026-04-21T02:46:34.565561063Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:824fd89300514e351ed3b68d82c665c6,Namespace:kube-system,Attempt:0,} returns sandbox id \"0590e7278758cbefa4664329d228dfadb5ea42e085ba3003091199979153c81c\"" Apr 21 02:46:34.566672 kubelet[2366]: E0421 02:46:34.566558 2366 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 02:46:34.571099 containerd[1598]: time="2026-04-21T02:46:34.571039648Z" level=info msg="Container a24e63fefe64b254189d4448b1574ea150ab0ff68452ad978da6d01188d35d2f: CDI devices from CRI Config.CDIDevices: []" Apr 21 02:46:34.573353 containerd[1598]: time="2026-04-21T02:46:34.573137180Z" level=info msg="CreateContainer within sandbox \"0590e7278758cbefa4664329d228dfadb5ea42e085ba3003091199979153c81c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 21 02:46:34.575622 containerd[1598]: time="2026-04-21T02:46:34.575493697Z" level=info msg="CreateContainer within sandbox \"2e67cbb97f049004e3b348bb29787d0f8583773e179f2b022926ae617d66ddb7\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"174a2fcf9d2bca9d3a687c53eeff93796acd2d578dfb77eb1b4dc6fffa49c93d\"" Apr 21 02:46:34.576632 containerd[1598]: time="2026-04-21T02:46:34.576463218Z" level=info msg="StartContainer for \"174a2fcf9d2bca9d3a687c53eeff93796acd2d578dfb77eb1b4dc6fffa49c93d\"" Apr 21 02:46:34.578590 containerd[1598]: time="2026-04-21T02:46:34.578433206Z" 
level=info msg="connecting to shim 174a2fcf9d2bca9d3a687c53eeff93796acd2d578dfb77eb1b4dc6fffa49c93d" address="unix:///run/containerd/s/7efb00ac8f1edbcda86f0b34176e011bddc09cddcf3f3d41cbfa7e44c0372ac3" protocol=ttrpc version=3 Apr 21 02:46:34.584420 containerd[1598]: time="2026-04-21T02:46:34.584346282Z" level=info msg="CreateContainer within sandbox \"21f62a319a4173f27803b690556e7a73e79f1e935edf3ccd8b7478736ce4097e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a24e63fefe64b254189d4448b1574ea150ab0ff68452ad978da6d01188d35d2f\"" Apr 21 02:46:34.587060 containerd[1598]: time="2026-04-21T02:46:34.586569828Z" level=info msg="StartContainer for \"a24e63fefe64b254189d4448b1574ea150ab0ff68452ad978da6d01188d35d2f\"" Apr 21 02:46:34.587892 containerd[1598]: time="2026-04-21T02:46:34.587504005Z" level=info msg="Container 01f5df4f39124cb67233a427b1e2f79e9b76c3573961bd9737bb0db18e95ad5a: CDI devices from CRI Config.CDIDevices: []" Apr 21 02:46:34.588813 containerd[1598]: time="2026-04-21T02:46:34.588795701Z" level=info msg="connecting to shim a24e63fefe64b254189d4448b1574ea150ab0ff68452ad978da6d01188d35d2f" address="unix:///run/containerd/s/11ff4baf5361fbb86d8b8a4fec965369eb92f993c590956d8b16042e52317729" protocol=ttrpc version=3 Apr 21 02:46:34.597730 kubelet[2366]: E0421 02:46:34.597525 2366 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.33:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.33:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 21 02:46:34.598778 containerd[1598]: time="2026-04-21T02:46:34.598704957Z" level=info msg="CreateContainer within sandbox \"0590e7278758cbefa4664329d228dfadb5ea42e085ba3003091199979153c81c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id 
\"01f5df4f39124cb67233a427b1e2f79e9b76c3573961bd9737bb0db18e95ad5a\"" Apr 21 02:46:34.599239 containerd[1598]: time="2026-04-21T02:46:34.599093131Z" level=info msg="StartContainer for \"01f5df4f39124cb67233a427b1e2f79e9b76c3573961bd9737bb0db18e95ad5a\"" Apr 21 02:46:34.599936 containerd[1598]: time="2026-04-21T02:46:34.599920106Z" level=info msg="connecting to shim 01f5df4f39124cb67233a427b1e2f79e9b76c3573961bd9737bb0db18e95ad5a" address="unix:///run/containerd/s/6f04b1199c61ba7b406845ddfd76b50a4f97611b47ba80e091c7b1c092951424" protocol=ttrpc version=3 Apr 21 02:46:34.600382 systemd[1]: Started cri-containerd-174a2fcf9d2bca9d3a687c53eeff93796acd2d578dfb77eb1b4dc6fffa49c93d.scope - libcontainer container 174a2fcf9d2bca9d3a687c53eeff93796acd2d578dfb77eb1b4dc6fffa49c93d. Apr 21 02:46:34.620388 systemd[1]: Started cri-containerd-a24e63fefe64b254189d4448b1574ea150ab0ff68452ad978da6d01188d35d2f.scope - libcontainer container a24e63fefe64b254189d4448b1574ea150ab0ff68452ad978da6d01188d35d2f. Apr 21 02:46:34.628273 systemd[1]: Started cri-containerd-01f5df4f39124cb67233a427b1e2f79e9b76c3573961bd9737bb0db18e95ad5a.scope - libcontainer container 01f5df4f39124cb67233a427b1e2f79e9b76c3573961bd9737bb0db18e95ad5a. 
Apr 21 02:46:34.697681 containerd[1598]: time="2026-04-21T02:46:34.697524191Z" level=info msg="StartContainer for \"174a2fcf9d2bca9d3a687c53eeff93796acd2d578dfb77eb1b4dc6fffa49c93d\" returns successfully" Apr 21 02:46:34.701770 containerd[1598]: time="2026-04-21T02:46:34.701748882Z" level=info msg="StartContainer for \"a24e63fefe64b254189d4448b1574ea150ab0ff68452ad978da6d01188d35d2f\" returns successfully" Apr 21 02:46:34.706875 kubelet[2366]: E0421 02:46:34.706856 2366 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.33:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.33:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 21 02:46:34.724799 containerd[1598]: time="2026-04-21T02:46:34.724775802Z" level=info msg="StartContainer for \"01f5df4f39124cb67233a427b1e2f79e9b76c3573961bd9737bb0db18e95ad5a\" returns successfully" Apr 21 02:46:34.750086 kubelet[2366]: E0421 02:46:34.749862 2366 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.33:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.33:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 21 02:46:35.059540 kubelet[2366]: I0421 02:46:35.059459 2366 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 21 02:46:35.557254 kubelet[2366]: E0421 02:46:35.556333 2366 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 21 02:46:35.557254 kubelet[2366]: E0421 02:46:35.556432 2366 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 
02:46:35.564560 kubelet[2366]: E0421 02:46:35.564360 2366 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 21 02:46:35.565374 kubelet[2366]: E0421 02:46:35.565129 2366 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 02:46:35.569850 kubelet[2366]: E0421 02:46:35.569658 2366 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 21 02:46:35.570304 kubelet[2366]: E0421 02:46:35.569811 2366 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 02:46:35.778739 kubelet[2366]: E0421 02:46:35.778530 2366 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Apr 21 02:46:35.975350 kubelet[2366]: I0421 02:46:35.975219 2366 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Apr 21 02:46:35.975350 kubelet[2366]: E0421 02:46:35.975286 2366 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Apr 21 02:46:35.987724 kubelet[2366]: E0421 02:46:35.987694 2366 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 21 02:46:36.087907 kubelet[2366]: E0421 02:46:36.087820 2366 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 21 02:46:36.188750 kubelet[2366]: E0421 02:46:36.188679 2366 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 21 02:46:36.289666 kubelet[2366]: 
E0421 02:46:36.289499 2366 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 21 02:46:36.390305 kubelet[2366]: E0421 02:46:36.390216 2366 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 21 02:46:36.491310 kubelet[2366]: E0421 02:46:36.491126 2366 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 21 02:46:36.573913 kubelet[2366]: E0421 02:46:36.573636 2366 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 21 02:46:36.574381 kubelet[2366]: E0421 02:46:36.574246 2366 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 21 02:46:36.574728 kubelet[2366]: E0421 02:46:36.574476 2366 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 02:46:36.574728 kubelet[2366]: E0421 02:46:36.574633 2366 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 02:46:36.575550 kubelet[2366]: E0421 02:46:36.575494 2366 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 21 02:46:36.575770 kubelet[2366]: E0421 02:46:36.575717 2366 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 02:46:36.592084 kubelet[2366]: E0421 02:46:36.591916 2366 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" 
not found" Apr 21 02:46:36.693149 kubelet[2366]: E0421 02:46:36.692957 2366 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 21 02:46:36.794106 kubelet[2366]: E0421 02:46:36.793975 2366 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 21 02:46:36.894761 kubelet[2366]: E0421 02:46:36.894683 2366 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 21 02:46:36.995550 kubelet[2366]: E0421 02:46:36.995516 2366 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 21 02:46:37.096389 kubelet[2366]: E0421 02:46:37.096330 2366 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 21 02:46:37.197387 kubelet[2366]: E0421 02:46:37.196911 2366 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 21 02:46:37.297418 kubelet[2366]: E0421 02:46:37.297238 2366 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 21 02:46:37.397924 kubelet[2366]: E0421 02:46:37.397767 2366 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 21 02:46:37.498648 kubelet[2366]: E0421 02:46:37.498362 2366 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 21 02:46:37.574582 kubelet[2366]: E0421 02:46:37.574506 2366 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 21 02:46:37.574926 kubelet[2366]: E0421 02:46:37.574687 2366 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 02:46:37.574926 kubelet[2366]: E0421 02:46:37.574864 2366 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 21 02:46:37.575163 kubelet[2366]: E0421 02:46:37.575094 2366 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 02:46:37.599359 kubelet[2366]: E0421 02:46:37.599284 2366 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 21 02:46:37.700528 kubelet[2366]: E0421 02:46:37.700368 2366 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 21 02:46:37.788553 kubelet[2366]: I0421 02:46:37.788289 2366 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 21 02:46:37.803771 kubelet[2366]: I0421 02:46:37.803700 2366 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 21 02:46:37.809351 kubelet[2366]: I0421 02:46:37.809297 2366 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 21 02:46:37.903484 systemd[1]: Reload requested from client PID 2656 ('systemctl') (unit session-7.scope)... Apr 21 02:46:37.903534 systemd[1]: Reloading... Apr 21 02:46:37.987148 zram_generator::config[2698]: No configuration found. 
Apr 21 02:46:38.145671 kubelet[2366]: I0421 02:46:38.145548 2366 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 21 02:46:38.153085 kubelet[2366]: E0421 02:46:38.152758 2366 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Apr 21 02:46:38.153249 kubelet[2366]: E0421 02:46:38.153098 2366 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 02:46:38.189381 systemd[1]: Reloading finished in 285 ms. Apr 21 02:46:38.231488 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 21 02:46:38.250394 systemd[1]: kubelet.service: Deactivated successfully. Apr 21 02:46:38.250677 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 21 02:46:38.250754 systemd[1]: kubelet.service: Consumed 1.021s CPU time, 126.4M memory peak. Apr 21 02:46:38.252603 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 21 02:46:38.408926 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 21 02:46:38.422477 (kubelet)[2744]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 21 02:46:38.497652 kubelet[2744]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 21 02:46:38.497652 kubelet[2744]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Apr 21 02:46:38.498243 kubelet[2744]: I0421 02:46:38.497688 2744 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 21 02:46:38.506587 kubelet[2744]: I0421 02:46:38.506366 2744 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Apr 21 02:46:38.506587 kubelet[2744]: I0421 02:46:38.506425 2744 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 21 02:46:38.506587 kubelet[2744]: I0421 02:46:38.506443 2744 watchdog_linux.go:95] "Systemd watchdog is not enabled" Apr 21 02:46:38.506587 kubelet[2744]: I0421 02:46:38.506451 2744 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 21 02:46:38.506964 kubelet[2744]: I0421 02:46:38.506607 2744 server.go:956] "Client rotation is on, will bootstrap in background" Apr 21 02:46:38.508533 kubelet[2744]: I0421 02:46:38.508452 2744 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 21 02:46:38.512706 kubelet[2744]: I0421 02:46:38.512512 2744 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 21 02:46:38.518622 kubelet[2744]: I0421 02:46:38.518562 2744 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Apr 21 02:46:38.528589 kubelet[2744]: I0421 02:46:38.528458 2744 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Apr 21 02:46:38.528811 kubelet[2744]: I0421 02:46:38.528725 2744 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 21 02:46:38.529221 kubelet[2744]: I0421 02:46:38.528813 2744 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 21 02:46:38.529221 kubelet[2744]: I0421 02:46:38.529130 2744 topology_manager.go:138] "Creating topology manager with none policy" Apr 21 02:46:38.529221 
kubelet[2744]: I0421 02:46:38.529141 2744 container_manager_linux.go:306] "Creating device plugin manager" Apr 21 02:46:38.529221 kubelet[2744]: I0421 02:46:38.529172 2744 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Apr 21 02:46:38.529640 kubelet[2744]: I0421 02:46:38.529554 2744 state_mem.go:36] "Initialized new in-memory state store" Apr 21 02:46:38.530564 kubelet[2744]: I0421 02:46:38.529747 2744 kubelet.go:475] "Attempting to sync node with API server" Apr 21 02:46:38.530564 kubelet[2744]: I0421 02:46:38.529762 2744 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 21 02:46:38.530564 kubelet[2744]: I0421 02:46:38.529783 2744 kubelet.go:387] "Adding apiserver pod source" Apr 21 02:46:38.530564 kubelet[2744]: I0421 02:46:38.529793 2744 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 21 02:46:38.531071 kubelet[2744]: I0421 02:46:38.530796 2744 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Apr 21 02:46:38.532261 kubelet[2744]: I0421 02:46:38.532123 2744 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 21 02:46:38.532261 kubelet[2744]: I0421 02:46:38.532226 2744 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Apr 21 02:46:38.540546 kubelet[2744]: I0421 02:46:38.540424 2744 server.go:1262] "Started kubelet" Apr 21 02:46:38.541934 kubelet[2744]: I0421 02:46:38.541911 2744 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 21 02:46:38.545294 kubelet[2744]: I0421 02:46:38.543362 2744 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 21 02:46:38.545294 kubelet[2744]: I0421 02:46:38.543407 2744 server_v1.go:49] 
"podresources" method="list" useActivePods=true Apr 21 02:46:38.545294 kubelet[2744]: I0421 02:46:38.543612 2744 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 21 02:46:38.545845 kubelet[2744]: I0421 02:46:38.545738 2744 server.go:310] "Adding debug handlers to kubelet server" Apr 21 02:46:38.553689 kubelet[2744]: I0421 02:46:38.553601 2744 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 21 02:46:38.555346 kubelet[2744]: I0421 02:46:38.555095 2744 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 21 02:46:38.557145 kubelet[2744]: I0421 02:46:38.556138 2744 volume_manager.go:313] "Starting Kubelet Volume Manager" Apr 21 02:46:38.559674 kubelet[2744]: I0421 02:46:38.559659 2744 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 21 02:46:38.561235 kubelet[2744]: I0421 02:46:38.559948 2744 factory.go:223] Registration of the systemd container factory successfully Apr 21 02:46:38.561235 kubelet[2744]: I0421 02:46:38.560266 2744 reconciler.go:29] "Reconciler: start to sync state" Apr 21 02:46:38.564731 kubelet[2744]: I0421 02:46:38.564329 2744 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 21 02:46:38.566727 kubelet[2744]: I0421 02:46:38.566592 2744 factory.go:223] Registration of the containerd container factory successfully Apr 21 02:46:38.572875 kubelet[2744]: E0421 02:46:38.572609 2744 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 21 02:46:38.594533 kubelet[2744]: I0421 02:46:38.594441 2744 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv4" Apr 21 02:46:38.600072 kubelet[2744]: I0421 02:46:38.599934 2744 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Apr 21 02:46:38.600072 kubelet[2744]: I0421 02:46:38.599950 2744 status_manager.go:244] "Starting to sync pod status with apiserver" Apr 21 02:46:38.600072 kubelet[2744]: I0421 02:46:38.599966 2744 kubelet.go:2428] "Starting kubelet main sync loop" Apr 21 02:46:38.600248 kubelet[2744]: E0421 02:46:38.600080 2744 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 21 02:46:38.615147 kubelet[2744]: I0421 02:46:38.614962 2744 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 21 02:46:38.615147 kubelet[2744]: I0421 02:46:38.615108 2744 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 21 02:46:38.615147 kubelet[2744]: I0421 02:46:38.615123 2744 state_mem.go:36] "Initialized new in-memory state store" Apr 21 02:46:38.615339 kubelet[2744]: I0421 02:46:38.615261 2744 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 21 02:46:38.615339 kubelet[2744]: I0421 02:46:38.615268 2744 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 21 02:46:38.615339 kubelet[2744]: I0421 02:46:38.615278 2744 policy_none.go:49] "None policy: Start" Apr 21 02:46:38.615339 kubelet[2744]: I0421 02:46:38.615287 2744 memory_manager.go:187] "Starting memorymanager" policy="None" Apr 21 02:46:38.615339 kubelet[2744]: I0421 02:46:38.615293 2744 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Apr 21 02:46:38.615426 kubelet[2744]: I0421 02:46:38.615362 2744 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Apr 21 02:46:38.615426 kubelet[2744]: I0421 02:46:38.615368 2744 policy_none.go:47] "Start" Apr 21 02:46:38.623285 kubelet[2744]: E0421 02:46:38.623070 2744 manager.go:513] "Failed to read data from 
checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 21 02:46:38.623285 kubelet[2744]: I0421 02:46:38.623289 2744 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 21 02:46:38.623396 kubelet[2744]: I0421 02:46:38.623299 2744 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 21 02:46:38.623485 kubelet[2744]: I0421 02:46:38.623461 2744 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 21 02:46:38.626383 kubelet[2744]: E0421 02:46:38.626149 2744 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 21 02:46:38.702121 kubelet[2744]: I0421 02:46:38.701880 2744 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 21 02:46:38.702541 kubelet[2744]: I0421 02:46:38.701885 2744 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 21 02:46:38.702898 kubelet[2744]: I0421 02:46:38.702248 2744 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 21 02:46:38.710889 kubelet[2744]: E0421 02:46:38.710755 2744 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Apr 21 02:46:38.711455 kubelet[2744]: E0421 02:46:38.711336 2744 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Apr 21 02:46:38.711455 kubelet[2744]: E0421 02:46:38.711411 2744 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Apr 21 02:46:38.740552 kubelet[2744]: I0421 02:46:38.740459 2744 kubelet_node_status.go:75] "Attempting to register 
node" node="localhost" Apr 21 02:46:38.750527 kubelet[2744]: I0421 02:46:38.750491 2744 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Apr 21 02:46:38.750763 kubelet[2744]: I0421 02:46:38.750689 2744 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Apr 21 02:46:38.761930 kubelet[2744]: I0421 02:46:38.761721 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/824fd89300514e351ed3b68d82c665c6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"824fd89300514e351ed3b68d82c665c6\") " pod="kube-system/kube-scheduler-localhost" Apr 21 02:46:38.761930 kubelet[2744]: I0421 02:46:38.761758 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/87a74ef0489b4b574a728fa47c7d2a61-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"87a74ef0489b4b574a728fa47c7d2a61\") " pod="kube-system/kube-apiserver-localhost" Apr 21 02:46:38.761930 kubelet[2744]: I0421 02:46:38.761777 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 21 02:46:38.761930 kubelet[2744]: I0421 02:46:38.761794 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 21 02:46:38.761930 kubelet[2744]: I0421 02:46:38.761815 2744 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 21 02:46:38.762340 kubelet[2744]: I0421 02:46:38.761831 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 21 02:46:38.762340 kubelet[2744]: I0421 02:46:38.761843 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 21 02:46:38.762340 kubelet[2744]: I0421 02:46:38.761854 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/87a74ef0489b4b574a728fa47c7d2a61-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"87a74ef0489b4b574a728fa47c7d2a61\") " pod="kube-system/kube-apiserver-localhost" Apr 21 02:46:38.762340 kubelet[2744]: I0421 02:46:38.761870 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/87a74ef0489b4b574a728fa47c7d2a61-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"87a74ef0489b4b574a728fa47c7d2a61\") " pod="kube-system/kube-apiserver-localhost" Apr 21 02:46:38.906676 sudo[2787]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf 
/opt/bin/cilium.tar.gz -C /opt/bin Apr 21 02:46:38.907168 sudo[2787]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Apr 21 02:46:39.011916 kubelet[2744]: E0421 02:46:39.011723 2744 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 02:46:39.013308 kubelet[2744]: E0421 02:46:39.013166 2744 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 02:46:39.013496 kubelet[2744]: E0421 02:46:39.013394 2744 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 02:46:39.293292 sudo[2787]: pam_unix(sudo:session): session closed for user root Apr 21 02:46:39.531531 kubelet[2744]: I0421 02:46:39.531145 2744 apiserver.go:52] "Watching apiserver" Apr 21 02:46:39.562393 kubelet[2744]: I0421 02:46:39.561706 2744 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 21 02:46:39.617385 kubelet[2744]: E0421 02:46:39.617163 2744 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 02:46:39.617385 kubelet[2744]: E0421 02:46:39.617422 2744 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 02:46:39.617928 kubelet[2744]: E0421 02:46:39.617904 2744 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 02:46:39.628841 kubelet[2744]: I0421 02:46:39.628552 2744 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.628534985 podStartE2EDuration="2.628534985s" podCreationTimestamp="2026-04-21 02:46:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 02:46:39.625273096 +0000 UTC m=+1.197345015" watchObservedRunningTime="2026-04-21 02:46:39.628534985 +0000 UTC m=+1.200606911" Apr 21 02:46:39.628841 kubelet[2744]: I0421 02:46:39.628633 2744 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.628629766 podStartE2EDuration="2.628629766s" podCreationTimestamp="2026-04-21 02:46:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 02:46:39.61511976 +0000 UTC m=+1.187191690" watchObservedRunningTime="2026-04-21 02:46:39.628629766 +0000 UTC m=+1.200701694" Apr 21 02:46:39.639963 kubelet[2744]: I0421 02:46:39.639857 2744 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.639841151 podStartE2EDuration="2.639841151s" podCreationTimestamp="2026-04-21 02:46:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 02:46:39.63887631 +0000 UTC m=+1.210948234" watchObservedRunningTime="2026-04-21 02:46:39.639841151 +0000 UTC m=+1.211913081" Apr 21 02:46:40.603281 sudo[1796]: pam_unix(sudo:session): session closed for user root Apr 21 02:46:40.604387 sshd[1795]: Connection closed by 10.0.0.1 port 57084 Apr 21 02:46:40.605248 sshd-session[1792]: pam_unix(sshd:session): session closed for user core Apr 21 02:46:40.608764 systemd[1]: sshd@6-10.0.0.33:22-10.0.0.1:57084.service: Deactivated successfully. 
Apr 21 02:46:40.610631 systemd[1]: session-7.scope: Deactivated successfully. Apr 21 02:46:40.610878 systemd[1]: session-7.scope: Consumed 5.516s CPU time, 271.7M memory peak. Apr 21 02:46:40.612243 systemd-logind[1573]: Session 7 logged out. Waiting for processes to exit. Apr 21 02:46:40.613752 systemd-logind[1573]: Removed session 7. Apr 21 02:46:40.619645 kubelet[2744]: E0421 02:46:40.619559 2744 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 02:46:40.619843 kubelet[2744]: E0421 02:46:40.619788 2744 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 02:46:44.623717 kubelet[2744]: I0421 02:46:44.623678 2744 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 21 02:46:44.624473 containerd[1598]: time="2026-04-21T02:46:44.624323524Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 21 02:46:44.624606 kubelet[2744]: I0421 02:46:44.624498 2744 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 21 02:46:45.069170 kubelet[2744]: E0421 02:46:45.068350 2744 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 02:46:45.434714 systemd[1]: Created slice kubepods-besteffort-pod3bf11da8_6fdf_4247_9d45_97ef1e1a8306.slice - libcontainer container kubepods-besteffort-pod3bf11da8_6fdf_4247_9d45_97ef1e1a8306.slice. Apr 21 02:46:45.449600 systemd[1]: Created slice kubepods-burstable-pod0738958f_c984_4d55_8099_6a5cc0cbda55.slice - libcontainer container kubepods-burstable-pod0738958f_c984_4d55_8099_6a5cc0cbda55.slice. 
Apr 21 02:46:45.512121 kubelet[2744]: I0421 02:46:45.511926 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0738958f-c984-4d55-8099-6a5cc0cbda55-cni-path\") pod \"cilium-sxfrg\" (UID: \"0738958f-c984-4d55-8099-6a5cc0cbda55\") " pod="kube-system/cilium-sxfrg" Apr 21 02:46:45.512121 kubelet[2744]: I0421 02:46:45.512093 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0738958f-c984-4d55-8099-6a5cc0cbda55-etc-cni-netd\") pod \"cilium-sxfrg\" (UID: \"0738958f-c984-4d55-8099-6a5cc0cbda55\") " pod="kube-system/cilium-sxfrg" Apr 21 02:46:45.512121 kubelet[2744]: I0421 02:46:45.512111 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cknj8\" (UniqueName: \"kubernetes.io/projected/3bf11da8-6fdf-4247-9d45-97ef1e1a8306-kube-api-access-cknj8\") pod \"kube-proxy-scztl\" (UID: \"3bf11da8-6fdf-4247-9d45-97ef1e1a8306\") " pod="kube-system/kube-proxy-scztl" Apr 21 02:46:45.512121 kubelet[2744]: I0421 02:46:45.512124 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0738958f-c984-4d55-8099-6a5cc0cbda55-hostproc\") pod \"cilium-sxfrg\" (UID: \"0738958f-c984-4d55-8099-6a5cc0cbda55\") " pod="kube-system/cilium-sxfrg" Apr 21 02:46:45.512121 kubelet[2744]: I0421 02:46:45.512135 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0738958f-c984-4d55-8099-6a5cc0cbda55-cilium-cgroup\") pod \"cilium-sxfrg\" (UID: \"0738958f-c984-4d55-8099-6a5cc0cbda55\") " pod="kube-system/cilium-sxfrg" Apr 21 02:46:45.512393 kubelet[2744]: I0421 02:46:45.512146 2744 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0738958f-c984-4d55-8099-6a5cc0cbda55-xtables-lock\") pod \"cilium-sxfrg\" (UID: \"0738958f-c984-4d55-8099-6a5cc0cbda55\") " pod="kube-system/cilium-sxfrg" Apr 21 02:46:45.512393 kubelet[2744]: I0421 02:46:45.512156 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0738958f-c984-4d55-8099-6a5cc0cbda55-host-proc-sys-net\") pod \"cilium-sxfrg\" (UID: \"0738958f-c984-4d55-8099-6a5cc0cbda55\") " pod="kube-system/cilium-sxfrg" Apr 21 02:46:45.512393 kubelet[2744]: I0421 02:46:45.512165 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0738958f-c984-4d55-8099-6a5cc0cbda55-hubble-tls\") pod \"cilium-sxfrg\" (UID: \"0738958f-c984-4d55-8099-6a5cc0cbda55\") " pod="kube-system/cilium-sxfrg" Apr 21 02:46:45.512393 kubelet[2744]: I0421 02:46:45.512174 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0738958f-c984-4d55-8099-6a5cc0cbda55-cilium-run\") pod \"cilium-sxfrg\" (UID: \"0738958f-c984-4d55-8099-6a5cc0cbda55\") " pod="kube-system/cilium-sxfrg" Apr 21 02:46:45.512393 kubelet[2744]: I0421 02:46:45.512185 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3bf11da8-6fdf-4247-9d45-97ef1e1a8306-xtables-lock\") pod \"kube-proxy-scztl\" (UID: \"3bf11da8-6fdf-4247-9d45-97ef1e1a8306\") " pod="kube-system/kube-proxy-scztl" Apr 21 02:46:45.512393 kubelet[2744]: I0421 02:46:45.512195 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/0738958f-c984-4d55-8099-6a5cc0cbda55-cilium-config-path\") pod \"cilium-sxfrg\" (UID: \"0738958f-c984-4d55-8099-6a5cc0cbda55\") " pod="kube-system/cilium-sxfrg" Apr 21 02:46:45.512508 kubelet[2744]: I0421 02:46:45.512247 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6jwl\" (UniqueName: \"kubernetes.io/projected/0738958f-c984-4d55-8099-6a5cc0cbda55-kube-api-access-x6jwl\") pod \"cilium-sxfrg\" (UID: \"0738958f-c984-4d55-8099-6a5cc0cbda55\") " pod="kube-system/cilium-sxfrg" Apr 21 02:46:45.512508 kubelet[2744]: I0421 02:46:45.512258 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3bf11da8-6fdf-4247-9d45-97ef1e1a8306-kube-proxy\") pod \"kube-proxy-scztl\" (UID: \"3bf11da8-6fdf-4247-9d45-97ef1e1a8306\") " pod="kube-system/kube-proxy-scztl" Apr 21 02:46:45.512508 kubelet[2744]: I0421 02:46:45.512267 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3bf11da8-6fdf-4247-9d45-97ef1e1a8306-lib-modules\") pod \"kube-proxy-scztl\" (UID: \"3bf11da8-6fdf-4247-9d45-97ef1e1a8306\") " pod="kube-system/kube-proxy-scztl" Apr 21 02:46:45.512508 kubelet[2744]: I0421 02:46:45.512279 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0738958f-c984-4d55-8099-6a5cc0cbda55-bpf-maps\") pod \"cilium-sxfrg\" (UID: \"0738958f-c984-4d55-8099-6a5cc0cbda55\") " pod="kube-system/cilium-sxfrg" Apr 21 02:46:45.512508 kubelet[2744]: I0421 02:46:45.512288 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0738958f-c984-4d55-8099-6a5cc0cbda55-lib-modules\") pod \"cilium-sxfrg\" (UID: 
\"0738958f-c984-4d55-8099-6a5cc0cbda55\") " pod="kube-system/cilium-sxfrg" Apr 21 02:46:45.512508 kubelet[2744]: I0421 02:46:45.512298 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0738958f-c984-4d55-8099-6a5cc0cbda55-clustermesh-secrets\") pod \"cilium-sxfrg\" (UID: \"0738958f-c984-4d55-8099-6a5cc0cbda55\") " pod="kube-system/cilium-sxfrg" Apr 21 02:46:45.512619 kubelet[2744]: I0421 02:46:45.512309 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0738958f-c984-4d55-8099-6a5cc0cbda55-host-proc-sys-kernel\") pod \"cilium-sxfrg\" (UID: \"0738958f-c984-4d55-8099-6a5cc0cbda55\") " pod="kube-system/cilium-sxfrg" Apr 21 02:46:45.633884 kubelet[2744]: E0421 02:46:45.633785 2744 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 02:46:45.754344 kubelet[2744]: E0421 02:46:45.753723 2744 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 02:46:45.755149 containerd[1598]: time="2026-04-21T02:46:45.754686061Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-scztl,Uid:3bf11da8-6fdf-4247-9d45-97ef1e1a8306,Namespace:kube-system,Attempt:0,}" Apr 21 02:46:45.763600 kubelet[2744]: E0421 02:46:45.763496 2744 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 02:46:45.771657 containerd[1598]: time="2026-04-21T02:46:45.771474570Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:cilium-sxfrg,Uid:0738958f-c984-4d55-8099-6a5cc0cbda55,Namespace:kube-system,Attempt:0,}" Apr 21 02:46:45.809173 systemd[1]: Created slice kubepods-besteffort-pod2d9df213_08ab_46dd_9029_7c0d4453f2ec.slice - libcontainer container kubepods-besteffort-pod2d9df213_08ab_46dd_9029_7c0d4453f2ec.slice. Apr 21 02:46:45.811460 containerd[1598]: time="2026-04-21T02:46:45.811356499Z" level=info msg="connecting to shim ab65094bb4d33a5f5bf9eda384fa724d054ad10cbd3d06928049ed1638205d0d" address="unix:///run/containerd/s/87c800f14b0fd92fb12524b259de6c4bdf3fa4c5008e32a3622303f1942fc112" namespace=k8s.io protocol=ttrpc version=3 Apr 21 02:46:45.817874 kubelet[2744]: I0421 02:46:45.816494 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gsdl\" (UniqueName: \"kubernetes.io/projected/2d9df213-08ab-46dd-9029-7c0d4453f2ec-kube-api-access-8gsdl\") pod \"cilium-operator-6f9c7c5859-kd7tn\" (UID: \"2d9df213-08ab-46dd-9029-7c0d4453f2ec\") " pod="kube-system/cilium-operator-6f9c7c5859-kd7tn" Apr 21 02:46:45.819092 kubelet[2744]: I0421 02:46:45.818348 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2d9df213-08ab-46dd-9029-7c0d4453f2ec-cilium-config-path\") pod \"cilium-operator-6f9c7c5859-kd7tn\" (UID: \"2d9df213-08ab-46dd-9029-7c0d4453f2ec\") " pod="kube-system/cilium-operator-6f9c7c5859-kd7tn" Apr 21 02:46:45.846584 containerd[1598]: time="2026-04-21T02:46:45.846132072Z" level=info msg="connecting to shim c22340d83558fa63e11c3a5226f06015826f255ae448d4f89edb35db172195cd" address="unix:///run/containerd/s/cd3e30a5f543567828a55e561269ef61d4c16095e3bbd080cf9cf7aa0d7426d2" namespace=k8s.io protocol=ttrpc version=3 Apr 21 02:46:45.890198 systemd[1]: Started cri-containerd-ab65094bb4d33a5f5bf9eda384fa724d054ad10cbd3d06928049ed1638205d0d.scope - libcontainer container 
ab65094bb4d33a5f5bf9eda384fa724d054ad10cbd3d06928049ed1638205d0d. Apr 21 02:46:45.918367 systemd[1]: Started cri-containerd-c22340d83558fa63e11c3a5226f06015826f255ae448d4f89edb35db172195cd.scope - libcontainer container c22340d83558fa63e11c3a5226f06015826f255ae448d4f89edb35db172195cd. Apr 21 02:46:45.961584 containerd[1598]: time="2026-04-21T02:46:45.961548510Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-sxfrg,Uid:0738958f-c984-4d55-8099-6a5cc0cbda55,Namespace:kube-system,Attempt:0,} returns sandbox id \"c22340d83558fa63e11c3a5226f06015826f255ae448d4f89edb35db172195cd\"" Apr 21 02:46:45.962737 containerd[1598]: time="2026-04-21T02:46:45.962605783Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-scztl,Uid:3bf11da8-6fdf-4247-9d45-97ef1e1a8306,Namespace:kube-system,Attempt:0,} returns sandbox id \"ab65094bb4d33a5f5bf9eda384fa724d054ad10cbd3d06928049ed1638205d0d\"" Apr 21 02:46:45.963148 kubelet[2744]: E0421 02:46:45.962952 2744 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 02:46:45.963623 kubelet[2744]: E0421 02:46:45.963573 2744 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 02:46:45.964502 containerd[1598]: time="2026-04-21T02:46:45.964483228Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Apr 21 02:46:45.969863 containerd[1598]: time="2026-04-21T02:46:45.969843964Z" level=info msg="CreateContainer within sandbox \"ab65094bb4d33a5f5bf9eda384fa724d054ad10cbd3d06928049ed1638205d0d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 21 02:46:45.982486 containerd[1598]: time="2026-04-21T02:46:45.982272273Z" level=info msg="Container 
635d21f9ead814dab184c57a79b091947ff366d1a34d354c9716a3e776af52fc: CDI devices from CRI Config.CDIDevices: []" Apr 21 02:46:45.995578 containerd[1598]: time="2026-04-21T02:46:45.995322288Z" level=info msg="CreateContainer within sandbox \"ab65094bb4d33a5f5bf9eda384fa724d054ad10cbd3d06928049ed1638205d0d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"635d21f9ead814dab184c57a79b091947ff366d1a34d354c9716a3e776af52fc\"" Apr 21 02:46:45.996797 containerd[1598]: time="2026-04-21T02:46:45.996659998Z" level=info msg="StartContainer for \"635d21f9ead814dab184c57a79b091947ff366d1a34d354c9716a3e776af52fc\"" Apr 21 02:46:45.998629 containerd[1598]: time="2026-04-21T02:46:45.998295907Z" level=info msg="connecting to shim 635d21f9ead814dab184c57a79b091947ff366d1a34d354c9716a3e776af52fc" address="unix:///run/containerd/s/87c800f14b0fd92fb12524b259de6c4bdf3fa4c5008e32a3622303f1942fc112" protocol=ttrpc version=3 Apr 21 02:46:46.038334 systemd[1]: Started cri-containerd-635d21f9ead814dab184c57a79b091947ff366d1a34d354c9716a3e776af52fc.scope - libcontainer container 635d21f9ead814dab184c57a79b091947ff366d1a34d354c9716a3e776af52fc. 
Apr 21 02:46:46.122456 kubelet[2744]: E0421 02:46:46.121873 2744 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 02:46:46.124573 containerd[1598]: time="2026-04-21T02:46:46.124494180Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-kd7tn,Uid:2d9df213-08ab-46dd-9029-7c0d4453f2ec,Namespace:kube-system,Attempt:0,}" Apr 21 02:46:46.131937 containerd[1598]: time="2026-04-21T02:46:46.131866005Z" level=info msg="StartContainer for \"635d21f9ead814dab184c57a79b091947ff366d1a34d354c9716a3e776af52fc\" returns successfully" Apr 21 02:46:46.160283 containerd[1598]: time="2026-04-21T02:46:46.160169011Z" level=info msg="connecting to shim 3a1be57fcb5bb831ad334998c54f40bf60fdfd1bb64d7aedf333fb70bb1a3424" address="unix:///run/containerd/s/47877a78821f79bd440c33ca4aa5177630b8934bd6f8b276a88a928c731f5f61" namespace=k8s.io protocol=ttrpc version=3 Apr 21 02:46:46.193497 systemd[1]: Started cri-containerd-3a1be57fcb5bb831ad334998c54f40bf60fdfd1bb64d7aedf333fb70bb1a3424.scope - libcontainer container 3a1be57fcb5bb831ad334998c54f40bf60fdfd1bb64d7aedf333fb70bb1a3424. 
Apr 21 02:46:46.264511 containerd[1598]: time="2026-04-21T02:46:46.264426749Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-kd7tn,Uid:2d9df213-08ab-46dd-9029-7c0d4453f2ec,Namespace:kube-system,Attempt:0,} returns sandbox id \"3a1be57fcb5bb831ad334998c54f40bf60fdfd1bb64d7aedf333fb70bb1a3424\"" Apr 21 02:46:46.269829 kubelet[2744]: E0421 02:46:46.269153 2744 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 02:46:46.302386 kubelet[2744]: E0421 02:46:46.302257 2744 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 02:46:46.645510 kubelet[2744]: E0421 02:46:46.645313 2744 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 02:46:46.645510 kubelet[2744]: E0421 02:46:46.645454 2744 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 02:46:46.660479 kubelet[2744]: I0421 02:46:46.660370 2744 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-scztl" podStartSLOduration=1.6603471490000001 podStartE2EDuration="1.660347149s" podCreationTimestamp="2026-04-21 02:46:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 02:46:46.658667884 +0000 UTC m=+8.230739813" watchObservedRunningTime="2026-04-21 02:46:46.660347149 +0000 UTC m=+8.232419081" Apr 21 02:46:47.647964 kubelet[2744]: E0421 02:46:47.647874 2744 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 02:46:48.225749 kubelet[2744]: E0421 02:46:48.225492 2744 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 02:46:48.650943 kubelet[2744]: E0421 02:46:48.650863 2744 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 02:46:49.652962 kubelet[2744]: E0421 02:46:49.652640 2744 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 02:46:52.647673 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1466358579.mount: Deactivated successfully. Apr 21 02:46:55.063385 containerd[1598]: time="2026-04-21T02:46:55.062753402Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 02:46:55.064702 containerd[1598]: time="2026-04-21T02:46:55.064643501Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Apr 21 02:46:55.066585 containerd[1598]: time="2026-04-21T02:46:55.066545135Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 02:46:55.068116 containerd[1598]: time="2026-04-21T02:46:55.067859080Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id 
\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 9.103250835s" Apr 21 02:46:55.068116 containerd[1598]: time="2026-04-21T02:46:55.067888277Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Apr 21 02:46:55.070954 containerd[1598]: time="2026-04-21T02:46:55.070879936Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Apr 21 02:46:55.080508 containerd[1598]: time="2026-04-21T02:46:55.079826149Z" level=info msg="CreateContainer within sandbox \"c22340d83558fa63e11c3a5226f06015826f255ae448d4f89edb35db172195cd\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Apr 21 02:46:55.096238 containerd[1598]: time="2026-04-21T02:46:55.095954591Z" level=info msg="Container 403683d507bdc716d781de64f8a38ce5c3e9061326d5b803b24ed369d34ddbc9: CDI devices from CRI Config.CDIDevices: []" Apr 21 02:46:55.106283 containerd[1598]: time="2026-04-21T02:46:55.106128001Z" level=info msg="CreateContainer within sandbox \"c22340d83558fa63e11c3a5226f06015826f255ae448d4f89edb35db172195cd\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"403683d507bdc716d781de64f8a38ce5c3e9061326d5b803b24ed369d34ddbc9\"" Apr 21 02:46:55.107532 containerd[1598]: time="2026-04-21T02:46:55.107455826Z" level=info msg="StartContainer for \"403683d507bdc716d781de64f8a38ce5c3e9061326d5b803b24ed369d34ddbc9\"" Apr 21 02:46:55.108311 containerd[1598]: time="2026-04-21T02:46:55.108207744Z" level=info msg="connecting to shim 403683d507bdc716d781de64f8a38ce5c3e9061326d5b803b24ed369d34ddbc9" 
address="unix:///run/containerd/s/cd3e30a5f543567828a55e561269ef61d4c16095e3bbd080cf9cf7aa0d7426d2" protocol=ttrpc version=3 Apr 21 02:46:55.166385 systemd[1]: Started cri-containerd-403683d507bdc716d781de64f8a38ce5c3e9061326d5b803b24ed369d34ddbc9.scope - libcontainer container 403683d507bdc716d781de64f8a38ce5c3e9061326d5b803b24ed369d34ddbc9. Apr 21 02:46:55.210467 containerd[1598]: time="2026-04-21T02:46:55.210246273Z" level=info msg="StartContainer for \"403683d507bdc716d781de64f8a38ce5c3e9061326d5b803b24ed369d34ddbc9\" returns successfully" Apr 21 02:46:55.228651 systemd[1]: cri-containerd-403683d507bdc716d781de64f8a38ce5c3e9061326d5b803b24ed369d34ddbc9.scope: Deactivated successfully. Apr 21 02:46:55.240664 containerd[1598]: time="2026-04-21T02:46:55.240429607Z" level=info msg="received container exit event container_id:\"403683d507bdc716d781de64f8a38ce5c3e9061326d5b803b24ed369d34ddbc9\" id:\"403683d507bdc716d781de64f8a38ce5c3e9061326d5b803b24ed369d34ddbc9\" pid:3180 exited_at:{seconds:1776739615 nanos:239355463}" Apr 21 02:46:55.273520 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-403683d507bdc716d781de64f8a38ce5c3e9061326d5b803b24ed369d34ddbc9-rootfs.mount: Deactivated successfully. 
Apr 21 02:46:55.671332 kubelet[2744]: E0421 02:46:55.671208 2744 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 02:46:55.681087 containerd[1598]: time="2026-04-21T02:46:55.680854068Z" level=info msg="CreateContainer within sandbox \"c22340d83558fa63e11c3a5226f06015826f255ae448d4f89edb35db172195cd\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Apr 21 02:46:55.695764 containerd[1598]: time="2026-04-21T02:46:55.695271256Z" level=info msg="Container 5b4daa6113fed8f8e9a022cce0116136165445e46993b2ff50c704ab15f8ff5d: CDI devices from CRI Config.CDIDevices: []" Apr 21 02:46:55.705572 containerd[1598]: time="2026-04-21T02:46:55.705470996Z" level=info msg="CreateContainer within sandbox \"c22340d83558fa63e11c3a5226f06015826f255ae448d4f89edb35db172195cd\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"5b4daa6113fed8f8e9a022cce0116136165445e46993b2ff50c704ab15f8ff5d\"" Apr 21 02:46:55.708441 containerd[1598]: time="2026-04-21T02:46:55.708122207Z" level=info msg="StartContainer for \"5b4daa6113fed8f8e9a022cce0116136165445e46993b2ff50c704ab15f8ff5d\"" Apr 21 02:46:55.709724 containerd[1598]: time="2026-04-21T02:46:55.709703632Z" level=info msg="connecting to shim 5b4daa6113fed8f8e9a022cce0116136165445e46993b2ff50c704ab15f8ff5d" address="unix:///run/containerd/s/cd3e30a5f543567828a55e561269ef61d4c16095e3bbd080cf9cf7aa0d7426d2" protocol=ttrpc version=3 Apr 21 02:46:55.736472 systemd[1]: Started cri-containerd-5b4daa6113fed8f8e9a022cce0116136165445e46993b2ff50c704ab15f8ff5d.scope - libcontainer container 5b4daa6113fed8f8e9a022cce0116136165445e46993b2ff50c704ab15f8ff5d. 
Apr 21 02:46:55.785648 containerd[1598]: time="2026-04-21T02:46:55.785484809Z" level=info msg="StartContainer for \"5b4daa6113fed8f8e9a022cce0116136165445e46993b2ff50c704ab15f8ff5d\" returns successfully" Apr 21 02:46:55.804576 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 21 02:46:55.805090 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 21 02:46:55.806394 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Apr 21 02:46:55.808478 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 21 02:46:55.812100 systemd[1]: cri-containerd-5b4daa6113fed8f8e9a022cce0116136165445e46993b2ff50c704ab15f8ff5d.scope: Deactivated successfully. Apr 21 02:46:55.813246 containerd[1598]: time="2026-04-21T02:46:55.812688436Z" level=info msg="received container exit event container_id:\"5b4daa6113fed8f8e9a022cce0116136165445e46993b2ff50c704ab15f8ff5d\" id:\"5b4daa6113fed8f8e9a022cce0116136165445e46993b2ff50c704ab15f8ff5d\" pid:3225 exited_at:{seconds:1776739615 nanos:812516714}" Apr 21 02:46:55.852726 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 21 02:46:56.483863 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount645346740.mount: Deactivated successfully. Apr 21 02:46:56.675718 kubelet[2744]: E0421 02:46:56.675605 2744 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 02:46:56.685742 containerd[1598]: time="2026-04-21T02:46:56.685516651Z" level=info msg="CreateContainer within sandbox \"c22340d83558fa63e11c3a5226f06015826f255ae448d4f89edb35db172195cd\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Apr 21 02:46:56.719831 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount819792370.mount: Deactivated successfully. 
Apr 21 02:46:56.723925 containerd[1598]: time="2026-04-21T02:46:56.721596242Z" level=info msg="Container 3383e46ac742022c501b9c6eef1254503448525494243de9cf3d4d6b61ba2cd1: CDI devices from CRI Config.CDIDevices: []" Apr 21 02:46:56.736435 containerd[1598]: time="2026-04-21T02:46:56.736125663Z" level=info msg="CreateContainer within sandbox \"c22340d83558fa63e11c3a5226f06015826f255ae448d4f89edb35db172195cd\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"3383e46ac742022c501b9c6eef1254503448525494243de9cf3d4d6b61ba2cd1\"" Apr 21 02:46:56.738281 containerd[1598]: time="2026-04-21T02:46:56.737946518Z" level=info msg="StartContainer for \"3383e46ac742022c501b9c6eef1254503448525494243de9cf3d4d6b61ba2cd1\"" Apr 21 02:46:56.740191 containerd[1598]: time="2026-04-21T02:46:56.739962794Z" level=info msg="connecting to shim 3383e46ac742022c501b9c6eef1254503448525494243de9cf3d4d6b61ba2cd1" address="unix:///run/containerd/s/cd3e30a5f543567828a55e561269ef61d4c16095e3bbd080cf9cf7aa0d7426d2" protocol=ttrpc version=3 Apr 21 02:46:56.769268 systemd[1]: Started cri-containerd-3383e46ac742022c501b9c6eef1254503448525494243de9cf3d4d6b61ba2cd1.scope - libcontainer container 3383e46ac742022c501b9c6eef1254503448525494243de9cf3d4d6b61ba2cd1. Apr 21 02:46:56.848220 systemd[1]: cri-containerd-3383e46ac742022c501b9c6eef1254503448525494243de9cf3d4d6b61ba2cd1.scope: Deactivated successfully. 
Apr 21 02:46:56.850347 containerd[1598]: time="2026-04-21T02:46:56.850306028Z" level=info msg="StartContainer for \"3383e46ac742022c501b9c6eef1254503448525494243de9cf3d4d6b61ba2cd1\" returns successfully" Apr 21 02:46:56.853197 containerd[1598]: time="2026-04-21T02:46:56.852938433Z" level=info msg="received container exit event container_id:\"3383e46ac742022c501b9c6eef1254503448525494243de9cf3d4d6b61ba2cd1\" id:\"3383e46ac742022c501b9c6eef1254503448525494243de9cf3d4d6b61ba2cd1\" pid:3285 exited_at:{seconds:1776739616 nanos:851930863}" Apr 21 02:46:57.198276 containerd[1598]: time="2026-04-21T02:46:57.197903854Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 02:46:57.198748 containerd[1598]: time="2026-04-21T02:46:57.198643291Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Apr 21 02:46:57.200737 containerd[1598]: time="2026-04-21T02:46:57.200657420Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 02:46:57.202288 containerd[1598]: time="2026-04-21T02:46:57.202200926Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.131246215s" Apr 21 02:46:57.202288 containerd[1598]: time="2026-04-21T02:46:57.202283009Z" level=info msg="PullImage 
\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Apr 21 02:46:57.210239 containerd[1598]: time="2026-04-21T02:46:57.210117976Z" level=info msg="CreateContainer within sandbox \"3a1be57fcb5bb831ad334998c54f40bf60fdfd1bb64d7aedf333fb70bb1a3424\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Apr 21 02:46:57.222903 containerd[1598]: time="2026-04-21T02:46:57.222843257Z" level=info msg="Container 77d597393cd532d17c4852c837fab651ef22679d450bc603a7ea47e7328a3ee3: CDI devices from CRI Config.CDIDevices: []" Apr 21 02:46:57.233217 containerd[1598]: time="2026-04-21T02:46:57.233188554Z" level=info msg="CreateContainer within sandbox \"3a1be57fcb5bb831ad334998c54f40bf60fdfd1bb64d7aedf333fb70bb1a3424\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"77d597393cd532d17c4852c837fab651ef22679d450bc603a7ea47e7328a3ee3\"" Apr 21 02:46:57.235346 containerd[1598]: time="2026-04-21T02:46:57.235226714Z" level=info msg="StartContainer for \"77d597393cd532d17c4852c837fab651ef22679d450bc603a7ea47e7328a3ee3\"" Apr 21 02:46:57.236099 containerd[1598]: time="2026-04-21T02:46:57.235825269Z" level=info msg="connecting to shim 77d597393cd532d17c4852c837fab651ef22679d450bc603a7ea47e7328a3ee3" address="unix:///run/containerd/s/47877a78821f79bd440c33ca4aa5177630b8934bd6f8b276a88a928c731f5f61" protocol=ttrpc version=3 Apr 21 02:46:57.262385 systemd[1]: Started cri-containerd-77d597393cd532d17c4852c837fab651ef22679d450bc603a7ea47e7328a3ee3.scope - libcontainer container 77d597393cd532d17c4852c837fab651ef22679d450bc603a7ea47e7328a3ee3. 
Apr 21 02:46:57.322441 containerd[1598]: time="2026-04-21T02:46:57.322281234Z" level=info msg="StartContainer for \"77d597393cd532d17c4852c837fab651ef22679d450bc603a7ea47e7328a3ee3\" returns successfully" Apr 21 02:46:57.682097 kubelet[2744]: E0421 02:46:57.681855 2744 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 02:46:57.689273 kubelet[2744]: E0421 02:46:57.689197 2744 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 02:46:57.698556 containerd[1598]: time="2026-04-21T02:46:57.698347927Z" level=info msg="CreateContainer within sandbox \"c22340d83558fa63e11c3a5226f06015826f255ae448d4f89edb35db172195cd\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Apr 21 02:46:57.722420 containerd[1598]: time="2026-04-21T02:46:57.722328496Z" level=info msg="Container cbe8fc89e276d5abf58bea619ad9e045e23f33d3052c59bc6ce0d7ebf9d9da2f: CDI devices from CRI Config.CDIDevices: []" Apr 21 02:46:57.735312 containerd[1598]: time="2026-04-21T02:46:57.735217814Z" level=info msg="CreateContainer within sandbox \"c22340d83558fa63e11c3a5226f06015826f255ae448d4f89edb35db172195cd\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"cbe8fc89e276d5abf58bea619ad9e045e23f33d3052c59bc6ce0d7ebf9d9da2f\"" Apr 21 02:46:57.737681 containerd[1598]: time="2026-04-21T02:46:57.737612322Z" level=info msg="StartContainer for \"cbe8fc89e276d5abf58bea619ad9e045e23f33d3052c59bc6ce0d7ebf9d9da2f\"" Apr 21 02:46:57.738796 containerd[1598]: time="2026-04-21T02:46:57.738709170Z" level=info msg="connecting to shim cbe8fc89e276d5abf58bea619ad9e045e23f33d3052c59bc6ce0d7ebf9d9da2f" address="unix:///run/containerd/s/cd3e30a5f543567828a55e561269ef61d4c16095e3bbd080cf9cf7aa0d7426d2" protocol=ttrpc version=3 Apr 21 
02:46:57.746523 kubelet[2744]: I0421 02:46:57.746363 2744 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6f9c7c5859-kd7tn" podStartSLOduration=1.815178499 podStartE2EDuration="12.746341729s" podCreationTimestamp="2026-04-21 02:46:45 +0000 UTC" firstStartedPulling="2026-04-21 02:46:46.27229521 +0000 UTC m=+7.844367129" lastFinishedPulling="2026-04-21 02:46:57.203458441 +0000 UTC m=+18.775530359" observedRunningTime="2026-04-21 02:46:57.702122919 +0000 UTC m=+19.274194841" watchObservedRunningTime="2026-04-21 02:46:57.746341729 +0000 UTC m=+19.318413662" Apr 21 02:46:57.791479 systemd[1]: Started cri-containerd-cbe8fc89e276d5abf58bea619ad9e045e23f33d3052c59bc6ce0d7ebf9d9da2f.scope - libcontainer container cbe8fc89e276d5abf58bea619ad9e045e23f33d3052c59bc6ce0d7ebf9d9da2f. Apr 21 02:46:57.895708 containerd[1598]: time="2026-04-21T02:46:57.895635468Z" level=info msg="StartContainer for \"cbe8fc89e276d5abf58bea619ad9e045e23f33d3052c59bc6ce0d7ebf9d9da2f\" returns successfully" Apr 21 02:46:57.907679 containerd[1598]: time="2026-04-21T02:46:57.907494921Z" level=info msg="received container exit event container_id:\"cbe8fc89e276d5abf58bea619ad9e045e23f33d3052c59bc6ce0d7ebf9d9da2f\" id:\"cbe8fc89e276d5abf58bea619ad9e045e23f33d3052c59bc6ce0d7ebf9d9da2f\" pid:3363 exited_at:{seconds:1776739617 nanos:907296611}" Apr 21 02:46:57.907589 systemd[1]: cri-containerd-cbe8fc89e276d5abf58bea619ad9e045e23f33d3052c59bc6ce0d7ebf9d9da2f.scope: Deactivated successfully. 
Apr 21 02:46:58.700928 kubelet[2744]: E0421 02:46:58.700732 2744 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 02:46:58.700928 kubelet[2744]: E0421 02:46:58.700888 2744 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 02:46:58.711476 containerd[1598]: time="2026-04-21T02:46:58.711384239Z" level=info msg="CreateContainer within sandbox \"c22340d83558fa63e11c3a5226f06015826f255ae448d4f89edb35db172195cd\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Apr 21 02:46:58.738725 containerd[1598]: time="2026-04-21T02:46:58.738648526Z" level=info msg="Container 399c0260334896bb662bbf8b70b367012d6c3fb93ae43e08ea06ebcd68c59be1: CDI devices from CRI Config.CDIDevices: []" Apr 21 02:46:58.752194 containerd[1598]: time="2026-04-21T02:46:58.752066469Z" level=info msg="CreateContainer within sandbox \"c22340d83558fa63e11c3a5226f06015826f255ae448d4f89edb35db172195cd\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"399c0260334896bb662bbf8b70b367012d6c3fb93ae43e08ea06ebcd68c59be1\"" Apr 21 02:46:58.753555 containerd[1598]: time="2026-04-21T02:46:58.753414607Z" level=info msg="StartContainer for \"399c0260334896bb662bbf8b70b367012d6c3fb93ae43e08ea06ebcd68c59be1\"" Apr 21 02:46:58.754435 containerd[1598]: time="2026-04-21T02:46:58.754381083Z" level=info msg="connecting to shim 399c0260334896bb662bbf8b70b367012d6c3fb93ae43e08ea06ebcd68c59be1" address="unix:///run/containerd/s/cd3e30a5f543567828a55e561269ef61d4c16095e3bbd080cf9cf7aa0d7426d2" protocol=ttrpc version=3 Apr 21 02:46:58.779334 systemd[1]: Started cri-containerd-399c0260334896bb662bbf8b70b367012d6c3fb93ae43e08ea06ebcd68c59be1.scope - libcontainer container 399c0260334896bb662bbf8b70b367012d6c3fb93ae43e08ea06ebcd68c59be1. 
Apr 21 02:46:58.862373 containerd[1598]: time="2026-04-21T02:46:58.862331035Z" level=info msg="StartContainer for \"399c0260334896bb662bbf8b70b367012d6c3fb93ae43e08ea06ebcd68c59be1\" returns successfully" Apr 21 02:46:59.032826 kubelet[2744]: I0421 02:46:59.032699 2744 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Apr 21 02:46:59.084889 systemd[1]: Created slice kubepods-burstable-podf8ad73b6_8652_4ec6_9f60_be974d75218d.slice - libcontainer container kubepods-burstable-podf8ad73b6_8652_4ec6_9f60_be974d75218d.slice. Apr 21 02:46:59.094924 systemd[1]: Created slice kubepods-burstable-pod5005f3df_8449_4b12_b74f_50d6bd35d558.slice - libcontainer container kubepods-burstable-pod5005f3df_8449_4b12_b74f_50d6bd35d558.slice. Apr 21 02:46:59.142460 kubelet[2744]: I0421 02:46:59.142341 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6cgw\" (UniqueName: \"kubernetes.io/projected/5005f3df-8449-4b12-b74f-50d6bd35d558-kube-api-access-k6cgw\") pod \"coredns-66bc5c9577-cwcgr\" (UID: \"5005f3df-8449-4b12-b74f-50d6bd35d558\") " pod="kube-system/coredns-66bc5c9577-cwcgr" Apr 21 02:46:59.142460 kubelet[2744]: I0421 02:46:59.142421 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wm9kj\" (UniqueName: \"kubernetes.io/projected/f8ad73b6-8652-4ec6-9f60-be974d75218d-kube-api-access-wm9kj\") pod \"coredns-66bc5c9577-j67fj\" (UID: \"f8ad73b6-8652-4ec6-9f60-be974d75218d\") " pod="kube-system/coredns-66bc5c9577-j67fj" Apr 21 02:46:59.142460 kubelet[2744]: I0421 02:46:59.142445 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5005f3df-8449-4b12-b74f-50d6bd35d558-config-volume\") pod \"coredns-66bc5c9577-cwcgr\" (UID: \"5005f3df-8449-4b12-b74f-50d6bd35d558\") " pod="kube-system/coredns-66bc5c9577-cwcgr" Apr 21 
02:46:59.142460 kubelet[2744]: I0421 02:46:59.142458 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f8ad73b6-8652-4ec6-9f60-be974d75218d-config-volume\") pod \"coredns-66bc5c9577-j67fj\" (UID: \"f8ad73b6-8652-4ec6-9f60-be974d75218d\") " pod="kube-system/coredns-66bc5c9577-j67fj" Apr 21 02:46:59.405507 kubelet[2744]: E0421 02:46:59.404841 2744 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 02:46:59.408698 containerd[1598]: time="2026-04-21T02:46:59.407284815Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-j67fj,Uid:f8ad73b6-8652-4ec6-9f60-be974d75218d,Namespace:kube-system,Attempt:0,}" Apr 21 02:46:59.410595 kubelet[2744]: E0421 02:46:59.410323 2744 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 02:46:59.411489 containerd[1598]: time="2026-04-21T02:46:59.411327488Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-cwcgr,Uid:5005f3df-8449-4b12-b74f-50d6bd35d558,Namespace:kube-system,Attempt:0,}" Apr 21 02:46:59.711709 kubelet[2744]: E0421 02:46:59.711365 2744 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 02:46:59.730750 kubelet[2744]: I0421 02:46:59.730651 2744 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-sxfrg" podStartSLOduration=5.624718925 podStartE2EDuration="14.730634973s" podCreationTimestamp="2026-04-21 02:46:45 +0000 UTC" firstStartedPulling="2026-04-21 02:46:45.963567506 +0000 UTC m=+7.535639433" lastFinishedPulling="2026-04-21 02:46:55.069483561 +0000 
UTC m=+16.641555481" observedRunningTime="2026-04-21 02:46:59.727612748 +0000 UTC m=+21.299684667" watchObservedRunningTime="2026-04-21 02:46:59.730634973 +0000 UTC m=+21.302706903" Apr 21 02:47:00.714609 kubelet[2744]: E0421 02:47:00.714482 2744 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 02:47:00.791369 update_engine[1575]: I20260421 02:47:00.791172 1575 update_attempter.cc:509] Updating boot flags... Apr 21 02:47:00.984421 systemd-networkd[1505]: cilium_host: Link UP Apr 21 02:47:00.984512 systemd-networkd[1505]: cilium_net: Link UP Apr 21 02:47:00.984670 systemd-networkd[1505]: cilium_net: Gained carrier Apr 21 02:47:00.984760 systemd-networkd[1505]: cilium_host: Gained carrier Apr 21 02:47:01.132648 systemd-networkd[1505]: cilium_vxlan: Link UP Apr 21 02:47:01.132653 systemd-networkd[1505]: cilium_vxlan: Gained carrier Apr 21 02:47:01.363227 kernel: NET: Registered PF_ALG protocol family Apr 21 02:47:01.613514 systemd-networkd[1505]: cilium_host: Gained IPv6LL Apr 21 02:47:01.716737 kubelet[2744]: E0421 02:47:01.716565 2744 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 02:47:01.806443 systemd-networkd[1505]: cilium_net: Gained IPv6LL Apr 21 02:47:02.172696 systemd-networkd[1505]: lxc_health: Link UP Apr 21 02:47:02.182230 systemd-networkd[1505]: lxc_health: Gained carrier Apr 21 02:47:02.381392 systemd-networkd[1505]: cilium_vxlan: Gained IPv6LL Apr 21 02:47:02.516096 systemd-networkd[1505]: lxc63ef1cc39a89: Link UP Apr 21 02:47:02.523383 kernel: eth0: renamed from tmpb7796 Apr 21 02:47:02.527292 systemd-networkd[1505]: lxc48add1af4506: Link UP Apr 21 02:47:02.541858 systemd-networkd[1505]: lxc63ef1cc39a89: Gained carrier Apr 21 02:47:02.549206 kernel: eth0: renamed from tmpf719c Apr 21 
02:47:02.553712 systemd-networkd[1505]: lxc48add1af4506: Gained carrier Apr 21 02:47:03.597483 systemd-networkd[1505]: lxc63ef1cc39a89: Gained IPv6LL Apr 21 02:47:03.726337 systemd-networkd[1505]: lxc_health: Gained IPv6LL Apr 21 02:47:03.758555 kubelet[2744]: E0421 02:47:03.758477 2744 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 02:47:04.430454 systemd-networkd[1505]: lxc48add1af4506: Gained IPv6LL Apr 21 02:47:06.407321 containerd[1598]: time="2026-04-21T02:47:06.406932567Z" level=info msg="connecting to shim f719c8c1e341e7ebe103ee7f7ef820081bc93dd94cdb53d798bc2dbd3cf32973" address="unix:///run/containerd/s/9c4afdc5bde27b7b5b004f80f1d36c19e979ee4546aa1d958d29c5a57df6984f" namespace=k8s.io protocol=ttrpc version=3 Apr 21 02:47:06.407857 containerd[1598]: time="2026-04-21T02:47:06.407290695Z" level=info msg="connecting to shim b7796cd00de67fd1e6fe7abf66addcf8c57e1da76bcee2296934edc2fd297f34" address="unix:///run/containerd/s/53710bf1b291f4f314e5499b3134a7719df553818bcc34728ee07d469a20ad92" namespace=k8s.io protocol=ttrpc version=3 Apr 21 02:47:06.446323 systemd[1]: Started cri-containerd-b7796cd00de67fd1e6fe7abf66addcf8c57e1da76bcee2296934edc2fd297f34.scope - libcontainer container b7796cd00de67fd1e6fe7abf66addcf8c57e1da76bcee2296934edc2fd297f34. Apr 21 02:47:06.465567 systemd[1]: Started cri-containerd-f719c8c1e341e7ebe103ee7f7ef820081bc93dd94cdb53d798bc2dbd3cf32973.scope - libcontainer container f719c8c1e341e7ebe103ee7f7ef820081bc93dd94cdb53d798bc2dbd3cf32973. 
Apr 21 02:47:06.473659 systemd-resolved[1510]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 21 02:47:06.485160 systemd-resolved[1510]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 21 02:47:06.551634 containerd[1598]: time="2026-04-21T02:47:06.551555939Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-cwcgr,Uid:5005f3df-8449-4b12-b74f-50d6bd35d558,Namespace:kube-system,Attempt:0,} returns sandbox id \"b7796cd00de67fd1e6fe7abf66addcf8c57e1da76bcee2296934edc2fd297f34\"" Apr 21 02:47:06.553305 containerd[1598]: time="2026-04-21T02:47:06.552867401Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-j67fj,Uid:f8ad73b6-8652-4ec6-9f60-be974d75218d,Namespace:kube-system,Attempt:0,} returns sandbox id \"f719c8c1e341e7ebe103ee7f7ef820081bc93dd94cdb53d798bc2dbd3cf32973\"" Apr 21 02:47:06.553774 kubelet[2744]: E0421 02:47:06.553560 2744 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 02:47:06.553774 kubelet[2744]: E0421 02:47:06.553647 2744 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 02:47:06.560702 containerd[1598]: time="2026-04-21T02:47:06.560444393Z" level=info msg="CreateContainer within sandbox \"b7796cd00de67fd1e6fe7abf66addcf8c57e1da76bcee2296934edc2fd297f34\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 21 02:47:06.562661 containerd[1598]: time="2026-04-21T02:47:06.562421109Z" level=info msg="CreateContainer within sandbox \"f719c8c1e341e7ebe103ee7f7ef820081bc93dd94cdb53d798bc2dbd3cf32973\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 21 02:47:06.581322 containerd[1598]: 
time="2026-04-21T02:47:06.580883710Z" level=info msg="Container 6caf83a5968ccf2f7e671487f2d99cb708a56b938c851186642771e984fec8e6: CDI devices from CRI Config.CDIDevices: []" Apr 21 02:47:06.589934 containerd[1598]: time="2026-04-21T02:47:06.589902713Z" level=info msg="CreateContainer within sandbox \"b7796cd00de67fd1e6fe7abf66addcf8c57e1da76bcee2296934edc2fd297f34\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6caf83a5968ccf2f7e671487f2d99cb708a56b938c851186642771e984fec8e6\"" Apr 21 02:47:06.591955 containerd[1598]: time="2026-04-21T02:47:06.591923186Z" level=info msg="StartContainer for \"6caf83a5968ccf2f7e671487f2d99cb708a56b938c851186642771e984fec8e6\"" Apr 21 02:47:06.592547 containerd[1598]: time="2026-04-21T02:47:06.591968039Z" level=info msg="Container e7fb0d426a2b2d99a25a285115fa94a167d1d17992a304c25fde589d01609154: CDI devices from CRI Config.CDIDevices: []" Apr 21 02:47:06.592940 containerd[1598]: time="2026-04-21T02:47:06.592923452Z" level=info msg="connecting to shim 6caf83a5968ccf2f7e671487f2d99cb708a56b938c851186642771e984fec8e6" address="unix:///run/containerd/s/53710bf1b291f4f314e5499b3134a7719df553818bcc34728ee07d469a20ad92" protocol=ttrpc version=3 Apr 21 02:47:06.611526 containerd[1598]: time="2026-04-21T02:47:06.611457188Z" level=info msg="CreateContainer within sandbox \"f719c8c1e341e7ebe103ee7f7ef820081bc93dd94cdb53d798bc2dbd3cf32973\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e7fb0d426a2b2d99a25a285115fa94a167d1d17992a304c25fde589d01609154\"" Apr 21 02:47:06.613551 containerd[1598]: time="2026-04-21T02:47:06.613467420Z" level=info msg="StartContainer for \"e7fb0d426a2b2d99a25a285115fa94a167d1d17992a304c25fde589d01609154\"" Apr 21 02:47:06.615184 containerd[1598]: time="2026-04-21T02:47:06.614817886Z" level=info msg="connecting to shim e7fb0d426a2b2d99a25a285115fa94a167d1d17992a304c25fde589d01609154" 
address="unix:///run/containerd/s/9c4afdc5bde27b7b5b004f80f1d36c19e979ee4546aa1d958d29c5a57df6984f" protocol=ttrpc version=3 Apr 21 02:47:06.628751 systemd[1]: Started cri-containerd-6caf83a5968ccf2f7e671487f2d99cb708a56b938c851186642771e984fec8e6.scope - libcontainer container 6caf83a5968ccf2f7e671487f2d99cb708a56b938c851186642771e984fec8e6. Apr 21 02:47:06.653524 systemd[1]: Started cri-containerd-e7fb0d426a2b2d99a25a285115fa94a167d1d17992a304c25fde589d01609154.scope - libcontainer container e7fb0d426a2b2d99a25a285115fa94a167d1d17992a304c25fde589d01609154. Apr 21 02:47:06.703180 containerd[1598]: time="2026-04-21T02:47:06.702808016Z" level=info msg="StartContainer for \"6caf83a5968ccf2f7e671487f2d99cb708a56b938c851186642771e984fec8e6\" returns successfully" Apr 21 02:47:06.716780 containerd[1598]: time="2026-04-21T02:47:06.716686417Z" level=info msg="StartContainer for \"e7fb0d426a2b2d99a25a285115fa94a167d1d17992a304c25fde589d01609154\" returns successfully" Apr 21 02:47:06.788185 kubelet[2744]: E0421 02:47:06.787960 2744 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 02:47:06.789194 kubelet[2744]: E0421 02:47:06.788627 2744 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 02:47:06.814076 kubelet[2744]: I0421 02:47:06.813646 2744 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-cwcgr" podStartSLOduration=21.813625567 podStartE2EDuration="21.813625567s" podCreationTimestamp="2026-04-21 02:46:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 02:47:06.812913737 +0000 UTC m=+28.384985656" watchObservedRunningTime="2026-04-21 02:47:06.813625567 +0000 UTC 
m=+28.385697572" Apr 21 02:47:07.792174 kubelet[2744]: E0421 02:47:07.789862 2744 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 02:47:07.792174 kubelet[2744]: E0421 02:47:07.790089 2744 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 02:47:07.807843 kubelet[2744]: I0421 02:47:07.807586 2744 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-j67fj" podStartSLOduration=22.807571496 podStartE2EDuration="22.807571496s" podCreationTimestamp="2026-04-21 02:46:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 02:47:06.831807335 +0000 UTC m=+28.403879254" watchObservedRunningTime="2026-04-21 02:47:07.807571496 +0000 UTC m=+29.379643425" Apr 21 02:47:08.793852 kubelet[2744]: E0421 02:47:08.793754 2744 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 02:47:08.793852 kubelet[2744]: E0421 02:47:08.793837 2744 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 02:47:09.565281 systemd[1]: Started sshd@7-10.0.0.33:22-10.0.0.1:49336.service - OpenSSH per-connection server daemon (10.0.0.1:49336). 
Apr 21 02:47:09.626931 sshd[4089]: Accepted publickey for core from 10.0.0.1 port 49336 ssh2: RSA SHA256:9KUHGlOeyJC7ThH1mkUqpKUL96f7+Wj12eT7o4yHGwc Apr 21 02:47:09.628357 sshd-session[4089]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 02:47:09.634314 systemd-logind[1573]: New session 8 of user core. Apr 21 02:47:09.642308 systemd[1]: Started session-8.scope - Session 8 of User core. Apr 21 02:47:09.771964 sshd[4092]: Connection closed by 10.0.0.1 port 49336 Apr 21 02:47:09.773302 sshd-session[4089]: pam_unix(sshd:session): session closed for user core Apr 21 02:47:09.777466 systemd[1]: sshd@7-10.0.0.33:22-10.0.0.1:49336.service: Deactivated successfully. Apr 21 02:47:09.779471 systemd[1]: session-8.scope: Deactivated successfully. Apr 21 02:47:09.780898 systemd-logind[1573]: Session 8 logged out. Waiting for processes to exit. Apr 21 02:47:09.782870 systemd-logind[1573]: Removed session 8. Apr 21 02:47:14.638787 kubelet[2744]: I0421 02:47:14.638655 2744 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 21 02:47:14.643830 kubelet[2744]: E0421 02:47:14.643475 2744 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 02:47:14.788851 systemd[1]: Started sshd@8-10.0.0.33:22-10.0.0.1:49350.service - OpenSSH per-connection server daemon (10.0.0.1:49350). 
Apr 21 02:47:14.827677 kubelet[2744]: E0421 02:47:14.827563 2744 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 02:47:14.876219 sshd[4107]: Accepted publickey for core from 10.0.0.1 port 49350 ssh2: RSA SHA256:9KUHGlOeyJC7ThH1mkUqpKUL96f7+Wj12eT7o4yHGwc Apr 21 02:47:14.878457 sshd-session[4107]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 02:47:14.888841 systemd-logind[1573]: New session 9 of user core. Apr 21 02:47:14.899584 systemd[1]: Started session-9.scope - Session 9 of User core. Apr 21 02:47:15.072525 sshd[4110]: Connection closed by 10.0.0.1 port 49350 Apr 21 02:47:15.072939 sshd-session[4107]: pam_unix(sshd:session): session closed for user core Apr 21 02:47:15.081609 systemd[1]: sshd@8-10.0.0.33:22-10.0.0.1:49350.service: Deactivated successfully. Apr 21 02:47:15.084307 systemd[1]: session-9.scope: Deactivated successfully. Apr 21 02:47:15.087598 systemd-logind[1573]: Session 9 logged out. Waiting for processes to exit. Apr 21 02:47:15.090346 systemd-logind[1573]: Removed session 9. Apr 21 02:47:20.086716 systemd[1]: Started sshd@9-10.0.0.33:22-10.0.0.1:58302.service - OpenSSH per-connection server daemon (10.0.0.1:58302). Apr 21 02:47:20.162657 sshd[4128]: Accepted publickey for core from 10.0.0.1 port 58302 ssh2: RSA SHA256:9KUHGlOeyJC7ThH1mkUqpKUL96f7+Wj12eT7o4yHGwc Apr 21 02:47:20.164604 sshd-session[4128]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 02:47:20.171639 systemd-logind[1573]: New session 10 of user core. Apr 21 02:47:20.179628 systemd[1]: Started session-10.scope - Session 10 of User core. 
Apr 21 02:47:20.310544 sshd[4131]: Connection closed by 10.0.0.1 port 58302 Apr 21 02:47:20.311638 sshd-session[4128]: pam_unix(sshd:session): session closed for user core Apr 21 02:47:20.318675 systemd[1]: sshd@9-10.0.0.33:22-10.0.0.1:58302.service: Deactivated successfully. Apr 21 02:47:20.322482 systemd[1]: session-10.scope: Deactivated successfully. Apr 21 02:47:20.324414 systemd-logind[1573]: Session 10 logged out. Waiting for processes to exit. Apr 21 02:47:20.328474 systemd-logind[1573]: Removed session 10. Apr 21 02:47:25.327326 systemd[1]: Started sshd@10-10.0.0.33:22-10.0.0.1:46348.service - OpenSSH per-connection server daemon (10.0.0.1:46348). Apr 21 02:47:25.383184 sshd[4145]: Accepted publickey for core from 10.0.0.1 port 46348 ssh2: RSA SHA256:9KUHGlOeyJC7ThH1mkUqpKUL96f7+Wj12eT7o4yHGwc Apr 21 02:47:25.384486 sshd-session[4145]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 02:47:25.391870 systemd-logind[1573]: New session 11 of user core. Apr 21 02:47:25.405569 systemd[1]: Started session-11.scope - Session 11 of User core. Apr 21 02:47:25.502711 sshd[4148]: Connection closed by 10.0.0.1 port 46348 Apr 21 02:47:25.503892 sshd-session[4145]: pam_unix(sshd:session): session closed for user core Apr 21 02:47:25.512103 systemd[1]: sshd@10-10.0.0.33:22-10.0.0.1:46348.service: Deactivated successfully. Apr 21 02:47:25.514560 systemd[1]: session-11.scope: Deactivated successfully. Apr 21 02:47:25.515961 systemd-logind[1573]: Session 11 logged out. Waiting for processes to exit. Apr 21 02:47:25.519786 systemd[1]: Started sshd@11-10.0.0.33:22-10.0.0.1:46352.service - OpenSSH per-connection server daemon (10.0.0.1:46352). Apr 21 02:47:25.520936 systemd-logind[1573]: Removed session 11. 
Apr 21 02:47:25.577582 sshd[4162]: Accepted publickey for core from 10.0.0.1 port 46352 ssh2: RSA SHA256:9KUHGlOeyJC7ThH1mkUqpKUL96f7+Wj12eT7o4yHGwc
Apr 21 02:47:25.579667 sshd-session[4162]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 02:47:25.585889 systemd-logind[1573]: New session 12 of user core.
Apr 21 02:47:25.593466 systemd[1]: Started session-12.scope - Session 12 of User core.
Apr 21 02:47:25.759248 sshd[4165]: Connection closed by 10.0.0.1 port 46352
Apr 21 02:47:25.760565 sshd-session[4162]: pam_unix(sshd:session): session closed for user core
Apr 21 02:47:25.781598 systemd[1]: sshd@11-10.0.0.33:22-10.0.0.1:46352.service: Deactivated successfully.
Apr 21 02:47:25.784893 systemd[1]: session-12.scope: Deactivated successfully.
Apr 21 02:47:25.789267 systemd-logind[1573]: Session 12 logged out. Waiting for processes to exit.
Apr 21 02:47:25.802490 systemd[1]: Started sshd@12-10.0.0.33:22-10.0.0.1:46354.service - OpenSSH per-connection server daemon (10.0.0.1:46354).
Apr 21 02:47:25.820667 systemd-logind[1573]: Removed session 12.
Apr 21 02:47:25.888436 sshd[4176]: Accepted publickey for core from 10.0.0.1 port 46354 ssh2: RSA SHA256:9KUHGlOeyJC7ThH1mkUqpKUL96f7+Wj12eT7o4yHGwc
Apr 21 02:47:25.889348 sshd-session[4176]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 02:47:25.898107 systemd-logind[1573]: New session 13 of user core.
Apr 21 02:47:25.904476 systemd[1]: Started session-13.scope - Session 13 of User core.
Apr 21 02:47:26.028748 sshd[4179]: Connection closed by 10.0.0.1 port 46354
Apr 21 02:47:26.029239 sshd-session[4176]: pam_unix(sshd:session): session closed for user core
Apr 21 02:47:26.034288 systemd[1]: sshd@12-10.0.0.33:22-10.0.0.1:46354.service: Deactivated successfully.
Apr 21 02:47:26.036748 systemd[1]: session-13.scope: Deactivated successfully.
Apr 21 02:47:26.038790 systemd-logind[1573]: Session 13 logged out. Waiting for processes to exit.
Apr 21 02:47:26.041313 systemd-logind[1573]: Removed session 13.
Apr 21 02:47:31.040404 systemd[1]: Started sshd@13-10.0.0.33:22-10.0.0.1:46368.service - OpenSSH per-connection server daemon (10.0.0.1:46368).
Apr 21 02:47:31.113065 sshd[4192]: Accepted publickey for core from 10.0.0.1 port 46368 ssh2: RSA SHA256:9KUHGlOeyJC7ThH1mkUqpKUL96f7+Wj12eT7o4yHGwc
Apr 21 02:47:31.114833 sshd-session[4192]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 02:47:31.122227 systemd-logind[1573]: New session 14 of user core.
Apr 21 02:47:31.129363 systemd[1]: Started session-14.scope - Session 14 of User core.
Apr 21 02:47:31.245577 sshd[4195]: Connection closed by 10.0.0.1 port 46368
Apr 21 02:47:31.245849 sshd-session[4192]: pam_unix(sshd:session): session closed for user core
Apr 21 02:47:31.250521 systemd[1]: sshd@13-10.0.0.33:22-10.0.0.1:46368.service: Deactivated successfully.
Apr 21 02:47:31.252713 systemd[1]: session-14.scope: Deactivated successfully.
Apr 21 02:47:31.253970 systemd-logind[1573]: Session 14 logged out. Waiting for processes to exit.
Apr 21 02:47:31.255878 systemd-logind[1573]: Removed session 14.
Apr 21 02:47:36.256501 systemd[1]: Started sshd@14-10.0.0.33:22-10.0.0.1:32812.service - OpenSSH per-connection server daemon (10.0.0.1:32812).
Apr 21 02:47:36.316928 sshd[4211]: Accepted publickey for core from 10.0.0.1 port 32812 ssh2: RSA SHA256:9KUHGlOeyJC7ThH1mkUqpKUL96f7+Wj12eT7o4yHGwc
Apr 21 02:47:36.318794 sshd-session[4211]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 02:47:36.325148 systemd-logind[1573]: New session 15 of user core.
Apr 21 02:47:36.333585 systemd[1]: Started session-15.scope - Session 15 of User core.
Apr 21 02:47:36.445651 sshd[4214]: Connection closed by 10.0.0.1 port 32812
Apr 21 02:47:36.446278 sshd-session[4211]: pam_unix(sshd:session): session closed for user core
Apr 21 02:47:36.459142 systemd[1]: sshd@14-10.0.0.33:22-10.0.0.1:32812.service: Deactivated successfully.
Apr 21 02:47:36.460967 systemd[1]: session-15.scope: Deactivated successfully.
Apr 21 02:47:36.462472 systemd-logind[1573]: Session 15 logged out. Waiting for processes to exit.
Apr 21 02:47:36.464913 systemd[1]: Started sshd@15-10.0.0.33:22-10.0.0.1:32818.service - OpenSSH per-connection server daemon (10.0.0.1:32818).
Apr 21 02:47:36.466423 systemd-logind[1573]: Removed session 15.
Apr 21 02:47:36.525441 sshd[4228]: Accepted publickey for core from 10.0.0.1 port 32818 ssh2: RSA SHA256:9KUHGlOeyJC7ThH1mkUqpKUL96f7+Wj12eT7o4yHGwc
Apr 21 02:47:36.526829 sshd-session[4228]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 02:47:36.532959 systemd-logind[1573]: New session 16 of user core.
Apr 21 02:47:36.539268 systemd[1]: Started session-16.scope - Session 16 of User core.
Apr 21 02:47:36.761199 sshd[4231]: Connection closed by 10.0.0.1 port 32818
Apr 21 02:47:36.761639 sshd-session[4228]: pam_unix(sshd:session): session closed for user core
Apr 21 02:47:36.775890 systemd[1]: sshd@15-10.0.0.33:22-10.0.0.1:32818.service: Deactivated successfully.
Apr 21 02:47:36.777840 systemd[1]: session-16.scope: Deactivated successfully.
Apr 21 02:47:36.779253 systemd-logind[1573]: Session 16 logged out. Waiting for processes to exit.
Apr 21 02:47:36.781121 systemd[1]: Started sshd@16-10.0.0.33:22-10.0.0.1:32826.service - OpenSSH per-connection server daemon (10.0.0.1:32826).
Apr 21 02:47:36.783113 systemd-logind[1573]: Removed session 16.
Apr 21 02:47:36.842236 sshd[4243]: Accepted publickey for core from 10.0.0.1 port 32826 ssh2: RSA SHA256:9KUHGlOeyJC7ThH1mkUqpKUL96f7+Wj12eT7o4yHGwc
Apr 21 02:47:36.843410 sshd-session[4243]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 02:47:36.849413 systemd-logind[1573]: New session 17 of user core.
Apr 21 02:47:36.859226 systemd[1]: Started session-17.scope - Session 17 of User core.
Apr 21 02:47:37.499532 sshd[4246]: Connection closed by 10.0.0.1 port 32826
Apr 21 02:47:37.500361 sshd-session[4243]: pam_unix(sshd:session): session closed for user core
Apr 21 02:47:37.514742 systemd[1]: sshd@16-10.0.0.33:22-10.0.0.1:32826.service: Deactivated successfully.
Apr 21 02:47:37.517540 systemd[1]: session-17.scope: Deactivated successfully.
Apr 21 02:47:37.520481 systemd-logind[1573]: Session 17 logged out. Waiting for processes to exit.
Apr 21 02:47:37.524692 systemd[1]: Started sshd@17-10.0.0.33:22-10.0.0.1:32836.service - OpenSSH per-connection server daemon (10.0.0.1:32836).
Apr 21 02:47:37.527291 systemd-logind[1573]: Removed session 17.
Apr 21 02:47:37.574257 sshd[4262]: Accepted publickey for core from 10.0.0.1 port 32836 ssh2: RSA SHA256:9KUHGlOeyJC7ThH1mkUqpKUL96f7+Wj12eT7o4yHGwc
Apr 21 02:47:37.575574 sshd-session[4262]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 02:47:37.580970 systemd-logind[1573]: New session 18 of user core.
Apr 21 02:47:37.585351 systemd[1]: Started session-18.scope - Session 18 of User core.
Apr 21 02:47:37.763669 sshd[4265]: Connection closed by 10.0.0.1 port 32836
Apr 21 02:47:37.764375 sshd-session[4262]: pam_unix(sshd:session): session closed for user core
Apr 21 02:47:37.771494 systemd[1]: sshd@17-10.0.0.33:22-10.0.0.1:32836.service: Deactivated successfully.
Apr 21 02:47:37.774351 systemd[1]: session-18.scope: Deactivated successfully.
Apr 21 02:47:37.776836 systemd-logind[1573]: Session 18 logged out. Waiting for processes to exit.
Apr 21 02:47:37.779290 systemd[1]: Started sshd@18-10.0.0.33:22-10.0.0.1:32846.service - OpenSSH per-connection server daemon (10.0.0.1:32846).
Apr 21 02:47:37.780760 systemd-logind[1573]: Removed session 18.
Apr 21 02:47:37.833580 sshd[4277]: Accepted publickey for core from 10.0.0.1 port 32846 ssh2: RSA SHA256:9KUHGlOeyJC7ThH1mkUqpKUL96f7+Wj12eT7o4yHGwc
Apr 21 02:47:37.834913 sshd-session[4277]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 02:47:37.842251 systemd-logind[1573]: New session 19 of user core.
Apr 21 02:47:37.848411 systemd[1]: Started session-19.scope - Session 19 of User core.
Apr 21 02:47:37.939858 sshd[4280]: Connection closed by 10.0.0.1 port 32846
Apr 21 02:47:37.940332 sshd-session[4277]: pam_unix(sshd:session): session closed for user core
Apr 21 02:47:37.943330 systemd[1]: sshd@18-10.0.0.33:22-10.0.0.1:32846.service: Deactivated successfully.
Apr 21 02:47:37.945933 systemd[1]: session-19.scope: Deactivated successfully.
Apr 21 02:47:37.947652 systemd-logind[1573]: Session 19 logged out. Waiting for processes to exit.
Apr 21 02:47:37.949585 systemd-logind[1573]: Removed session 19.
Apr 21 02:47:42.960311 systemd[1]: Started sshd@19-10.0.0.33:22-10.0.0.1:32850.service - OpenSSH per-connection server daemon (10.0.0.1:32850).
Apr 21 02:47:43.020833 sshd[4300]: Accepted publickey for core from 10.0.0.1 port 32850 ssh2: RSA SHA256:9KUHGlOeyJC7ThH1mkUqpKUL96f7+Wj12eT7o4yHGwc
Apr 21 02:47:43.022309 sshd-session[4300]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 02:47:43.026789 systemd-logind[1573]: New session 20 of user core.
Apr 21 02:47:43.033245 systemd[1]: Started session-20.scope - Session 20 of User core.
Apr 21 02:47:43.163713 sshd[4303]: Connection closed by 10.0.0.1 port 32850
Apr 21 02:47:43.164118 sshd-session[4300]: pam_unix(sshd:session): session closed for user core
Apr 21 02:47:43.167807 systemd[1]: sshd@19-10.0.0.33:22-10.0.0.1:32850.service: Deactivated successfully.
Apr 21 02:47:43.170260 systemd[1]: session-20.scope: Deactivated successfully.
Apr 21 02:47:43.171231 systemd-logind[1573]: Session 20 logged out. Waiting for processes to exit.
Apr 21 02:47:43.172892 systemd-logind[1573]: Removed session 20.
Apr 21 02:47:48.179893 systemd[1]: Started sshd@20-10.0.0.33:22-10.0.0.1:34382.service - OpenSSH per-connection server daemon (10.0.0.1:34382).
Apr 21 02:47:48.242647 sshd[4319]: Accepted publickey for core from 10.0.0.1 port 34382 ssh2: RSA SHA256:9KUHGlOeyJC7ThH1mkUqpKUL96f7+Wj12eT7o4yHGwc
Apr 21 02:47:48.243633 sshd-session[4319]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 02:47:48.248950 systemd-logind[1573]: New session 21 of user core.
Apr 21 02:47:48.256228 systemd[1]: Started session-21.scope - Session 21 of User core.
Apr 21 02:47:48.327781 sshd[4322]: Connection closed by 10.0.0.1 port 34382
Apr 21 02:47:48.328140 sshd-session[4319]: pam_unix(sshd:session): session closed for user core
Apr 21 02:47:48.331270 systemd[1]: sshd@20-10.0.0.33:22-10.0.0.1:34382.service: Deactivated successfully.
Apr 21 02:47:48.333378 systemd[1]: session-21.scope: Deactivated successfully.
Apr 21 02:47:48.334780 systemd-logind[1573]: Session 21 logged out. Waiting for processes to exit.
Apr 21 02:47:48.336406 systemd-logind[1573]: Removed session 21.
Apr 21 02:47:53.339493 systemd[1]: Started sshd@21-10.0.0.33:22-10.0.0.1:34386.service - OpenSSH per-connection server daemon (10.0.0.1:34386).
Apr 21 02:47:53.397796 sshd[4335]: Accepted publickey for core from 10.0.0.1 port 34386 ssh2: RSA SHA256:9KUHGlOeyJC7ThH1mkUqpKUL96f7+Wj12eT7o4yHGwc
Apr 21 02:47:53.398791 sshd-session[4335]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 02:47:53.403880 systemd-logind[1573]: New session 22 of user core.
Apr 21 02:47:53.414177 systemd[1]: Started session-22.scope - Session 22 of User core.
Apr 21 02:47:53.499499 sshd[4338]: Connection closed by 10.0.0.1 port 34386
Apr 21 02:47:53.500054 sshd-session[4335]: pam_unix(sshd:session): session closed for user core
Apr 21 02:47:53.508441 systemd[1]: sshd@21-10.0.0.33:22-10.0.0.1:34386.service: Deactivated successfully.
Apr 21 02:47:53.510496 systemd[1]: session-22.scope: Deactivated successfully.
Apr 21 02:47:53.511501 systemd-logind[1573]: Session 22 logged out. Waiting for processes to exit.
Apr 21 02:47:53.513777 systemd[1]: Started sshd@22-10.0.0.33:22-10.0.0.1:34388.service - OpenSSH per-connection server daemon (10.0.0.1:34388).
Apr 21 02:47:53.515433 systemd-logind[1573]: Removed session 22.
Apr 21 02:47:53.564461 sshd[4351]: Accepted publickey for core from 10.0.0.1 port 34388 ssh2: RSA SHA256:9KUHGlOeyJC7ThH1mkUqpKUL96f7+Wj12eT7o4yHGwc
Apr 21 02:47:53.565731 sshd-session[4351]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 02:47:53.571474 systemd-logind[1573]: New session 23 of user core.
Apr 21 02:47:53.580358 systemd[1]: Started session-23.scope - Session 23 of User core.
Apr 21 02:47:53.601497 kubelet[2744]: E0421 02:47:53.601367 2744 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 02:47:55.050695 containerd[1598]: time="2026-04-21T02:47:55.049875684Z" level=info msg="StopContainer for \"77d597393cd532d17c4852c837fab651ef22679d450bc603a7ea47e7328a3ee3\" with timeout 30 (s)"
Apr 21 02:47:55.062650 containerd[1598]: time="2026-04-21T02:47:55.062611412Z" level=info msg="Stop container \"77d597393cd532d17c4852c837fab651ef22679d450bc603a7ea47e7328a3ee3\" with signal terminated"
Apr 21 02:47:55.095255 systemd[1]: cri-containerd-77d597393cd532d17c4852c837fab651ef22679d450bc603a7ea47e7328a3ee3.scope: Deactivated successfully.
Apr 21 02:47:55.101769 containerd[1598]: time="2026-04-21T02:47:55.101572584Z" level=info msg="received container exit event container_id:\"77d597393cd532d17c4852c837fab651ef22679d450bc603a7ea47e7328a3ee3\" id:\"77d597393cd532d17c4852c837fab651ef22679d450bc603a7ea47e7328a3ee3\" pid:3328 exited_at:{seconds:1776739675 nanos:99864411}"
Apr 21 02:47:55.104676 containerd[1598]: time="2026-04-21T02:47:55.104337718Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 21 02:47:55.104676 containerd[1598]: time="2026-04-21T02:47:55.104507064Z" level=info msg="StopContainer for \"399c0260334896bb662bbf8b70b367012d6c3fb93ae43e08ea06ebcd68c59be1\" with timeout 2 (s)"
Apr 21 02:47:55.104933 containerd[1598]: time="2026-04-21T02:47:55.104883539Z" level=info msg="Stop container \"399c0260334896bb662bbf8b70b367012d6c3fb93ae43e08ea06ebcd68c59be1\" with signal terminated"
Apr 21 02:47:55.121283 systemd-networkd[1505]: lxc_health: Link DOWN
Apr 21 02:47:55.121313 systemd-networkd[1505]: lxc_health: Lost carrier
Apr 21 02:47:55.133788 systemd[1]: cri-containerd-399c0260334896bb662bbf8b70b367012d6c3fb93ae43e08ea06ebcd68c59be1.scope: Deactivated successfully.
Apr 21 02:47:55.134085 systemd[1]: cri-containerd-399c0260334896bb662bbf8b70b367012d6c3fb93ae43e08ea06ebcd68c59be1.scope: Consumed 7.668s CPU time, 127.9M memory peak, 296K read from disk, 14.5M written to disk.
Apr 21 02:47:55.135655 containerd[1598]: time="2026-04-21T02:47:55.135548128Z" level=info msg="received container exit event container_id:\"399c0260334896bb662bbf8b70b367012d6c3fb93ae43e08ea06ebcd68c59be1\" id:\"399c0260334896bb662bbf8b70b367012d6c3fb93ae43e08ea06ebcd68c59be1\" pid:3403 exited_at:{seconds:1776739675 nanos:134695429}"
Apr 21 02:47:55.140909 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-77d597393cd532d17c4852c837fab651ef22679d450bc603a7ea47e7328a3ee3-rootfs.mount: Deactivated successfully.
Apr 21 02:47:55.157629 containerd[1598]: time="2026-04-21T02:47:55.157554786Z" level=info msg="StopContainer for \"77d597393cd532d17c4852c837fab651ef22679d450bc603a7ea47e7328a3ee3\" returns successfully"
Apr 21 02:47:55.159119 containerd[1598]: time="2026-04-21T02:47:55.159082123Z" level=info msg="StopPodSandbox for \"3a1be57fcb5bb831ad334998c54f40bf60fdfd1bb64d7aedf333fb70bb1a3424\""
Apr 21 02:47:55.167185 containerd[1598]: time="2026-04-21T02:47:55.167097778Z" level=info msg="Container to stop \"77d597393cd532d17c4852c837fab651ef22679d450bc603a7ea47e7328a3ee3\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 21 02:47:55.174651 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-399c0260334896bb662bbf8b70b367012d6c3fb93ae43e08ea06ebcd68c59be1-rootfs.mount: Deactivated successfully.
Apr 21 02:47:55.183316 systemd[1]: cri-containerd-3a1be57fcb5bb831ad334998c54f40bf60fdfd1bb64d7aedf333fb70bb1a3424.scope: Deactivated successfully.
Apr 21 02:47:55.188899 containerd[1598]: time="2026-04-21T02:47:55.188658721Z" level=info msg="received sandbox exit event container_id:\"3a1be57fcb5bb831ad334998c54f40bf60fdfd1bb64d7aedf333fb70bb1a3424\" id:\"3a1be57fcb5bb831ad334998c54f40bf60fdfd1bb64d7aedf333fb70bb1a3424\" exit_status:137 exited_at:{seconds:1776739675 nanos:187466280}" monitor_name=podsandbox
Apr 21 02:47:55.195368 containerd[1598]: time="2026-04-21T02:47:55.195300542Z" level=info msg="StopContainer for \"399c0260334896bb662bbf8b70b367012d6c3fb93ae43e08ea06ebcd68c59be1\" returns successfully"
Apr 21 02:47:55.196304 containerd[1598]: time="2026-04-21T02:47:55.196285819Z" level=info msg="StopPodSandbox for \"c22340d83558fa63e11c3a5226f06015826f255ae448d4f89edb35db172195cd\""
Apr 21 02:47:55.196624 containerd[1598]: time="2026-04-21T02:47:55.196413216Z" level=info msg="Container to stop \"403683d507bdc716d781de64f8a38ce5c3e9061326d5b803b24ed369d34ddbc9\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 21 02:47:55.196624 containerd[1598]: time="2026-04-21T02:47:55.196425940Z" level=info msg="Container to stop \"5b4daa6113fed8f8e9a022cce0116136165445e46993b2ff50c704ab15f8ff5d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 21 02:47:55.196624 containerd[1598]: time="2026-04-21T02:47:55.196432612Z" level=info msg="Container to stop \"3383e46ac742022c501b9c6eef1254503448525494243de9cf3d4d6b61ba2cd1\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 21 02:47:55.196624 containerd[1598]: time="2026-04-21T02:47:55.196440824Z" level=info msg="Container to stop \"cbe8fc89e276d5abf58bea619ad9e045e23f33d3052c59bc6ce0d7ebf9d9da2f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 21 02:47:55.196624 containerd[1598]: time="2026-04-21T02:47:55.196446424Z" level=info msg="Container to stop \"399c0260334896bb662bbf8b70b367012d6c3fb93ae43e08ea06ebcd68c59be1\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 21 02:47:55.208668 systemd[1]: cri-containerd-c22340d83558fa63e11c3a5226f06015826f255ae448d4f89edb35db172195cd.scope: Deactivated successfully.
Apr 21 02:47:55.211277 containerd[1598]: time="2026-04-21T02:47:55.210952800Z" level=info msg="received sandbox exit event container_id:\"c22340d83558fa63e11c3a5226f06015826f255ae448d4f89edb35db172195cd\" id:\"c22340d83558fa63e11c3a5226f06015826f255ae448d4f89edb35db172195cd\" exit_status:137 exited_at:{seconds:1776739675 nanos:210367973}" monitor_name=podsandbox
Apr 21 02:47:55.234810 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3a1be57fcb5bb831ad334998c54f40bf60fdfd1bb64d7aedf333fb70bb1a3424-rootfs.mount: Deactivated successfully.
Apr 21 02:47:55.243135 containerd[1598]: time="2026-04-21T02:47:55.242881104Z" level=info msg="shim disconnected" id=3a1be57fcb5bb831ad334998c54f40bf60fdfd1bb64d7aedf333fb70bb1a3424 namespace=k8s.io
Apr 21 02:47:55.243135 containerd[1598]: time="2026-04-21T02:47:55.242908902Z" level=warning msg="cleaning up after shim disconnected" id=3a1be57fcb5bb831ad334998c54f40bf60fdfd1bb64d7aedf333fb70bb1a3424 namespace=k8s.io
Apr 21 02:47:55.243504 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c22340d83558fa63e11c3a5226f06015826f255ae448d4f89edb35db172195cd-rootfs.mount: Deactivated successfully.
Apr 21 02:47:55.261722 containerd[1598]: time="2026-04-21T02:47:55.242917461Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 21 02:47:55.261864 containerd[1598]: time="2026-04-21T02:47:55.244707936Z" level=info msg="shim disconnected" id=c22340d83558fa63e11c3a5226f06015826f255ae448d4f89edb35db172195cd namespace=k8s.io
Apr 21 02:47:55.261864 containerd[1598]: time="2026-04-21T02:47:55.261859628Z" level=warning msg="cleaning up after shim disconnected" id=c22340d83558fa63e11c3a5226f06015826f255ae448d4f89edb35db172195cd namespace=k8s.io
Apr 21 02:47:55.261936 containerd[1598]: time="2026-04-21T02:47:55.261868625Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 21 02:47:55.290127 containerd[1598]: time="2026-04-21T02:47:55.290046434Z" level=info msg="TearDown network for sandbox \"c22340d83558fa63e11c3a5226f06015826f255ae448d4f89edb35db172195cd\" successfully"
Apr 21 02:47:55.290248 containerd[1598]: time="2026-04-21T02:47:55.290221023Z" level=info msg="StopPodSandbox for \"c22340d83558fa63e11c3a5226f06015826f255ae448d4f89edb35db172195cd\" returns successfully"
Apr 21 02:47:55.290708 containerd[1598]: time="2026-04-21T02:47:55.290660988Z" level=info msg="TearDown network for sandbox \"3a1be57fcb5bb831ad334998c54f40bf60fdfd1bb64d7aedf333fb70bb1a3424\" successfully"
Apr 21 02:47:55.290708 containerd[1598]: time="2026-04-21T02:47:55.290697465Z" level=info msg="StopPodSandbox for \"3a1be57fcb5bb831ad334998c54f40bf60fdfd1bb64d7aedf333fb70bb1a3424\" returns successfully"
Apr 21 02:47:55.290948 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3a1be57fcb5bb831ad334998c54f40bf60fdfd1bb64d7aedf333fb70bb1a3424-shm.mount: Deactivated successfully.
Apr 21 02:47:55.300592 containerd[1598]: time="2026-04-21T02:47:55.300538989Z" level=info msg="received sandbox container exit event sandbox_id:\"c22340d83558fa63e11c3a5226f06015826f255ae448d4f89edb35db172195cd\" exit_status:137 exited_at:{seconds:1776739675 nanos:210367973}" monitor_name=criService
Apr 21 02:47:55.301882 containerd[1598]: time="2026-04-21T02:47:55.300696212Z" level=info msg="received sandbox container exit event sandbox_id:\"3a1be57fcb5bb831ad334998c54f40bf60fdfd1bb64d7aedf333fb70bb1a3424\" exit_status:137 exited_at:{seconds:1776739675 nanos:187466280}" monitor_name=criService
Apr 21 02:47:55.506472 kubelet[2744]: I0421 02:47:55.506402 2744 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0738958f-c984-4d55-8099-6a5cc0cbda55-etc-cni-netd\") pod \"0738958f-c984-4d55-8099-6a5cc0cbda55\" (UID: \"0738958f-c984-4d55-8099-6a5cc0cbda55\") "
Apr 21 02:47:55.507450 kubelet[2744]: I0421 02:47:55.507345 2744 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0738958f-c984-4d55-8099-6a5cc0cbda55-host-proc-sys-net\") pod \"0738958f-c984-4d55-8099-6a5cc0cbda55\" (UID: \"0738958f-c984-4d55-8099-6a5cc0cbda55\") "
Apr 21 02:47:55.507450 kubelet[2744]: I0421 02:47:55.507437 2744 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8gsdl\" (UniqueName: \"kubernetes.io/projected/2d9df213-08ab-46dd-9029-7c0d4453f2ec-kube-api-access-8gsdl\") pod \"2d9df213-08ab-46dd-9029-7c0d4453f2ec\" (UID: \"2d9df213-08ab-46dd-9029-7c0d4453f2ec\") "
Apr 21 02:47:55.507557 kubelet[2744]: I0421 02:47:55.507474 2744 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2d9df213-08ab-46dd-9029-7c0d4453f2ec-cilium-config-path\") pod \"2d9df213-08ab-46dd-9029-7c0d4453f2ec\" (UID: \"2d9df213-08ab-46dd-9029-7c0d4453f2ec\") "
Apr 21 02:47:55.507557 kubelet[2744]: I0421 02:47:55.506539 2744 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0738958f-c984-4d55-8099-6a5cc0cbda55-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "0738958f-c984-4d55-8099-6a5cc0cbda55" (UID: "0738958f-c984-4d55-8099-6a5cc0cbda55"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 21 02:47:55.507557 kubelet[2744]: I0421 02:47:55.507492 2744 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0738958f-c984-4d55-8099-6a5cc0cbda55-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "0738958f-c984-4d55-8099-6a5cc0cbda55" (UID: "0738958f-c984-4d55-8099-6a5cc0cbda55"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 21 02:47:55.507557 kubelet[2744]: I0421 02:47:55.507533 2744 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0738958f-c984-4d55-8099-6a5cc0cbda55-hostproc" (OuterVolumeSpecName: "hostproc") pod "0738958f-c984-4d55-8099-6a5cc0cbda55" (UID: "0738958f-c984-4d55-8099-6a5cc0cbda55"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 21 02:47:55.507557 kubelet[2744]: I0421 02:47:55.507504 2744 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0738958f-c984-4d55-8099-6a5cc0cbda55-hostproc\") pod \"0738958f-c984-4d55-8099-6a5cc0cbda55\" (UID: \"0738958f-c984-4d55-8099-6a5cc0cbda55\") "
Apr 21 02:47:55.507692 kubelet[2744]: I0421 02:47:55.507573 2744 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0738958f-c984-4d55-8099-6a5cc0cbda55-host-proc-sys-kernel\") pod \"0738958f-c984-4d55-8099-6a5cc0cbda55\" (UID: \"0738958f-c984-4d55-8099-6a5cc0cbda55\") "
Apr 21 02:47:55.507692 kubelet[2744]: I0421 02:47:55.507592 2744 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x6jwl\" (UniqueName: \"kubernetes.io/projected/0738958f-c984-4d55-8099-6a5cc0cbda55-kube-api-access-x6jwl\") pod \"0738958f-c984-4d55-8099-6a5cc0cbda55\" (UID: \"0738958f-c984-4d55-8099-6a5cc0cbda55\") "
Apr 21 02:47:55.507692 kubelet[2744]: I0421 02:47:55.507607 2744 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0738958f-c984-4d55-8099-6a5cc0cbda55-bpf-maps\") pod \"0738958f-c984-4d55-8099-6a5cc0cbda55\" (UID: \"0738958f-c984-4d55-8099-6a5cc0cbda55\") "
Apr 21 02:47:55.507692 kubelet[2744]: I0421 02:47:55.507620 2744 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0738958f-c984-4d55-8099-6a5cc0cbda55-xtables-lock\") pod \"0738958f-c984-4d55-8099-6a5cc0cbda55\" (UID: \"0738958f-c984-4d55-8099-6a5cc0cbda55\") "
Apr 21 02:47:55.507692 kubelet[2744]: I0421 02:47:55.507636 2744 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0738958f-c984-4d55-8099-6a5cc0cbda55-cni-path\") pod \"0738958f-c984-4d55-8099-6a5cc0cbda55\" (UID: \"0738958f-c984-4d55-8099-6a5cc0cbda55\") "
Apr 21 02:47:55.507692 kubelet[2744]: I0421 02:47:55.507649 2744 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0738958f-c984-4d55-8099-6a5cc0cbda55-cilium-run\") pod \"0738958f-c984-4d55-8099-6a5cc0cbda55\" (UID: \"0738958f-c984-4d55-8099-6a5cc0cbda55\") "
Apr 21 02:47:55.507788 kubelet[2744]: I0421 02:47:55.507663 2744 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0738958f-c984-4d55-8099-6a5cc0cbda55-hubble-tls\") pod \"0738958f-c984-4d55-8099-6a5cc0cbda55\" (UID: \"0738958f-c984-4d55-8099-6a5cc0cbda55\") "
Apr 21 02:47:55.507788 kubelet[2744]: I0421 02:47:55.507675 2744 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0738958f-c984-4d55-8099-6a5cc0cbda55-lib-modules\") pod \"0738958f-c984-4d55-8099-6a5cc0cbda55\" (UID: \"0738958f-c984-4d55-8099-6a5cc0cbda55\") "
Apr 21 02:47:55.507788 kubelet[2744]: I0421 02:47:55.507691 2744 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0738958f-c984-4d55-8099-6a5cc0cbda55-clustermesh-secrets\") pod \"0738958f-c984-4d55-8099-6a5cc0cbda55\" (UID: \"0738958f-c984-4d55-8099-6a5cc0cbda55\") "
Apr 21 02:47:55.507788 kubelet[2744]: I0421 02:47:55.507711 2744 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0738958f-c984-4d55-8099-6a5cc0cbda55-cilium-config-path\") pod \"0738958f-c984-4d55-8099-6a5cc0cbda55\" (UID: \"0738958f-c984-4d55-8099-6a5cc0cbda55\") "
Apr 21 02:47:55.507788 kubelet[2744]: I0421 02:47:55.507721 2744 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0738958f-c984-4d55-8099-6a5cc0cbda55-cilium-cgroup\") pod \"0738958f-c984-4d55-8099-6a5cc0cbda55\" (UID: \"0738958f-c984-4d55-8099-6a5cc0cbda55\") "
Apr 21 02:47:55.507788 kubelet[2744]: I0421 02:47:55.507754 2744 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0738958f-c984-4d55-8099-6a5cc0cbda55-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Apr 21 02:47:55.507788 kubelet[2744]: I0421 02:47:55.507761 2744 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0738958f-c984-4d55-8099-6a5cc0cbda55-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Apr 21 02:47:55.507895 kubelet[2744]: I0421 02:47:55.507770 2744 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0738958f-c984-4d55-8099-6a5cc0cbda55-hostproc\") on node \"localhost\" DevicePath \"\""
Apr 21 02:47:55.507895 kubelet[2744]: I0421 02:47:55.507784 2744 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0738958f-c984-4d55-8099-6a5cc0cbda55-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "0738958f-c984-4d55-8099-6a5cc0cbda55" (UID: "0738958f-c984-4d55-8099-6a5cc0cbda55"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 21 02:47:55.507895 kubelet[2744]: I0421 02:47:55.507796 2744 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0738958f-c984-4d55-8099-6a5cc0cbda55-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "0738958f-c984-4d55-8099-6a5cc0cbda55" (UID: "0738958f-c984-4d55-8099-6a5cc0cbda55"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 21 02:47:55.507895 kubelet[2744]: I0421 02:47:55.507796 2744 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0738958f-c984-4d55-8099-6a5cc0cbda55-cni-path" (OuterVolumeSpecName: "cni-path") pod "0738958f-c984-4d55-8099-6a5cc0cbda55" (UID: "0738958f-c984-4d55-8099-6a5cc0cbda55"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 21 02:47:55.507895 kubelet[2744]: I0421 02:47:55.507885 2744 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0738958f-c984-4d55-8099-6a5cc0cbda55-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "0738958f-c984-4d55-8099-6a5cc0cbda55" (UID: "0738958f-c984-4d55-8099-6a5cc0cbda55"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 21 02:47:55.509415 kubelet[2744]: I0421 02:47:55.509278 2744 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0738958f-c984-4d55-8099-6a5cc0cbda55-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "0738958f-c984-4d55-8099-6a5cc0cbda55" (UID: "0738958f-c984-4d55-8099-6a5cc0cbda55"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 21 02:47:55.509415 kubelet[2744]: I0421 02:47:55.509330 2744 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0738958f-c984-4d55-8099-6a5cc0cbda55-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "0738958f-c984-4d55-8099-6a5cc0cbda55" (UID: "0738958f-c984-4d55-8099-6a5cc0cbda55"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 21 02:47:55.513085 kubelet[2744]: I0421 02:47:55.511044 2744 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d9df213-08ab-46dd-9029-7c0d4453f2ec-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "2d9df213-08ab-46dd-9029-7c0d4453f2ec" (UID: "2d9df213-08ab-46dd-9029-7c0d4453f2ec"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Apr 21 02:47:55.513085 kubelet[2744]: I0421 02:47:55.511074 2744 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0738958f-c984-4d55-8099-6a5cc0cbda55-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "0738958f-c984-4d55-8099-6a5cc0cbda55" (UID: "0738958f-c984-4d55-8099-6a5cc0cbda55"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 21 02:47:55.513379 kubelet[2744]: I0421 02:47:55.513182 2744 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0738958f-c984-4d55-8099-6a5cc0cbda55-kube-api-access-x6jwl" (OuterVolumeSpecName: "kube-api-access-x6jwl") pod "0738958f-c984-4d55-8099-6a5cc0cbda55" (UID: "0738958f-c984-4d55-8099-6a5cc0cbda55"). InnerVolumeSpecName "kube-api-access-x6jwl". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Apr 21 02:47:55.513568 kubelet[2744]: I0421 02:47:55.513467 2744 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0738958f-c984-4d55-8099-6a5cc0cbda55-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "0738958f-c984-4d55-8099-6a5cc0cbda55" (UID: "0738958f-c984-4d55-8099-6a5cc0cbda55"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Apr 21 02:47:55.513727 kubelet[2744]: I0421 02:47:55.513581 2744 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d9df213-08ab-46dd-9029-7c0d4453f2ec-kube-api-access-8gsdl" (OuterVolumeSpecName: "kube-api-access-8gsdl") pod "2d9df213-08ab-46dd-9029-7c0d4453f2ec" (UID: "2d9df213-08ab-46dd-9029-7c0d4453f2ec"). InnerVolumeSpecName "kube-api-access-8gsdl". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Apr 21 02:47:55.513913 kubelet[2744]: I0421 02:47:55.513865 2744 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0738958f-c984-4d55-8099-6a5cc0cbda55-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "0738958f-c984-4d55-8099-6a5cc0cbda55" (UID: "0738958f-c984-4d55-8099-6a5cc0cbda55"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Apr 21 02:47:55.514783 kubelet[2744]: I0421 02:47:55.514737 2744 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0738958f-c984-4d55-8099-6a5cc0cbda55-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "0738958f-c984-4d55-8099-6a5cc0cbda55" (UID: "0738958f-c984-4d55-8099-6a5cc0cbda55"). InnerVolumeSpecName "clustermesh-secrets".
PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 21 02:47:55.609107 kubelet[2744]: I0421 02:47:55.608673 2744 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0738958f-c984-4d55-8099-6a5cc0cbda55-cni-path\") on node \"localhost\" DevicePath \"\"" Apr 21 02:47:55.609107 kubelet[2744]: I0421 02:47:55.608784 2744 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0738958f-c984-4d55-8099-6a5cc0cbda55-cilium-run\") on node \"localhost\" DevicePath \"\"" Apr 21 02:47:55.609107 kubelet[2744]: I0421 02:47:55.608807 2744 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0738958f-c984-4d55-8099-6a5cc0cbda55-hubble-tls\") on node \"localhost\" DevicePath \"\"" Apr 21 02:47:55.609107 kubelet[2744]: I0421 02:47:55.608813 2744 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0738958f-c984-4d55-8099-6a5cc0cbda55-lib-modules\") on node \"localhost\" DevicePath \"\"" Apr 21 02:47:55.609107 kubelet[2744]: I0421 02:47:55.608820 2744 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0738958f-c984-4d55-8099-6a5cc0cbda55-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Apr 21 02:47:55.609107 kubelet[2744]: I0421 02:47:55.608832 2744 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0738958f-c984-4d55-8099-6a5cc0cbda55-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Apr 21 02:47:55.609107 kubelet[2744]: I0421 02:47:55.608838 2744 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0738958f-c984-4d55-8099-6a5cc0cbda55-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Apr 21 02:47:55.609107 kubelet[2744]: I0421 02:47:55.608843 2744 
reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8gsdl\" (UniqueName: \"kubernetes.io/projected/2d9df213-08ab-46dd-9029-7c0d4453f2ec-kube-api-access-8gsdl\") on node \"localhost\" DevicePath \"\"" Apr 21 02:47:55.609431 kubelet[2744]: I0421 02:47:55.608885 2744 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2d9df213-08ab-46dd-9029-7c0d4453f2ec-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Apr 21 02:47:55.609431 kubelet[2744]: I0421 02:47:55.608897 2744 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0738958f-c984-4d55-8099-6a5cc0cbda55-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Apr 21 02:47:55.609431 kubelet[2744]: I0421 02:47:55.608903 2744 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-x6jwl\" (UniqueName: \"kubernetes.io/projected/0738958f-c984-4d55-8099-6a5cc0cbda55-kube-api-access-x6jwl\") on node \"localhost\" DevicePath \"\"" Apr 21 02:47:55.609431 kubelet[2744]: I0421 02:47:55.608912 2744 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0738958f-c984-4d55-8099-6a5cc0cbda55-bpf-maps\") on node \"localhost\" DevicePath \"\"" Apr 21 02:47:55.609431 kubelet[2744]: I0421 02:47:55.608918 2744 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0738958f-c984-4d55-8099-6a5cc0cbda55-xtables-lock\") on node \"localhost\" DevicePath \"\"" Apr 21 02:47:55.989816 kubelet[2744]: I0421 02:47:55.989653 2744 scope.go:117] "RemoveContainer" containerID="77d597393cd532d17c4852c837fab651ef22679d450bc603a7ea47e7328a3ee3" Apr 21 02:47:55.997535 containerd[1598]: time="2026-04-21T02:47:55.997477137Z" level=info msg="RemoveContainer for \"77d597393cd532d17c4852c837fab651ef22679d450bc603a7ea47e7328a3ee3\"" Apr 21 02:47:56.000689 systemd[1]: Removed 
slice kubepods-besteffort-pod2d9df213_08ab_46dd_9029_7c0d4453f2ec.slice - libcontainer container kubepods-besteffort-pod2d9df213_08ab_46dd_9029_7c0d4453f2ec.slice. Apr 21 02:47:56.003692 containerd[1598]: time="2026-04-21T02:47:56.003620944Z" level=info msg="RemoveContainer for \"77d597393cd532d17c4852c837fab651ef22679d450bc603a7ea47e7328a3ee3\" returns successfully" Apr 21 02:47:56.004281 kubelet[2744]: I0421 02:47:56.004121 2744 scope.go:117] "RemoveContainer" containerID="77d597393cd532d17c4852c837fab651ef22679d450bc603a7ea47e7328a3ee3" Apr 21 02:47:56.004356 containerd[1598]: time="2026-04-21T02:47:56.004323076Z" level=error msg="ContainerStatus for \"77d597393cd532d17c4852c837fab651ef22679d450bc603a7ea47e7328a3ee3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"77d597393cd532d17c4852c837fab651ef22679d450bc603a7ea47e7328a3ee3\": not found" Apr 21 02:47:56.004729 kubelet[2744]: E0421 02:47:56.004555 2744 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"77d597393cd532d17c4852c837fab651ef22679d450bc603a7ea47e7328a3ee3\": not found" containerID="77d597393cd532d17c4852c837fab651ef22679d450bc603a7ea47e7328a3ee3" Apr 21 02:47:56.004823 kubelet[2744]: I0421 02:47:56.004710 2744 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"77d597393cd532d17c4852c837fab651ef22679d450bc603a7ea47e7328a3ee3"} err="failed to get container status \"77d597393cd532d17c4852c837fab651ef22679d450bc603a7ea47e7328a3ee3\": rpc error: code = NotFound desc = an error occurred when try to find container \"77d597393cd532d17c4852c837fab651ef22679d450bc603a7ea47e7328a3ee3\": not found" Apr 21 02:47:56.004823 kubelet[2744]: I0421 02:47:56.004758 2744 scope.go:117] "RemoveContainer" containerID="399c0260334896bb662bbf8b70b367012d6c3fb93ae43e08ea06ebcd68c59be1" Apr 21 02:47:56.008424 containerd[1598]: 
time="2026-04-21T02:47:56.007953684Z" level=info msg="RemoveContainer for \"399c0260334896bb662bbf8b70b367012d6c3fb93ae43e08ea06ebcd68c59be1\"" Apr 21 02:47:56.014494 systemd[1]: Removed slice kubepods-burstable-pod0738958f_c984_4d55_8099_6a5cc0cbda55.slice - libcontainer container kubepods-burstable-pod0738958f_c984_4d55_8099_6a5cc0cbda55.slice. Apr 21 02:47:56.015652 systemd[1]: kubepods-burstable-pod0738958f_c984_4d55_8099_6a5cc0cbda55.slice: Consumed 7.805s CPU time, 128.3M memory peak, 308K read from disk, 14.8M written to disk. Apr 21 02:47:56.016698 containerd[1598]: time="2026-04-21T02:47:56.016673212Z" level=info msg="RemoveContainer for \"399c0260334896bb662bbf8b70b367012d6c3fb93ae43e08ea06ebcd68c59be1\" returns successfully" Apr 21 02:47:56.018140 kubelet[2744]: I0421 02:47:56.017892 2744 scope.go:117] "RemoveContainer" containerID="cbe8fc89e276d5abf58bea619ad9e045e23f33d3052c59bc6ce0d7ebf9d9da2f" Apr 21 02:47:56.022938 containerd[1598]: time="2026-04-21T02:47:56.022732374Z" level=info msg="RemoveContainer for \"cbe8fc89e276d5abf58bea619ad9e045e23f33d3052c59bc6ce0d7ebf9d9da2f\"" Apr 21 02:47:56.028094 containerd[1598]: time="2026-04-21T02:47:56.027935710Z" level=info msg="RemoveContainer for \"cbe8fc89e276d5abf58bea619ad9e045e23f33d3052c59bc6ce0d7ebf9d9da2f\" returns successfully" Apr 21 02:47:56.028383 kubelet[2744]: I0421 02:47:56.028276 2744 scope.go:117] "RemoveContainer" containerID="3383e46ac742022c501b9c6eef1254503448525494243de9cf3d4d6b61ba2cd1" Apr 21 02:47:56.031949 containerd[1598]: time="2026-04-21T02:47:56.031857594Z" level=info msg="RemoveContainer for \"3383e46ac742022c501b9c6eef1254503448525494243de9cf3d4d6b61ba2cd1\"" Apr 21 02:47:56.038777 containerd[1598]: time="2026-04-21T02:47:56.038637614Z" level=info msg="RemoveContainer for \"3383e46ac742022c501b9c6eef1254503448525494243de9cf3d4d6b61ba2cd1\" returns successfully" Apr 21 02:47:56.039251 kubelet[2744]: I0421 02:47:56.039232 2744 scope.go:117] "RemoveContainer" 
containerID="5b4daa6113fed8f8e9a022cce0116136165445e46993b2ff50c704ab15f8ff5d" Apr 21 02:47:56.041417 containerd[1598]: time="2026-04-21T02:47:56.041358429Z" level=info msg="RemoveContainer for \"5b4daa6113fed8f8e9a022cce0116136165445e46993b2ff50c704ab15f8ff5d\"" Apr 21 02:47:56.044430 containerd[1598]: time="2026-04-21T02:47:56.044391621Z" level=info msg="RemoveContainer for \"5b4daa6113fed8f8e9a022cce0116136165445e46993b2ff50c704ab15f8ff5d\" returns successfully" Apr 21 02:47:56.044611 kubelet[2744]: I0421 02:47:56.044516 2744 scope.go:117] "RemoveContainer" containerID="403683d507bdc716d781de64f8a38ce5c3e9061326d5b803b24ed369d34ddbc9" Apr 21 02:47:56.046947 containerd[1598]: time="2026-04-21T02:47:56.046589212Z" level=info msg="RemoveContainer for \"403683d507bdc716d781de64f8a38ce5c3e9061326d5b803b24ed369d34ddbc9\"" Apr 21 02:47:56.050648 containerd[1598]: time="2026-04-21T02:47:56.050489531Z" level=info msg="RemoveContainer for \"403683d507bdc716d781de64f8a38ce5c3e9061326d5b803b24ed369d34ddbc9\" returns successfully" Apr 21 02:47:56.050838 kubelet[2744]: I0421 02:47:56.050822 2744 scope.go:117] "RemoveContainer" containerID="399c0260334896bb662bbf8b70b367012d6c3fb93ae43e08ea06ebcd68c59be1" Apr 21 02:47:56.051706 containerd[1598]: time="2026-04-21T02:47:56.051588665Z" level=error msg="ContainerStatus for \"399c0260334896bb662bbf8b70b367012d6c3fb93ae43e08ea06ebcd68c59be1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"399c0260334896bb662bbf8b70b367012d6c3fb93ae43e08ea06ebcd68c59be1\": not found" Apr 21 02:47:56.051914 kubelet[2744]: E0421 02:47:56.051713 2744 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"399c0260334896bb662bbf8b70b367012d6c3fb93ae43e08ea06ebcd68c59be1\": not found" containerID="399c0260334896bb662bbf8b70b367012d6c3fb93ae43e08ea06ebcd68c59be1" Apr 21 02:47:56.051914 kubelet[2744]: I0421 02:47:56.051744 
2744 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"399c0260334896bb662bbf8b70b367012d6c3fb93ae43e08ea06ebcd68c59be1"} err="failed to get container status \"399c0260334896bb662bbf8b70b367012d6c3fb93ae43e08ea06ebcd68c59be1\": rpc error: code = NotFound desc = an error occurred when try to find container \"399c0260334896bb662bbf8b70b367012d6c3fb93ae43e08ea06ebcd68c59be1\": not found" Apr 21 02:47:56.051914 kubelet[2744]: I0421 02:47:56.051767 2744 scope.go:117] "RemoveContainer" containerID="cbe8fc89e276d5abf58bea619ad9e045e23f33d3052c59bc6ce0d7ebf9d9da2f" Apr 21 02:47:56.052135 containerd[1598]: time="2026-04-21T02:47:56.052085433Z" level=error msg="ContainerStatus for \"cbe8fc89e276d5abf58bea619ad9e045e23f33d3052c59bc6ce0d7ebf9d9da2f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cbe8fc89e276d5abf58bea619ad9e045e23f33d3052c59bc6ce0d7ebf9d9da2f\": not found" Apr 21 02:47:56.052524 kubelet[2744]: E0421 02:47:56.052419 2744 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cbe8fc89e276d5abf58bea619ad9e045e23f33d3052c59bc6ce0d7ebf9d9da2f\": not found" containerID="cbe8fc89e276d5abf58bea619ad9e045e23f33d3052c59bc6ce0d7ebf9d9da2f" Apr 21 02:47:56.052524 kubelet[2744]: I0421 02:47:56.052463 2744 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cbe8fc89e276d5abf58bea619ad9e045e23f33d3052c59bc6ce0d7ebf9d9da2f"} err="failed to get container status \"cbe8fc89e276d5abf58bea619ad9e045e23f33d3052c59bc6ce0d7ebf9d9da2f\": rpc error: code = NotFound desc = an error occurred when try to find container \"cbe8fc89e276d5abf58bea619ad9e045e23f33d3052c59bc6ce0d7ebf9d9da2f\": not found" Apr 21 02:47:56.052524 kubelet[2744]: I0421 02:47:56.052496 2744 scope.go:117] "RemoveContainer" 
containerID="3383e46ac742022c501b9c6eef1254503448525494243de9cf3d4d6b61ba2cd1" Apr 21 02:47:56.052938 containerd[1598]: time="2026-04-21T02:47:56.052866748Z" level=error msg="ContainerStatus for \"3383e46ac742022c501b9c6eef1254503448525494243de9cf3d4d6b61ba2cd1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3383e46ac742022c501b9c6eef1254503448525494243de9cf3d4d6b61ba2cd1\": not found" Apr 21 02:47:56.053363 kubelet[2744]: E0421 02:47:56.053326 2744 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3383e46ac742022c501b9c6eef1254503448525494243de9cf3d4d6b61ba2cd1\": not found" containerID="3383e46ac742022c501b9c6eef1254503448525494243de9cf3d4d6b61ba2cd1" Apr 21 02:47:56.053363 kubelet[2744]: I0421 02:47:56.053349 2744 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3383e46ac742022c501b9c6eef1254503448525494243de9cf3d4d6b61ba2cd1"} err="failed to get container status \"3383e46ac742022c501b9c6eef1254503448525494243de9cf3d4d6b61ba2cd1\": rpc error: code = NotFound desc = an error occurred when try to find container \"3383e46ac742022c501b9c6eef1254503448525494243de9cf3d4d6b61ba2cd1\": not found" Apr 21 02:47:56.053412 kubelet[2744]: I0421 02:47:56.053367 2744 scope.go:117] "RemoveContainer" containerID="5b4daa6113fed8f8e9a022cce0116136165445e46993b2ff50c704ab15f8ff5d" Apr 21 02:47:56.054376 containerd[1598]: time="2026-04-21T02:47:56.054265641Z" level=error msg="ContainerStatus for \"5b4daa6113fed8f8e9a022cce0116136165445e46993b2ff50c704ab15f8ff5d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5b4daa6113fed8f8e9a022cce0116136165445e46993b2ff50c704ab15f8ff5d\": not found" Apr 21 02:47:56.054482 kubelet[2744]: E0421 02:47:56.054466 2744 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc 
= an error occurred when try to find container \"5b4daa6113fed8f8e9a022cce0116136165445e46993b2ff50c704ab15f8ff5d\": not found" containerID="5b4daa6113fed8f8e9a022cce0116136165445e46993b2ff50c704ab15f8ff5d" Apr 21 02:47:56.054513 kubelet[2744]: I0421 02:47:56.054495 2744 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5b4daa6113fed8f8e9a022cce0116136165445e46993b2ff50c704ab15f8ff5d"} err="failed to get container status \"5b4daa6113fed8f8e9a022cce0116136165445e46993b2ff50c704ab15f8ff5d\": rpc error: code = NotFound desc = an error occurred when try to find container \"5b4daa6113fed8f8e9a022cce0116136165445e46993b2ff50c704ab15f8ff5d\": not found" Apr 21 02:47:56.054538 kubelet[2744]: I0421 02:47:56.054513 2744 scope.go:117] "RemoveContainer" containerID="403683d507bdc716d781de64f8a38ce5c3e9061326d5b803b24ed369d34ddbc9" Apr 21 02:47:56.054904 containerd[1598]: time="2026-04-21T02:47:56.054713846Z" level=error msg="ContainerStatus for \"403683d507bdc716d781de64f8a38ce5c3e9061326d5b803b24ed369d34ddbc9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"403683d507bdc716d781de64f8a38ce5c3e9061326d5b803b24ed369d34ddbc9\": not found" Apr 21 02:47:56.055252 kubelet[2744]: E0421 02:47:56.054921 2744 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"403683d507bdc716d781de64f8a38ce5c3e9061326d5b803b24ed369d34ddbc9\": not found" containerID="403683d507bdc716d781de64f8a38ce5c3e9061326d5b803b24ed369d34ddbc9" Apr 21 02:47:56.055252 kubelet[2744]: I0421 02:47:56.054939 2744 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"403683d507bdc716d781de64f8a38ce5c3e9061326d5b803b24ed369d34ddbc9"} err="failed to get container status \"403683d507bdc716d781de64f8a38ce5c3e9061326d5b803b24ed369d34ddbc9\": rpc error: code = NotFound desc = an error occurred 
when try to find container \"403683d507bdc716d781de64f8a38ce5c3e9061326d5b803b24ed369d34ddbc9\": not found" Apr 21 02:47:56.141105 systemd[1]: var-lib-kubelet-pods-2d9df213\x2d08ab\x2d46dd\x2d9029\x2d7c0d4453f2ec-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8gsdl.mount: Deactivated successfully. Apr 21 02:47:56.141285 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c22340d83558fa63e11c3a5226f06015826f255ae448d4f89edb35db172195cd-shm.mount: Deactivated successfully. Apr 21 02:47:56.141333 systemd[1]: var-lib-kubelet-pods-0738958f\x2dc984\x2d4d55\x2d8099\x2d6a5cc0cbda55-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dx6jwl.mount: Deactivated successfully. Apr 21 02:47:56.141378 systemd[1]: var-lib-kubelet-pods-0738958f\x2dc984\x2d4d55\x2d8099\x2d6a5cc0cbda55-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Apr 21 02:47:56.141428 systemd[1]: var-lib-kubelet-pods-0738958f\x2dc984\x2d4d55\x2d8099\x2d6a5cc0cbda55-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Apr 21 02:47:56.604781 kubelet[2744]: I0421 02:47:56.604710 2744 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0738958f-c984-4d55-8099-6a5cc0cbda55" path="/var/lib/kubelet/pods/0738958f-c984-4d55-8099-6a5cc0cbda55/volumes" Apr 21 02:47:56.605328 kubelet[2744]: I0421 02:47:56.605278 2744 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d9df213-08ab-46dd-9029-7c0d4453f2ec" path="/var/lib/kubelet/pods/2d9df213-08ab-46dd-9029-7c0d4453f2ec/volumes" Apr 21 02:47:56.991813 sshd[4354]: Connection closed by 10.0.0.1 port 34388 Apr 21 02:47:56.992630 sshd-session[4351]: pam_unix(sshd:session): session closed for user core Apr 21 02:47:57.000262 systemd[1]: sshd@22-10.0.0.33:22-10.0.0.1:34388.service: Deactivated successfully. Apr 21 02:47:57.001849 systemd[1]: session-23.scope: Deactivated successfully. 
Apr 21 02:47:57.003692 systemd-logind[1573]: Session 23 logged out. Waiting for processes to exit. Apr 21 02:47:57.005318 systemd[1]: Started sshd@23-10.0.0.33:22-10.0.0.1:50260.service - OpenSSH per-connection server daemon (10.0.0.1:50260). Apr 21 02:47:57.007812 systemd-logind[1573]: Removed session 23. Apr 21 02:47:57.063214 sshd[4501]: Accepted publickey for core from 10.0.0.1 port 50260 ssh2: RSA SHA256:9KUHGlOeyJC7ThH1mkUqpKUL96f7+Wj12eT7o4yHGwc Apr 21 02:47:57.064255 sshd-session[4501]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 02:47:57.069106 systemd-logind[1573]: New session 24 of user core. Apr 21 02:47:57.075229 systemd[1]: Started session-24.scope - Session 24 of User core. Apr 21 02:47:57.513888 sshd[4504]: Connection closed by 10.0.0.1 port 50260 Apr 21 02:47:57.514486 sshd-session[4501]: pam_unix(sshd:session): session closed for user core Apr 21 02:47:57.524839 systemd[1]: sshd@23-10.0.0.33:22-10.0.0.1:50260.service: Deactivated successfully. Apr 21 02:47:57.527810 systemd[1]: session-24.scope: Deactivated successfully. Apr 21 02:47:57.530671 systemd-logind[1573]: Session 24 logged out. Waiting for processes to exit. Apr 21 02:47:57.539551 systemd-logind[1573]: Removed session 24. Apr 21 02:47:57.543753 systemd[1]: Started sshd@24-10.0.0.33:22-10.0.0.1:50274.service - OpenSSH per-connection server daemon (10.0.0.1:50274). Apr 21 02:47:57.559913 systemd[1]: Created slice kubepods-burstable-podd99a4781_c48a_4d6d_81e8_c1941164483e.slice - libcontainer container kubepods-burstable-podd99a4781_c48a_4d6d_81e8_c1941164483e.slice. Apr 21 02:47:57.608804 sshd[4517]: Accepted publickey for core from 10.0.0.1 port 50274 ssh2: RSA SHA256:9KUHGlOeyJC7ThH1mkUqpKUL96f7+Wj12eT7o4yHGwc Apr 21 02:47:57.610315 sshd-session[4517]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 02:47:57.614887 systemd-logind[1573]: New session 25 of user core. 
Apr 21 02:47:57.626384 kubelet[2744]: I0421 02:47:57.626314 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2trz\" (UniqueName: \"kubernetes.io/projected/d99a4781-c48a-4d6d-81e8-c1941164483e-kube-api-access-r2trz\") pod \"cilium-nxwbq\" (UID: \"d99a4781-c48a-4d6d-81e8-c1941164483e\") " pod="kube-system/cilium-nxwbq" Apr 21 02:47:57.626384 kubelet[2744]: I0421 02:47:57.626375 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d99a4781-c48a-4d6d-81e8-c1941164483e-xtables-lock\") pod \"cilium-nxwbq\" (UID: \"d99a4781-c48a-4d6d-81e8-c1941164483e\") " pod="kube-system/cilium-nxwbq" Apr 21 02:47:57.626384 kubelet[2744]: I0421 02:47:57.626391 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/d99a4781-c48a-4d6d-81e8-c1941164483e-cilium-ipsec-secrets\") pod \"cilium-nxwbq\" (UID: \"d99a4781-c48a-4d6d-81e8-c1941164483e\") " pod="kube-system/cilium-nxwbq" Apr 21 02:47:57.626682 kubelet[2744]: I0421 02:47:57.626402 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d99a4781-c48a-4d6d-81e8-c1941164483e-hostproc\") pod \"cilium-nxwbq\" (UID: \"d99a4781-c48a-4d6d-81e8-c1941164483e\") " pod="kube-system/cilium-nxwbq" Apr 21 02:47:57.626682 kubelet[2744]: I0421 02:47:57.626416 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d99a4781-c48a-4d6d-81e8-c1941164483e-cilium-cgroup\") pod \"cilium-nxwbq\" (UID: \"d99a4781-c48a-4d6d-81e8-c1941164483e\") " pod="kube-system/cilium-nxwbq" Apr 21 02:47:57.626682 kubelet[2744]: I0421 02:47:57.626426 2744 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d99a4781-c48a-4d6d-81e8-c1941164483e-etc-cni-netd\") pod \"cilium-nxwbq\" (UID: \"d99a4781-c48a-4d6d-81e8-c1941164483e\") " pod="kube-system/cilium-nxwbq" Apr 21 02:47:57.626682 kubelet[2744]: I0421 02:47:57.626436 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d99a4781-c48a-4d6d-81e8-c1941164483e-bpf-maps\") pod \"cilium-nxwbq\" (UID: \"d99a4781-c48a-4d6d-81e8-c1941164483e\") " pod="kube-system/cilium-nxwbq" Apr 21 02:47:57.626682 kubelet[2744]: I0421 02:47:57.626446 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d99a4781-c48a-4d6d-81e8-c1941164483e-cni-path\") pod \"cilium-nxwbq\" (UID: \"d99a4781-c48a-4d6d-81e8-c1941164483e\") " pod="kube-system/cilium-nxwbq" Apr 21 02:47:57.626682 kubelet[2744]: I0421 02:47:57.626475 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d99a4781-c48a-4d6d-81e8-c1941164483e-cilium-config-path\") pod \"cilium-nxwbq\" (UID: \"d99a4781-c48a-4d6d-81e8-c1941164483e\") " pod="kube-system/cilium-nxwbq" Apr 21 02:47:57.626774 kubelet[2744]: I0421 02:47:57.626500 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d99a4781-c48a-4d6d-81e8-c1941164483e-host-proc-sys-kernel\") pod \"cilium-nxwbq\" (UID: \"d99a4781-c48a-4d6d-81e8-c1941164483e\") " pod="kube-system/cilium-nxwbq" Apr 21 02:47:57.626774 kubelet[2744]: I0421 02:47:57.626518 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/d99a4781-c48a-4d6d-81e8-c1941164483e-cilium-run\") pod \"cilium-nxwbq\" (UID: \"d99a4781-c48a-4d6d-81e8-c1941164483e\") " pod="kube-system/cilium-nxwbq" Apr 21 02:47:57.626774 kubelet[2744]: I0421 02:47:57.626529 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d99a4781-c48a-4d6d-81e8-c1941164483e-lib-modules\") pod \"cilium-nxwbq\" (UID: \"d99a4781-c48a-4d6d-81e8-c1941164483e\") " pod="kube-system/cilium-nxwbq" Apr 21 02:47:57.626774 kubelet[2744]: I0421 02:47:57.626540 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d99a4781-c48a-4d6d-81e8-c1941164483e-clustermesh-secrets\") pod \"cilium-nxwbq\" (UID: \"d99a4781-c48a-4d6d-81e8-c1941164483e\") " pod="kube-system/cilium-nxwbq" Apr 21 02:47:57.626774 kubelet[2744]: I0421 02:47:57.626553 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d99a4781-c48a-4d6d-81e8-c1941164483e-host-proc-sys-net\") pod \"cilium-nxwbq\" (UID: \"d99a4781-c48a-4d6d-81e8-c1941164483e\") " pod="kube-system/cilium-nxwbq" Apr 21 02:47:57.626774 kubelet[2744]: I0421 02:47:57.626566 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d99a4781-c48a-4d6d-81e8-c1941164483e-hubble-tls\") pod \"cilium-nxwbq\" (UID: \"d99a4781-c48a-4d6d-81e8-c1941164483e\") " pod="kube-system/cilium-nxwbq" Apr 21 02:47:57.627307 systemd[1]: Started session-25.scope - Session 25 of User core. 
Apr 21 02:47:57.638213 sshd[4520]: Connection closed by 10.0.0.1 port 50274 Apr 21 02:47:57.638503 sshd-session[4517]: pam_unix(sshd:session): session closed for user core Apr 21 02:47:57.648864 systemd[1]: sshd@24-10.0.0.33:22-10.0.0.1:50274.service: Deactivated successfully. Apr 21 02:47:57.650770 systemd[1]: session-25.scope: Deactivated successfully. Apr 21 02:47:57.651656 systemd-logind[1573]: Session 25 logged out. Waiting for processes to exit. Apr 21 02:47:57.654156 systemd[1]: Started sshd@25-10.0.0.33:22-10.0.0.1:50280.service - OpenSSH per-connection server daemon (10.0.0.1:50280). Apr 21 02:47:57.656086 systemd-logind[1573]: Removed session 25. Apr 21 02:47:57.704964 sshd[4527]: Accepted publickey for core from 10.0.0.1 port 50280 ssh2: RSA SHA256:9KUHGlOeyJC7ThH1mkUqpKUL96f7+Wj12eT7o4yHGwc Apr 21 02:47:57.706134 sshd-session[4527]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 02:47:57.712399 systemd-logind[1573]: New session 26 of user core. Apr 21 02:47:57.718367 systemd[1]: Started session-26.scope - Session 26 of User core. 
Apr 21 02:47:57.871870 kubelet[2744]: E0421 02:47:57.871795 2744 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 02:47:57.872569 containerd[1598]: time="2026-04-21T02:47:57.872541349Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-nxwbq,Uid:d99a4781-c48a-4d6d-81e8-c1941164483e,Namespace:kube-system,Attempt:0,}"
Apr 21 02:47:57.894482 containerd[1598]: time="2026-04-21T02:47:57.894326648Z" level=info msg="connecting to shim 4f1f1162d4bda49e03fded4680da7f4107c937279042a54b4aeae44a83210ef2" address="unix:///run/containerd/s/315a0986bf2964df9d0519662f97ce697cbeec27a8beee1b36bdf3291cafc0d3" namespace=k8s.io protocol=ttrpc version=3
Apr 21 02:47:57.924413 systemd[1]: Started cri-containerd-4f1f1162d4bda49e03fded4680da7f4107c937279042a54b4aeae44a83210ef2.scope - libcontainer container 4f1f1162d4bda49e03fded4680da7f4107c937279042a54b4aeae44a83210ef2.
Apr 21 02:47:57.956142 containerd[1598]: time="2026-04-21T02:47:57.955961666Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-nxwbq,Uid:d99a4781-c48a-4d6d-81e8-c1941164483e,Namespace:kube-system,Attempt:0,} returns sandbox id \"4f1f1162d4bda49e03fded4680da7f4107c937279042a54b4aeae44a83210ef2\""
Apr 21 02:47:57.957685 kubelet[2744]: E0421 02:47:57.957633 2744 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 02:47:57.973554 containerd[1598]: time="2026-04-21T02:47:57.972841018Z" level=info msg="CreateContainer within sandbox \"4f1f1162d4bda49e03fded4680da7f4107c937279042a54b4aeae44a83210ef2\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Apr 21 02:47:57.982742 containerd[1598]: time="2026-04-21T02:47:57.982660512Z" level=info msg="Container 758f0addc590dde29327e8857ab65b33314a04ada43be465cc1e1fdd5c3e0c39: CDI devices from CRI Config.CDIDevices: []"
Apr 21 02:47:57.990730 containerd[1598]: time="2026-04-21T02:47:57.990659768Z" level=info msg="CreateContainer within sandbox \"4f1f1162d4bda49e03fded4680da7f4107c937279042a54b4aeae44a83210ef2\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"758f0addc590dde29327e8857ab65b33314a04ada43be465cc1e1fdd5c3e0c39\""
Apr 21 02:47:57.992475 containerd[1598]: time="2026-04-21T02:47:57.992371205Z" level=info msg="StartContainer for \"758f0addc590dde29327e8857ab65b33314a04ada43be465cc1e1fdd5c3e0c39\""
Apr 21 02:47:57.994538 containerd[1598]: time="2026-04-21T02:47:57.994420545Z" level=info msg="connecting to shim 758f0addc590dde29327e8857ab65b33314a04ada43be465cc1e1fdd5c3e0c39" address="unix:///run/containerd/s/315a0986bf2964df9d0519662f97ce697cbeec27a8beee1b36bdf3291cafc0d3" protocol=ttrpc version=3
Apr 21 02:47:58.021513 systemd[1]: Started cri-containerd-758f0addc590dde29327e8857ab65b33314a04ada43be465cc1e1fdd5c3e0c39.scope - libcontainer container 758f0addc590dde29327e8857ab65b33314a04ada43be465cc1e1fdd5c3e0c39.
Apr 21 02:47:58.059631 containerd[1598]: time="2026-04-21T02:47:58.059499024Z" level=info msg="StartContainer for \"758f0addc590dde29327e8857ab65b33314a04ada43be465cc1e1fdd5c3e0c39\" returns successfully"
Apr 21 02:47:58.070141 systemd[1]: cri-containerd-758f0addc590dde29327e8857ab65b33314a04ada43be465cc1e1fdd5c3e0c39.scope: Deactivated successfully.
Apr 21 02:47:58.074516 containerd[1598]: time="2026-04-21T02:47:58.074416850Z" level=info msg="received container exit event container_id:\"758f0addc590dde29327e8857ab65b33314a04ada43be465cc1e1fdd5c3e0c39\" id:\"758f0addc590dde29327e8857ab65b33314a04ada43be465cc1e1fdd5c3e0c39\" pid:4603 exited_at:{seconds:1776739678 nanos:73706971}"
Apr 21 02:47:58.601547 kubelet[2744]: E0421 02:47:58.601474 2744 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 02:47:58.659779 kubelet[2744]: E0421 02:47:58.659702 2744 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 21 02:47:59.022696 kubelet[2744]: E0421 02:47:59.022547 2744 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 02:47:59.028652 containerd[1598]: time="2026-04-21T02:47:59.028524396Z" level=info msg="CreateContainer within sandbox \"4f1f1162d4bda49e03fded4680da7f4107c937279042a54b4aeae44a83210ef2\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Apr 21 02:47:59.039060 containerd[1598]: time="2026-04-21T02:47:59.038937142Z" level=info msg="Container c08e22b60ee612e9d91a3e013ceb8de1f1944e97f7e1efe037f6034b9ba19d39: CDI devices from CRI Config.CDIDevices: []"
Apr 21 02:47:59.049468 containerd[1598]: time="2026-04-21T02:47:59.049393082Z" level=info msg="CreateContainer within sandbox \"4f1f1162d4bda49e03fded4680da7f4107c937279042a54b4aeae44a83210ef2\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"c08e22b60ee612e9d91a3e013ceb8de1f1944e97f7e1efe037f6034b9ba19d39\""
Apr 21 02:47:59.050055 containerd[1598]: time="2026-04-21T02:47:59.049930643Z" level=info msg="StartContainer for \"c08e22b60ee612e9d91a3e013ceb8de1f1944e97f7e1efe037f6034b9ba19d39\""
Apr 21 02:47:59.050650 containerd[1598]: time="2026-04-21T02:47:59.050628789Z" level=info msg="connecting to shim c08e22b60ee612e9d91a3e013ceb8de1f1944e97f7e1efe037f6034b9ba19d39" address="unix:///run/containerd/s/315a0986bf2964df9d0519662f97ce697cbeec27a8beee1b36bdf3291cafc0d3" protocol=ttrpc version=3
Apr 21 02:47:59.068251 systemd[1]: Started cri-containerd-c08e22b60ee612e9d91a3e013ceb8de1f1944e97f7e1efe037f6034b9ba19d39.scope - libcontainer container c08e22b60ee612e9d91a3e013ceb8de1f1944e97f7e1efe037f6034b9ba19d39.
Apr 21 02:47:59.101445 containerd[1598]: time="2026-04-21T02:47:59.101325183Z" level=info msg="StartContainer for \"c08e22b60ee612e9d91a3e013ceb8de1f1944e97f7e1efe037f6034b9ba19d39\" returns successfully"
Apr 21 02:47:59.106887 systemd[1]: cri-containerd-c08e22b60ee612e9d91a3e013ceb8de1f1944e97f7e1efe037f6034b9ba19d39.scope: Deactivated successfully.
Apr 21 02:47:59.107278 containerd[1598]: time="2026-04-21T02:47:59.107214132Z" level=info msg="received container exit event container_id:\"c08e22b60ee612e9d91a3e013ceb8de1f1944e97f7e1efe037f6034b9ba19d39\" id:\"c08e22b60ee612e9d91a3e013ceb8de1f1944e97f7e1efe037f6034b9ba19d39\" pid:4649 exited_at:{seconds:1776739679 nanos:106880424}"
Apr 21 02:47:59.739394 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c08e22b60ee612e9d91a3e013ceb8de1f1944e97f7e1efe037f6034b9ba19d39-rootfs.mount: Deactivated successfully.
Apr 21 02:48:00.028793 kubelet[2744]: E0421 02:48:00.028385 2744 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 02:48:00.034754 containerd[1598]: time="2026-04-21T02:48:00.034701761Z" level=info msg="CreateContainer within sandbox \"4f1f1162d4bda49e03fded4680da7f4107c937279042a54b4aeae44a83210ef2\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Apr 21 02:48:00.048943 containerd[1598]: time="2026-04-21T02:48:00.048563400Z" level=info msg="Container 084592ba6fe172c5a854a4f70b117db952c4e96f2c8c5e08d4373fe02d679097: CDI devices from CRI Config.CDIDevices: []"
Apr 21 02:48:00.057579 containerd[1598]: time="2026-04-21T02:48:00.057413780Z" level=info msg="CreateContainer within sandbox \"4f1f1162d4bda49e03fded4680da7f4107c937279042a54b4aeae44a83210ef2\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"084592ba6fe172c5a854a4f70b117db952c4e96f2c8c5e08d4373fe02d679097\""
Apr 21 02:48:00.058338 containerd[1598]: time="2026-04-21T02:48:00.058282836Z" level=info msg="StartContainer for \"084592ba6fe172c5a854a4f70b117db952c4e96f2c8c5e08d4373fe02d679097\""
Apr 21 02:48:00.059468 containerd[1598]: time="2026-04-21T02:48:00.059422781Z" level=info msg="connecting to shim 084592ba6fe172c5a854a4f70b117db952c4e96f2c8c5e08d4373fe02d679097" address="unix:///run/containerd/s/315a0986bf2964df9d0519662f97ce697cbeec27a8beee1b36bdf3291cafc0d3" protocol=ttrpc version=3
Apr 21 02:48:00.078167 systemd[1]: Started cri-containerd-084592ba6fe172c5a854a4f70b117db952c4e96f2c8c5e08d4373fe02d679097.scope - libcontainer container 084592ba6fe172c5a854a4f70b117db952c4e96f2c8c5e08d4373fe02d679097.
Apr 21 02:48:00.148210 systemd[1]: cri-containerd-084592ba6fe172c5a854a4f70b117db952c4e96f2c8c5e08d4373fe02d679097.scope: Deactivated successfully.
Apr 21 02:48:00.148633 containerd[1598]: time="2026-04-21T02:48:00.148487075Z" level=info msg="StartContainer for \"084592ba6fe172c5a854a4f70b117db952c4e96f2c8c5e08d4373fe02d679097\" returns successfully"
Apr 21 02:48:00.151670 containerd[1598]: time="2026-04-21T02:48:00.151540699Z" level=info msg="received container exit event container_id:\"084592ba6fe172c5a854a4f70b117db952c4e96f2c8c5e08d4373fe02d679097\" id:\"084592ba6fe172c5a854a4f70b117db952c4e96f2c8c5e08d4373fe02d679097\" pid:4695 exited_at:{seconds:1776739680 nanos:150274501}"
Apr 21 02:48:00.179080 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-084592ba6fe172c5a854a4f70b117db952c4e96f2c8c5e08d4373fe02d679097-rootfs.mount: Deactivated successfully.
Apr 21 02:48:00.651404 kubelet[2744]: I0421 02:48:00.651279 2744 setters.go:543] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-21T02:48:00Z","lastTransitionTime":"2026-04-21T02:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Apr 21 02:48:01.034593 kubelet[2744]: E0421 02:48:01.034476 2744 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 02:48:01.040279 containerd[1598]: time="2026-04-21T02:48:01.040213604Z" level=info msg="CreateContainer within sandbox \"4f1f1162d4bda49e03fded4680da7f4107c937279042a54b4aeae44a83210ef2\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Apr 21 02:48:01.052375 containerd[1598]: time="2026-04-21T02:48:01.052061803Z" level=info msg="Container ba0fcabefacbcbc40b4d8bf4dd4e5a96811b69c0fe9e27f020ab5a349a16d1af: CDI devices from CRI Config.CDIDevices: []"
Apr 21 02:48:01.054807 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3734446232.mount: Deactivated successfully.
Apr 21 02:48:01.062054 containerd[1598]: time="2026-04-21T02:48:01.061885004Z" level=info msg="CreateContainer within sandbox \"4f1f1162d4bda49e03fded4680da7f4107c937279042a54b4aeae44a83210ef2\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ba0fcabefacbcbc40b4d8bf4dd4e5a96811b69c0fe9e27f020ab5a349a16d1af\""
Apr 21 02:48:01.062610 containerd[1598]: time="2026-04-21T02:48:01.062594338Z" level=info msg="StartContainer for \"ba0fcabefacbcbc40b4d8bf4dd4e5a96811b69c0fe9e27f020ab5a349a16d1af\""
Apr 21 02:48:01.063643 containerd[1598]: time="2026-04-21T02:48:01.063619968Z" level=info msg="connecting to shim ba0fcabefacbcbc40b4d8bf4dd4e5a96811b69c0fe9e27f020ab5a349a16d1af" address="unix:///run/containerd/s/315a0986bf2964df9d0519662f97ce697cbeec27a8beee1b36bdf3291cafc0d3" protocol=ttrpc version=3
Apr 21 02:48:01.088336 systemd[1]: Started cri-containerd-ba0fcabefacbcbc40b4d8bf4dd4e5a96811b69c0fe9e27f020ab5a349a16d1af.scope - libcontainer container ba0fcabefacbcbc40b4d8bf4dd4e5a96811b69c0fe9e27f020ab5a349a16d1af.
Apr 21 02:48:01.126566 systemd[1]: cri-containerd-ba0fcabefacbcbc40b4d8bf4dd4e5a96811b69c0fe9e27f020ab5a349a16d1af.scope: Deactivated successfully.
Apr 21 02:48:01.129257 containerd[1598]: time="2026-04-21T02:48:01.128859792Z" level=info msg="received container exit event container_id:\"ba0fcabefacbcbc40b4d8bf4dd4e5a96811b69c0fe9e27f020ab5a349a16d1af\" id:\"ba0fcabefacbcbc40b4d8bf4dd4e5a96811b69c0fe9e27f020ab5a349a16d1af\" pid:4734 exited_at:{seconds:1776739681 nanos:127292125}"
Apr 21 02:48:01.132424 containerd[1598]: time="2026-04-21T02:48:01.132347430Z" level=info msg="StartContainer for \"ba0fcabefacbcbc40b4d8bf4dd4e5a96811b69c0fe9e27f020ab5a349a16d1af\" returns successfully"
Apr 21 02:48:01.158358 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ba0fcabefacbcbc40b4d8bf4dd4e5a96811b69c0fe9e27f020ab5a349a16d1af-rootfs.mount: Deactivated successfully.
Apr 21 02:48:02.041622 kubelet[2744]: E0421 02:48:02.041563 2744 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 02:48:02.054620 containerd[1598]: time="2026-04-21T02:48:02.054500147Z" level=info msg="CreateContainer within sandbox \"4f1f1162d4bda49e03fded4680da7f4107c937279042a54b4aeae44a83210ef2\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Apr 21 02:48:02.067895 containerd[1598]: time="2026-04-21T02:48:02.067828110Z" level=info msg="Container e7579943bb675a821f90219e3102bf42056bf4c7031e3bcb902d7bf5e4788eb9: CDI devices from CRI Config.CDIDevices: []"
Apr 21 02:48:02.078972 containerd[1598]: time="2026-04-21T02:48:02.078545154Z" level=info msg="CreateContainer within sandbox \"4f1f1162d4bda49e03fded4680da7f4107c937279042a54b4aeae44a83210ef2\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e7579943bb675a821f90219e3102bf42056bf4c7031e3bcb902d7bf5e4788eb9\""
Apr 21 02:48:02.079738 containerd[1598]: time="2026-04-21T02:48:02.079664074Z" level=info msg="StartContainer for \"e7579943bb675a821f90219e3102bf42056bf4c7031e3bcb902d7bf5e4788eb9\""
Apr 21 02:48:02.081228 containerd[1598]: time="2026-04-21T02:48:02.081057926Z" level=info msg="connecting to shim e7579943bb675a821f90219e3102bf42056bf4c7031e3bcb902d7bf5e4788eb9" address="unix:///run/containerd/s/315a0986bf2964df9d0519662f97ce697cbeec27a8beee1b36bdf3291cafc0d3" protocol=ttrpc version=3
Apr 21 02:48:02.109230 systemd[1]: Started cri-containerd-e7579943bb675a821f90219e3102bf42056bf4c7031e3bcb902d7bf5e4788eb9.scope - libcontainer container e7579943bb675a821f90219e3102bf42056bf4c7031e3bcb902d7bf5e4788eb9.
Apr 21 02:48:02.162535 containerd[1598]: time="2026-04-21T02:48:02.162442699Z" level=info msg="StartContainer for \"e7579943bb675a821f90219e3102bf42056bf4c7031e3bcb902d7bf5e4788eb9\" returns successfully"
Apr 21 02:48:02.493090 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-vaes-avx10_256))
Apr 21 02:48:03.050109 kubelet[2744]: E0421 02:48:03.049957 2744 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 02:48:04.053589 kubelet[2744]: E0421 02:48:04.053543 2744 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 02:48:05.057428 kubelet[2744]: E0421 02:48:05.056962 2744 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 02:48:05.588779 systemd-networkd[1505]: lxc_health: Link UP
Apr 21 02:48:05.590523 systemd-networkd[1505]: lxc_health: Gained carrier
Apr 21 02:48:05.894068 kubelet[2744]: I0421 02:48:05.893512 2744 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-nxwbq" podStartSLOduration=8.893499133 podStartE2EDuration="8.893499133s" podCreationTimestamp="2026-04-21 02:47:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 02:48:03.067060449 +0000 UTC m=+84.639132376" watchObservedRunningTime="2026-04-21 02:48:05.893499133 +0000 UTC m=+87.465571062"
Apr 21 02:48:06.062357 kubelet[2744]: E0421 02:48:06.062259 2744 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 02:48:06.893525 systemd-networkd[1505]: lxc_health: Gained IPv6LL
Apr 21 02:48:07.062444 kubelet[2744]: E0421 02:48:07.062353 2744 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 02:48:12.702287 sshd[4530]: Connection closed by 10.0.0.1 port 50280
Apr 21 02:48:12.702973 sshd-session[4527]: pam_unix(sshd:session): session closed for user core
Apr 21 02:48:12.708555 systemd[1]: sshd@25-10.0.0.33:22-10.0.0.1:50280.service: Deactivated successfully.
Apr 21 02:48:12.710727 systemd[1]: session-26.scope: Deactivated successfully.
Apr 21 02:48:12.712728 systemd-logind[1573]: Session 26 logged out. Waiting for processes to exit.
Apr 21 02:48:12.714780 systemd-logind[1573]: Removed session 26.
Apr 21 02:48:13.840379 kernel: hrtimer: interrupt took 29088782 ns