Nov 5 00:04:56.087812 kernel: Linux version 6.12.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Tue Nov 4 22:00:22 -00 2025 Nov 5 00:04:56.087839 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c57c40de146020da5f35a7230cc1da8f1a5a7a7af49d0754317609f7e94976e2 Nov 5 00:04:56.087848 kernel: BIOS-provided physical RAM map: Nov 5 00:04:56.087855 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000002ffff] usable Nov 5 00:04:56.087861 kernel: BIOS-e820: [mem 0x0000000000030000-0x000000000004ffff] reserved Nov 5 00:04:56.087871 kernel: BIOS-e820: [mem 0x0000000000050000-0x000000000009efff] usable Nov 5 00:04:56.087879 kernel: BIOS-e820: [mem 0x000000000009f000-0x000000000009ffff] reserved Nov 5 00:04:56.087886 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009b8ecfff] usable Nov 5 00:04:56.087893 kernel: BIOS-e820: [mem 0x000000009b8ed000-0x000000009bb6cfff] reserved Nov 5 00:04:56.087900 kernel: BIOS-e820: [mem 0x000000009bb6d000-0x000000009bb7efff] ACPI data Nov 5 00:04:56.087907 kernel: BIOS-e820: [mem 0x000000009bb7f000-0x000000009bbfefff] ACPI NVS Nov 5 00:04:56.087914 kernel: BIOS-e820: [mem 0x000000009bbff000-0x000000009bfb0fff] usable Nov 5 00:04:56.087920 kernel: BIOS-e820: [mem 0x000000009bfb1000-0x000000009bfb4fff] reserved Nov 5 00:04:56.087930 kernel: BIOS-e820: [mem 0x000000009bfb5000-0x000000009bfb6fff] ACPI NVS Nov 5 00:04:56.087938 kernel: BIOS-e820: [mem 0x000000009bfb7000-0x000000009bffffff] usable Nov 5 00:04:56.087946 kernel: BIOS-e820: [mem 0x000000009c000000-0x000000009cffffff] reserved Nov 5 00:04:56.087953 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved Nov 5 00:04:56.087963 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Nov 5 00:04:56.087970 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Nov 5 00:04:56.087977 kernel: NX (Execute Disable) protection: active Nov 5 00:04:56.087985 kernel: APIC: Static calls initialized Nov 5 00:04:56.087992 kernel: e820: update [mem 0x9a13d018-0x9a146c57] usable ==> usable Nov 5 00:04:56.088000 kernel: e820: update [mem 0x9a100018-0x9a13ce57] usable ==> usable Nov 5 00:04:56.088007 kernel: extended physical RAM map: Nov 5 00:04:56.088015 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000002ffff] usable Nov 5 00:04:56.088023 kernel: reserve setup_data: [mem 0x0000000000030000-0x000000000004ffff] reserved Nov 5 00:04:56.088030 kernel: reserve setup_data: [mem 0x0000000000050000-0x000000000009efff] usable Nov 5 00:04:56.088038 kernel: reserve setup_data: [mem 0x000000000009f000-0x000000000009ffff] reserved Nov 5 00:04:56.088048 kernel: reserve setup_data: [mem 0x0000000000100000-0x000000009a100017] usable Nov 5 00:04:56.088055 kernel: reserve setup_data: [mem 0x000000009a100018-0x000000009a13ce57] usable Nov 5 00:04:56.088062 kernel: reserve setup_data: [mem 0x000000009a13ce58-0x000000009a13d017] usable Nov 5 00:04:56.088070 kernel: reserve setup_data: [mem 0x000000009a13d018-0x000000009a146c57] usable Nov 5 00:04:56.088077 kernel: reserve setup_data: [mem 0x000000009a146c58-0x000000009b8ecfff] usable Nov 5 00:04:56.088085 kernel: reserve setup_data: [mem 0x000000009b8ed000-0x000000009bb6cfff] reserved Nov 5 
00:04:56.088092 kernel: reserve setup_data: [mem 0x000000009bb6d000-0x000000009bb7efff] ACPI data Nov 5 00:04:56.088100 kernel: reserve setup_data: [mem 0x000000009bb7f000-0x000000009bbfefff] ACPI NVS Nov 5 00:04:56.088107 kernel: reserve setup_data: [mem 0x000000009bbff000-0x000000009bfb0fff] usable Nov 5 00:04:56.088115 kernel: reserve setup_data: [mem 0x000000009bfb1000-0x000000009bfb4fff] reserved Nov 5 00:04:56.088124 kernel: reserve setup_data: [mem 0x000000009bfb5000-0x000000009bfb6fff] ACPI NVS Nov 5 00:04:56.088132 kernel: reserve setup_data: [mem 0x000000009bfb7000-0x000000009bffffff] usable Nov 5 00:04:56.088143 kernel: reserve setup_data: [mem 0x000000009c000000-0x000000009cffffff] reserved Nov 5 00:04:56.088151 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved Nov 5 00:04:56.088158 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved Nov 5 00:04:56.088168 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Nov 5 00:04:56.088176 kernel: efi: EFI v2.7 by EDK II Nov 5 00:04:56.088184 kernel: efi: SMBIOS=0x9b9d5000 ACPI=0x9bb7e000 ACPI 2.0=0x9bb7e014 MEMATTR=0x9a1af018 RNG=0x9bb73018 Nov 5 00:04:56.088191 kernel: random: crng init done Nov 5 00:04:56.088199 kernel: Kernel is locked down from EFI Secure Boot; see man kernel_lockdown.7 Nov 5 00:04:56.088207 kernel: secureboot: Secure boot enabled Nov 5 00:04:56.088214 kernel: SMBIOS 2.8 present. Nov 5 00:04:56.088222 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022 Nov 5 00:04:56.088230 kernel: DMI: Memory slots populated: 1/1 Nov 5 00:04:56.088240 kernel: Hypervisor detected: KVM Nov 5 00:04:56.088247 kernel: last_pfn = 0x9c000 max_arch_pfn = 0x400000000 Nov 5 00:04:56.088255 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Nov 5 00:04:56.088263 kernel: kvm-clock: using sched offset of 4555043102 cycles Nov 5 00:04:56.088271 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Nov 5 00:04:56.088279 kernel: tsc: Detected 2794.750 MHz processor Nov 5 00:04:56.088287 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Nov 5 00:04:56.088295 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Nov 5 00:04:56.088303 kernel: last_pfn = 0x9c000 max_arch_pfn = 0x400000000 Nov 5 00:04:56.088314 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Nov 5 00:04:56.088322 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Nov 5 00:04:56.088330 kernel: Using GB pages for direct mapping Nov 5 00:04:56.088338 kernel: ACPI: Early table checksum verification disabled Nov 5 00:04:56.088346 kernel: ACPI: RSDP 0x000000009BB7E014 000024 (v02 BOCHS ) Nov 5 00:04:56.088354 kernel: ACPI: XSDT 0x000000009BB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Nov 5 00:04:56.088362 kernel: ACPI: FACP 0x000000009BB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Nov 5 00:04:56.088373 kernel: ACPI: DSDT 0x000000009BB7A000 002237 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 5 00:04:56.088381 kernel: ACPI: FACS 0x000000009BBDD000 000040 Nov 5 00:04:56.088389 kernel: ACPI: APIC 0x000000009BB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 5 00:04:56.088397 kernel: ACPI: HPET 0x000000009BB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 5 00:04:56.088405 kernel: ACPI: MCFG 0x000000009BB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 5 00:04:56.088413 kernel: ACPI: WAET 0x000000009BB75000 000028 (v01 
BOCHS BXPC 00000001 BXPC 00000001) Nov 5 00:04:56.088421 kernel: ACPI: BGRT 0x000000009BB74000 000038 (v01 INTEL EDK2 00000002 01000013) Nov 5 00:04:56.088439 kernel: ACPI: Reserving FACP table memory at [mem 0x9bb79000-0x9bb790f3] Nov 5 00:04:56.088447 kernel: ACPI: Reserving DSDT table memory at [mem 0x9bb7a000-0x9bb7c236] Nov 5 00:04:56.088455 kernel: ACPI: Reserving FACS table memory at [mem 0x9bbdd000-0x9bbdd03f] Nov 5 00:04:56.088463 kernel: ACPI: Reserving APIC table memory at [mem 0x9bb78000-0x9bb7808f] Nov 5 00:04:56.088471 kernel: ACPI: Reserving HPET table memory at [mem 0x9bb77000-0x9bb77037] Nov 5 00:04:56.088479 kernel: ACPI: Reserving MCFG table memory at [mem 0x9bb76000-0x9bb7603b] Nov 5 00:04:56.088487 kernel: ACPI: Reserving WAET table memory at [mem 0x9bb75000-0x9bb75027] Nov 5 00:04:56.088495 kernel: ACPI: Reserving BGRT table memory at [mem 0x9bb74000-0x9bb74037] Nov 5 00:04:56.088505 kernel: No NUMA configuration found Nov 5 00:04:56.088513 kernel: Faking a node at [mem 0x0000000000000000-0x000000009bffffff] Nov 5 00:04:56.088521 kernel: NODE_DATA(0) allocated [mem 0x9bf57dc0-0x9bf5efff] Nov 5 00:04:56.088529 kernel: Zone ranges: Nov 5 00:04:56.088537 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Nov 5 00:04:56.088545 kernel: DMA32 [mem 0x0000000001000000-0x000000009bffffff] Nov 5 00:04:56.088553 kernel: Normal empty Nov 5 00:04:56.088563 kernel: Device empty Nov 5 00:04:56.088571 kernel: Movable zone start for each node Nov 5 00:04:56.088579 kernel: Early memory node ranges Nov 5 00:04:56.088587 kernel: node 0: [mem 0x0000000000001000-0x000000000002ffff] Nov 5 00:04:56.088595 kernel: node 0: [mem 0x0000000000050000-0x000000000009efff] Nov 5 00:04:56.088603 kernel: node 0: [mem 0x0000000000100000-0x000000009b8ecfff] Nov 5 00:04:56.088611 kernel: node 0: [mem 0x000000009bbff000-0x000000009bfb0fff] Nov 5 00:04:56.088619 kernel: node 0: [mem 0x000000009bfb7000-0x000000009bffffff] Nov 5 00:04:56.088629 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009bffffff] Nov 5 00:04:56.088637 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Nov 5 00:04:56.088645 kernel: On node 0, zone DMA: 32 pages in unavailable ranges Nov 5 00:04:56.088653 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Nov 5 00:04:56.088661 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Nov 5 00:04:56.088717 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges Nov 5 00:04:56.088726 kernel: On node 0, zone DMA32: 16384 pages in unavailable ranges Nov 5 00:04:56.088737 kernel: ACPI: PM-Timer IO Port: 0x608 Nov 5 00:04:56.088745 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Nov 5 00:04:56.088753 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Nov 5 00:04:56.088761 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Nov 5 00:04:56.088769 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Nov 5 00:04:56.088777 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Nov 5 00:04:56.088785 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Nov 5 00:04:56.088793 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Nov 5 00:04:56.088803 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Nov 5 00:04:56.088811 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Nov 5 00:04:56.088819 kernel: TSC deadline timer available Nov 5 00:04:56.088827 kernel: CPU topo: Max. logical packages: 1 Nov 5 00:04:56.088835 kernel: CPU topo: Max. 
logical dies: 1 Nov 5 00:04:56.088851 kernel: CPU topo: Max. dies per package: 1 Nov 5 00:04:56.088860 kernel: CPU topo: Max. threads per core: 1 Nov 5 00:04:56.088868 kernel: CPU topo: Num. cores per package: 4 Nov 5 00:04:56.088876 kernel: CPU topo: Num. threads per package: 4 Nov 5 00:04:56.088884 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs Nov 5 00:04:56.088895 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Nov 5 00:04:56.088904 kernel: kvm-guest: KVM setup pv remote TLB flush Nov 5 00:04:56.088912 kernel: kvm-guest: setup PV sched yield Nov 5 00:04:56.088920 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices Nov 5 00:04:56.088930 kernel: Booting paravirtualized kernel on KVM Nov 5 00:04:56.088939 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Nov 5 00:04:56.088948 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Nov 5 00:04:56.088956 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288 Nov 5 00:04:56.088964 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152 Nov 5 00:04:56.088973 kernel: pcpu-alloc: [0] 0 1 2 3 Nov 5 00:04:56.088981 kernel: kvm-guest: PV spinlocks enabled Nov 5 00:04:56.088991 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Nov 5 00:04:56.089001 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c57c40de146020da5f35a7230cc1da8f1a5a7a7af49d0754317609f7e94976e2 Nov 5 00:04:56.089010 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Nov 5 00:04:56.089018 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Nov 5 00:04:56.089026 kernel: Fallback order for Node 0: 0 Nov 5 00:04:56.089034 kernel: Built 1 zonelists, mobility grouping on. Total pages: 638054 Nov 5 00:04:56.089045 kernel: Policy zone: DMA32 Nov 5 00:04:56.089053 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Nov 5 00:04:56.089061 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Nov 5 00:04:56.089070 kernel: ftrace: allocating 40092 entries in 157 pages Nov 5 00:04:56.089078 kernel: ftrace: allocated 157 pages with 5 groups Nov 5 00:04:56.089086 kernel: Dynamic Preempt: voluntary Nov 5 00:04:56.089094 kernel: rcu: Preemptible hierarchical RCU implementation. Nov 5 00:04:56.089103 kernel: rcu: RCU event tracing is enabled. Nov 5 00:04:56.089113 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Nov 5 00:04:56.089122 kernel: Trampoline variant of Tasks RCU enabled. Nov 5 00:04:56.089130 kernel: Rude variant of Tasks RCU enabled. Nov 5 00:04:56.089139 kernel: Tracing variant of Tasks RCU enabled. Nov 5 00:04:56.089147 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Nov 5 00:04:56.089155 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Nov 5 00:04:56.089163 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Nov 5 00:04:56.089174 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Nov 5 00:04:56.089183 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. 
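
The command line echoed above carries the dm-verity parameters for the /usr partition (verity.usr, verity.usrhash) next to the root filesystem label. A minimal sketch of pulling those key=value pairs out of /proc/cmdline on a running system; this is illustrative only and not part of the boot log, and duplicate keys such as rootflags simply keep their last value here:

    #!/usr/bin/env python3
    # Sketch: split the kernel command line into key=value pairs.
    # Keys seen in the log: root=LABEL=ROOT, mount.usr=/dev/mapper/usr,
    # verity.usr=PARTUUID=..., verity.usrhash=<sha256 hex>.
    params = {}
    with open("/proc/cmdline") as f:
        for token in f.read().split():
            key, _, value = token.partition("=")
            params[key] = value        # later duplicates (e.g. rootflags) win
    print("usr device :", params.get("mount.usr"))
    print("usr hash   :", params.get("verity.usrhash"))
    print("root       :", params.get("root"))
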
Nov 5 00:04:56.089191 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Nov 5 00:04:56.089199 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Nov 5 00:04:56.089208 kernel: Console: colour dummy device 80x25 Nov 5 00:04:56.089216 kernel: printk: legacy console [ttyS0] enabled Nov 5 00:04:56.089224 kernel: ACPI: Core revision 20240827 Nov 5 00:04:56.089233 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Nov 5 00:04:56.089243 kernel: APIC: Switch to symmetric I/O mode setup Nov 5 00:04:56.089251 kernel: x2apic enabled Nov 5 00:04:56.089260 kernel: APIC: Switched APIC routing to: physical x2apic Nov 5 00:04:56.089268 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Nov 5 00:04:56.089276 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Nov 5 00:04:56.089285 kernel: kvm-guest: setup PV IPIs Nov 5 00:04:56.089293 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Nov 5 00:04:56.089304 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848e100549, max_idle_ns: 440795215505 ns Nov 5 00:04:56.089312 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750) Nov 5 00:04:56.089320 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Nov 5 00:04:56.089329 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Nov 5 00:04:56.089337 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Nov 5 00:04:56.089345 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Nov 5 00:04:56.089354 kernel: Spectre V2 : Mitigation: Retpolines Nov 5 00:04:56.089364 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Nov 5 00:04:56.089372 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Nov 5 00:04:56.089381 kernel: active return thunk: retbleed_return_thunk Nov 5 00:04:56.089389 kernel: RETBleed: Mitigation: untrained return thunk Nov 5 00:04:56.089397 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Nov 5 00:04:56.089406 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Nov 5 00:04:56.089414 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Nov 5 00:04:56.089425 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Nov 5 00:04:56.089449 kernel: active return thunk: srso_return_thunk Nov 5 00:04:56.089457 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Nov 5 00:04:56.089466 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Nov 5 00:04:56.089474 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Nov 5 00:04:56.089483 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Nov 5 00:04:56.089493 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Nov 5 00:04:56.089502 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Nov 5 00:04:56.089510 kernel: Freeing SMP alternatives memory: 32K Nov 5 00:04:56.089519 kernel: pid_max: default: 32768 minimum: 301 Nov 5 00:04:56.089527 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Nov 5 00:04:56.089535 kernel: landlock: Up and running. Nov 5 00:04:56.089543 kernel: SELinux: Initializing. 
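
The Spectre V1/V2, RETBleed and SRSO lines above are also exposed after boot under /sys/devices/system/cpu/vulnerabilities/. A small sketch that prints the same mitigation status from sysfs instead of grepping dmesg (illustrative only):

    #!/usr/bin/env python3
    # Sketch: report the mitigation state the kernel logged above
    # (Spectre V1/V2, RETBleed, SRSO, ...) straight from sysfs.
    import pathlib

    vuln_dir = pathlib.Path("/sys/devices/system/cpu/vulnerabilities")
    for entry in sorted(vuln_dir.iterdir()):
        print(f"{entry.name:<28} {entry.read_text().strip()}")
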
Nov 5 00:04:56.089554 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Nov 5 00:04:56.089562 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Nov 5 00:04:56.089570 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Nov 5 00:04:56.089579 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Nov 5 00:04:56.089587 kernel: ... version: 0 Nov 5 00:04:56.089595 kernel: ... bit width: 48 Nov 5 00:04:56.089604 kernel: ... generic registers: 6 Nov 5 00:04:56.089612 kernel: ... value mask: 0000ffffffffffff Nov 5 00:04:56.089622 kernel: ... max period: 00007fffffffffff Nov 5 00:04:56.089630 kernel: ... fixed-purpose events: 0 Nov 5 00:04:56.089639 kernel: ... event mask: 000000000000003f Nov 5 00:04:56.089647 kernel: signal: max sigframe size: 1776 Nov 5 00:04:56.089655 kernel: rcu: Hierarchical SRCU implementation. Nov 5 00:04:56.089664 kernel: rcu: Max phase no-delay instances is 400. Nov 5 00:04:56.089684 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Nov 5 00:04:56.089695 kernel: smp: Bringing up secondary CPUs ... Nov 5 00:04:56.089703 kernel: smpboot: x86: Booting SMP configuration: Nov 5 00:04:56.089712 kernel: .... node #0, CPUs: #1 #2 #3 Nov 5 00:04:56.089720 kernel: smp: Brought up 1 node, 4 CPUs Nov 5 00:04:56.089728 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS) Nov 5 00:04:56.089737 kernel: Memory: 2431744K/2552216K available (14336K kernel code, 2443K rwdata, 26064K rodata, 15936K init, 2108K bss, 114536K reserved, 0K cma-reserved) Nov 5 00:04:56.089745 kernel: devtmpfs: initialized Nov 5 00:04:56.089756 kernel: x86/mm: Memory block size: 128MB Nov 5 00:04:56.089764 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9bb7f000-0x9bbfefff] (524288 bytes) Nov 5 00:04:56.089772 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9bfb5000-0x9bfb6fff] (8192 bytes) Nov 5 00:04:56.089781 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Nov 5 00:04:56.089789 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Nov 5 00:04:56.089797 kernel: pinctrl core: initialized pinctrl subsystem Nov 5 00:04:56.089806 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Nov 5 00:04:56.089816 kernel: audit: initializing netlink subsys (disabled) Nov 5 00:04:56.089825 kernel: audit: type=2000 audit(1762301094.074:1): state=initialized audit_enabled=0 res=1 Nov 5 00:04:56.089833 kernel: thermal_sys: Registered thermal governor 'step_wise' Nov 5 00:04:56.089842 kernel: thermal_sys: Registered thermal governor 'user_space' Nov 5 00:04:56.089850 kernel: cpuidle: using governor menu Nov 5 00:04:56.089858 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Nov 5 00:04:56.089866 kernel: dca service started, version 1.12.1 Nov 5 00:04:56.089877 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff] Nov 5 00:04:56.089885 kernel: PCI: Using configuration type 1 for base access Nov 5 00:04:56.089894 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
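
The SMP summary above (22358.00 BogoMIPS) is simply the per-CPU figure from the earlier calibration line (5589.50, i.e. twice the 2794.750 MHz TSC rate in this log) multiplied by the four CPUs that were brought up. A quick arithmetic check using the values from the log:

    #!/usr/bin/env python3
    # Sketch: reproduce the BogoMIPS numbers printed during this boot.
    tsc_mhz = 2794.750                # "tsc: Detected 2794.750 MHz processor"
    per_cpu = tsc_mhz * 2             # lpj-based estimate; matches 5589.50
    total   = per_cpu * 4             # 4 CPUs activated; matches 22358.00
    print(f"per-CPU BogoMIPS: {per_cpu:.2f}")
    print(f"total   BogoMIPS: {total:.2f}")
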
Nov 5 00:04:56.089902 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Nov 5 00:04:56.089910 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Nov 5 00:04:56.089919 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Nov 5 00:04:56.089927 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Nov 5 00:04:56.089937 kernel: ACPI: Added _OSI(Module Device) Nov 5 00:04:56.089945 kernel: ACPI: Added _OSI(Processor Device) Nov 5 00:04:56.089954 kernel: ACPI: Added _OSI(Processor Aggregator Device) Nov 5 00:04:56.089962 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Nov 5 00:04:56.089970 kernel: ACPI: Interpreter enabled Nov 5 00:04:56.089978 kernel: ACPI: PM: (supports S0 S5) Nov 5 00:04:56.089987 kernel: ACPI: Using IOAPIC for interrupt routing Nov 5 00:04:56.089997 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Nov 5 00:04:56.090006 kernel: PCI: Using E820 reservations for host bridge windows Nov 5 00:04:56.090014 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Nov 5 00:04:56.090022 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Nov 5 00:04:56.090248 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Nov 5 00:04:56.090443 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Nov 5 00:04:56.090618 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Nov 5 00:04:56.090629 kernel: PCI host bridge to bus 0000:00 Nov 5 00:04:56.090812 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Nov 5 00:04:56.090966 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Nov 5 00:04:56.091118 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Nov 5 00:04:56.091270 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window] Nov 5 00:04:56.091429 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window] Nov 5 00:04:56.091591 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window] Nov 5 00:04:56.091764 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Nov 5 00:04:56.091949 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint Nov 5 00:04:56.092125 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint Nov 5 00:04:56.092294 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref] Nov 5 00:04:56.092481 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff] Nov 5 00:04:56.092647 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref] Nov 5 00:04:56.092850 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Nov 5 00:04:56.093024 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Nov 5 00:04:56.093190 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f] Nov 5 00:04:56.093358 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff] Nov 5 00:04:56.093532 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref] Nov 5 00:04:56.093738 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint Nov 5 00:04:56.093917 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f] Nov 5 00:04:56.094083 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff] Nov 5 00:04:56.094246 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref] Nov 5 
00:04:56.094424 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Nov 5 00:04:56.094605 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff] Nov 5 00:04:56.094803 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff] Nov 5 00:04:56.094967 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref] Nov 5 00:04:56.095129 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref] Nov 5 00:04:56.095305 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint Nov 5 00:04:56.095475 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Nov 5 00:04:56.095647 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint Nov 5 00:04:56.095851 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df] Nov 5 00:04:56.096016 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff] Nov 5 00:04:56.096189 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint Nov 5 00:04:56.096356 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf] Nov 5 00:04:56.096368 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Nov 5 00:04:56.096377 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Nov 5 00:04:56.096385 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Nov 5 00:04:56.096394 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Nov 5 00:04:56.096402 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Nov 5 00:04:56.096414 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Nov 5 00:04:56.096422 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Nov 5 00:04:56.096439 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Nov 5 00:04:56.096448 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Nov 5 00:04:56.096456 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Nov 5 00:04:56.096464 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Nov 5 00:04:56.096473 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Nov 5 00:04:56.096481 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Nov 5 00:04:56.096492 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Nov 5 00:04:56.096500 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Nov 5 00:04:56.096508 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Nov 5 00:04:56.096516 kernel: iommu: Default domain type: Translated Nov 5 00:04:56.096525 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Nov 5 00:04:56.096533 kernel: efivars: Registered efivars operations Nov 5 00:04:56.096541 kernel: PCI: Using ACPI for IRQ routing Nov 5 00:04:56.096552 kernel: PCI: pci_cache_line_size set to 64 bytes Nov 5 00:04:56.096560 kernel: e820: reserve RAM buffer [mem 0x0009f000-0x0009ffff] Nov 5 00:04:56.096568 kernel: e820: reserve RAM buffer [mem 0x9a100018-0x9bffffff] Nov 5 00:04:56.096577 kernel: e820: reserve RAM buffer [mem 0x9a13d018-0x9bffffff] Nov 5 00:04:56.096585 kernel: e820: reserve RAM buffer [mem 0x9b8ed000-0x9bffffff] Nov 5 00:04:56.096593 kernel: e820: reserve RAM buffer [mem 0x9bfb1000-0x9bffffff] Nov 5 00:04:56.096776 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Nov 5 00:04:56.096943 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Nov 5 00:04:56.097107 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Nov 5 00:04:56.097117 kernel: vgaarb: loaded Nov 5 00:04:56.097126 kernel: 
hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Nov 5 00:04:56.097134 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Nov 5 00:04:56.097143 kernel: clocksource: Switched to clocksource kvm-clock Nov 5 00:04:56.097151 kernel: VFS: Disk quotas dquot_6.6.0 Nov 5 00:04:56.097163 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Nov 5 00:04:56.097172 kernel: pnp: PnP ACPI init Nov 5 00:04:56.097350 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved Nov 5 00:04:56.097362 kernel: pnp: PnP ACPI: found 6 devices Nov 5 00:04:56.097371 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Nov 5 00:04:56.097379 kernel: NET: Registered PF_INET protocol family Nov 5 00:04:56.097391 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Nov 5 00:04:56.097400 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Nov 5 00:04:56.097408 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Nov 5 00:04:56.097417 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Nov 5 00:04:56.097425 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Nov 5 00:04:56.097441 kernel: TCP: Hash tables configured (established 32768 bind 32768) Nov 5 00:04:56.097450 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Nov 5 00:04:56.097461 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Nov 5 00:04:56.097469 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Nov 5 00:04:56.097478 kernel: NET: Registered PF_XDP protocol family Nov 5 00:04:56.097645 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window Nov 5 00:04:56.097877 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned Nov 5 00:04:56.098034 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Nov 5 00:04:56.098191 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Nov 5 00:04:56.098342 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Nov 5 00:04:56.098500 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window] Nov 5 00:04:56.098651 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window] Nov 5 00:04:56.098835 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window] Nov 5 00:04:56.098847 kernel: PCI: CLS 0 bytes, default 64 Nov 5 00:04:56.098856 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848e100549, max_idle_ns: 440795215505 ns Nov 5 00:04:56.098868 kernel: Initialise system trusted keyrings Nov 5 00:04:56.098876 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Nov 5 00:04:56.098885 kernel: Key type asymmetric registered Nov 5 00:04:56.098893 kernel: Asymmetric key parser 'x509' registered Nov 5 00:04:56.098916 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Nov 5 00:04:56.098927 kernel: io scheduler mq-deadline registered Nov 5 00:04:56.098936 kernel: io scheduler kyber registered Nov 5 00:04:56.098948 kernel: io scheduler bfq registered Nov 5 00:04:56.098957 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Nov 5 00:04:56.098966 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Nov 5 00:04:56.098975 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Nov 5 00:04:56.098983 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Nov 5 00:04:56.098992 kernel: Serial: 8250/16550 driver, 4 
ports, IRQ sharing enabled Nov 5 00:04:56.099001 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Nov 5 00:04:56.099011 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Nov 5 00:04:56.099020 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Nov 5 00:04:56.099029 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Nov 5 00:04:56.099197 kernel: rtc_cmos 00:04: RTC can wake from S4 Nov 5 00:04:56.099209 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Nov 5 00:04:56.099363 kernel: rtc_cmos 00:04: registered as rtc0 Nov 5 00:04:56.099530 kernel: rtc_cmos 00:04: setting system clock to 2025-11-05T00:04:54 UTC (1762301094) Nov 5 00:04:56.099705 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram Nov 5 00:04:56.099717 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Nov 5 00:04:56.099726 kernel: efifb: probing for efifb Nov 5 00:04:56.099734 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k Nov 5 00:04:56.099743 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 Nov 5 00:04:56.099752 kernel: efifb: scrolling: redraw Nov 5 00:04:56.099760 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Nov 5 00:04:56.099772 kernel: Console: switching to colour frame buffer device 160x50 Nov 5 00:04:56.099783 kernel: fb0: EFI VGA frame buffer device Nov 5 00:04:56.099792 kernel: pstore: Using crash dump compression: deflate Nov 5 00:04:56.099800 kernel: pstore: Registered efi_pstore as persistent store backend Nov 5 00:04:56.099811 kernel: NET: Registered PF_INET6 protocol family Nov 5 00:04:56.099819 kernel: Segment Routing with IPv6 Nov 5 00:04:56.099828 kernel: In-situ OAM (IOAM) with IPv6 Nov 5 00:04:56.099837 kernel: NET: Registered PF_PACKET protocol family Nov 5 00:04:56.099846 kernel: Key type dns_resolver registered Nov 5 00:04:56.099854 kernel: IPI shorthand broadcast: enabled Nov 5 00:04:56.099863 kernel: sched_clock: Marking stable (1041002880, 260315963)->(1412554369, -111235526) Nov 5 00:04:56.099874 kernel: registered taskstats version 1 Nov 5 00:04:56.099882 kernel: Loading compiled-in X.509 certificates Nov 5 00:04:56.099891 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.54-flatcar: ace064fb6689a15889f35c6439909c760a72ef44' Nov 5 00:04:56.099900 kernel: Demotion targets for Node 0: null Nov 5 00:04:56.099909 kernel: Key type .fscrypt registered Nov 5 00:04:56.099917 kernel: Key type fscrypt-provisioning registered Nov 5 00:04:56.099926 kernel: ima: No TPM chip found, activating TPM-bypass! Nov 5 00:04:56.099936 kernel: ima: Allocated hash algorithm: sha1 Nov 5 00:04:56.099945 kernel: ima: No architecture policies found Nov 5 00:04:56.099953 kernel: clk: Disabling unused clocks Nov 5 00:04:56.099962 kernel: Freeing unused kernel image (initmem) memory: 15936K Nov 5 00:04:56.099971 kernel: Write protecting the kernel read-only data: 40960k Nov 5 00:04:56.099979 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K Nov 5 00:04:56.099988 kernel: Run /init as init process Nov 5 00:04:56.099999 kernel: with arguments: Nov 5 00:04:56.100007 kernel: /init Nov 5 00:04:56.100016 kernel: with environment: Nov 5 00:04:56.100025 kernel: HOME=/ Nov 5 00:04:56.100033 kernel: TERM=linux Nov 5 00:04:56.100042 kernel: SCSI subsystem initialized Nov 5 00:04:56.100050 kernel: libata version 3.00 loaded. 
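
The rtc_cmos line above reports both the wall-clock time and the matching Unix epoch (1762301094). A tiny check that the two agree, purely illustrative:

    #!/usr/bin/env python3
    # Sketch: confirm the epoch from "setting system clock to ... (1762301094)"
    # really is 2025-11-05T00:04:54 UTC as logged.
    from datetime import datetime, timezone

    epoch = 1762301094
    print(datetime.fromtimestamp(epoch, tz=timezone.utc).isoformat())
    # -> 2025-11-05T00:04:54+00:00
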
Nov 5 00:04:56.100219 kernel: ahci 0000:00:1f.2: version 3.0 Nov 5 00:04:56.100233 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Nov 5 00:04:56.100402 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Nov 5 00:04:56.100584 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Nov 5 00:04:56.100764 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Nov 5 00:04:56.100952 kernel: scsi host0: ahci Nov 5 00:04:56.101133 kernel: scsi host1: ahci Nov 5 00:04:56.101312 kernel: scsi host2: ahci Nov 5 00:04:56.101611 kernel: scsi host3: ahci Nov 5 00:04:56.102015 kernel: scsi host4: ahci Nov 5 00:04:56.102339 kernel: scsi host5: ahci Nov 5 00:04:56.102352 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 26 lpm-pol 1 Nov 5 00:04:56.102365 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 26 lpm-pol 1 Nov 5 00:04:56.102374 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 26 lpm-pol 1 Nov 5 00:04:56.102382 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 26 lpm-pol 1 Nov 5 00:04:56.102391 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 26 lpm-pol 1 Nov 5 00:04:56.102400 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 26 lpm-pol 1 Nov 5 00:04:56.102409 kernel: ata5: SATA link down (SStatus 0 SControl 300) Nov 5 00:04:56.102418 kernel: ata1: SATA link down (SStatus 0 SControl 300) Nov 5 00:04:56.102428 kernel: ata4: SATA link down (SStatus 0 SControl 300) Nov 5 00:04:56.102445 kernel: ata2: SATA link down (SStatus 0 SControl 300) Nov 5 00:04:56.102454 kernel: ata6: SATA link down (SStatus 0 SControl 300) Nov 5 00:04:56.102462 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Nov 5 00:04:56.102472 kernel: ata3.00: LPM support broken, forcing max_power Nov 5 00:04:56.102482 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Nov 5 00:04:56.102492 kernel: ata3.00: applying bridge limits Nov 5 00:04:56.102504 kernel: ata3.00: LPM support broken, forcing max_power Nov 5 00:04:56.102512 kernel: ata3.00: configured for UDMA/100 Nov 5 00:04:56.102733 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Nov 5 00:04:56.102920 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Nov 5 00:04:56.103087 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB) Nov 5 00:04:56.103099 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Nov 5 00:04:56.103112 kernel: GPT:16515071 != 27000831 Nov 5 00:04:56.103121 kernel: GPT:Alternate GPT header not at the end of the disk. Nov 5 00:04:56.103130 kernel: GPT:16515071 != 27000831 Nov 5 00:04:56.103138 kernel: GPT: Use GNU Parted to correct GPT errors. Nov 5 00:04:56.103147 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 5 00:04:56.103331 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Nov 5 00:04:56.103343 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Nov 5 00:04:56.103535 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Nov 5 00:04:56.103547 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
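
The GPT warnings above mean the backup header was written for a smaller disk image: it sits at LBA 16515071, while on a 27000832-sector disk the backup belongs at the last LBA. A small arithmetic sketch of what the kernel is comparing (the log itself points to GNU Parted for the actual repair):

    #!/usr/bin/env python3
    # Sketch: the two numbers behind "GPT:16515071 != 27000831".
    total_sectors   = 27000832          # "[vda] 27000832 512-byte logical blocks"
    expected_backup = total_sectors - 1 # backup GPT header lives at the last LBA
    found_backup    = 16515071          # where this image's backup header is
    print(expected_backup, found_backup, expected_backup == found_backup)
    # -> 27000831 16515071 False, hence the kernel's warning
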
Nov 5 00:04:56.103556 kernel: device-mapper: uevent: version 1.0.3 Nov 5 00:04:56.103565 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Nov 5 00:04:56.103574 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Nov 5 00:04:56.103582 kernel: raid6: avx2x4 gen() 30445 MB/s Nov 5 00:04:56.103591 kernel: raid6: avx2x2 gen() 30958 MB/s Nov 5 00:04:56.103603 kernel: raid6: avx2x1 gen() 25889 MB/s Nov 5 00:04:56.103611 kernel: raid6: using algorithm avx2x2 gen() 30958 MB/s Nov 5 00:04:56.103620 kernel: raid6: .... xor() 19875 MB/s, rmw enabled Nov 5 00:04:56.103629 kernel: raid6: using avx2x2 recovery algorithm Nov 5 00:04:56.103637 kernel: xor: automatically using best checksumming function avx Nov 5 00:04:56.103646 kernel: Btrfs loaded, zoned=no, fsverity=no Nov 5 00:04:56.103655 kernel: BTRFS: device fsid f719dc90-1cf7-4f08-a80f-0dda441372cc devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (181) Nov 5 00:04:56.103679 kernel: BTRFS info (device dm-0): first mount of filesystem f719dc90-1cf7-4f08-a80f-0dda441372cc Nov 5 00:04:56.103688 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Nov 5 00:04:56.103697 kernel: BTRFS info (device dm-0): disabling log replay at mount time Nov 5 00:04:56.103706 kernel: BTRFS info (device dm-0): enabling free space tree Nov 5 00:04:56.103714 kernel: loop: module loaded Nov 5 00:04:56.103723 kernel: loop0: detected capacity change from 0 to 100120 Nov 5 00:04:56.103732 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 5 00:04:56.103744 systemd[1]: Successfully made /usr/ read-only. Nov 5 00:04:56.103756 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Nov 5 00:04:56.103766 systemd[1]: Detected virtualization kvm. Nov 5 00:04:56.103775 systemd[1]: Detected architecture x86-64. Nov 5 00:04:56.103784 systemd[1]: Running in initrd. Nov 5 00:04:56.103793 systemd[1]: No hostname configured, using default hostname. Nov 5 00:04:56.103804 systemd[1]: Hostname set to . Nov 5 00:04:56.103813 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Nov 5 00:04:56.103822 systemd[1]: Queued start job for default target initrd.target. Nov 5 00:04:56.103831 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Nov 5 00:04:56.103840 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 5 00:04:56.103850 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 5 00:04:56.103861 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Nov 5 00:04:56.103871 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 5 00:04:56.103881 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Nov 5 00:04:56.103891 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Nov 5 00:04:56.103900 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
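
The systemd banner above encodes compile-time features as a +/- prefixed list (+PAM, -APPARMOR, ...). A tiny sketch that splits such a banner into enabled and disabled sets; the string below is only an excerpt copied from the log, for illustration:

    #!/usr/bin/env python3
    # Sketch: split systemd's "+FOO -BAR" feature banner into two sets.
    banner = ("+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP "
              "-GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL")   # excerpt from the log
    enabled  = {t[1:] for t in banner.split() if t.startswith("+")}
    disabled = {t[1:] for t in banner.split() if t.startswith("-")}
    print("enabled :", sorted(enabled))
    print("disabled:", sorted(disabled))
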
Nov 5 00:04:56.103911 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 5 00:04:56.103921 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Nov 5 00:04:56.103930 systemd[1]: Reached target paths.target - Path Units. Nov 5 00:04:56.103939 systemd[1]: Reached target slices.target - Slice Units. Nov 5 00:04:56.103948 systemd[1]: Reached target swap.target - Swaps. Nov 5 00:04:56.103957 systemd[1]: Reached target timers.target - Timer Units. Nov 5 00:04:56.103966 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Nov 5 00:04:56.103978 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 5 00:04:56.103987 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 5 00:04:56.103996 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Nov 5 00:04:56.104005 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 5 00:04:56.104014 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 5 00:04:56.104023 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 5 00:04:56.104033 systemd[1]: Reached target sockets.target - Socket Units. Nov 5 00:04:56.104044 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Nov 5 00:04:56.104053 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Nov 5 00:04:56.104063 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 5 00:04:56.104072 systemd[1]: Finished network-cleanup.service - Network Cleanup. Nov 5 00:04:56.104082 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Nov 5 00:04:56.104091 systemd[1]: Starting systemd-fsck-usr.service... Nov 5 00:04:56.104100 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 5 00:04:56.104112 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 5 00:04:56.104121 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 5 00:04:56.104131 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Nov 5 00:04:56.104142 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 5 00:04:56.104151 systemd[1]: Finished systemd-fsck-usr.service. Nov 5 00:04:56.104161 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 5 00:04:56.104191 systemd-journald[315]: Collecting audit messages is disabled. Nov 5 00:04:56.104213 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 5 00:04:56.104223 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 5 00:04:56.104232 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Nov 5 00:04:56.104242 systemd-journald[315]: Journal started Nov 5 00:04:56.104261 systemd-journald[315]: Runtime Journal (/run/log/journal/b7d81d7548214527b3e79411cad8081b) is 5.9M, max 47.9M, 41.9M free. Nov 5 00:04:56.106964 systemd[1]: Started systemd-journald.service - Journal Service. 
Nov 5 00:04:56.113260 systemd-modules-load[317]: Inserted module 'br_netfilter' Nov 5 00:04:56.114134 kernel: Bridge firewalling registered Nov 5 00:04:56.116283 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 5 00:04:56.117494 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 5 00:04:56.122409 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 5 00:04:56.131818 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 5 00:04:56.132062 systemd-tmpfiles[338]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Nov 5 00:04:56.135916 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 5 00:04:56.141097 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 5 00:04:56.145010 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 5 00:04:56.157913 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 5 00:04:56.160803 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 5 00:04:56.171918 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 5 00:04:56.175903 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Nov 5 00:04:56.201461 dracut-cmdline[361]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c57c40de146020da5f35a7230cc1da8f1a5a7a7af49d0754317609f7e94976e2 Nov 5 00:04:56.218159 systemd-resolved[349]: Positive Trust Anchors: Nov 5 00:04:56.218175 systemd-resolved[349]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 5 00:04:56.218179 systemd-resolved[349]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Nov 5 00:04:56.218210 systemd-resolved[349]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 5 00:04:56.243828 systemd-resolved[349]: Defaulting to hostname 'linux'. Nov 5 00:04:56.244881 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 5 00:04:56.248728 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 5 00:04:56.330701 kernel: Loading iSCSI transport class v2.0-870. Nov 5 00:04:56.344692 kernel: iscsi: registered transport (tcp) Nov 5 00:04:56.367699 kernel: iscsi: registered transport (qla4xxx) Nov 5 00:04:56.367737 kernel: QLogic iSCSI HBA Driver Nov 5 00:04:56.393168 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... 
Nov 5 00:04:56.413087 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 5 00:04:56.415137 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 5 00:04:56.474062 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Nov 5 00:04:56.477298 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Nov 5 00:04:56.479700 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Nov 5 00:04:56.517142 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Nov 5 00:04:56.519567 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 5 00:04:56.546799 systemd-udevd[595]: Using default interface naming scheme 'v257'. Nov 5 00:04:56.559655 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 5 00:04:56.561273 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Nov 5 00:04:56.589093 dracut-pre-trigger[644]: rd.md=0: removing MD RAID activation Nov 5 00:04:56.605946 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 5 00:04:56.608596 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 5 00:04:56.626875 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Nov 5 00:04:56.628972 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 5 00:04:56.660449 systemd-networkd[722]: lo: Link UP Nov 5 00:04:56.660456 systemd-networkd[722]: lo: Gained carrier Nov 5 00:04:56.661037 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 5 00:04:56.662398 systemd[1]: Reached target network.target - Network. Nov 5 00:04:56.724245 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 5 00:04:56.728874 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Nov 5 00:04:56.778079 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Nov 5 00:04:56.812759 kernel: cryptd: max_cpu_qlen set to 1000 Nov 5 00:04:56.820544 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Nov 5 00:04:56.833603 systemd-networkd[722]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Nov 5 00:04:56.834569 systemd-networkd[722]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 5 00:04:56.837477 systemd-networkd[722]: eth0: Link UP Nov 5 00:04:56.837692 systemd-networkd[722]: eth0: Gained carrier Nov 5 00:04:56.837703 systemd-networkd[722]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Nov 5 00:04:56.849184 kernel: AES CTR mode by8 optimization enabled Nov 5 00:04:56.853721 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Nov 5 00:04:56.851950 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Nov 5 00:04:56.856755 systemd-networkd[722]: eth0: DHCPv4 address 10.0.0.3/16, gateway 10.0.0.1 acquired from 10.0.0.1 Nov 5 00:04:56.866509 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Nov 5 00:04:56.879105 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... 
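
networkd records the DHCPv4 lease above as address/prefix plus gateway (10.0.0.3/16 via 10.0.0.1). A short sketch deriving the network and broadcast addresses from that lease with the standard ipaddress module (illustrative only):

    #!/usr/bin/env python3
    # Sketch: derive network details from the DHCPv4 lease logged above.
    import ipaddress

    iface = ipaddress.ip_interface("10.0.0.3/16")         # from the log line
    print("network  :", iface.network)                    # 10.0.0.0/16
    print("broadcast:", iface.network.broadcast_address)  # 10.0.255.255
    print("gateway in network:",
          ipaddress.ip_address("10.0.0.1") in iface.network)  # True
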
Nov 5 00:04:56.880009 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 5 00:04:56.880067 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 5 00:04:56.883317 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Nov 5 00:04:56.894352 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 5 00:04:56.905976 disk-uuid[836]: Primary Header is updated. Nov 5 00:04:56.905976 disk-uuid[836]: Secondary Entries is updated. Nov 5 00:04:56.905976 disk-uuid[836]: Secondary Header is updated. Nov 5 00:04:56.916468 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Nov 5 00:04:56.919949 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 5 00:04:56.925308 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Nov 5 00:04:56.928825 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 5 00:04:56.931871 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 5 00:04:56.941202 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Nov 5 00:04:56.965612 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Nov 5 00:04:57.951969 disk-uuid[841]: Warning: The kernel is still using the old partition table. Nov 5 00:04:57.951969 disk-uuid[841]: The new table will be used at the next reboot or after you Nov 5 00:04:57.951969 disk-uuid[841]: run partprobe(8) or kpartx(8) Nov 5 00:04:57.951969 disk-uuid[841]: The operation has completed successfully. Nov 5 00:04:57.965016 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 5 00:04:57.965155 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Nov 5 00:04:57.967413 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Nov 5 00:04:58.001694 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (865) Nov 5 00:04:58.001722 kernel: BTRFS info (device vda6): first mount of filesystem c1921af5-b472-4b94-8690-4d6daf91a8cd Nov 5 00:04:58.001734 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 5 00:04:58.006814 kernel: BTRFS info (device vda6): turning on async discard Nov 5 00:04:58.006880 kernel: BTRFS info (device vda6): enabling free space tree Nov 5 00:04:58.014693 kernel: BTRFS info (device vda6): last unmount of filesystem c1921af5-b472-4b94-8690-4d6daf91a8cd Nov 5 00:04:58.015086 systemd[1]: Finished ignition-setup.service - Ignition (setup). Nov 5 00:04:58.019159 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
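
The disk-uuid warning above notes the kernel keeps using the old partition table until partprobe(8) or kpartx(8) is run (or the machine reboots). A minimal sketch of triggering that re-read for the disk in this log; assumes root privileges and that partprobe is installed:

    #!/usr/bin/env python3
    # Sketch: ask the kernel to re-read /dev/vda's partition table,
    # as the disk-uuid warning above suggests via partprobe(8).
    import subprocess

    subprocess.run(["partprobe", "/dev/vda"], check=True)
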
Nov 5 00:04:58.124338 ignition[884]: Ignition 2.22.0 Nov 5 00:04:58.124351 ignition[884]: Stage: fetch-offline Nov 5 00:04:58.124397 ignition[884]: no configs at "/usr/lib/ignition/base.d" Nov 5 00:04:58.124410 ignition[884]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 5 00:04:58.124505 ignition[884]: parsed url from cmdline: "" Nov 5 00:04:58.124509 ignition[884]: no config URL provided Nov 5 00:04:58.124515 ignition[884]: reading system config file "/usr/lib/ignition/user.ign" Nov 5 00:04:58.124526 ignition[884]: no config at "/usr/lib/ignition/user.ign" Nov 5 00:04:58.124569 ignition[884]: op(1): [started] loading QEMU firmware config module Nov 5 00:04:58.124574 ignition[884]: op(1): executing: "modprobe" "qemu_fw_cfg" Nov 5 00:04:58.141604 ignition[884]: op(1): [finished] loading QEMU firmware config module Nov 5 00:04:58.221341 ignition[884]: parsing config with SHA512: c00df8bd277fa1de1cbb22a2f07dfe4cbef8b4cfdb92b4ab8a46b2d8bf24886ad6d2ff308dfc7c621e3aec89d69a54956385cd2a4e983ccc367912bcf1351601 Nov 5 00:04:58.225553 unknown[884]: fetched base config from "system" Nov 5 00:04:58.225598 unknown[884]: fetched user config from "qemu" Nov 5 00:04:58.226793 ignition[884]: fetch-offline: fetch-offline passed Nov 5 00:04:58.226892 ignition[884]: Ignition finished successfully Nov 5 00:04:58.230526 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Nov 5 00:04:58.232380 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Nov 5 00:04:58.233308 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Nov 5 00:04:58.270058 ignition[894]: Ignition 2.22.0 Nov 5 00:04:58.270069 ignition[894]: Stage: kargs Nov 5 00:04:58.270218 ignition[894]: no configs at "/usr/lib/ignition/base.d" Nov 5 00:04:58.270228 ignition[894]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 5 00:04:58.271072 ignition[894]: kargs: kargs passed Nov 5 00:04:58.271106 ignition[894]: Ignition finished successfully Nov 5 00:04:58.278459 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Nov 5 00:04:58.283138 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Nov 5 00:04:58.317484 ignition[902]: Ignition 2.22.0 Nov 5 00:04:58.317496 ignition[902]: Stage: disks Nov 5 00:04:58.317632 ignition[902]: no configs at "/usr/lib/ignition/base.d" Nov 5 00:04:58.317642 ignition[902]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 5 00:04:58.318430 ignition[902]: disks: disks passed Nov 5 00:04:58.318476 ignition[902]: Ignition finished successfully Nov 5 00:04:58.326759 systemd[1]: Finished ignition-disks.service - Ignition (disks). Nov 5 00:04:58.330034 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Nov 5 00:04:58.330979 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 5 00:04:58.334048 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 5 00:04:58.338094 systemd[1]: Reached target sysinit.target - System Initialization. Nov 5 00:04:58.341130 systemd[1]: Reached target basic.target - Basic System. Nov 5 00:04:58.347036 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Nov 5 00:04:58.391203 systemd-fsck[913]: ROOT: clean, 15/456736 files, 38230/456704 blocks Nov 5 00:04:58.398709 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. 
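
Ignition logs the config it parses together with a SHA512 digest. In this boot the config arrived through the qemu_fw_cfg module rather than a file, but the digest of any saved copy of the same config can be reproduced with hashlib; this sketch assumes the logged value is the SHA-512 of the raw config bytes, which may differ in detail from what Ignition hashes internally:

    #!/usr/bin/env python3
    # Sketch: hash a saved Ignition config for comparison against the
    # "parsing config with SHA512: ..." line above. Path is an argument,
    # e.g. a copy of the fw_cfg-provided config or /usr/lib/ignition/user.ign.
    import hashlib, sys

    with open(sys.argv[1], "rb") as f:
        print(hashlib.sha512(f.read()).hexdigest())
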
Nov 5 00:04:58.403204 systemd[1]: Mounting sysroot.mount - /sysroot... Nov 5 00:04:58.517692 kernel: EXT4-fs (vda9): mounted filesystem cfb29ed0-6faf-41a8-b421-3abc514e4975 r/w with ordered data mode. Quota mode: none. Nov 5 00:04:58.517990 systemd[1]: Mounted sysroot.mount - /sysroot. Nov 5 00:04:58.519129 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Nov 5 00:04:58.522131 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 5 00:04:58.526570 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Nov 5 00:04:58.527433 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Nov 5 00:04:58.527466 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 5 00:04:58.527489 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Nov 5 00:04:58.542690 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (921) Nov 5 00:04:58.546127 kernel: BTRFS info (device vda6): first mount of filesystem c1921af5-b472-4b94-8690-4d6daf91a8cd Nov 5 00:04:58.546153 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 5 00:04:58.549776 kernel: BTRFS info (device vda6): turning on async discard Nov 5 00:04:58.549798 kernel: BTRFS info (device vda6): enabling free space tree Nov 5 00:04:58.550914 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 5 00:04:58.557139 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Nov 5 00:04:58.559967 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Nov 5 00:04:58.606773 systemd-networkd[722]: eth0: Gained IPv6LL Nov 5 00:04:58.614466 initrd-setup-root[945]: cut: /sysroot/etc/passwd: No such file or directory Nov 5 00:04:58.620098 initrd-setup-root[952]: cut: /sysroot/etc/group: No such file or directory Nov 5 00:04:58.625760 initrd-setup-root[959]: cut: /sysroot/etc/shadow: No such file or directory Nov 5 00:04:58.630907 initrd-setup-root[966]: cut: /sysroot/etc/gshadow: No such file or directory Nov 5 00:04:58.723315 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Nov 5 00:04:58.725511 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Nov 5 00:04:58.728130 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Nov 5 00:04:58.753981 systemd[1]: sysroot-oem.mount: Deactivated successfully. Nov 5 00:04:58.756363 kernel: BTRFS info (device vda6): last unmount of filesystem c1921af5-b472-4b94-8690-4d6daf91a8cd Nov 5 00:04:58.771789 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Nov 5 00:04:58.791862 ignition[1034]: INFO : Ignition 2.22.0 Nov 5 00:04:58.791862 ignition[1034]: INFO : Stage: mount Nov 5 00:04:58.794409 ignition[1034]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 5 00:04:58.794409 ignition[1034]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 5 00:04:58.798200 ignition[1034]: INFO : mount: mount passed Nov 5 00:04:58.799410 ignition[1034]: INFO : Ignition finished successfully Nov 5 00:04:58.803142 systemd[1]: Finished ignition-mount.service - Ignition (mount). Nov 5 00:04:58.806301 systemd[1]: Starting ignition-files.service - Ignition (files)... Nov 5 00:04:59.519851 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Nov 5 00:04:59.553935 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1047) Nov 5 00:04:59.553965 kernel: BTRFS info (device vda6): first mount of filesystem c1921af5-b472-4b94-8690-4d6daf91a8cd Nov 5 00:04:59.553977 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 5 00:04:59.559018 kernel: BTRFS info (device vda6): turning on async discard Nov 5 00:04:59.559083 kernel: BTRFS info (device vda6): enabling free space tree Nov 5 00:04:59.560694 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 5 00:04:59.591983 ignition[1064]: INFO : Ignition 2.22.0 Nov 5 00:04:59.591983 ignition[1064]: INFO : Stage: files Nov 5 00:04:59.594427 ignition[1064]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 5 00:04:59.594427 ignition[1064]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 5 00:04:59.594427 ignition[1064]: DEBUG : files: compiled without relabeling support, skipping Nov 5 00:04:59.600142 ignition[1064]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 5 00:04:59.600142 ignition[1064]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 5 00:04:59.607445 ignition[1064]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 5 00:04:59.609733 ignition[1064]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 5 00:04:59.612141 unknown[1064]: wrote ssh authorized keys file for user: core Nov 5 00:04:59.613769 ignition[1064]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 5 00:04:59.615948 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Nov 5 00:04:59.615948 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Nov 5 00:04:59.660808 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Nov 5 00:05:00.074587 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Nov 5 00:05:00.077661 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Nov 5 00:05:00.077661 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Nov 5 00:05:00.317106 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Nov 5 00:05:00.416321 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Nov 5 00:05:00.419235 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Nov 5 00:05:00.419235 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Nov 5 00:05:00.419235 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 5 00:05:00.419235 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 5 00:05:00.419235 ignition[1064]: INFO : files: createFilesystemsFiles: 
createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 5 00:05:00.419235 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 5 00:05:00.419235 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 5 00:05:00.419235 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 5 00:05:00.441481 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 5 00:05:00.441481 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 5 00:05:00.441481 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 5 00:05:00.441481 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 5 00:05:00.441481 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 5 00:05:00.441481 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Nov 5 00:05:00.817984 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Nov 5 00:05:01.170693 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 5 00:05:01.170693 ignition[1064]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Nov 5 00:05:01.177168 ignition[1064]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 5 00:05:01.181908 ignition[1064]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 5 00:05:01.181908 ignition[1064]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Nov 5 00:05:01.181908 ignition[1064]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Nov 5 00:05:01.190539 ignition[1064]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Nov 5 00:05:01.190539 ignition[1064]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Nov 5 00:05:01.190539 ignition[1064]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Nov 5 00:05:01.190539 ignition[1064]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Nov 5 00:05:01.207466 ignition[1064]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Nov 5 00:05:01.211509 ignition[1064]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Nov 5 00:05:01.214080 ignition[1064]: INFO : files: op(10): [finished] 
setting preset to disabled for "coreos-metadata.service" Nov 5 00:05:01.214080 ignition[1064]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Nov 5 00:05:01.214080 ignition[1064]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Nov 5 00:05:01.214080 ignition[1064]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 5 00:05:01.214080 ignition[1064]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 5 00:05:01.214080 ignition[1064]: INFO : files: files passed Nov 5 00:05:01.214080 ignition[1064]: INFO : Ignition finished successfully Nov 5 00:05:01.222151 systemd[1]: Finished ignition-files.service - Ignition (files). Nov 5 00:05:01.225367 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 5 00:05:01.229201 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Nov 5 00:05:01.247094 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 5 00:05:01.247226 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Nov 5 00:05:01.253803 initrd-setup-root-after-ignition[1095]: grep: /sysroot/oem/oem-release: No such file or directory Nov 5 00:05:01.258558 initrd-setup-root-after-ignition[1097]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 5 00:05:01.258558 initrd-setup-root-after-ignition[1097]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 5 00:05:01.265450 initrd-setup-root-after-ignition[1101]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 5 00:05:01.260648 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 5 00:05:01.262487 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Nov 5 00:05:01.273293 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Nov 5 00:05:01.313047 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 5 00:05:01.313172 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Nov 5 00:05:01.315476 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Nov 5 00:05:01.319075 systemd[1]: Reached target initrd.target - Initrd Default Target. Nov 5 00:05:01.322640 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Nov 5 00:05:01.326968 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Nov 5 00:05:01.353586 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 5 00:05:01.355490 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Nov 5 00:05:01.378624 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Nov 5 00:05:01.378788 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Nov 5 00:05:01.379646 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 5 00:05:01.385795 systemd[1]: Stopped target timers.target - Timer Units. Nov 5 00:05:01.386700 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 5 00:05:01.386811 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. 
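The files stage above fetches remote artifacts (the helm tarball, the cilium CLI, the kubernetes sysext image) and writes them under /sysroot. The sketch below is not Ignition's implementation; it is a minimal Python illustration of the same fetch-and-write pattern, without the verification, retries, and permission handling Ignition performs. The target root and destination path are taken from the log, but treating them as inputs to a standalone script is an assumption.

```python
# Illustrative sketch (not Ignition itself): fetch a remote artifact and write
# it under a target root, similar in spirit to the files stage logged above.
import os
import urllib.request

TARGET_ROOT = "/sysroot"                                      # target root from the log
URL = "https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz"   # URL from the log
DEST = "opt/helm-v3.17.0-linux-amd64.tar.gz"                  # destination from the log

def fetch_to_root(url: str, root: str, relpath: str) -> str:
    dest = os.path.join(root, relpath)
    os.makedirs(os.path.dirname(dest), exist_ok=True)
    with urllib.request.urlopen(url) as resp, open(dest, "wb") as out:
        out.write(resp.read())
    return dest

if __name__ == "__main__":
    print("wrote", fetch_to_root(URL, TARGET_ROOT, DEST))
```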
Nov 5 00:05:01.393495 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 5 00:05:01.394349 systemd[1]: Stopped target basic.target - Basic System. Nov 5 00:05:01.400955 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Nov 5 00:05:01.401658 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Nov 5 00:05:01.405186 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 5 00:05:01.408479 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Nov 5 00:05:01.412090 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Nov 5 00:05:01.415232 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Nov 5 00:05:01.418412 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 5 00:05:01.422127 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 5 00:05:01.425171 systemd[1]: Stopped target swap.target - Swaps. Nov 5 00:05:01.428237 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 5 00:05:01.428358 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 5 00:05:01.433159 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 5 00:05:01.434293 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 5 00:05:01.438424 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 5 00:05:01.443016 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 5 00:05:01.444091 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 5 00:05:01.444218 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 5 00:05:01.450635 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 5 00:05:01.450769 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 5 00:05:01.451556 systemd[1]: Stopped target paths.target - Path Units. Nov 5 00:05:01.457386 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 5 00:05:01.463809 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 5 00:05:01.469000 systemd[1]: Stopped target slices.target - Slice Units. Nov 5 00:05:01.470148 systemd[1]: Stopped target sockets.target - Socket Units. Nov 5 00:05:01.473239 systemd[1]: iscsid.socket: Deactivated successfully. Nov 5 00:05:01.473338 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 5 00:05:01.476349 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 5 00:05:01.476438 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 5 00:05:01.479131 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 5 00:05:01.479245 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 5 00:05:01.482409 systemd[1]: ignition-files.service: Deactivated successfully. Nov 5 00:05:01.482512 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 5 00:05:01.490790 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Nov 5 00:05:01.491552 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 5 00:05:01.491692 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 5 00:05:01.492994 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... 
Nov 5 00:05:01.500388 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 5 00:05:01.500511 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 5 00:05:01.501073 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 5 00:05:01.501177 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 5 00:05:01.509165 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 5 00:05:01.509266 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Nov 5 00:05:01.521524 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 5 00:05:01.521657 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Nov 5 00:05:01.538383 ignition[1122]: INFO : Ignition 2.22.0 Nov 5 00:05:01.538383 ignition[1122]: INFO : Stage: umount Nov 5 00:05:01.540976 ignition[1122]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 5 00:05:01.540976 ignition[1122]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 5 00:05:01.540976 ignition[1122]: INFO : umount: umount passed Nov 5 00:05:01.540976 ignition[1122]: INFO : Ignition finished successfully Nov 5 00:05:01.541851 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 5 00:05:01.541988 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 5 00:05:01.544406 systemd[1]: Stopped target network.target - Network. Nov 5 00:05:01.547482 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 5 00:05:01.547538 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 5 00:05:01.548186 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 5 00:05:01.548239 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 5 00:05:01.549000 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 5 00:05:01.549049 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 5 00:05:01.553686 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 5 00:05:01.553738 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Nov 5 00:05:01.554264 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 5 00:05:01.560469 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 5 00:05:01.565065 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 5 00:05:01.571314 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 5 00:05:01.571436 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 5 00:05:01.578463 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 5 00:05:01.578589 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 5 00:05:01.585496 systemd[1]: Stopped target network-pre.target - Preparation for Network. Nov 5 00:05:01.586932 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 5 00:05:01.586977 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 5 00:05:01.588256 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 5 00:05:01.594020 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 5 00:05:01.594868 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 5 00:05:01.597120 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 5 00:05:01.597168 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. 
Nov 5 00:05:01.600507 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 5 00:05:01.600555 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 5 00:05:01.604170 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 5 00:05:01.625891 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 5 00:05:01.627550 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 5 00:05:01.631970 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 5 00:05:01.632029 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 5 00:05:01.634387 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 5 00:05:01.634432 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 5 00:05:01.637736 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 5 00:05:01.637796 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 5 00:05:01.643427 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 5 00:05:01.643478 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 5 00:05:01.647949 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 5 00:05:01.648000 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 5 00:05:01.656506 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 5 00:05:01.659023 systemd[1]: systemd-network-generator.service: Deactivated successfully. Nov 5 00:05:01.659086 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Nov 5 00:05:01.660160 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 5 00:05:01.660209 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 5 00:05:01.665081 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 5 00:05:01.665130 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 5 00:05:01.676322 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 5 00:05:01.676434 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 5 00:05:01.679696 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 5 00:05:01.680088 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 5 00:05:01.687821 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 5 00:05:01.687939 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 5 00:05:01.692162 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 5 00:05:01.693068 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 5 00:05:01.693123 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 5 00:05:01.696926 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 5 00:05:01.718442 systemd[1]: Switching root. Nov 5 00:05:01.761391 systemd-journald[315]: Journal stopped Nov 5 00:05:03.493924 systemd-journald[315]: Received SIGTERM from PID 1 (systemd). 
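At this point the initrd journal stops and the system switches root. The timestamps in this console log make it easy to measure how long that phase took; the short Python sketch below parses two timestamps taken from the log (the first kernel line of this boot and the "Journal stopped" line) and prints the elapsed time. The year is not part of the log prefix, so 2025 is filled in from the kernel build date and timesyncd lines elsewhere in this log.

```python
# Illustrative sketch: measure elapsed time between two console-log timestamps
# of the form "Nov 5 00:05:01.761391" (the year is assumed, since it is not logged).
from datetime import datetime

FMT = "%b %d %H:%M:%S.%f %Y"

def parse(ts: str, year: int = 2025) -> datetime:
    return datetime.strptime(f"{ts} {year}", FMT)

start = parse("Nov 5 00:04:56.087812")   # first kernel line of this boot
stop  = parse("Nov 5 00:05:01.761391")   # journal stopped just before switch-root
print(f"initrd phase lasted about {(stop - start).total_seconds():.3f} s")
```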
Nov 5 00:05:03.493985 kernel: SELinux: policy capability network_peer_controls=1 Nov 5 00:05:03.494000 kernel: SELinux: policy capability open_perms=1 Nov 5 00:05:03.494013 kernel: SELinux: policy capability extended_socket_class=1 Nov 5 00:05:03.494027 kernel: SELinux: policy capability always_check_network=0 Nov 5 00:05:03.494038 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 5 00:05:03.494055 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 5 00:05:03.494067 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 5 00:05:03.494078 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 5 00:05:03.494090 kernel: SELinux: policy capability userspace_initial_context=0 Nov 5 00:05:03.494102 kernel: audit: type=1403 audit(1762301102.658:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 5 00:05:03.494121 systemd[1]: Successfully loaded SELinux policy in 66.898ms. Nov 5 00:05:03.494141 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 8.414ms. Nov 5 00:05:03.494154 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Nov 5 00:05:03.494168 systemd[1]: Detected virtualization kvm. Nov 5 00:05:03.494180 systemd[1]: Detected architecture x86-64. Nov 5 00:05:03.494197 systemd[1]: Detected first boot. Nov 5 00:05:03.494218 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Nov 5 00:05:03.494234 zram_generator::config[1167]: No configuration found. Nov 5 00:05:03.494251 kernel: Guest personality initialized and is inactive Nov 5 00:05:03.494263 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Nov 5 00:05:03.494275 kernel: Initialized host personality Nov 5 00:05:03.494287 kernel: NET: Registered PF_VSOCK protocol family Nov 5 00:05:03.494298 systemd[1]: Populated /etc with preset unit settings. Nov 5 00:05:03.494311 systemd[1]: initrd-switch-root.service: Deactivated successfully. Nov 5 00:05:03.494326 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Nov 5 00:05:03.494339 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Nov 5 00:05:03.494352 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Nov 5 00:05:03.494369 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Nov 5 00:05:03.494382 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Nov 5 00:05:03.494395 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Nov 5 00:05:03.494410 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Nov 5 00:05:03.494423 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Nov 5 00:05:03.494436 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Nov 5 00:05:03.494449 systemd[1]: Created slice user.slice - User and Session Slice. Nov 5 00:05:03.494462 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 5 00:05:03.494476 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Nov 5 00:05:03.494489 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Nov 5 00:05:03.494503 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Nov 5 00:05:03.494516 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Nov 5 00:05:03.494529 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 5 00:05:03.494542 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Nov 5 00:05:03.494554 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 5 00:05:03.494567 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 5 00:05:03.494579 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Nov 5 00:05:03.494593 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Nov 5 00:05:03.494608 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Nov 5 00:05:03.494621 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Nov 5 00:05:03.494634 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 5 00:05:03.494647 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 5 00:05:03.494659 systemd[1]: Reached target slices.target - Slice Units. Nov 5 00:05:03.494685 systemd[1]: Reached target swap.target - Swaps. Nov 5 00:05:03.494700 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Nov 5 00:05:03.494713 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Nov 5 00:05:03.494725 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Nov 5 00:05:03.494738 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 5 00:05:03.494751 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 5 00:05:03.494768 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 5 00:05:03.494780 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Nov 5 00:05:03.494795 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Nov 5 00:05:03.494807 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Nov 5 00:05:03.494820 systemd[1]: Mounting media.mount - External Media Directory... Nov 5 00:05:03.494833 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 5 00:05:03.494846 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Nov 5 00:05:03.494858 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Nov 5 00:05:03.494871 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Nov 5 00:05:03.494886 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 5 00:05:03.494898 systemd[1]: Reached target machines.target - Containers. Nov 5 00:05:03.494912 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Nov 5 00:05:03.494925 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 5 00:05:03.494938 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... 
Nov 5 00:05:03.494951 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Nov 5 00:05:03.494964 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 5 00:05:03.494978 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 5 00:05:03.494991 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 5 00:05:03.495004 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Nov 5 00:05:03.495018 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 5 00:05:03.495031 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 5 00:05:03.495044 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Nov 5 00:05:03.495059 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Nov 5 00:05:03.495072 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Nov 5 00:05:03.495084 systemd[1]: Stopped systemd-fsck-usr.service. Nov 5 00:05:03.495098 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 5 00:05:03.495111 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 5 00:05:03.495124 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 5 00:05:03.495137 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 5 00:05:03.495152 kernel: ACPI: bus type drm_connector registered Nov 5 00:05:03.495164 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Nov 5 00:05:03.495176 kernel: fuse: init (API version 7.41) Nov 5 00:05:03.495189 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Nov 5 00:05:03.495204 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 5 00:05:03.495226 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 5 00:05:03.495239 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Nov 5 00:05:03.495251 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Nov 5 00:05:03.495265 systemd[1]: Mounted media.mount - External Media Directory. Nov 5 00:05:03.495277 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Nov 5 00:05:03.495308 systemd-journald[1245]: Collecting audit messages is disabled. Nov 5 00:05:03.495333 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Nov 5 00:05:03.495346 systemd-journald[1245]: Journal started Nov 5 00:05:03.495368 systemd-journald[1245]: Runtime Journal (/run/log/journal/b7d81d7548214527b3e79411cad8081b) is 5.9M, max 47.9M, 41.9M free. Nov 5 00:05:03.196033 systemd[1]: Queued start job for default target multi-user.target. Nov 5 00:05:03.209444 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Nov 5 00:05:03.209935 systemd[1]: systemd-journald.service: Deactivated successfully. Nov 5 00:05:03.499690 systemd[1]: Started systemd-journald.service - Journal Service. Nov 5 00:05:03.501765 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. 
Nov 5 00:05:03.503682 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Nov 5 00:05:03.505950 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 5 00:05:03.508255 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 5 00:05:03.508473 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Nov 5 00:05:03.510878 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 5 00:05:03.511101 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 5 00:05:03.513270 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 5 00:05:03.513486 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 5 00:05:03.515525 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 5 00:05:03.516034 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 5 00:05:03.518307 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 5 00:05:03.518515 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Nov 5 00:05:03.520579 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 5 00:05:03.520810 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 5 00:05:03.522937 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 5 00:05:03.525204 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 5 00:05:03.528472 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Nov 5 00:05:03.530948 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Nov 5 00:05:03.547593 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 5 00:05:03.549778 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Nov 5 00:05:03.553040 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Nov 5 00:05:03.555807 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Nov 5 00:05:03.557680 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 5 00:05:03.557709 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 5 00:05:03.560253 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Nov 5 00:05:03.562401 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 5 00:05:03.566062 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Nov 5 00:05:03.569153 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Nov 5 00:05:03.571037 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 5 00:05:03.572046 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Nov 5 00:05:03.572735 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 5 00:05:03.575785 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 5 00:05:03.588802 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... 
Nov 5 00:05:03.592171 systemd[1]: Starting systemd-sysusers.service - Create System Users... Nov 5 00:05:03.595284 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 5 00:05:03.597757 systemd-journald[1245]: Time spent on flushing to /var/log/journal/b7d81d7548214527b3e79411cad8081b is 16.735ms for 1026 entries. Nov 5 00:05:03.597757 systemd-journald[1245]: System Journal (/var/log/journal/b7d81d7548214527b3e79411cad8081b) is 8M, max 163.5M, 155.5M free. Nov 5 00:05:03.631024 systemd-journald[1245]: Received client request to flush runtime journal. Nov 5 00:05:03.631075 kernel: loop1: detected capacity change from 0 to 110984 Nov 5 00:05:03.598564 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Nov 5 00:05:03.601790 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Nov 5 00:05:03.604996 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Nov 5 00:05:03.612414 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Nov 5 00:05:03.617782 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Nov 5 00:05:03.620186 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 5 00:05:03.633957 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Nov 5 00:05:03.642875 systemd[1]: Finished systemd-sysusers.service - Create System Users. Nov 5 00:05:03.643776 kernel: loop2: detected capacity change from 0 to 224512 Nov 5 00:05:03.648803 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 5 00:05:03.651933 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 5 00:05:03.657372 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Nov 5 00:05:03.664797 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Nov 5 00:05:03.670691 kernel: loop3: detected capacity change from 0 to 128048 Nov 5 00:05:03.681006 systemd-tmpfiles[1305]: ACLs are not supported, ignoring. Nov 5 00:05:03.681045 systemd-tmpfiles[1305]: ACLs are not supported, ignoring. Nov 5 00:05:03.686910 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 5 00:05:03.699701 kernel: loop4: detected capacity change from 0 to 110984 Nov 5 00:05:03.707709 kernel: loop5: detected capacity change from 0 to 224512 Nov 5 00:05:03.712959 systemd[1]: Started systemd-userdbd.service - User Database Manager. Nov 5 00:05:03.717690 kernel: loop6: detected capacity change from 0 to 128048 Nov 5 00:05:03.725550 (sd-merge)[1311]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'. Nov 5 00:05:03.730007 (sd-merge)[1311]: Merged extensions into '/usr'. Nov 5 00:05:03.736889 systemd[1]: Reload requested from client PID 1286 ('systemd-sysext') (unit systemd-sysext.service)... Nov 5 00:05:03.736910 systemd[1]: Reloading... Nov 5 00:05:03.796995 systemd-resolved[1303]: Positive Trust Anchors: Nov 5 00:05:03.797015 systemd-resolved[1303]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 5 00:05:03.797019 systemd-resolved[1303]: . 
IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Nov 5 00:05:03.797052 systemd-resolved[1303]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 5 00:05:03.800990 zram_generator::config[1341]: No configuration found. Nov 5 00:05:03.805901 systemd-resolved[1303]: Defaulting to hostname 'linux'. Nov 5 00:05:03.990579 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 5 00:05:03.990900 systemd[1]: Reloading finished in 253 ms. Nov 5 00:05:04.018927 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 5 00:05:04.021178 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Nov 5 00:05:04.025748 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 5 00:05:04.055090 systemd[1]: Starting ensure-sysext.service... Nov 5 00:05:04.057396 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 5 00:05:04.075765 systemd-tmpfiles[1382]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Nov 5 00:05:04.075803 systemd-tmpfiles[1382]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Nov 5 00:05:04.076094 systemd-tmpfiles[1382]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 5 00:05:04.076383 systemd-tmpfiles[1382]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Nov 5 00:05:04.077312 systemd-tmpfiles[1382]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 5 00:05:04.077584 systemd-tmpfiles[1382]: ACLs are not supported, ignoring. Nov 5 00:05:04.077658 systemd-tmpfiles[1382]: ACLs are not supported, ignoring. Nov 5 00:05:04.101859 systemd-tmpfiles[1382]: Detected autofs mount point /boot during canonicalization of boot. Nov 5 00:05:04.101872 systemd-tmpfiles[1382]: Skipping /boot Nov 5 00:05:04.103659 systemd[1]: Reload requested from client PID 1381 ('systemctl') (unit ensure-sysext.service)... Nov 5 00:05:04.103692 systemd[1]: Reloading... Nov 5 00:05:04.112200 systemd-tmpfiles[1382]: Detected autofs mount point /boot during canonicalization of boot. Nov 5 00:05:04.112211 systemd-tmpfiles[1382]: Skipping /boot Nov 5 00:05:04.164697 zram_generator::config[1415]: No configuration found. Nov 5 00:05:04.332745 systemd[1]: Reloading finished in 228 ms. Nov 5 00:05:04.358238 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Nov 5 00:05:04.382384 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 5 00:05:04.392990 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 5 00:05:04.395724 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Nov 5 00:05:04.418512 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... 
Nov 5 00:05:04.423479 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Nov 5 00:05:04.426389 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 5 00:05:04.429614 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Nov 5 00:05:04.434786 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 5 00:05:04.434952 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 5 00:05:04.443256 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 5 00:05:04.448922 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 5 00:05:04.452711 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 5 00:05:04.454565 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 5 00:05:04.454695 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 5 00:05:04.454789 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 5 00:05:04.455955 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 5 00:05:04.456171 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 5 00:05:04.462045 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 5 00:05:04.462620 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 5 00:05:04.467443 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Nov 5 00:05:04.470222 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 5 00:05:04.470570 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 5 00:05:04.486224 systemd-udevd[1459]: Using default interface naming scheme 'v257'. Nov 5 00:05:04.486292 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 5 00:05:04.486545 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 5 00:05:04.488919 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 5 00:05:04.492341 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 5 00:05:04.496852 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 5 00:05:04.501505 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 5 00:05:04.503478 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 5 00:05:04.503641 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 5 00:05:04.503817 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Nov 5 00:05:04.505482 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Nov 5 00:05:04.509713 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 5 00:05:04.510080 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 5 00:05:04.512415 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 5 00:05:04.512719 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 5 00:05:04.515794 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 5 00:05:04.516034 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 5 00:05:04.518018 augenrules[1487]: No rules Nov 5 00:05:04.518999 systemd[1]: audit-rules.service: Deactivated successfully. Nov 5 00:05:04.519363 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 5 00:05:04.521634 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 5 00:05:04.521928 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 5 00:05:04.530953 systemd[1]: Finished ensure-sysext.service. Nov 5 00:05:04.540040 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 5 00:05:04.540168 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 5 00:05:04.543810 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Nov 5 00:05:04.545824 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 5 00:05:04.551580 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 5 00:05:04.554365 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Nov 5 00:05:04.557468 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 5 00:05:04.636801 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Nov 5 00:05:04.641517 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Nov 5 00:05:04.647608 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Nov 5 00:05:04.666127 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Nov 5 00:05:04.668463 systemd[1]: Reached target time-set.target - System Time Set. Nov 5 00:05:04.672887 systemd-networkd[1511]: lo: Link UP Nov 5 00:05:04.672900 systemd-networkd[1511]: lo: Gained carrier Nov 5 00:05:04.675266 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Nov 5 00:05:04.676656 systemd-networkd[1511]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Nov 5 00:05:04.676679 systemd-networkd[1511]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 5 00:05:04.677387 systemd-networkd[1511]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Nov 5 00:05:04.677446 systemd-networkd[1511]: eth0: Link UP Nov 5 00:05:04.677570 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Nov 5 00:05:04.677877 systemd-networkd[1511]: eth0: Gained carrier Nov 5 00:05:04.677892 systemd-networkd[1511]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Nov 5 00:05:04.679831 systemd[1]: Reached target network.target - Network. Nov 5 00:05:04.683909 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Nov 5 00:05:04.686982 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Nov 5 00:05:04.696692 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Nov 5 00:05:04.696725 kernel: mousedev: PS/2 mouse device common for all mice Nov 5 00:05:04.696979 systemd-networkd[1511]: eth0: DHCPv4 address 10.0.0.3/16, gateway 10.0.0.1 acquired from 10.0.0.1 Nov 5 00:05:04.698412 systemd-timesyncd[1499]: Network configuration changed, trying to establish connection. Nov 5 00:05:05.531291 systemd-resolved[1303]: Clock change detected. Flushing caches. Nov 5 00:05:05.531488 systemd-timesyncd[1499]: Contacted time server 10.0.0.1:123 (10.0.0.1). Nov 5 00:05:05.531590 systemd-timesyncd[1499]: Initial clock synchronization to Wed 2025-11-05 00:05:05.530942 UTC. Nov 5 00:05:05.535911 kernel: ACPI: button: Power Button [PWRF] Nov 5 00:05:05.550387 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Nov 5 00:05:05.567843 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Nov 5 00:05:05.568210 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Nov 5 00:05:05.568501 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Nov 5 00:05:05.687293 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 5 00:05:05.734801 kernel: kvm_amd: TSC scaling supported Nov 5 00:05:05.734852 kernel: kvm_amd: Nested Virtualization enabled Nov 5 00:05:05.734909 kernel: kvm_amd: Nested Paging enabled Nov 5 00:05:05.734924 kernel: kvm_amd: LBR virtualization supported Nov 5 00:05:05.734740 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 5 00:05:05.735543 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 5 00:05:05.737005 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Nov 5 00:05:05.737027 kernel: kvm_amd: Virtual GIF supported Nov 5 00:05:05.747932 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 5 00:05:05.773894 kernel: EDAC MC: Ver: 3.0.0 Nov 5 00:05:05.808701 ldconfig[1453]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 5 00:05:05.812038 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 5 00:05:05.814665 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Nov 5 00:05:05.818541 systemd[1]: Starting systemd-update-done.service - Update is Completed... Nov 5 00:05:05.836784 systemd[1]: Finished systemd-update-done.service - Update is Completed. Nov 5 00:05:05.838813 systemd[1]: Reached target sysinit.target - System Initialization. Nov 5 00:05:05.840629 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Nov 5 00:05:05.842655 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Nov 5 00:05:05.844675 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. 
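systemd-networkd reports a DHCPv4 lease of 10.0.0.3/16 with gateway 10.0.0.1 on eth0. As a small illustrative aid, the sketch below uses Python's ipaddress module to derive the network implied by that lease and to check that the gateway falls inside it; the values are copied from the log, and the check itself is just an example, not something the boot performs.

```python
# Illustrative sketch: derive the network for the DHCPv4 lease logged above
# and confirm the gateway address lies inside it.
import ipaddress

iface = ipaddress.ip_interface("10.0.0.3/16")   # address/prefix from the lease
gateway = ipaddress.ip_address("10.0.0.1")      # gateway from the lease

print("network:", iface.network)                # expected: 10.0.0.0/16
print("gateway in network:", gateway in iface.network)
```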
Nov 5 00:05:05.846701 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 5 00:05:05.848507 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Nov 5 00:05:05.850546 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Nov 5 00:05:05.852568 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 5 00:05:05.852599 systemd[1]: Reached target paths.target - Path Units. Nov 5 00:05:05.854074 systemd[1]: Reached target timers.target - Timer Units. Nov 5 00:05:05.856244 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 5 00:05:05.859381 systemd[1]: Starting docker.socket - Docker Socket for the API... Nov 5 00:05:05.863038 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Nov 5 00:05:05.865203 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Nov 5 00:05:05.867237 systemd[1]: Reached target ssh-access.target - SSH Access Available. Nov 5 00:05:05.873951 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Nov 5 00:05:05.876066 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Nov 5 00:05:05.878540 systemd[1]: Listening on docker.socket - Docker Socket for the API. Nov 5 00:05:05.880941 systemd[1]: Reached target sockets.target - Socket Units. Nov 5 00:05:05.882513 systemd[1]: Reached target basic.target - Basic System. Nov 5 00:05:05.884070 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Nov 5 00:05:05.884103 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Nov 5 00:05:05.885120 systemd[1]: Starting containerd.service - containerd container runtime... Nov 5 00:05:05.887765 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Nov 5 00:05:05.890178 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Nov 5 00:05:05.891800 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Nov 5 00:05:05.897252 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Nov 5 00:05:05.898939 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Nov 5 00:05:05.900295 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Nov 5 00:05:05.901248 jq[1577]: false Nov 5 00:05:05.902584 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Nov 5 00:05:05.905995 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Nov 5 00:05:05.909432 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Nov 5 00:05:05.911792 extend-filesystems[1578]: Found /dev/vda6 Nov 5 00:05:05.915639 extend-filesystems[1578]: Found /dev/vda9 Nov 5 00:05:05.918061 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Nov 5 00:05:05.919774 extend-filesystems[1578]: Checking size of /dev/vda9 Nov 5 00:05:05.924898 google_oslogin_nss_cache[1579]: oslogin_cache_refresh[1579]: Refreshing passwd entry cache Nov 5 00:05:05.923307 oslogin_cache_refresh[1579]: Refreshing passwd entry cache Nov 5 00:05:05.924909 systemd[1]: Starting systemd-logind.service - User Login Management... Nov 5 00:05:05.926679 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 5 00:05:05.927124 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 5 00:05:05.928397 systemd[1]: Starting update-engine.service - Update Engine... Nov 5 00:05:05.930478 google_oslogin_nss_cache[1579]: oslogin_cache_refresh[1579]: Failure getting users, quitting Nov 5 00:05:05.930478 google_oslogin_nss_cache[1579]: oslogin_cache_refresh[1579]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Nov 5 00:05:05.930478 google_oslogin_nss_cache[1579]: oslogin_cache_refresh[1579]: Refreshing group entry cache Nov 5 00:05:05.930031 oslogin_cache_refresh[1579]: Failure getting users, quitting Nov 5 00:05:05.930051 oslogin_cache_refresh[1579]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Nov 5 00:05:05.930103 oslogin_cache_refresh[1579]: Refreshing group entry cache Nov 5 00:05:05.933603 extend-filesystems[1578]: Resized partition /dev/vda9 Nov 5 00:05:05.933961 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Nov 5 00:05:05.942580 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Nov 5 00:05:05.943825 jq[1600]: true Nov 5 00:05:05.946582 google_oslogin_nss_cache[1579]: oslogin_cache_refresh[1579]: Failure getting groups, quitting Nov 5 00:05:05.946582 google_oslogin_nss_cache[1579]: oslogin_cache_refresh[1579]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Nov 5 00:05:05.945969 oslogin_cache_refresh[1579]: Failure getting groups, quitting Nov 5 00:05:05.945978 oslogin_cache_refresh[1579]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Nov 5 00:05:05.947163 extend-filesystems[1604]: resize2fs 1.47.3 (8-Jul-2025) Nov 5 00:05:05.949111 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 5 00:05:05.949401 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Nov 5 00:05:05.949719 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Nov 5 00:05:05.950168 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Nov 5 00:05:05.952915 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks Nov 5 00:05:05.954677 systemd[1]: motdgen.service: Deactivated successfully. Nov 5 00:05:05.956168 update_engine[1594]: I20251105 00:05:05.956095 1594 main.cc:92] Flatcar Update Engine starting Nov 5 00:05:05.957583 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Nov 5 00:05:05.963732 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 5 00:05:05.964488 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Nov 5 00:05:05.973575 kernel: EXT4-fs (vda9): resized filesystem to 1784827 Nov 5 00:05:05.982181 jq[1614]: true Nov 5 00:05:05.989817 (ntainerd)[1616]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Nov 5 00:05:05.997586 extend-filesystems[1604]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Nov 5 00:05:05.997586 extend-filesystems[1604]: old_desc_blocks = 1, new_desc_blocks = 1 Nov 5 00:05:05.997586 extend-filesystems[1604]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long. Nov 5 00:05:06.004976 extend-filesystems[1578]: Resized filesystem in /dev/vda9 Nov 5 00:05:06.009348 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 5 00:05:06.009629 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Nov 5 00:05:06.021863 tar[1612]: linux-amd64/LICENSE Nov 5 00:05:06.021863 tar[1612]: linux-amd64/helm Nov 5 00:05:06.038706 dbus-daemon[1575]: [system] SELinux support is enabled Nov 5 00:05:06.041064 systemd[1]: Started dbus.service - D-Bus System Message Bus. Nov 5 00:05:06.045838 systemd-logind[1590]: Watching system buttons on /dev/input/event2 (Power Button) Nov 5 00:05:06.045868 systemd-logind[1590]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Nov 5 00:05:06.047094 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 5 00:05:06.047466 update_engine[1594]: I20251105 00:05:06.047216 1594 update_check_scheduler.cc:74] Next update check in 11m8s Nov 5 00:05:06.047496 bash[1644]: Updated "/home/core/.ssh/authorized_keys" Nov 5 00:05:06.047124 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Nov 5 00:05:06.047397 systemd-logind[1590]: New seat seat0. Nov 5 00:05:06.050055 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 5 00:05:06.050069 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 5 00:05:06.052409 systemd[1]: Started systemd-logind.service - User Login Management. Nov 5 00:05:06.054491 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 5 00:05:06.058232 systemd[1]: Started update-engine.service - Update Engine. Nov 5 00:05:06.060745 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Nov 5 00:05:06.062581 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
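[Editor's aside] The extend-filesystems/resize2fs messages above give the root filesystem size in 4 KiB blocks before (456704) and after (1784827) the online resize of /dev/vda9. A quick sketch, with the block counts copied from the log, converts those figures to GiB.

    BLOCK_SIZE = 4096  # "(4k) blocks" per the resize2fs output above

    before_blocks = 456_704    # "resizing filesystem from 456704 ..."
    after_blocks = 1_784_827   # "... to 1784827 blocks"

    def gib(blocks: int) -> float:
        """Convert a 4 KiB block count to GiB."""
        return blocks * BLOCK_SIZE / 2**30

    print(f"before: {gib(before_blocks):.2f} GiB")  # ~1.74 GiB
    print(f"after:  {gib(after_blocks):.2f} GiB")   # ~6.81 GiB
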
Nov 5 00:05:06.148629 locksmithd[1646]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 5 00:05:06.231852 containerd[1616]: time="2025-11-05T00:05:06Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Nov 5 00:05:06.233646 containerd[1616]: time="2025-11-05T00:05:06.233598062Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Nov 5 00:05:06.243019 containerd[1616]: time="2025-11-05T00:05:06.242964384Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="10.63µs" Nov 5 00:05:06.243019 containerd[1616]: time="2025-11-05T00:05:06.243006282Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Nov 5 00:05:06.243072 containerd[1616]: time="2025-11-05T00:05:06.243026651Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Nov 5 00:05:06.243260 containerd[1616]: time="2025-11-05T00:05:06.243231344Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Nov 5 00:05:06.243260 containerd[1616]: time="2025-11-05T00:05:06.243252574Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Nov 5 00:05:06.243312 containerd[1616]: time="2025-11-05T00:05:06.243279845Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 5 00:05:06.243374 containerd[1616]: time="2025-11-05T00:05:06.243344236Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 5 00:05:06.243374 containerd[1616]: time="2025-11-05T00:05:06.243365776Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 5 00:05:06.243663 containerd[1616]: time="2025-11-05T00:05:06.243629230Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 5 00:05:06.243663 containerd[1616]: time="2025-11-05T00:05:06.243650079Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 5 00:05:06.243663 containerd[1616]: time="2025-11-05T00:05:06.243661471Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 5 00:05:06.243727 containerd[1616]: time="2025-11-05T00:05:06.243670087Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Nov 5 00:05:06.243775 containerd[1616]: time="2025-11-05T00:05:06.243754636Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Nov 5 00:05:06.244033 containerd[1616]: time="2025-11-05T00:05:06.244001699Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 5 00:05:06.244056 containerd[1616]: time="2025-11-05T00:05:06.244036785Z" level=info msg="skip loading plugin" error="lstat 
/var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 5 00:05:06.244056 containerd[1616]: time="2025-11-05T00:05:06.244047925Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Nov 5 00:05:06.244102 containerd[1616]: time="2025-11-05T00:05:06.244083382Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Nov 5 00:05:06.244344 containerd[1616]: time="2025-11-05T00:05:06.244315748Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Nov 5 00:05:06.244425 containerd[1616]: time="2025-11-05T00:05:06.244394575Z" level=info msg="metadata content store policy set" policy=shared Nov 5 00:05:06.249974 containerd[1616]: time="2025-11-05T00:05:06.249920006Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Nov 5 00:05:06.249974 containerd[1616]: time="2025-11-05T00:05:06.249957766Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Nov 5 00:05:06.249974 containerd[1616]: time="2025-11-05T00:05:06.249971963Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Nov 5 00:05:06.250046 containerd[1616]: time="2025-11-05T00:05:06.249984857Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Nov 5 00:05:06.250046 containerd[1616]: time="2025-11-05T00:05:06.249998423Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Nov 5 00:05:06.250046 containerd[1616]: time="2025-11-05T00:05:06.250013531Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Nov 5 00:05:06.250046 containerd[1616]: time="2025-11-05T00:05:06.250027227Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Nov 5 00:05:06.250046 containerd[1616]: time="2025-11-05T00:05:06.250039339Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Nov 5 00:05:06.250135 containerd[1616]: time="2025-11-05T00:05:06.250050530Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Nov 5 00:05:06.250135 containerd[1616]: time="2025-11-05T00:05:06.250066631Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Nov 5 00:05:06.250135 containerd[1616]: time="2025-11-05T00:05:06.250076830Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Nov 5 00:05:06.250135 containerd[1616]: time="2025-11-05T00:05:06.250089944Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Nov 5 00:05:06.250315 containerd[1616]: time="2025-11-05T00:05:06.250199670Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Nov 5 00:05:06.250315 containerd[1616]: time="2025-11-05T00:05:06.250230949Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Nov 5 00:05:06.250315 containerd[1616]: time="2025-11-05T00:05:06.250247760Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Nov 5 00:05:06.250315 
containerd[1616]: time="2025-11-05T00:05:06.250258100Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Nov 5 00:05:06.250315 containerd[1616]: time="2025-11-05T00:05:06.250268689Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Nov 5 00:05:06.250315 containerd[1616]: time="2025-11-05T00:05:06.250279810Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Nov 5 00:05:06.250315 containerd[1616]: time="2025-11-05T00:05:06.250290761Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Nov 5 00:05:06.250315 containerd[1616]: time="2025-11-05T00:05:06.250302162Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Nov 5 00:05:06.250315 containerd[1616]: time="2025-11-05T00:05:06.250313403Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Nov 5 00:05:06.250493 containerd[1616]: time="2025-11-05T00:05:06.250332800Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Nov 5 00:05:06.250493 containerd[1616]: time="2025-11-05T00:05:06.250344802Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Nov 5 00:05:06.250493 containerd[1616]: time="2025-11-05T00:05:06.250422528Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Nov 5 00:05:06.250493 containerd[1616]: time="2025-11-05T00:05:06.250436694Z" level=info msg="Start snapshots syncer" Nov 5 00:05:06.250493 containerd[1616]: time="2025-11-05T00:05:06.250454678Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Nov 5 00:05:06.250723 containerd[1616]: time="2025-11-05T00:05:06.250677786Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Nov 5 00:05:06.250723 containerd[1616]: time="2025-11-05T00:05:06.250721679Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Nov 5 00:05:06.250837 containerd[1616]: time="2025-11-05T00:05:06.250779888Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Nov 5 00:05:06.251029 containerd[1616]: time="2025-11-05T00:05:06.250867352Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Nov 5 00:05:06.251029 containerd[1616]: time="2025-11-05T00:05:06.251020369Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Nov 5 00:05:06.251098 containerd[1616]: time="2025-11-05T00:05:06.251032802Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Nov 5 00:05:06.251098 containerd[1616]: time="2025-11-05T00:05:06.251096131Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Nov 5 00:05:06.251146 containerd[1616]: time="2025-11-05T00:05:06.251108244Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Nov 5 00:05:06.251146 containerd[1616]: time="2025-11-05T00:05:06.251118863Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Nov 5 00:05:06.251146 containerd[1616]: time="2025-11-05T00:05:06.251129774Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Nov 5 00:05:06.251198 containerd[1616]: time="2025-11-05T00:05:06.251151184Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Nov 5 00:05:06.251198 containerd[1616]: 
time="2025-11-05T00:05:06.251162585Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Nov 5 00:05:06.251198 containerd[1616]: time="2025-11-05T00:05:06.251183084Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Nov 5 00:05:06.251259 containerd[1616]: time="2025-11-05T00:05:06.251218951Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 5 00:05:06.251259 containerd[1616]: time="2025-11-05T00:05:06.251237536Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 5 00:05:06.251259 containerd[1616]: time="2025-11-05T00:05:06.251246002Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 5 00:05:06.251339 containerd[1616]: time="2025-11-05T00:05:06.251255520Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 5 00:05:06.251339 containerd[1616]: time="2025-11-05T00:05:06.251332053Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Nov 5 00:05:06.251386 containerd[1616]: time="2025-11-05T00:05:06.251342693Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Nov 5 00:05:06.251386 containerd[1616]: time="2025-11-05T00:05:06.251354866Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Nov 5 00:05:06.251386 containerd[1616]: time="2025-11-05T00:05:06.251381546Z" level=info msg="runtime interface created" Nov 5 00:05:06.251386 containerd[1616]: time="2025-11-05T00:05:06.251387287Z" level=info msg="created NRI interface" Nov 5 00:05:06.251461 containerd[1616]: time="2025-11-05T00:05:06.251396143Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Nov 5 00:05:06.251461 containerd[1616]: time="2025-11-05T00:05:06.251406653Z" level=info msg="Connect containerd service" Nov 5 00:05:06.251461 containerd[1616]: time="2025-11-05T00:05:06.251438312Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 5 00:05:06.253811 containerd[1616]: time="2025-11-05T00:05:06.253750327Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 5 00:05:06.327515 tar[1612]: linux-amd64/README.md Nov 5 00:05:06.347976 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Nov 5 00:05:06.367838 containerd[1616]: time="2025-11-05T00:05:06.367753273Z" level=info msg="Start subscribing containerd event" Nov 5 00:05:06.367838 containerd[1616]: time="2025-11-05T00:05:06.367834185Z" level=info msg="Start recovering state" Nov 5 00:05:06.367986 containerd[1616]: time="2025-11-05T00:05:06.367953498Z" level=info msg="Start event monitor" Nov 5 00:05:06.368022 containerd[1616]: time="2025-11-05T00:05:06.367989005Z" level=info msg="Start cni network conf syncer for default" Nov 5 00:05:06.368022 containerd[1616]: time="2025-11-05T00:05:06.367997581Z" level=info msg="Start streaming server" Nov 5 00:05:06.368022 containerd[1616]: time="2025-11-05T00:05:06.368008882Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Nov 5 00:05:06.368022 containerd[1616]: time="2025-11-05T00:05:06.368016056Z" level=info msg="runtime interface starting up..." Nov 5 00:05:06.368022 containerd[1616]: time="2025-11-05T00:05:06.368022207Z" level=info msg="starting plugins..." Nov 5 00:05:06.368126 containerd[1616]: time="2025-11-05T00:05:06.368038057Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Nov 5 00:05:06.368510 containerd[1616]: time="2025-11-05T00:05:06.368439750Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 5 00:05:06.368649 containerd[1616]: time="2025-11-05T00:05:06.368635187Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 5 00:05:06.368814 containerd[1616]: time="2025-11-05T00:05:06.368765892Z" level=info msg="containerd successfully booted in 0.137441s" Nov 5 00:05:06.368853 systemd[1]: Started containerd.service - containerd container runtime. Nov 5 00:05:06.577007 sshd_keygen[1608]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 5 00:05:06.600091 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 5 00:05:06.603606 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 5 00:05:06.630620 systemd[1]: issuegen.service: Deactivated successfully. Nov 5 00:05:06.630869 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 5 00:05:06.634019 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 5 00:05:06.652478 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 5 00:05:06.655821 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 5 00:05:06.658439 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Nov 5 00:05:06.660342 systemd[1]: Reached target getty.target - Login Prompts. Nov 5 00:05:06.988998 systemd-networkd[1511]: eth0: Gained IPv6LL Nov 5 00:05:06.991584 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 5 00:05:06.994172 systemd[1]: Reached target network-online.target - Network is Online. Nov 5 00:05:06.997318 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Nov 5 00:05:07.000333 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 00:05:07.002264 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 5 00:05:07.039118 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 5 00:05:07.041611 systemd[1]: coreos-metadata.service: Deactivated successfully. Nov 5 00:05:07.041865 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Nov 5 00:05:07.044741 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Nov 5 00:05:07.704251 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 00:05:07.706610 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 5 00:05:07.708193 (kubelet)[1716]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 5 00:05:07.709634 systemd[1]: Startup finished in 2.193s (kernel) + 6.912s (initrd) + 4.284s (userspace) = 13.389s. Nov 5 00:05:08.105366 kubelet[1716]: E1105 00:05:08.105315 1716 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 5 00:05:08.109197 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 5 00:05:08.109397 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 5 00:05:08.109756 systemd[1]: kubelet.service: Consumed 959ms CPU time, 265.1M memory peak. Nov 5 00:05:08.626152 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 5 00:05:08.627388 systemd[1]: Started sshd@0-10.0.0.3:22-10.0.0.1:53874.service - OpenSSH per-connection server daemon (10.0.0.1:53874). Nov 5 00:05:08.708664 sshd[1729]: Accepted publickey for core from 10.0.0.1 port 53874 ssh2: RSA SHA256:76q7OxBnofV7CHovkMIRFzt50oWq5VGuNm0JMuzHE4A Nov 5 00:05:08.710242 sshd-session[1729]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 00:05:08.716487 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 5 00:05:08.717577 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 5 00:05:08.723253 systemd-logind[1590]: New session 1 of user core. Nov 5 00:05:08.739179 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 5 00:05:08.742048 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 5 00:05:08.765175 (systemd)[1734]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 5 00:05:08.767386 systemd-logind[1590]: New session c1 of user core. Nov 5 00:05:08.905993 systemd[1734]: Queued start job for default target default.target. Nov 5 00:05:08.917062 systemd[1734]: Created slice app.slice - User Application Slice. Nov 5 00:05:08.917086 systemd[1734]: Reached target paths.target - Paths. Nov 5 00:05:08.917124 systemd[1734]: Reached target timers.target - Timers. Nov 5 00:05:08.918521 systemd[1734]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 5 00:05:08.928893 systemd[1734]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 5 00:05:08.929071 systemd[1734]: Reached target sockets.target - Sockets. Nov 5 00:05:08.929112 systemd[1734]: Reached target basic.target - Basic System. Nov 5 00:05:08.929152 systemd[1734]: Reached target default.target - Main User Target. Nov 5 00:05:08.929188 systemd[1734]: Startup finished in 155ms. Nov 5 00:05:08.929523 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 5 00:05:08.942984 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 5 00:05:09.008833 systemd[1]: Started sshd@1-10.0.0.3:22-10.0.0.1:53882.service - OpenSSH per-connection server daemon (10.0.0.1:53882). 
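[Editor's aside] systemd reports the boot above as 2.193 s (kernel) + 6.912 s (initrd) + 4.284 s (userspace) = 13.389 s. The quick check below, with the figures taken from that line, confirms the sum and gives each phase's share of the total.

    # Boot phase durations (seconds) as reported by systemd above.
    phases = {"kernel": 2.193, "initrd": 6.912, "userspace": 4.284}

    total = sum(phases.values())
    print(f"total: {total:.3f} s")  # 13.389 s, matching the logged figure

    for name, secs in phases.items():
        print(f"{name:>9}: {secs:6.3f} s ({secs / total:5.1%})")
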
Nov 5 00:05:09.092322 sshd[1745]: Accepted publickey for core from 10.0.0.1 port 53882 ssh2: RSA SHA256:76q7OxBnofV7CHovkMIRFzt50oWq5VGuNm0JMuzHE4A Nov 5 00:05:09.093613 sshd-session[1745]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 00:05:09.097745 systemd-logind[1590]: New session 2 of user core. Nov 5 00:05:09.107997 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 5 00:05:09.160517 sshd[1748]: Connection closed by 10.0.0.1 port 53882 Nov 5 00:05:09.160845 sshd-session[1745]: pam_unix(sshd:session): session closed for user core Nov 5 00:05:09.169287 systemd[1]: sshd@1-10.0.0.3:22-10.0.0.1:53882.service: Deactivated successfully. Nov 5 00:05:09.171296 systemd[1]: session-2.scope: Deactivated successfully. Nov 5 00:05:09.172021 systemd-logind[1590]: Session 2 logged out. Waiting for processes to exit. Nov 5 00:05:09.174640 systemd[1]: Started sshd@2-10.0.0.3:22-10.0.0.1:53886.service - OpenSSH per-connection server daemon (10.0.0.1:53886). Nov 5 00:05:09.175228 systemd-logind[1590]: Removed session 2. Nov 5 00:05:09.236038 sshd[1754]: Accepted publickey for core from 10.0.0.1 port 53886 ssh2: RSA SHA256:76q7OxBnofV7CHovkMIRFzt50oWq5VGuNm0JMuzHE4A Nov 5 00:05:09.237204 sshd-session[1754]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 00:05:09.240905 systemd-logind[1590]: New session 3 of user core. Nov 5 00:05:09.251981 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 5 00:05:09.300476 sshd[1758]: Connection closed by 10.0.0.1 port 53886 Nov 5 00:05:09.300770 sshd-session[1754]: pam_unix(sshd:session): session closed for user core Nov 5 00:05:09.313421 systemd[1]: sshd@2-10.0.0.3:22-10.0.0.1:53886.service: Deactivated successfully. Nov 5 00:05:09.315154 systemd[1]: session-3.scope: Deactivated successfully. Nov 5 00:05:09.315850 systemd-logind[1590]: Session 3 logged out. Waiting for processes to exit. Nov 5 00:05:09.318428 systemd[1]: Started sshd@3-10.0.0.3:22-10.0.0.1:53900.service - OpenSSH per-connection server daemon (10.0.0.1:53900). Nov 5 00:05:09.318955 systemd-logind[1590]: Removed session 3. Nov 5 00:05:09.372512 sshd[1764]: Accepted publickey for core from 10.0.0.1 port 53900 ssh2: RSA SHA256:76q7OxBnofV7CHovkMIRFzt50oWq5VGuNm0JMuzHE4A Nov 5 00:05:09.373658 sshd-session[1764]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 00:05:09.377570 systemd-logind[1590]: New session 4 of user core. Nov 5 00:05:09.384991 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 5 00:05:09.437468 sshd[1768]: Connection closed by 10.0.0.1 port 53900 Nov 5 00:05:09.438002 sshd-session[1764]: pam_unix(sshd:session): session closed for user core Nov 5 00:05:09.450385 systemd[1]: sshd@3-10.0.0.3:22-10.0.0.1:53900.service: Deactivated successfully. Nov 5 00:05:09.452098 systemd[1]: session-4.scope: Deactivated successfully. Nov 5 00:05:09.452809 systemd-logind[1590]: Session 4 logged out. Waiting for processes to exit. Nov 5 00:05:09.455366 systemd[1]: Started sshd@4-10.0.0.3:22-10.0.0.1:53916.service - OpenSSH per-connection server daemon (10.0.0.1:53916). Nov 5 00:05:09.455903 systemd-logind[1590]: Removed session 4. 
Nov 5 00:05:09.506828 sshd[1774]: Accepted publickey for core from 10.0.0.1 port 53916 ssh2: RSA SHA256:76q7OxBnofV7CHovkMIRFzt50oWq5VGuNm0JMuzHE4A Nov 5 00:05:09.508596 sshd-session[1774]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 00:05:09.512433 systemd-logind[1590]: New session 5 of user core. Nov 5 00:05:09.522991 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 5 00:05:09.583273 sudo[1779]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 5 00:05:09.583574 sudo[1779]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 5 00:05:09.599721 sudo[1779]: pam_unix(sudo:session): session closed for user root Nov 5 00:05:09.601501 sshd[1778]: Connection closed by 10.0.0.1 port 53916 Nov 5 00:05:09.601923 sshd-session[1774]: pam_unix(sshd:session): session closed for user core Nov 5 00:05:09.615572 systemd[1]: sshd@4-10.0.0.3:22-10.0.0.1:53916.service: Deactivated successfully. Nov 5 00:05:09.617335 systemd[1]: session-5.scope: Deactivated successfully. Nov 5 00:05:09.618040 systemd-logind[1590]: Session 5 logged out. Waiting for processes to exit. Nov 5 00:05:09.620708 systemd[1]: Started sshd@5-10.0.0.3:22-10.0.0.1:53920.service - OpenSSH per-connection server daemon (10.0.0.1:53920). Nov 5 00:05:09.621249 systemd-logind[1590]: Removed session 5. Nov 5 00:05:09.677851 sshd[1785]: Accepted publickey for core from 10.0.0.1 port 53920 ssh2: RSA SHA256:76q7OxBnofV7CHovkMIRFzt50oWq5VGuNm0JMuzHE4A Nov 5 00:05:09.679044 sshd-session[1785]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 00:05:09.683209 systemd-logind[1590]: New session 6 of user core. Nov 5 00:05:09.696994 systemd[1]: Started session-6.scope - Session 6 of User core. Nov 5 00:05:09.750708 sudo[1790]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 5 00:05:09.751021 sudo[1790]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 5 00:05:09.758420 sudo[1790]: pam_unix(sudo:session): session closed for user root Nov 5 00:05:09.765750 sudo[1789]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Nov 5 00:05:09.766072 sudo[1789]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 5 00:05:09.775838 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 5 00:05:09.824178 augenrules[1812]: No rules Nov 5 00:05:09.825707 systemd[1]: audit-rules.service: Deactivated successfully. Nov 5 00:05:09.825982 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 5 00:05:09.827021 sudo[1789]: pam_unix(sudo:session): session closed for user root Nov 5 00:05:09.828627 sshd[1788]: Connection closed by 10.0.0.1 port 53920 Nov 5 00:05:09.828957 sshd-session[1785]: pam_unix(sshd:session): session closed for user core Nov 5 00:05:09.837476 systemd[1]: sshd@5-10.0.0.3:22-10.0.0.1:53920.service: Deactivated successfully. Nov 5 00:05:09.839294 systemd[1]: session-6.scope: Deactivated successfully. Nov 5 00:05:09.839991 systemd-logind[1590]: Session 6 logged out. Waiting for processes to exit. Nov 5 00:05:09.842545 systemd[1]: Started sshd@6-10.0.0.3:22-10.0.0.1:53930.service - OpenSSH per-connection server daemon (10.0.0.1:53930). Nov 5 00:05:09.843082 systemd-logind[1590]: Removed session 6. 
Nov 5 00:05:09.899629 sshd[1821]: Accepted publickey for core from 10.0.0.1 port 53930 ssh2: RSA SHA256:76q7OxBnofV7CHovkMIRFzt50oWq5VGuNm0JMuzHE4A Nov 5 00:05:09.900759 sshd-session[1821]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 00:05:09.904642 systemd-logind[1590]: New session 7 of user core. Nov 5 00:05:09.914984 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 5 00:05:09.968195 sudo[1825]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 5 00:05:09.968539 sudo[1825]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 5 00:05:10.330644 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 5 00:05:10.352155 (dockerd)[1845]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 5 00:05:10.605677 dockerd[1845]: time="2025-11-05T00:05:10.605553567Z" level=info msg="Starting up" Nov 5 00:05:10.606506 dockerd[1845]: time="2025-11-05T00:05:10.606475115Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Nov 5 00:05:10.617796 dockerd[1845]: time="2025-11-05T00:05:10.617749024Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Nov 5 00:05:11.368238 dockerd[1845]: time="2025-11-05T00:05:11.368178309Z" level=info msg="Loading containers: start." Nov 5 00:05:11.378900 kernel: Initializing XFRM netlink socket Nov 5 00:05:11.625537 systemd-networkd[1511]: docker0: Link UP Nov 5 00:05:11.629707 dockerd[1845]: time="2025-11-05T00:05:11.629666118Z" level=info msg="Loading containers: done." Nov 5 00:05:11.642916 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1246541268-merged.mount: Deactivated successfully. Nov 5 00:05:11.644609 dockerd[1845]: time="2025-11-05T00:05:11.644558171Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 5 00:05:11.644727 dockerd[1845]: time="2025-11-05T00:05:11.644641637Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Nov 5 00:05:11.644814 dockerd[1845]: time="2025-11-05T00:05:11.644761923Z" level=info msg="Initializing buildkit" Nov 5 00:05:11.673666 dockerd[1845]: time="2025-11-05T00:05:11.672493700Z" level=info msg="Completed buildkit initialization" Nov 5 00:05:11.678137 dockerd[1845]: time="2025-11-05T00:05:11.678098459Z" level=info msg="Daemon has completed initialization" Nov 5 00:05:11.678227 dockerd[1845]: time="2025-11-05T00:05:11.678153282Z" level=info msg="API listen on /run/docker.sock" Nov 5 00:05:11.678331 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 5 00:05:12.454642 containerd[1616]: time="2025-11-05T00:05:12.454600690Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\"" Nov 5 00:05:12.965530 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3358207497.mount: Deactivated successfully. 
Nov 5 00:05:13.859306 containerd[1616]: time="2025-11-05T00:05:13.859258203Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 00:05:13.860042 containerd[1616]: time="2025-11-05T00:05:13.860001647Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.9: active requests=0, bytes read=28837916" Nov 5 00:05:13.861116 containerd[1616]: time="2025-11-05T00:05:13.861069369Z" level=info msg="ImageCreate event name:\"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 00:05:13.863786 containerd[1616]: time="2025-11-05T00:05:13.863744795Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 00:05:13.864728 containerd[1616]: time="2025-11-05T00:05:13.864693474Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.9\" with image id \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\", size \"28834515\" in 1.410051086s" Nov 5 00:05:13.864728 containerd[1616]: time="2025-11-05T00:05:13.864727858Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\"" Nov 5 00:05:13.865313 containerd[1616]: time="2025-11-05T00:05:13.865282729Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\"" Nov 5 00:05:14.988182 containerd[1616]: time="2025-11-05T00:05:14.988135046Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 00:05:14.988938 containerd[1616]: time="2025-11-05T00:05:14.988894541Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.9: active requests=0, bytes read=24787027" Nov 5 00:05:14.990185 containerd[1616]: time="2025-11-05T00:05:14.990149073Z" level=info msg="ImageCreate event name:\"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 00:05:14.992545 containerd[1616]: time="2025-11-05T00:05:14.992498778Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 00:05:14.993307 containerd[1616]: time="2025-11-05T00:05:14.993277378Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.9\" with image id \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\", size \"26421706\" in 1.127969903s" Nov 5 00:05:14.993307 containerd[1616]: time="2025-11-05T00:05:14.993308296Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\"" Nov 5 00:05:14.993703 containerd[1616]: 
time="2025-11-05T00:05:14.993683249Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\"" Nov 5 00:05:16.473671 containerd[1616]: time="2025-11-05T00:05:16.473619059Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 00:05:16.474412 containerd[1616]: time="2025-11-05T00:05:16.474389864Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.9: active requests=0, bytes read=19176289" Nov 5 00:05:16.475674 containerd[1616]: time="2025-11-05T00:05:16.475629268Z" level=info msg="ImageCreate event name:\"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 00:05:16.478110 containerd[1616]: time="2025-11-05T00:05:16.478064203Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 00:05:16.478955 containerd[1616]: time="2025-11-05T00:05:16.478922603Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.9\" with image id \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\", size \"20810986\" in 1.485215679s" Nov 5 00:05:16.478955 containerd[1616]: time="2025-11-05T00:05:16.478954122Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\"" Nov 5 00:05:16.479462 containerd[1616]: time="2025-11-05T00:05:16.479431968Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\"" Nov 5 00:05:17.562080 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount365674221.mount: Deactivated successfully. 
Nov 5 00:05:18.295074 containerd[1616]: time="2025-11-05T00:05:18.295020406Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 00:05:18.295742 containerd[1616]: time="2025-11-05T00:05:18.295694279Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.9: active requests=0, bytes read=30924206" Nov 5 00:05:18.297270 containerd[1616]: time="2025-11-05T00:05:18.297218026Z" level=info msg="ImageCreate event name:\"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 00:05:18.299141 containerd[1616]: time="2025-11-05T00:05:18.299110324Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 00:05:18.299556 containerd[1616]: time="2025-11-05T00:05:18.299513681Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.9\" with image id \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\", repo tag \"registry.k8s.io/kube-proxy:v1.32.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\", size \"30923225\" in 1.820047558s" Nov 5 00:05:18.299596 containerd[1616]: time="2025-11-05T00:05:18.299553846Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\"" Nov 5 00:05:18.300028 containerd[1616]: time="2025-11-05T00:05:18.300003699Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Nov 5 00:05:18.360226 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 5 00:05:18.361767 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 00:05:18.564108 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 00:05:18.579151 (kubelet)[2146]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 5 00:05:18.646432 kubelet[2146]: E1105 00:05:18.646372 2146 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 5 00:05:18.653179 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 5 00:05:18.653398 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 5 00:05:18.653812 systemd[1]: kubelet.service: Consumed 246ms CPU time, 110.5M memory peak. Nov 5 00:05:19.236398 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1311353437.mount: Deactivated successfully. 
Nov 5 00:05:19.882174 containerd[1616]: time="2025-11-05T00:05:19.882122051Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 00:05:19.882822 containerd[1616]: time="2025-11-05T00:05:19.882774524Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Nov 5 00:05:19.883897 containerd[1616]: time="2025-11-05T00:05:19.883852766Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 00:05:19.886357 containerd[1616]: time="2025-11-05T00:05:19.886330361Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 00:05:19.887277 containerd[1616]: time="2025-11-05T00:05:19.887253863Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.587221781s" Nov 5 00:05:19.887321 containerd[1616]: time="2025-11-05T00:05:19.887279982Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Nov 5 00:05:19.887748 containerd[1616]: time="2025-11-05T00:05:19.887726419Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Nov 5 00:05:20.424900 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1518444594.mount: Deactivated successfully. 
Nov 5 00:05:20.431391 containerd[1616]: time="2025-11-05T00:05:20.431348220Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 5 00:05:20.432118 containerd[1616]: time="2025-11-05T00:05:20.432072337Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Nov 5 00:05:20.433098 containerd[1616]: time="2025-11-05T00:05:20.433062765Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 5 00:05:20.434813 containerd[1616]: time="2025-11-05T00:05:20.434777510Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 5 00:05:20.435401 containerd[1616]: time="2025-11-05T00:05:20.435357066Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 547.603477ms" Nov 5 00:05:20.435401 containerd[1616]: time="2025-11-05T00:05:20.435390760Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Nov 5 00:05:20.435863 containerd[1616]: time="2025-11-05T00:05:20.435837627Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Nov 5 00:05:21.000681 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2200590472.mount: Deactivated successfully. 
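[Editor's aside] Each "Pulled image ... size X in Y" entry above pairs the image size in bytes with the pull duration. The sketch below, with the numbers copied from those entries, converts a few of them into effective pull throughput; the table of values is just a transcription, not anything containerd emits.

    # (size in bytes, pull duration in seconds) copied from the
    # "Pulled image ..." entries above.
    pulls = {
        "kube-apiserver:v1.32.9": (28_834_515, 1.410051086),
        "kube-proxy:v1.32.9":     (30_923_225, 1.820047558),
        "coredns:v1.11.3":        (18_562_039, 1.587221781),
        "pause:3.10":             (320_368,    0.547603477),
    }

    for image, (size, secs) in pulls.items():
        mib_per_s = size / 2**20 / secs
        print(f"{image:<28} {mib_per_s:6.1f} MiB/s")
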
Nov 5 00:05:23.137222 containerd[1616]: time="2025-11-05T00:05:23.137161649Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 00:05:23.137866 containerd[1616]: time="2025-11-05T00:05:23.137802511Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682056" Nov 5 00:05:23.138936 containerd[1616]: time="2025-11-05T00:05:23.138905048Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 00:05:23.141410 containerd[1616]: time="2025-11-05T00:05:23.141364259Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 00:05:23.142324 containerd[1616]: time="2025-11-05T00:05:23.142292269Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.706426739s" Nov 5 00:05:23.142370 containerd[1616]: time="2025-11-05T00:05:23.142322927Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Nov 5 00:05:25.214542 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 00:05:25.214704 systemd[1]: kubelet.service: Consumed 246ms CPU time, 110.5M memory peak. Nov 5 00:05:25.216845 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 00:05:25.239328 systemd[1]: Reload requested from client PID 2295 ('systemctl') (unit session-7.scope)... Nov 5 00:05:25.239344 systemd[1]: Reloading... Nov 5 00:05:25.301905 zram_generator::config[2338]: No configuration found. Nov 5 00:05:25.604423 systemd[1]: Reloading finished in 364 ms. Nov 5 00:05:25.682509 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 5 00:05:25.682606 systemd[1]: kubelet.service: Failed with result 'signal'. Nov 5 00:05:25.682942 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 00:05:25.682991 systemd[1]: kubelet.service: Consumed 146ms CPU time, 98.2M memory peak. Nov 5 00:05:25.684458 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 00:05:25.864126 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 00:05:25.868093 (kubelet)[2387]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 5 00:05:25.902553 kubelet[2387]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 5 00:05:25.902553 kubelet[2387]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 5 00:05:25.902553 kubelet[2387]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 5 00:05:25.902786 kubelet[2387]: I1105 00:05:25.902605 2387 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 5 00:05:26.255333 kubelet[2387]: I1105 00:05:26.255288 2387 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Nov 5 00:05:26.255333 kubelet[2387]: I1105 00:05:26.255320 2387 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 5 00:05:26.255641 kubelet[2387]: I1105 00:05:26.255617 2387 server.go:954] "Client rotation is on, will bootstrap in background" Nov 5 00:05:26.283104 kubelet[2387]: E1105 00:05:26.283061 2387 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.3:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.3:6443: connect: connection refused" logger="UnhandledError" Nov 5 00:05:26.283817 kubelet[2387]: I1105 00:05:26.283786 2387 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 5 00:05:26.291563 kubelet[2387]: I1105 00:05:26.291542 2387 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 5 00:05:26.296391 kubelet[2387]: I1105 00:05:26.296374 2387 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Nov 5 00:05:26.297432 kubelet[2387]: I1105 00:05:26.297392 2387 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 5 00:05:26.297590 kubelet[2387]: I1105 00:05:26.297422 2387 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 5 00:05:26.297701 kubelet[2387]: I1105 00:05:26.297591 2387 
topology_manager.go:138] "Creating topology manager with none policy" Nov 5 00:05:26.297701 kubelet[2387]: I1105 00:05:26.297601 2387 container_manager_linux.go:304] "Creating device plugin manager" Nov 5 00:05:26.297751 kubelet[2387]: I1105 00:05:26.297720 2387 state_mem.go:36] "Initialized new in-memory state store" Nov 5 00:05:26.300412 kubelet[2387]: I1105 00:05:26.300387 2387 kubelet.go:446] "Attempting to sync node with API server" Nov 5 00:05:26.300450 kubelet[2387]: I1105 00:05:26.300414 2387 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 5 00:05:26.300450 kubelet[2387]: I1105 00:05:26.300439 2387 kubelet.go:352] "Adding apiserver pod source" Nov 5 00:05:26.300450 kubelet[2387]: I1105 00:05:26.300449 2387 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 5 00:05:26.302061 kubelet[2387]: W1105 00:05:26.301980 2387 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.3:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.3:6443: connect: connection refused Nov 5 00:05:26.302061 kubelet[2387]: E1105 00:05:26.302029 2387 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.3:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.3:6443: connect: connection refused" logger="UnhandledError" Nov 5 00:05:26.302744 kubelet[2387]: I1105 00:05:26.302458 2387 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Nov 5 00:05:26.303322 kubelet[2387]: W1105 00:05:26.303269 2387 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.3:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.3:6443: connect: connection refused Nov 5 00:05:26.303372 kubelet[2387]: E1105 00:05:26.303322 2387 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.3:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.3:6443: connect: connection refused" logger="UnhandledError" Nov 5 00:05:26.303490 kubelet[2387]: I1105 00:05:26.303469 2387 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 5 00:05:26.304135 kubelet[2387]: W1105 00:05:26.304114 2387 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
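[Annotation, not part of the journal] The repeated "dial tcp 10.0.0.3:6443: connect: connection refused" errors above are expected at this stage: the kubelet has just been told to load static pod manifests from /etc/kubernetes/manifests, so the API server it is trying to list Services and Nodes from has not been started yet. A minimal Go sketch of the same connectivity check — the address is taken from the log, everything else is illustrative:

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// The reflector failures above amount to this dial failing while the
    	// kube-apiserver static pod has not come up yet.
    	conn, err := net.DialTimeout("tcp", "10.0.0.3:6443", 2*time.Second)
    	if err != nil {
    		fmt.Println("control plane not reachable yet:", err) // e.g. connect: connection refused
    		return
    	}
    	conn.Close()
    	fmt.Println("control plane port is accepting connections")
    }
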
Nov 5 00:05:26.306069 kubelet[2387]: I1105 00:05:26.306042 2387 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 5 00:05:26.306107 kubelet[2387]: I1105 00:05:26.306089 2387 server.go:1287] "Started kubelet" Nov 5 00:05:26.311184 kubelet[2387]: I1105 00:05:26.309934 2387 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Nov 5 00:05:26.311184 kubelet[2387]: I1105 00:05:26.310212 2387 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 5 00:05:26.311184 kubelet[2387]: I1105 00:05:26.310533 2387 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 5 00:05:26.311184 kubelet[2387]: I1105 00:05:26.311032 2387 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 5 00:05:26.311184 kubelet[2387]: I1105 00:05:26.311107 2387 server.go:479] "Adding debug handlers to kubelet server" Nov 5 00:05:26.312222 kubelet[2387]: I1105 00:05:26.311775 2387 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 5 00:05:26.312571 kubelet[2387]: E1105 00:05:26.312527 2387 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 5 00:05:26.312813 kubelet[2387]: E1105 00:05:26.312777 2387 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 5 00:05:26.312848 kubelet[2387]: I1105 00:05:26.312821 2387 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 5 00:05:26.313688 kubelet[2387]: I1105 00:05:26.312979 2387 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 5 00:05:26.313688 kubelet[2387]: I1105 00:05:26.313034 2387 reconciler.go:26] "Reconciler: start to sync state" Nov 5 00:05:26.313688 kubelet[2387]: W1105 00:05:26.313290 2387 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.3:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.3:6443: connect: connection refused Nov 5 00:05:26.313688 kubelet[2387]: E1105 00:05:26.313320 2387 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.3:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.3:6443: connect: connection refused" logger="UnhandledError" Nov 5 00:05:26.313688 kubelet[2387]: I1105 00:05:26.313481 2387 factory.go:221] Registration of the systemd container factory successfully Nov 5 00:05:26.313688 kubelet[2387]: I1105 00:05:26.313542 2387 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 5 00:05:26.314449 kubelet[2387]: E1105 00:05:26.314417 2387 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.3:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.3:6443: connect: connection refused" interval="200ms" Nov 5 00:05:26.314547 kubelet[2387]: I1105 00:05:26.314524 2387 factory.go:221] Registration of the containerd container factory successfully Nov 5 00:05:26.315063 kubelet[2387]: E1105 00:05:26.314088 2387 event.go:368] "Unable to write event (may 
retry after sleeping)" err="Post \"https://10.0.0.3:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.3:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1874f3898cbe719e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-11-05 00:05:26.306066846 +0000 UTC m=+0.434529083,LastTimestamp:2025-11-05 00:05:26.306066846 +0000 UTC m=+0.434529083,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Nov 5 00:05:26.328082 kubelet[2387]: I1105 00:05:26.327730 2387 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 5 00:05:26.328082 kubelet[2387]: I1105 00:05:26.327745 2387 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 5 00:05:26.328082 kubelet[2387]: I1105 00:05:26.327768 2387 state_mem.go:36] "Initialized new in-memory state store" Nov 5 00:05:26.329818 kubelet[2387]: I1105 00:05:26.329777 2387 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 5 00:05:26.331146 kubelet[2387]: I1105 00:05:26.331110 2387 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 5 00:05:26.331146 kubelet[2387]: I1105 00:05:26.331142 2387 status_manager.go:227] "Starting to sync pod status with apiserver" Nov 5 00:05:26.331329 kubelet[2387]: I1105 00:05:26.331162 2387 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 5 00:05:26.331329 kubelet[2387]: I1105 00:05:26.331169 2387 kubelet.go:2382] "Starting kubelet main sync loop" Nov 5 00:05:26.331329 kubelet[2387]: E1105 00:05:26.331221 2387 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 5 00:05:26.331731 kubelet[2387]: W1105 00:05:26.331669 2387 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.3:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.3:6443: connect: connection refused Nov 5 00:05:26.331731 kubelet[2387]: E1105 00:05:26.331713 2387 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.3:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.3:6443: connect: connection refused" logger="UnhandledError" Nov 5 00:05:26.413811 kubelet[2387]: E1105 00:05:26.413769 2387 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 5 00:05:26.432093 kubelet[2387]: E1105 00:05:26.432059 2387 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 5 00:05:26.514468 kubelet[2387]: E1105 00:05:26.514383 2387 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 5 00:05:26.515764 kubelet[2387]: E1105 00:05:26.515714 2387 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.3:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.3:6443: 
connect: connection refused" interval="400ms" Nov 5 00:05:26.615043 kubelet[2387]: E1105 00:05:26.615005 2387 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 5 00:05:26.632169 kubelet[2387]: E1105 00:05:26.632134 2387 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 5 00:05:26.715613 kubelet[2387]: E1105 00:05:26.715568 2387 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 5 00:05:26.816490 kubelet[2387]: E1105 00:05:26.816416 2387 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 5 00:05:26.820284 kubelet[2387]: I1105 00:05:26.820251 2387 policy_none.go:49] "None policy: Start" Nov 5 00:05:26.820284 kubelet[2387]: I1105 00:05:26.820271 2387 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 5 00:05:26.820284 kubelet[2387]: I1105 00:05:26.820282 2387 state_mem.go:35] "Initializing new in-memory state store" Nov 5 00:05:26.825941 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Nov 5 00:05:26.840625 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Nov 5 00:05:26.843776 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Nov 5 00:05:26.864759 kubelet[2387]: I1105 00:05:26.864661 2387 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 5 00:05:26.864863 kubelet[2387]: I1105 00:05:26.864856 2387 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 5 00:05:26.864905 kubelet[2387]: I1105 00:05:26.864866 2387 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 5 00:05:26.865141 kubelet[2387]: I1105 00:05:26.865120 2387 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 5 00:05:26.865939 kubelet[2387]: E1105 00:05:26.865914 2387 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 5 00:05:26.865994 kubelet[2387]: E1105 00:05:26.865948 2387 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Nov 5 00:05:26.916464 kubelet[2387]: E1105 00:05:26.916430 2387 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.3:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.3:6443: connect: connection refused" interval="800ms" Nov 5 00:05:26.966366 kubelet[2387]: I1105 00:05:26.966324 2387 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 5 00:05:26.966576 kubelet[2387]: E1105 00:05:26.966549 2387 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.3:6443/api/v1/nodes\": dial tcp 10.0.0.3:6443: connect: connection refused" node="localhost" Nov 5 00:05:27.039725 systemd[1]: Created slice kubepods-burstable-poda1d51be1ff02022474f2598f6e43038f.slice - libcontainer container kubepods-burstable-poda1d51be1ff02022474f2598f6e43038f.slice. 
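[Annotation, not part of the journal] The lease-controller retries above back off from interval="200ms" to "400ms" and then "800ms", i.e. the retry interval doubles on each failure. A small Go sketch of that progression; the 7s cap is an assumption, not taken from the log:

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	interval := 200 * time.Millisecond
    	maxInterval := 7 * time.Second // assumed cap, not visible in the log
    	for i := 0; i < 6; i++ {
    		fmt.Println("would retry after", interval)
    		interval *= 2
    		if interval > maxInterval {
    			interval = maxInterval
    		}
    	}
    }
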
Nov 5 00:05:27.050836 kubelet[2387]: E1105 00:05:27.050796 2387 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 5 00:05:27.053013 systemd[1]: Created slice kubepods-burstable-poda005e50c14d9f254d8ad75cc820f0f22.slice - libcontainer container kubepods-burstable-poda005e50c14d9f254d8ad75cc820f0f22.slice. Nov 5 00:05:27.073930 kubelet[2387]: E1105 00:05:27.073030 2387 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 5 00:05:27.075000 systemd[1]: Created slice kubepods-burstable-pod4654b122dbb389158fe3c0766e603624.slice - libcontainer container kubepods-burstable-pod4654b122dbb389158fe3c0766e603624.slice. Nov 5 00:05:27.076954 kubelet[2387]: E1105 00:05:27.076930 2387 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 5 00:05:27.118303 kubelet[2387]: I1105 00:05:27.118265 2387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 5 00:05:27.118347 kubelet[2387]: I1105 00:05:27.118325 2387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 5 00:05:27.118413 kubelet[2387]: I1105 00:05:27.118388 2387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a1d51be1ff02022474f2598f6e43038f-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a1d51be1ff02022474f2598f6e43038f\") " pod="kube-system/kube-scheduler-localhost" Nov 5 00:05:27.118449 kubelet[2387]: I1105 00:05:27.118430 2387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a005e50c14d9f254d8ad75cc820f0f22-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"a005e50c14d9f254d8ad75cc820f0f22\") " pod="kube-system/kube-apiserver-localhost" Nov 5 00:05:27.118530 kubelet[2387]: I1105 00:05:27.118502 2387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 5 00:05:27.118597 kubelet[2387]: I1105 00:05:27.118537 2387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 5 00:05:27.118597 kubelet[2387]: I1105 00:05:27.118562 2387 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a005e50c14d9f254d8ad75cc820f0f22-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"a005e50c14d9f254d8ad75cc820f0f22\") " pod="kube-system/kube-apiserver-localhost" Nov 5 00:05:27.118597 kubelet[2387]: I1105 00:05:27.118584 2387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a005e50c14d9f254d8ad75cc820f0f22-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"a005e50c14d9f254d8ad75cc820f0f22\") " pod="kube-system/kube-apiserver-localhost" Nov 5 00:05:27.118693 kubelet[2387]: I1105 00:05:27.118606 2387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 5 00:05:27.143694 kubelet[2387]: W1105 00:05:27.143636 2387 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.3:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.3:6443: connect: connection refused Nov 5 00:05:27.143745 kubelet[2387]: E1105 00:05:27.143694 2387 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.3:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.3:6443: connect: connection refused" logger="UnhandledError" Nov 5 00:05:27.168985 kubelet[2387]: I1105 00:05:27.168965 2387 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 5 00:05:27.169297 kubelet[2387]: E1105 00:05:27.169263 2387 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.3:6443/api/v1/nodes\": dial tcp 10.0.0.3:6443: connect: connection refused" node="localhost" Nov 5 00:05:27.351916 kubelet[2387]: E1105 00:05:27.351850 2387 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 00:05:27.352439 containerd[1616]: time="2025-11-05T00:05:27.352402885Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a1d51be1ff02022474f2598f6e43038f,Namespace:kube-system,Attempt:0,}" Nov 5 00:05:27.371556 containerd[1616]: time="2025-11-05T00:05:27.371514450Z" level=info msg="connecting to shim ce5e326a4fff04a75246268f77928b19767a134082eb875d7150dd9396b91e72" address="unix:///run/containerd/s/0b610a608d800fa55c0228698061e3f13cf97169a0d25192c64a4f83da5b37e9" namespace=k8s.io protocol=ttrpc version=3 Nov 5 00:05:27.375654 kubelet[2387]: E1105 00:05:27.375628 2387 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 00:05:27.376356 containerd[1616]: time="2025-11-05T00:05:27.376117240Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a005e50c14d9f254d8ad75cc820f0f22,Namespace:kube-system,Attempt:0,}" Nov 5 00:05:27.378781 kubelet[2387]: E1105 00:05:27.377247 2387 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 00:05:27.378828 containerd[1616]: time="2025-11-05T00:05:27.377516884Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4654b122dbb389158fe3c0766e603624,Namespace:kube-system,Attempt:0,}" Nov 5 00:05:27.403083 systemd[1]: Started cri-containerd-ce5e326a4fff04a75246268f77928b19767a134082eb875d7150dd9396b91e72.scope - libcontainer container ce5e326a4fff04a75246268f77928b19767a134082eb875d7150dd9396b91e72. Nov 5 00:05:27.409802 containerd[1616]: time="2025-11-05T00:05:27.409763186Z" level=info msg="connecting to shim c129c4ad39019b7325b24f25386d49c3316138ae3f26340e68023fc79d0c0e01" address="unix:///run/containerd/s/d505b43d3e8ce309a89f388e7b5dfa7e19449ad5a09640531eadb960a29d0fc6" namespace=k8s.io protocol=ttrpc version=3 Nov 5 00:05:27.410319 containerd[1616]: time="2025-11-05T00:05:27.410280135Z" level=info msg="connecting to shim 76a2609e0e17dfe5e9e35d55dabfdbf8ed3de0f7425dd3762e8134ce8575656f" address="unix:///run/containerd/s/1b3f75b16791b47ddfe4626beb11303903a4af3448c7ea8e96ebb2282ae887b7" namespace=k8s.io protocol=ttrpc version=3 Nov 5 00:05:27.436991 systemd[1]: Started cri-containerd-76a2609e0e17dfe5e9e35d55dabfdbf8ed3de0f7425dd3762e8134ce8575656f.scope - libcontainer container 76a2609e0e17dfe5e9e35d55dabfdbf8ed3de0f7425dd3762e8134ce8575656f. Nov 5 00:05:27.441144 systemd[1]: Started cri-containerd-c129c4ad39019b7325b24f25386d49c3316138ae3f26340e68023fc79d0c0e01.scope - libcontainer container c129c4ad39019b7325b24f25386d49c3316138ae3f26340e68023fc79d0c0e01. Nov 5 00:05:27.467637 containerd[1616]: time="2025-11-05T00:05:27.467597335Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a1d51be1ff02022474f2598f6e43038f,Namespace:kube-system,Attempt:0,} returns sandbox id \"ce5e326a4fff04a75246268f77928b19767a134082eb875d7150dd9396b91e72\"" Nov 5 00:05:27.468754 kubelet[2387]: E1105 00:05:27.468714 2387 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 00:05:27.471133 containerd[1616]: time="2025-11-05T00:05:27.471094152Z" level=info msg="CreateContainer within sandbox \"ce5e326a4fff04a75246268f77928b19767a134082eb875d7150dd9396b91e72\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 5 00:05:27.485949 containerd[1616]: time="2025-11-05T00:05:27.485892889Z" level=info msg="Container 2bd60c4d571d17e479968473b9a8478064d27dc4b1894f00119e837c57f685cb: CDI devices from CRI Config.CDIDevices: []" Nov 5 00:05:27.486531 containerd[1616]: time="2025-11-05T00:05:27.486488175Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a005e50c14d9f254d8ad75cc820f0f22,Namespace:kube-system,Attempt:0,} returns sandbox id \"76a2609e0e17dfe5e9e35d55dabfdbf8ed3de0f7425dd3762e8134ce8575656f\"" Nov 5 00:05:27.487103 kubelet[2387]: E1105 00:05:27.487068 2387 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 00:05:27.488920 containerd[1616]: time="2025-11-05T00:05:27.488837370Z" level=info msg="CreateContainer within sandbox \"76a2609e0e17dfe5e9e35d55dabfdbf8ed3de0f7425dd3762e8134ce8575656f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 5 00:05:27.496299 
containerd[1616]: time="2025-11-05T00:05:27.496258655Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4654b122dbb389158fe3c0766e603624,Namespace:kube-system,Attempt:0,} returns sandbox id \"c129c4ad39019b7325b24f25386d49c3316138ae3f26340e68023fc79d0c0e01\"" Nov 5 00:05:27.496911 containerd[1616]: time="2025-11-05T00:05:27.496889247Z" level=info msg="CreateContainer within sandbox \"ce5e326a4fff04a75246268f77928b19767a134082eb875d7150dd9396b91e72\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"2bd60c4d571d17e479968473b9a8478064d27dc4b1894f00119e837c57f685cb\"" Nov 5 00:05:27.497263 kubelet[2387]: E1105 00:05:27.497234 2387 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 00:05:27.497303 containerd[1616]: time="2025-11-05T00:05:27.497244904Z" level=info msg="StartContainer for \"2bd60c4d571d17e479968473b9a8478064d27dc4b1894f00119e837c57f685cb\"" Nov 5 00:05:27.498142 containerd[1616]: time="2025-11-05T00:05:27.498104907Z" level=info msg="connecting to shim 2bd60c4d571d17e479968473b9a8478064d27dc4b1894f00119e837c57f685cb" address="unix:///run/containerd/s/0b610a608d800fa55c0228698061e3f13cf97169a0d25192c64a4f83da5b37e9" protocol=ttrpc version=3 Nov 5 00:05:27.499757 containerd[1616]: time="2025-11-05T00:05:27.499716198Z" level=info msg="CreateContainer within sandbox \"c129c4ad39019b7325b24f25386d49c3316138ae3f26340e68023fc79d0c0e01\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 5 00:05:27.501262 containerd[1616]: time="2025-11-05T00:05:27.501230958Z" level=info msg="Container 614991955ae11e9ca9a186c209226f9fc0059b000898add0228de2d8e6c05afa: CDI devices from CRI Config.CDIDevices: []" Nov 5 00:05:27.508902 containerd[1616]: time="2025-11-05T00:05:27.508692769Z" level=info msg="CreateContainer within sandbox \"76a2609e0e17dfe5e9e35d55dabfdbf8ed3de0f7425dd3762e8134ce8575656f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"614991955ae11e9ca9a186c209226f9fc0059b000898add0228de2d8e6c05afa\"" Nov 5 00:05:27.509323 containerd[1616]: time="2025-11-05T00:05:27.509295800Z" level=info msg="StartContainer for \"614991955ae11e9ca9a186c209226f9fc0059b000898add0228de2d8e6c05afa\"" Nov 5 00:05:27.510467 containerd[1616]: time="2025-11-05T00:05:27.510395712Z" level=info msg="connecting to shim 614991955ae11e9ca9a186c209226f9fc0059b000898add0228de2d8e6c05afa" address="unix:///run/containerd/s/1b3f75b16791b47ddfe4626beb11303903a4af3448c7ea8e96ebb2282ae887b7" protocol=ttrpc version=3 Nov 5 00:05:27.512732 containerd[1616]: time="2025-11-05T00:05:27.512695143Z" level=info msg="Container 306d9c567faa27ce76519dd7051a32eeff2e4e5b2313e92181c752ab5fe030ba: CDI devices from CRI Config.CDIDevices: []" Nov 5 00:05:27.517993 systemd[1]: Started cri-containerd-2bd60c4d571d17e479968473b9a8478064d27dc4b1894f00119e837c57f685cb.scope - libcontainer container 2bd60c4d571d17e479968473b9a8478064d27dc4b1894f00119e837c57f685cb. 
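[Annotation, not part of the journal] The "connecting to shim ... address=unix:///run/containerd/s/..." lines here, like the earlier failed crio factory registration against /var/run/crio/crio.sock, come down to dialing a unix socket that either exists or does not. A hedged Go sketch of that check — the crio path is taken from the log, the containerd socket path is the assumed default:

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    // probe reports whether a runtime socket accepts connections; a missing
    // socket surfaces as "no such file or directory", as in the crio log line.
    func probe(path string) {
    	conn, err := net.DialTimeout("unix", path, time.Second)
    	if err != nil {
    		fmt.Printf("%s: %v\n", path, err)
    		return
    	}
    	conn.Close()
    	fmt.Printf("%s: reachable\n", path)
    }

    func main() {
    	probe("/var/run/crio/crio.sock")          // absent on this host, per the log
    	probe("/run/containerd/containerd.sock")  // assumed default containerd socket
    }
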
Nov 5 00:05:27.522099 containerd[1616]: time="2025-11-05T00:05:27.521991033Z" level=info msg="CreateContainer within sandbox \"c129c4ad39019b7325b24f25386d49c3316138ae3f26340e68023fc79d0c0e01\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"306d9c567faa27ce76519dd7051a32eeff2e4e5b2313e92181c752ab5fe030ba\"" Nov 5 00:05:27.522999 containerd[1616]: time="2025-11-05T00:05:27.522924333Z" level=info msg="StartContainer for \"306d9c567faa27ce76519dd7051a32eeff2e4e5b2313e92181c752ab5fe030ba\"" Nov 5 00:05:27.524795 containerd[1616]: time="2025-11-05T00:05:27.524769282Z" level=info msg="connecting to shim 306d9c567faa27ce76519dd7051a32eeff2e4e5b2313e92181c752ab5fe030ba" address="unix:///run/containerd/s/d505b43d3e8ce309a89f388e7b5dfa7e19449ad5a09640531eadb960a29d0fc6" protocol=ttrpc version=3 Nov 5 00:05:27.533020 systemd[1]: Started cri-containerd-614991955ae11e9ca9a186c209226f9fc0059b000898add0228de2d8e6c05afa.scope - libcontainer container 614991955ae11e9ca9a186c209226f9fc0059b000898add0228de2d8e6c05afa. Nov 5 00:05:27.549002 systemd[1]: Started cri-containerd-306d9c567faa27ce76519dd7051a32eeff2e4e5b2313e92181c752ab5fe030ba.scope - libcontainer container 306d9c567faa27ce76519dd7051a32eeff2e4e5b2313e92181c752ab5fe030ba. Nov 5 00:05:27.571113 kubelet[2387]: I1105 00:05:27.571068 2387 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 5 00:05:27.571360 kubelet[2387]: E1105 00:05:27.571336 2387 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.3:6443/api/v1/nodes\": dial tcp 10.0.0.3:6443: connect: connection refused" node="localhost" Nov 5 00:05:27.590143 containerd[1616]: time="2025-11-05T00:05:27.590003476Z" level=info msg="StartContainer for \"2bd60c4d571d17e479968473b9a8478064d27dc4b1894f00119e837c57f685cb\" returns successfully" Nov 5 00:05:27.591954 containerd[1616]: time="2025-11-05T00:05:27.591861150Z" level=info msg="StartContainer for \"614991955ae11e9ca9a186c209226f9fc0059b000898add0228de2d8e6c05afa\" returns successfully" Nov 5 00:05:27.810232 containerd[1616]: time="2025-11-05T00:05:27.810187131Z" level=info msg="StartContainer for \"306d9c567faa27ce76519dd7051a32eeff2e4e5b2313e92181c752ab5fe030ba\" returns successfully" Nov 5 00:05:28.338541 kubelet[2387]: E1105 00:05:28.338373 2387 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 5 00:05:28.338541 kubelet[2387]: E1105 00:05:28.338523 2387 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 00:05:28.341595 kubelet[2387]: E1105 00:05:28.341419 2387 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 5 00:05:28.341595 kubelet[2387]: E1105 00:05:28.341544 2387 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 00:05:28.343355 kubelet[2387]: E1105 00:05:28.343327 2387 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 5 00:05:28.343435 kubelet[2387]: E1105 00:05:28.343405 2387 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers 
have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 00:05:28.375138 kubelet[2387]: I1105 00:05:28.375067 2387 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 5 00:05:28.384981 kubelet[2387]: E1105 00:05:28.384946 2387 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Nov 5 00:05:28.484421 kubelet[2387]: I1105 00:05:28.484358 2387 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Nov 5 00:05:28.514962 kubelet[2387]: I1105 00:05:28.514917 2387 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 5 00:05:28.524106 kubelet[2387]: E1105 00:05:28.524069 2387 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Nov 5 00:05:28.524106 kubelet[2387]: I1105 00:05:28.524112 2387 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 5 00:05:28.525747 kubelet[2387]: E1105 00:05:28.525724 2387 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Nov 5 00:05:28.525747 kubelet[2387]: I1105 00:05:28.525744 2387 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 5 00:05:28.529024 kubelet[2387]: E1105 00:05:28.528990 2387 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Nov 5 00:05:29.303787 kubelet[2387]: I1105 00:05:29.303758 2387 apiserver.go:52] "Watching apiserver" Nov 5 00:05:29.313301 kubelet[2387]: I1105 00:05:29.313270 2387 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 5 00:05:29.343647 kubelet[2387]: I1105 00:05:29.343606 2387 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 5 00:05:29.344068 kubelet[2387]: I1105 00:05:29.343749 2387 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 5 00:05:29.344068 kubelet[2387]: I1105 00:05:29.343938 2387 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 5 00:05:29.345232 kubelet[2387]: E1105 00:05:29.345193 2387 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Nov 5 00:05:29.345383 kubelet[2387]: E1105 00:05:29.345354 2387 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 00:05:29.345748 kubelet[2387]: E1105 00:05:29.345704 2387 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Nov 5 00:05:29.345748 kubelet[2387]: E1105 00:05:29.345717 2387 kubelet.go:3196] "Failed creating a mirror pod" err="pods 
\"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Nov 5 00:05:29.345861 kubelet[2387]: E1105 00:05:29.345833 2387 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 00:05:29.345917 kubelet[2387]: E1105 00:05:29.345835 2387 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 00:05:30.304563 systemd[1]: Reload requested from client PID 2660 ('systemctl') (unit session-7.scope)... Nov 5 00:05:30.304584 systemd[1]: Reloading... Nov 5 00:05:30.387914 zram_generator::config[2704]: No configuration found. Nov 5 00:05:30.611455 systemd[1]: Reloading finished in 306 ms. Nov 5 00:05:30.642118 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 00:05:30.663238 systemd[1]: kubelet.service: Deactivated successfully. Nov 5 00:05:30.663543 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 00:05:30.663598 systemd[1]: kubelet.service: Consumed 857ms CPU time, 129.7M memory peak. Nov 5 00:05:30.665474 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 00:05:30.886573 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 00:05:30.901205 (kubelet)[2749]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 5 00:05:30.938047 kubelet[2749]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 5 00:05:30.938047 kubelet[2749]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 5 00:05:30.938047 kubelet[2749]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 5 00:05:30.938348 kubelet[2749]: I1105 00:05:30.938098 2749 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 5 00:05:30.943931 kubelet[2749]: I1105 00:05:30.943904 2749 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Nov 5 00:05:30.943931 kubelet[2749]: I1105 00:05:30.943920 2749 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 5 00:05:30.944118 kubelet[2749]: I1105 00:05:30.944096 2749 server.go:954] "Client rotation is on, will bootstrap in background" Nov 5 00:05:30.945119 kubelet[2749]: I1105 00:05:30.945104 2749 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Nov 5 00:05:30.946992 kubelet[2749]: I1105 00:05:30.946925 2749 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 5 00:05:30.951988 kubelet[2749]: I1105 00:05:30.951944 2749 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 5 00:05:30.956347 kubelet[2749]: I1105 00:05:30.956310 2749 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Nov 5 00:05:30.956594 kubelet[2749]: I1105 00:05:30.956552 2749 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 5 00:05:30.956769 kubelet[2749]: I1105 00:05:30.956583 2749 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 5 00:05:30.956769 kubelet[2749]: I1105 00:05:30.956771 2749 topology_manager.go:138] "Creating topology manager with none policy" Nov 5 00:05:30.956888 kubelet[2749]: I1105 00:05:30.956779 2749 container_manager_linux.go:304] "Creating device plugin manager" Nov 5 00:05:30.956888 kubelet[2749]: I1105 00:05:30.956834 2749 state_mem.go:36] "Initialized new in-memory state store" Nov 5 00:05:30.957023 kubelet[2749]: I1105 00:05:30.957006 2749 kubelet.go:446] "Attempting to sync node with API server" Nov 5 00:05:30.957047 kubelet[2749]: I1105 00:05:30.957032 2749 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 5 00:05:30.957071 kubelet[2749]: I1105 00:05:30.957055 2749 kubelet.go:352] "Adding apiserver pod source" Nov 5 00:05:30.957071 kubelet[2749]: I1105 00:05:30.957066 2749 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 5 00:05:30.957738 kubelet[2749]: I1105 00:05:30.957594 2749 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Nov 5 00:05:30.957942 kubelet[2749]: I1105 00:05:30.957924 2749 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in 
static kubelet mode" Nov 5 00:05:30.958330 kubelet[2749]: I1105 00:05:30.958305 2749 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 5 00:05:30.958359 kubelet[2749]: I1105 00:05:30.958335 2749 server.go:1287] "Started kubelet" Nov 5 00:05:30.960440 kubelet[2749]: I1105 00:05:30.960385 2749 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 5 00:05:30.962833 kubelet[2749]: I1105 00:05:30.962802 2749 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Nov 5 00:05:30.963910 kubelet[2749]: I1105 00:05:30.963894 2749 server.go:479] "Adding debug handlers to kubelet server" Nov 5 00:05:30.966717 kubelet[2749]: I1105 00:05:30.966661 2749 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 5 00:05:30.966913 kubelet[2749]: I1105 00:05:30.966895 2749 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 5 00:05:30.967139 kubelet[2749]: I1105 00:05:30.967107 2749 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 5 00:05:30.968960 kubelet[2749]: I1105 00:05:30.968938 2749 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 5 00:05:30.969119 kubelet[2749]: E1105 00:05:30.969093 2749 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 5 00:05:30.971380 kubelet[2749]: I1105 00:05:30.971333 2749 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 5 00:05:30.971685 kubelet[2749]: I1105 00:05:30.971607 2749 reconciler.go:26] "Reconciler: start to sync state" Nov 5 00:05:30.972636 kubelet[2749]: E1105 00:05:30.972600 2749 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 5 00:05:30.973067 kubelet[2749]: I1105 00:05:30.973035 2749 factory.go:221] Registration of the containerd container factory successfully Nov 5 00:05:30.973136 kubelet[2749]: I1105 00:05:30.973110 2749 factory.go:221] Registration of the systemd container factory successfully Nov 5 00:05:30.973267 kubelet[2749]: I1105 00:05:30.973251 2749 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 5 00:05:30.979981 kubelet[2749]: I1105 00:05:30.979746 2749 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 5 00:05:30.980996 kubelet[2749]: I1105 00:05:30.980967 2749 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 5 00:05:30.980996 kubelet[2749]: I1105 00:05:30.980994 2749 status_manager.go:227] "Starting to sync pod status with apiserver" Nov 5 00:05:30.981050 kubelet[2749]: I1105 00:05:30.981011 2749 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Nov 5 00:05:30.981050 kubelet[2749]: I1105 00:05:30.981019 2749 kubelet.go:2382] "Starting kubelet main sync loop" Nov 5 00:05:30.981127 kubelet[2749]: E1105 00:05:30.981065 2749 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 5 00:05:31.014661 kubelet[2749]: I1105 00:05:31.014639 2749 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 5 00:05:31.014661 kubelet[2749]: I1105 00:05:31.014655 2749 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 5 00:05:31.014754 kubelet[2749]: I1105 00:05:31.014671 2749 state_mem.go:36] "Initialized new in-memory state store" Nov 5 00:05:31.014843 kubelet[2749]: I1105 00:05:31.014823 2749 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 5 00:05:31.014871 kubelet[2749]: I1105 00:05:31.014836 2749 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 5 00:05:31.014871 kubelet[2749]: I1105 00:05:31.014855 2749 policy_none.go:49] "None policy: Start" Nov 5 00:05:31.014934 kubelet[2749]: I1105 00:05:31.014893 2749 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 5 00:05:31.014934 kubelet[2749]: I1105 00:05:31.014905 2749 state_mem.go:35] "Initializing new in-memory state store" Nov 5 00:05:31.015017 kubelet[2749]: I1105 00:05:31.015002 2749 state_mem.go:75] "Updated machine memory state" Nov 5 00:05:31.019205 kubelet[2749]: I1105 00:05:31.018779 2749 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 5 00:05:31.019205 kubelet[2749]: I1105 00:05:31.018957 2749 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 5 00:05:31.019205 kubelet[2749]: I1105 00:05:31.018967 2749 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 5 00:05:31.019205 kubelet[2749]: I1105 00:05:31.019162 2749 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 5 00:05:31.019806 kubelet[2749]: E1105 00:05:31.019777 2749 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 5 00:05:31.082344 kubelet[2749]: I1105 00:05:31.082327 2749 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 5 00:05:31.082484 kubelet[2749]: I1105 00:05:31.082449 2749 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 5 00:05:31.082624 kubelet[2749]: I1105 00:05:31.082357 2749 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 5 00:05:31.123585 kubelet[2749]: I1105 00:05:31.123566 2749 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 5 00:05:31.129679 kubelet[2749]: I1105 00:05:31.129387 2749 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Nov 5 00:05:31.129679 kubelet[2749]: I1105 00:05:31.129449 2749 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Nov 5 00:05:31.171969 kubelet[2749]: I1105 00:05:31.171890 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 5 00:05:31.171969 kubelet[2749]: I1105 00:05:31.171915 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a005e50c14d9f254d8ad75cc820f0f22-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"a005e50c14d9f254d8ad75cc820f0f22\") " pod="kube-system/kube-apiserver-localhost" Nov 5 00:05:31.171969 kubelet[2749]: I1105 00:05:31.171937 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a005e50c14d9f254d8ad75cc820f0f22-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"a005e50c14d9f254d8ad75cc820f0f22\") " pod="kube-system/kube-apiserver-localhost" Nov 5 00:05:31.171969 kubelet[2749]: I1105 00:05:31.171953 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a005e50c14d9f254d8ad75cc820f0f22-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"a005e50c14d9f254d8ad75cc820f0f22\") " pod="kube-system/kube-apiserver-localhost" Nov 5 00:05:31.171969 kubelet[2749]: I1105 00:05:31.171969 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 5 00:05:31.172110 kubelet[2749]: I1105 00:05:31.171985 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 5 00:05:31.172110 kubelet[2749]: I1105 00:05:31.172001 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 5 00:05:31.172110 kubelet[2749]: I1105 00:05:31.172045 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 5 00:05:31.172110 kubelet[2749]: I1105 00:05:31.172076 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a1d51be1ff02022474f2598f6e43038f-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a1d51be1ff02022474f2598f6e43038f\") " pod="kube-system/kube-scheduler-localhost" Nov 5 00:05:31.309580 sudo[2786]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Nov 5 00:05:31.309950 sudo[2786]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Nov 5 00:05:31.386947 kubelet[2749]: E1105 00:05:31.386919 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 00:05:31.387242 kubelet[2749]: E1105 00:05:31.387223 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 00:05:31.387334 kubelet[2749]: E1105 00:05:31.387315 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 00:05:31.604548 sudo[2786]: pam_unix(sudo:session): session closed for user root Nov 5 00:05:31.958034 kubelet[2749]: I1105 00:05:31.957942 2749 apiserver.go:52] "Watching apiserver" Nov 5 00:05:31.971941 kubelet[2749]: I1105 00:05:31.971907 2749 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 5 00:05:31.996567 kubelet[2749]: E1105 00:05:31.996532 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 00:05:31.996740 kubelet[2749]: I1105 00:05:31.996650 2749 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 5 00:05:31.997689 kubelet[2749]: I1105 00:05:31.997655 2749 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 5 00:05:32.003990 kubelet[2749]: E1105 00:05:32.003928 2749 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Nov 5 00:05:32.004322 kubelet[2749]: E1105 00:05:32.004251 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 00:05:32.004825 kubelet[2749]: E1105 00:05:32.004799 2749 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Nov 5 
00:05:32.005094 kubelet[2749]: E1105 00:05:32.005028 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 00:05:32.024481 kubelet[2749]: I1105 00:05:32.024109 2749 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.024089907 podStartE2EDuration="1.024089907s" podCreationTimestamp="2025-11-05 00:05:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 00:05:32.015633191 +0000 UTC m=+1.109893731" watchObservedRunningTime="2025-11-05 00:05:32.024089907 +0000 UTC m=+1.118350447" Nov 5 00:05:32.031241 kubelet[2749]: I1105 00:05:32.031183 2749 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.031161998 podStartE2EDuration="1.031161998s" podCreationTimestamp="2025-11-05 00:05:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 00:05:32.024385191 +0000 UTC m=+1.118645721" watchObservedRunningTime="2025-11-05 00:05:32.031161998 +0000 UTC m=+1.125422538" Nov 5 00:05:32.031335 kubelet[2749]: I1105 00:05:32.031269 2749 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.031265702 podStartE2EDuration="1.031265702s" podCreationTimestamp="2025-11-05 00:05:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 00:05:32.031117043 +0000 UTC m=+1.125377573" watchObservedRunningTime="2025-11-05 00:05:32.031265702 +0000 UTC m=+1.125526242" Nov 5 00:05:32.998083 kubelet[2749]: E1105 00:05:32.998052 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 00:05:32.998516 kubelet[2749]: E1105 00:05:32.998140 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 00:05:33.438526 sudo[1825]: pam_unix(sudo:session): session closed for user root Nov 5 00:05:33.440125 sshd[1824]: Connection closed by 10.0.0.1 port 53930 Nov 5 00:05:33.440483 sshd-session[1821]: pam_unix(sshd:session): session closed for user core Nov 5 00:05:33.444280 systemd[1]: sshd@6-10.0.0.3:22-10.0.0.1:53930.service: Deactivated successfully. Nov 5 00:05:33.446604 systemd[1]: session-7.scope: Deactivated successfully. Nov 5 00:05:33.446831 systemd[1]: session-7.scope: Consumed 3.690s CPU time, 253.8M memory peak. Nov 5 00:05:33.449052 systemd-logind[1590]: Session 7 logged out. Waiting for processes to exit. Nov 5 00:05:33.450443 systemd-logind[1590]: Removed session 7. 
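[Annotation, not part of the journal] The recurring "Nameserver limits exceeded" messages mean /etc/resolv.conf lists more nameservers than the kubelet passes through; the applied set in the log is "1.1.1.1 1.0.0.1 8.8.8.8", i.e. three entries. A Go sketch that reads resolv.conf and shows which entries fall outside the first three — the limit of three is inferred from the applied line, not from kubelet source:

    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	f, err := os.Open("/etc/resolv.conf")
    	if err != nil {
    		fmt.Println("open:", err)
    		return
    	}
    	defer f.Close()

    	var servers []string
    	sc := bufio.NewScanner(f)
    	for sc.Scan() {
    		fields := strings.Fields(sc.Text())
    		if len(fields) >= 2 && fields[0] == "nameserver" {
    			servers = append(servers, fields[1])
    		}
    	}
    	const limit = 3 // inferred from the three-entry applied line in the log
    	if len(servers) <= limit {
    		fmt.Println("within limit:", servers)
    		return
    	}
    	fmt.Println("applied:", servers[:limit])
    	fmt.Println("omitted:", servers[limit:])
    }
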
Nov 5 00:05:35.288362 kubelet[2749]: E1105 00:05:35.288336 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 00:05:35.922837 kubelet[2749]: I1105 00:05:35.922796 2749 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 5 00:05:35.923124 containerd[1616]: time="2025-11-05T00:05:35.923088879Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Nov 5 00:05:35.923420 kubelet[2749]: I1105 00:05:35.923254 2749 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 5 00:05:36.433045 systemd[1]: Created slice kubepods-besteffort-pode7ab22f7_8045_41b3_a876_647e074b7171.slice - libcontainer container kubepods-besteffort-pode7ab22f7_8045_41b3_a876_647e074b7171.slice. Nov 5 00:05:36.450445 systemd[1]: Created slice kubepods-burstable-pod9e082060_ad3b_46f6_a953_f94bd98790b3.slice - libcontainer container kubepods-burstable-pod9e082060_ad3b_46f6_a953_f94bd98790b3.slice. Nov 5 00:05:36.507821 kubelet[2749]: I1105 00:05:36.507788 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9e082060-ad3b-46f6-a953-f94bd98790b3-hostproc\") pod \"cilium-p7wtc\" (UID: \"9e082060-ad3b-46f6-a953-f94bd98790b3\") " pod="kube-system/cilium-p7wtc" Nov 5 00:05:36.507821 kubelet[2749]: I1105 00:05:36.507820 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9e082060-ad3b-46f6-a953-f94bd98790b3-etc-cni-netd\") pod \"cilium-p7wtc\" (UID: \"9e082060-ad3b-46f6-a953-f94bd98790b3\") " pod="kube-system/cilium-p7wtc" Nov 5 00:05:36.508187 kubelet[2749]: I1105 00:05:36.507842 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9e082060-ad3b-46f6-a953-f94bd98790b3-host-proc-sys-kernel\") pod \"cilium-p7wtc\" (UID: \"9e082060-ad3b-46f6-a953-f94bd98790b3\") " pod="kube-system/cilium-p7wtc" Nov 5 00:05:36.508187 kubelet[2749]: I1105 00:05:36.507858 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9e082060-ad3b-46f6-a953-f94bd98790b3-hubble-tls\") pod \"cilium-p7wtc\" (UID: \"9e082060-ad3b-46f6-a953-f94bd98790b3\") " pod="kube-system/cilium-p7wtc" Nov 5 00:05:36.508187 kubelet[2749]: I1105 00:05:36.507896 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9e082060-ad3b-46f6-a953-f94bd98790b3-bpf-maps\") pod \"cilium-p7wtc\" (UID: \"9e082060-ad3b-46f6-a953-f94bd98790b3\") " pod="kube-system/cilium-p7wtc" Nov 5 00:05:36.508187 kubelet[2749]: I1105 00:05:36.507913 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9e082060-ad3b-46f6-a953-f94bd98790b3-lib-modules\") pod \"cilium-p7wtc\" (UID: \"9e082060-ad3b-46f6-a953-f94bd98790b3\") " pod="kube-system/cilium-p7wtc" Nov 5 00:05:36.508187 kubelet[2749]: I1105 00:05:36.507929 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fs9js\" 
(UniqueName: \"kubernetes.io/projected/9e082060-ad3b-46f6-a953-f94bd98790b3-kube-api-access-fs9js\") pod \"cilium-p7wtc\" (UID: \"9e082060-ad3b-46f6-a953-f94bd98790b3\") " pod="kube-system/cilium-p7wtc" Nov 5 00:05:36.508187 kubelet[2749]: I1105 00:05:36.507944 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9e082060-ad3b-46f6-a953-f94bd98790b3-clustermesh-secrets\") pod \"cilium-p7wtc\" (UID: \"9e082060-ad3b-46f6-a953-f94bd98790b3\") " pod="kube-system/cilium-p7wtc" Nov 5 00:05:36.508338 kubelet[2749]: I1105 00:05:36.507959 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9e082060-ad3b-46f6-a953-f94bd98790b3-cilium-config-path\") pod \"cilium-p7wtc\" (UID: \"9e082060-ad3b-46f6-a953-f94bd98790b3\") " pod="kube-system/cilium-p7wtc" Nov 5 00:05:36.508338 kubelet[2749]: I1105 00:05:36.507976 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e7ab22f7-8045-41b3-a876-647e074b7171-kube-proxy\") pod \"kube-proxy-5rtn7\" (UID: \"e7ab22f7-8045-41b3-a876-647e074b7171\") " pod="kube-system/kube-proxy-5rtn7" Nov 5 00:05:36.508338 kubelet[2749]: I1105 00:05:36.507993 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9e082060-ad3b-46f6-a953-f94bd98790b3-cilium-run\") pod \"cilium-p7wtc\" (UID: \"9e082060-ad3b-46f6-a953-f94bd98790b3\") " pod="kube-system/cilium-p7wtc" Nov 5 00:05:36.508338 kubelet[2749]: I1105 00:05:36.508009 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e7ab22f7-8045-41b3-a876-647e074b7171-xtables-lock\") pod \"kube-proxy-5rtn7\" (UID: \"e7ab22f7-8045-41b3-a876-647e074b7171\") " pod="kube-system/kube-proxy-5rtn7" Nov 5 00:05:36.508338 kubelet[2749]: I1105 00:05:36.508025 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e7ab22f7-8045-41b3-a876-647e074b7171-lib-modules\") pod \"kube-proxy-5rtn7\" (UID: \"e7ab22f7-8045-41b3-a876-647e074b7171\") " pod="kube-system/kube-proxy-5rtn7" Nov 5 00:05:36.508338 kubelet[2749]: I1105 00:05:36.508041 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9e082060-ad3b-46f6-a953-f94bd98790b3-cilium-cgroup\") pod \"cilium-p7wtc\" (UID: \"9e082060-ad3b-46f6-a953-f94bd98790b3\") " pod="kube-system/cilium-p7wtc" Nov 5 00:05:36.508476 kubelet[2749]: I1105 00:05:36.508057 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52jrq\" (UniqueName: \"kubernetes.io/projected/e7ab22f7-8045-41b3-a876-647e074b7171-kube-api-access-52jrq\") pod \"kube-proxy-5rtn7\" (UID: \"e7ab22f7-8045-41b3-a876-647e074b7171\") " pod="kube-system/kube-proxy-5rtn7" Nov 5 00:05:36.508476 kubelet[2749]: I1105 00:05:36.508071 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9e082060-ad3b-46f6-a953-f94bd98790b3-cni-path\") pod \"cilium-p7wtc\" (UID: 
\"9e082060-ad3b-46f6-a953-f94bd98790b3\") " pod="kube-system/cilium-p7wtc" Nov 5 00:05:36.508476 kubelet[2749]: I1105 00:05:36.508085 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9e082060-ad3b-46f6-a953-f94bd98790b3-xtables-lock\") pod \"cilium-p7wtc\" (UID: \"9e082060-ad3b-46f6-a953-f94bd98790b3\") " pod="kube-system/cilium-p7wtc" Nov 5 00:05:36.508476 kubelet[2749]: I1105 00:05:36.508100 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9e082060-ad3b-46f6-a953-f94bd98790b3-host-proc-sys-net\") pod \"cilium-p7wtc\" (UID: \"9e082060-ad3b-46f6-a953-f94bd98790b3\") " pod="kube-system/cilium-p7wtc" Nov 5 00:05:36.522277 kubelet[2749]: E1105 00:05:36.522240 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 00:05:36.619820 kubelet[2749]: E1105 00:05:36.619788 2749 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Nov 5 00:05:36.619820 kubelet[2749]: E1105 00:05:36.619812 2749 projected.go:194] Error preparing data for projected volume kube-api-access-52jrq for pod kube-system/kube-proxy-5rtn7: configmap "kube-root-ca.crt" not found Nov 5 00:05:36.621019 kubelet[2749]: E1105 00:05:36.619857 2749 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e7ab22f7-8045-41b3-a876-647e074b7171-kube-api-access-52jrq podName:e7ab22f7-8045-41b3-a876-647e074b7171 nodeName:}" failed. No retries permitted until 2025-11-05 00:05:37.119841731 +0000 UTC m=+6.214102271 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-52jrq" (UniqueName: "kubernetes.io/projected/e7ab22f7-8045-41b3-a876-647e074b7171-kube-api-access-52jrq") pod "kube-proxy-5rtn7" (UID: "e7ab22f7-8045-41b3-a876-647e074b7171") : configmap "kube-root-ca.crt" not found Nov 5 00:05:36.622275 kubelet[2749]: E1105 00:05:36.622223 2749 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Nov 5 00:05:36.622374 kubelet[2749]: E1105 00:05:36.622341 2749 projected.go:194] Error preparing data for projected volume kube-api-access-fs9js for pod kube-system/cilium-p7wtc: configmap "kube-root-ca.crt" not found Nov 5 00:05:36.622528 kubelet[2749]: E1105 00:05:36.622515 2749 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9e082060-ad3b-46f6-a953-f94bd98790b3-kube-api-access-fs9js podName:9e082060-ad3b-46f6-a953-f94bd98790b3 nodeName:}" failed. No retries permitted until 2025-11-05 00:05:37.122495667 +0000 UTC m=+6.216756207 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-fs9js" (UniqueName: "kubernetes.io/projected/9e082060-ad3b-46f6-a953-f94bd98790b3-kube-api-access-fs9js") pod "cilium-p7wtc" (UID: "9e082060-ad3b-46f6-a953-f94bd98790b3") : configmap "kube-root-ca.crt" not found Nov 5 00:05:36.980263 systemd[1]: Created slice kubepods-besteffort-podd113c0dc_7aeb_4225_9ae0_77ff41813a7a.slice - libcontainer container kubepods-besteffort-podd113c0dc_7aeb_4225_9ae0_77ff41813a7a.slice. 
Nov 5 00:05:37.010670 kubelet[2749]: I1105 00:05:37.010632 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l54c5\" (UniqueName: \"kubernetes.io/projected/d113c0dc-7aeb-4225-9ae0-77ff41813a7a-kube-api-access-l54c5\") pod \"cilium-operator-6c4d7847fc-899bf\" (UID: \"d113c0dc-7aeb-4225-9ae0-77ff41813a7a\") " pod="kube-system/cilium-operator-6c4d7847fc-899bf" Nov 5 00:05:37.010752 kubelet[2749]: I1105 00:05:37.010731 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d113c0dc-7aeb-4225-9ae0-77ff41813a7a-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-899bf\" (UID: \"d113c0dc-7aeb-4225-9ae0-77ff41813a7a\") " pod="kube-system/cilium-operator-6c4d7847fc-899bf" Nov 5 00:05:37.284093 kubelet[2749]: E1105 00:05:37.284000 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 00:05:37.284493 containerd[1616]: time="2025-11-05T00:05:37.284397392Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-899bf,Uid:d113c0dc-7aeb-4225-9ae0-77ff41813a7a,Namespace:kube-system,Attempt:0,}" Nov 5 00:05:37.322545 containerd[1616]: time="2025-11-05T00:05:37.322506848Z" level=info msg="connecting to shim 5833d8798378b70dd852e137161dbc03e5366be14172651e85471179576f64a6" address="unix:///run/containerd/s/febe21543ddf3ba42a30f74bc4c4c62885be02f1183a3bd6dd5765e26433bad8" namespace=k8s.io protocol=ttrpc version=3 Nov 5 00:05:37.347149 kubelet[2749]: E1105 00:05:37.347107 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 00:05:37.347962 containerd[1616]: time="2025-11-05T00:05:37.347552468Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5rtn7,Uid:e7ab22f7-8045-41b3-a876-647e074b7171,Namespace:kube-system,Attempt:0,}" Nov 5 00:05:37.353066 kubelet[2749]: E1105 00:05:37.352845 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 00:05:37.354576 containerd[1616]: time="2025-11-05T00:05:37.354536954Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-p7wtc,Uid:9e082060-ad3b-46f6-a953-f94bd98790b3,Namespace:kube-system,Attempt:0,}" Nov 5 00:05:37.371556 containerd[1616]: time="2025-11-05T00:05:37.371517102Z" level=info msg="connecting to shim 9e758fa33924bf849f0083de15ed17d9ffde641ee6be9384a8b7114cebbceeba" address="unix:///run/containerd/s/8cd31c6ea81cded57be5944e327367436cd3a1fa81acbb61b1678a9ec8faf3d9" namespace=k8s.io protocol=ttrpc version=3 Nov 5 00:05:37.381778 containerd[1616]: time="2025-11-05T00:05:37.381733197Z" level=info msg="connecting to shim 54233661c7fa1220917b520c26abf3a10f292c738d79975fc391f8c01fb591b3" address="unix:///run/containerd/s/7b8f4405095ba5fc2727840f955c60811df3d8719b35f2319bdcac8e19868079" namespace=k8s.io protocol=ttrpc version=3 Nov 5 00:05:37.382028 systemd[1]: Started cri-containerd-5833d8798378b70dd852e137161dbc03e5366be14172651e85471179576f64a6.scope - libcontainer container 5833d8798378b70dd852e137161dbc03e5366be14172651e85471179576f64a6. 
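The "connecting to shim ... protocol=ttrpc" lines above show containerd wiring each new sandbox to its per-container shim socket under /run/containerd/s/ in the k8s.io namespace that CRI uses. Inspecting the same state from outside normally goes through containerd's main gRPC socket rather than those internal shim sockets; here is a sketch using the containerd Go client, assuming the default /run/containerd/containerd.sock path and the v1 client API.

    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        // Connect to containerd's main gRPC socket (the per-container shim
        // sockets seen in the log are internal ttrpc endpoints).
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        // Kubernetes (CRI) containers live in the "k8s.io" namespace from the log.
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        containers, err := client.Containers(ctx)
        if err != nil {
            log.Fatal(err)
        }
        for _, c := range containers {
            if img, err := c.Image(ctx); err == nil {
                fmt.Println(c.ID(), img.Name())
            } else {
                fmt.Println(c.ID())
            }
        }
    }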
Nov 5 00:05:37.401105 systemd[1]: Started cri-containerd-9e758fa33924bf849f0083de15ed17d9ffde641ee6be9384a8b7114cebbceeba.scope - libcontainer container 9e758fa33924bf849f0083de15ed17d9ffde641ee6be9384a8b7114cebbceeba. Nov 5 00:05:37.426100 systemd[1]: Started cri-containerd-54233661c7fa1220917b520c26abf3a10f292c738d79975fc391f8c01fb591b3.scope - libcontainer container 54233661c7fa1220917b520c26abf3a10f292c738d79975fc391f8c01fb591b3. Nov 5 00:05:37.436905 containerd[1616]: time="2025-11-05T00:05:37.436761155Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5rtn7,Uid:e7ab22f7-8045-41b3-a876-647e074b7171,Namespace:kube-system,Attempt:0,} returns sandbox id \"9e758fa33924bf849f0083de15ed17d9ffde641ee6be9384a8b7114cebbceeba\"" Nov 5 00:05:37.438269 kubelet[2749]: E1105 00:05:37.438245 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 00:05:37.443583 containerd[1616]: time="2025-11-05T00:05:37.443481696Z" level=info msg="CreateContainer within sandbox \"9e758fa33924bf849f0083de15ed17d9ffde641ee6be9384a8b7114cebbceeba\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 5 00:05:37.451488 containerd[1616]: time="2025-11-05T00:05:37.451459754Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-899bf,Uid:d113c0dc-7aeb-4225-9ae0-77ff41813a7a,Namespace:kube-system,Attempt:0,} returns sandbox id \"5833d8798378b70dd852e137161dbc03e5366be14172651e85471179576f64a6\"" Nov 5 00:05:37.452354 kubelet[2749]: E1105 00:05:37.452335 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 00:05:37.456254 containerd[1616]: time="2025-11-05T00:05:37.456156701Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Nov 5 00:05:37.457077 containerd[1616]: time="2025-11-05T00:05:37.457049806Z" level=info msg="Container 9a39b72d1737438595ee1b106eae22e8c2f7a5992b76d7916ac85dfc5b9a97ef: CDI devices from CRI Config.CDIDevices: []" Nov 5 00:05:37.465142 containerd[1616]: time="2025-11-05T00:05:37.465094810Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-p7wtc,Uid:9e082060-ad3b-46f6-a953-f94bd98790b3,Namespace:kube-system,Attempt:0,} returns sandbox id \"54233661c7fa1220917b520c26abf3a10f292c738d79975fc391f8c01fb591b3\"" Nov 5 00:05:37.466229 containerd[1616]: time="2025-11-05T00:05:37.466196726Z" level=info msg="CreateContainer within sandbox \"9e758fa33924bf849f0083de15ed17d9ffde641ee6be9384a8b7114cebbceeba\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"9a39b72d1737438595ee1b106eae22e8c2f7a5992b76d7916ac85dfc5b9a97ef\"" Nov 5 00:05:37.466814 containerd[1616]: time="2025-11-05T00:05:37.466782525Z" level=info msg="StartContainer for \"9a39b72d1737438595ee1b106eae22e8c2f7a5992b76d7916ac85dfc5b9a97ef\"" Nov 5 00:05:37.468202 kubelet[2749]: E1105 00:05:37.468181 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 00:05:37.468467 containerd[1616]: time="2025-11-05T00:05:37.468444441Z" level=info msg="connecting to shim 9a39b72d1737438595ee1b106eae22e8c2f7a5992b76d7916ac85dfc5b9a97ef" 
address="unix:///run/containerd/s/8cd31c6ea81cded57be5944e327367436cd3a1fa81acbb61b1678a9ec8faf3d9" protocol=ttrpc version=3 Nov 5 00:05:37.489012 systemd[1]: Started cri-containerd-9a39b72d1737438595ee1b106eae22e8c2f7a5992b76d7916ac85dfc5b9a97ef.scope - libcontainer container 9a39b72d1737438595ee1b106eae22e8c2f7a5992b76d7916ac85dfc5b9a97ef. Nov 5 00:05:37.529319 containerd[1616]: time="2025-11-05T00:05:37.529291369Z" level=info msg="StartContainer for \"9a39b72d1737438595ee1b106eae22e8c2f7a5992b76d7916ac85dfc5b9a97ef\" returns successfully" Nov 5 00:05:38.009048 kubelet[2749]: E1105 00:05:38.009017 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 00:05:38.514467 kubelet[2749]: E1105 00:05:38.514434 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 00:05:38.526264 kubelet[2749]: I1105 00:05:38.526213 2749 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-5rtn7" podStartSLOduration=2.526135762 podStartE2EDuration="2.526135762s" podCreationTimestamp="2025-11-05 00:05:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 00:05:38.015870326 +0000 UTC m=+7.110130866" watchObservedRunningTime="2025-11-05 00:05:38.526135762 +0000 UTC m=+7.620396322" Nov 5 00:05:39.011688 kubelet[2749]: E1105 00:05:39.011654 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 00:05:39.103317 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3276803939.mount: Deactivated successfully. 
Nov 5 00:05:43.120039 containerd[1616]: time="2025-11-05T00:05:43.119983935Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 00:05:43.120703 containerd[1616]: time="2025-11-05T00:05:43.120646635Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Nov 5 00:05:43.121790 containerd[1616]: time="2025-11-05T00:05:43.121759438Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 00:05:43.122892 containerd[1616]: time="2025-11-05T00:05:43.122844087Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 5.666635167s" Nov 5 00:05:43.122921 containerd[1616]: time="2025-11-05T00:05:43.122893081Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Nov 5 00:05:43.123759 containerd[1616]: time="2025-11-05T00:05:43.123724826Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Nov 5 00:05:43.125186 containerd[1616]: time="2025-11-05T00:05:43.124932872Z" level=info msg="CreateContainer within sandbox \"5833d8798378b70dd852e137161dbc03e5366be14172651e85471179576f64a6\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Nov 5 00:05:43.133103 containerd[1616]: time="2025-11-05T00:05:43.133061673Z" level=info msg="Container 101ca1e0ced8c982d5e1c0bb6173f0b73975d9397dab99f025600143dc61125e: CDI devices from CRI Config.CDIDevices: []" Nov 5 00:05:43.138230 containerd[1616]: time="2025-11-05T00:05:43.138193450Z" level=info msg="CreateContainer within sandbox \"5833d8798378b70dd852e137161dbc03e5366be14172651e85471179576f64a6\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"101ca1e0ced8c982d5e1c0bb6173f0b73975d9397dab99f025600143dc61125e\"" Nov 5 00:05:43.138715 containerd[1616]: time="2025-11-05T00:05:43.138580312Z" level=info msg="StartContainer for \"101ca1e0ced8c982d5e1c0bb6173f0b73975d9397dab99f025600143dc61125e\"" Nov 5 00:05:43.139293 containerd[1616]: time="2025-11-05T00:05:43.139270284Z" level=info msg="connecting to shim 101ca1e0ced8c982d5e1c0bb6173f0b73975d9397dab99f025600143dc61125e" address="unix:///run/containerd/s/febe21543ddf3ba42a30f74bc4c4c62885be02f1183a3bd6dd5765e26433bad8" protocol=ttrpc version=3 Nov 5 00:05:43.166098 systemd[1]: Started cri-containerd-101ca1e0ced8c982d5e1c0bb6173f0b73975d9397dab99f025600143dc61125e.scope - libcontainer container 101ca1e0ced8c982d5e1c0bb6173f0b73975d9397dab99f025600143dc61125e. 
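The operator image above is pulled by tag plus digest, so containerd resolves and verifies the content-addressed manifest before unpacking it, which is what the 5.67s pull and the returned sha256 image reference reflect. Below is a sketch of an equivalent pull with the containerd Go client, again assuming the default socket path and the k8s.io namespace.

    package main

    import (
        "context"
        "fmt"
        "log"
        "time"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        // Tag@digest reference from the log: the digest pins the exact manifest,
        // so the pulled content is verified against its content address.
        ref := "quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e"

        start := time.Now()
        img, err := client.Pull(ctx, ref, containerd.WithPullUnpack)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("Pulled %s (target %s) in %s\n", img.Name(), img.Target().Digest, time.Since(start))
    }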
Nov 5 00:05:43.194947 containerd[1616]: time="2025-11-05T00:05:43.194859134Z" level=info msg="StartContainer for \"101ca1e0ced8c982d5e1c0bb6173f0b73975d9397dab99f025600143dc61125e\" returns successfully" Nov 5 00:05:44.020799 kubelet[2749]: E1105 00:05:44.020071 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 00:05:44.028123 kubelet[2749]: I1105 00:05:44.028074 2749 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-899bf" podStartSLOduration=2.357481843 podStartE2EDuration="8.028064144s" podCreationTimestamp="2025-11-05 00:05:36 +0000 UTC" firstStartedPulling="2025-11-05 00:05:37.453030259 +0000 UTC m=+6.547290799" lastFinishedPulling="2025-11-05 00:05:43.12361256 +0000 UTC m=+12.217873100" observedRunningTime="2025-11-05 00:05:44.026893944 +0000 UTC m=+13.121154485" watchObservedRunningTime="2025-11-05 00:05:44.028064144 +0000 UTC m=+13.122324685" Nov 5 00:05:45.021504 kubelet[2749]: E1105 00:05:45.021473 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 00:05:45.294020 kubelet[2749]: E1105 00:05:45.293238 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 00:05:46.526303 kubelet[2749]: E1105 00:05:46.526269 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 00:05:47.024371 kubelet[2749]: E1105 00:05:47.024337 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 00:05:51.327565 update_engine[1594]: I20251105 00:05:51.327493 1594 update_attempter.cc:509] Updating boot flags... Nov 5 00:05:56.498502 systemd[1]: Started sshd@7-10.0.0.3:22-10.0.0.1:60946.service - OpenSSH per-connection server daemon (10.0.0.1:60946). Nov 5 00:05:56.543147 sshd[3204]: Accepted publickey for core from 10.0.0.1 port 60946 ssh2: RSA SHA256:76q7OxBnofV7CHovkMIRFzt50oWq5VGuNm0JMuzHE4A Nov 5 00:05:56.545016 sshd-session[3204]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 00:05:56.549055 systemd-logind[1590]: New session 8 of user core. Nov 5 00:05:56.556015 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 5 00:05:56.683795 sshd[3207]: Connection closed by 10.0.0.1 port 60946 Nov 5 00:05:56.684098 sshd-session[3204]: pam_unix(sshd:session): session closed for user core Nov 5 00:05:56.688640 systemd[1]: sshd@7-10.0.0.3:22-10.0.0.1:60946.service: Deactivated successfully. Nov 5 00:05:56.690581 systemd[1]: session-8.scope: Deactivated successfully. Nov 5 00:05:56.691327 systemd-logind[1590]: Session 8 logged out. Waiting for processes to exit. Nov 5 00:05:56.692310 systemd-logind[1590]: Removed session 8. Nov 5 00:06:00.746591 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3009931065.mount: Deactivated successfully. Nov 5 00:06:01.695361 systemd[1]: Started sshd@8-10.0.0.3:22-10.0.0.1:60954.service - OpenSSH per-connection server daemon (10.0.0.1:60954). 
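podStartSLOduration and podStartE2EDuration above differ by exactly the image-pull window: the E2E figure is the observed-running time minus the pod's creation timestamp, and the SLO figure subtracts the span between firstStartedPulling and lastFinishedPulling. A small sketch reproducing the cilium-operator numbers from the timestamps in the log:

    package main

    import (
        "fmt"
        "time"
    )

    func mustParse(s string) time.Time {
        // The log prints timestamps in Go's default time.Time format.
        t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
        if err != nil {
            panic(err)
        }
        return t
    }

    func main() {
        // Timestamps reported for kube-system/cilium-operator-6c4d7847fc-899bf above.
        created := mustParse("2025-11-05 00:05:36 +0000 UTC")
        firstPull := mustParse("2025-11-05 00:05:37.453030259 +0000 UTC")
        lastPull := mustParse("2025-11-05 00:05:43.12361256 +0000 UTC")
        observed := mustParse("2025-11-05 00:05:44.028064144 +0000 UTC") // watchObservedRunningTime

        e2e := observed.Sub(created)         // podStartE2EDuration
        slo := e2e - lastPull.Sub(firstPull) // podStartSLOduration: E2E minus image-pull time

        fmt.Println("podStartE2EDuration:", e2e) // 8.028064144s
        fmt.Println("podStartSLOduration:", slo) // 2.357481843s
    }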
Nov 5 00:06:01.757117 sshd[3236]: Accepted publickey for core from 10.0.0.1 port 60954 ssh2: RSA SHA256:76q7OxBnofV7CHovkMIRFzt50oWq5VGuNm0JMuzHE4A Nov 5 00:06:01.758431 sshd-session[3236]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 00:06:01.762336 systemd-logind[1590]: New session 9 of user core. Nov 5 00:06:01.776998 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 5 00:06:02.002481 sshd[3239]: Connection closed by 10.0.0.1 port 60954 Nov 5 00:06:02.002799 sshd-session[3236]: pam_unix(sshd:session): session closed for user core Nov 5 00:06:02.007525 systemd[1]: sshd@8-10.0.0.3:22-10.0.0.1:60954.service: Deactivated successfully. Nov 5 00:06:02.009864 systemd[1]: session-9.scope: Deactivated successfully. Nov 5 00:06:02.010728 systemd-logind[1590]: Session 9 logged out. Waiting for processes to exit. Nov 5 00:06:02.011969 systemd-logind[1590]: Removed session 9. Nov 5 00:06:04.069007 containerd[1616]: time="2025-11-05T00:06:04.068952166Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 00:06:04.069794 containerd[1616]: time="2025-11-05T00:06:04.069736324Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Nov 5 00:06:04.070856 containerd[1616]: time="2025-11-05T00:06:04.070823565Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 00:06:04.072119 containerd[1616]: time="2025-11-05T00:06:04.072092418Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 20.948336663s" Nov 5 00:06:04.072159 containerd[1616]: time="2025-11-05T00:06:04.072119448Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Nov 5 00:06:04.085619 containerd[1616]: time="2025-11-05T00:06:04.085590612Z" level=info msg="CreateContainer within sandbox \"54233661c7fa1220917b520c26abf3a10f292c738d79975fc391f8c01fb591b3\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Nov 5 00:06:04.092356 containerd[1616]: time="2025-11-05T00:06:04.092315354Z" level=info msg="Container 7e93bee5a254c9c4de9888652c40d878f972af256e6ec00e1b9982f3cdaa2fed: CDI devices from CRI Config.CDIDevices: []" Nov 5 00:06:04.097407 containerd[1616]: time="2025-11-05T00:06:04.097360448Z" level=info msg="CreateContainer within sandbox \"54233661c7fa1220917b520c26abf3a10f292c738d79975fc391f8c01fb591b3\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"7e93bee5a254c9c4de9888652c40d878f972af256e6ec00e1b9982f3cdaa2fed\"" Nov 5 00:06:04.100529 containerd[1616]: time="2025-11-05T00:06:04.100493025Z" level=info msg="StartContainer for \"7e93bee5a254c9c4de9888652c40d878f972af256e6ec00e1b9982f3cdaa2fed\"" Nov 5 00:06:04.101553 containerd[1616]: 
time="2025-11-05T00:06:04.101522707Z" level=info msg="connecting to shim 7e93bee5a254c9c4de9888652c40d878f972af256e6ec00e1b9982f3cdaa2fed" address="unix:///run/containerd/s/7b8f4405095ba5fc2727840f955c60811df3d8719b35f2319bdcac8e19868079" protocol=ttrpc version=3 Nov 5 00:06:04.125021 systemd[1]: Started cri-containerd-7e93bee5a254c9c4de9888652c40d878f972af256e6ec00e1b9982f3cdaa2fed.scope - libcontainer container 7e93bee5a254c9c4de9888652c40d878f972af256e6ec00e1b9982f3cdaa2fed. Nov 5 00:06:04.154257 containerd[1616]: time="2025-11-05T00:06:04.154217059Z" level=info msg="StartContainer for \"7e93bee5a254c9c4de9888652c40d878f972af256e6ec00e1b9982f3cdaa2fed\" returns successfully" Nov 5 00:06:04.165552 systemd[1]: cri-containerd-7e93bee5a254c9c4de9888652c40d878f972af256e6ec00e1b9982f3cdaa2fed.scope: Deactivated successfully. Nov 5 00:06:04.166157 systemd[1]: cri-containerd-7e93bee5a254c9c4de9888652c40d878f972af256e6ec00e1b9982f3cdaa2fed.scope: Consumed 25ms CPU time, 7.1M memory peak, 4K read from disk, 3M written to disk. Nov 5 00:06:04.166831 containerd[1616]: time="2025-11-05T00:06:04.166789668Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7e93bee5a254c9c4de9888652c40d878f972af256e6ec00e1b9982f3cdaa2fed\" id:\"7e93bee5a254c9c4de9888652c40d878f972af256e6ec00e1b9982f3cdaa2fed\" pid:3280 exited_at:{seconds:1762301164 nanos:166295246}" Nov 5 00:06:04.166890 containerd[1616]: time="2025-11-05T00:06:04.166842799Z" level=info msg="received exit event container_id:\"7e93bee5a254c9c4de9888652c40d878f972af256e6ec00e1b9982f3cdaa2fed\" id:\"7e93bee5a254c9c4de9888652c40d878f972af256e6ec00e1b9982f3cdaa2fed\" pid:3280 exited_at:{seconds:1762301164 nanos:166295246}" Nov 5 00:06:04.186344 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7e93bee5a254c9c4de9888652c40d878f972af256e6ec00e1b9982f3cdaa2fed-rootfs.mount: Deactivated successfully. Nov 5 00:06:05.049561 kubelet[2749]: E1105 00:06:05.049526 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 00:06:06.052656 kubelet[2749]: E1105 00:06:06.052492 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 00:06:06.055081 containerd[1616]: time="2025-11-05T00:06:06.055029860Z" level=info msg="CreateContainer within sandbox \"54233661c7fa1220917b520c26abf3a10f292c738d79975fc391f8c01fb591b3\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Nov 5 00:06:06.063523 containerd[1616]: time="2025-11-05T00:06:06.063482239Z" level=info msg="Container cb94a9f014c00a7deb50ae755062b4130c50cd2da08a9d7e47e7d2c1d8d81ee6: CDI devices from CRI Config.CDIDevices: []" Nov 5 00:06:06.068320 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1991181040.mount: Deactivated successfully. 
Nov 5 00:06:06.069739 containerd[1616]: time="2025-11-05T00:06:06.069688885Z" level=info msg="CreateContainer within sandbox \"54233661c7fa1220917b520c26abf3a10f292c738d79975fc391f8c01fb591b3\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"cb94a9f014c00a7deb50ae755062b4130c50cd2da08a9d7e47e7d2c1d8d81ee6\"" Nov 5 00:06:06.070255 containerd[1616]: time="2025-11-05T00:06:06.070220447Z" level=info msg="StartContainer for \"cb94a9f014c00a7deb50ae755062b4130c50cd2da08a9d7e47e7d2c1d8d81ee6\"" Nov 5 00:06:06.071128 containerd[1616]: time="2025-11-05T00:06:06.071086870Z" level=info msg="connecting to shim cb94a9f014c00a7deb50ae755062b4130c50cd2da08a9d7e47e7d2c1d8d81ee6" address="unix:///run/containerd/s/7b8f4405095ba5fc2727840f955c60811df3d8719b35f2319bdcac8e19868079" protocol=ttrpc version=3 Nov 5 00:06:06.092007 systemd[1]: Started cri-containerd-cb94a9f014c00a7deb50ae755062b4130c50cd2da08a9d7e47e7d2c1d8d81ee6.scope - libcontainer container cb94a9f014c00a7deb50ae755062b4130c50cd2da08a9d7e47e7d2c1d8d81ee6. Nov 5 00:06:06.119725 containerd[1616]: time="2025-11-05T00:06:06.119678159Z" level=info msg="StartContainer for \"cb94a9f014c00a7deb50ae755062b4130c50cd2da08a9d7e47e7d2c1d8d81ee6\" returns successfully" Nov 5 00:06:06.133633 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 5 00:06:06.133866 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 5 00:06:06.134353 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Nov 5 00:06:06.135768 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 5 00:06:06.137168 systemd[1]: cri-containerd-cb94a9f014c00a7deb50ae755062b4130c50cd2da08a9d7e47e7d2c1d8d81ee6.scope: Deactivated successfully. Nov 5 00:06:06.139262 containerd[1616]: time="2025-11-05T00:06:06.139228802Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cb94a9f014c00a7deb50ae755062b4130c50cd2da08a9d7e47e7d2c1d8d81ee6\" id:\"cb94a9f014c00a7deb50ae755062b4130c50cd2da08a9d7e47e7d2c1d8d81ee6\" pid:3326 exited_at:{seconds:1762301166 nanos:138724733}" Nov 5 00:06:06.139517 containerd[1616]: time="2025-11-05T00:06:06.139463295Z" level=info msg="received exit event container_id:\"cb94a9f014c00a7deb50ae755062b4130c50cd2da08a9d7e47e7d2c1d8d81ee6\" id:\"cb94a9f014c00a7deb50ae755062b4130c50cd2da08a9d7e47e7d2c1d8d81ee6\" pid:3326 exited_at:{seconds:1762301166 nanos:138724733}" Nov 5 00:06:06.159149 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cb94a9f014c00a7deb50ae755062b4130c50cd2da08a9d7e47e7d2c1d8d81ee6-rootfs.mount: Deactivated successfully. Nov 5 00:06:06.160695 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 5 00:06:07.016534 systemd[1]: Started sshd@9-10.0.0.3:22-10.0.0.1:58500.service - OpenSSH per-connection server daemon (10.0.0.1:58500). 
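Cilium's apply-sysctl-overwrites init step (and the systemd-sysctl restart that follows it above) adjusts kernel parameters by writing under /proc/sys from the host namespaces mounted into the pod. The sketch below shows that write mechanism with an illustrative parameter only; the actual keys Cilium sets vary by version and are not taken from this log.

    package main

    import (
        "fmt"
        "log"
        "os"
        "path/filepath"
        "strings"
    )

    // setSysctl writes a value under /proc/sys, the same mechanism an init
    // container with the host's /proc mounted can use. The key takes the
    // usual dotted form, e.g. "net.ipv4.conf.all.rp_filter".
    func setSysctl(key, value string) error {
        path := filepath.Join("/proc/sys", strings.ReplaceAll(key, ".", "/"))
        return os.WriteFile(path, []byte(value), 0o644)
    }

    func main() {
        // Illustrative parameter; requires root and the host /proc.
        if err := setSysctl("net.ipv4.conf.all.rp_filter", "0"); err != nil {
            log.Fatal(err)
        }
        fmt.Println("sysctl applied")
    }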
Nov 5 00:06:07.056227 kubelet[2749]: E1105 00:06:07.056197 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 00:06:07.058089 containerd[1616]: time="2025-11-05T00:06:07.058033946Z" level=info msg="CreateContainer within sandbox \"54233661c7fa1220917b520c26abf3a10f292c738d79975fc391f8c01fb591b3\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Nov 5 00:06:07.068070 sshd[3362]: Accepted publickey for core from 10.0.0.1 port 58500 ssh2: RSA SHA256:76q7OxBnofV7CHovkMIRFzt50oWq5VGuNm0JMuzHE4A Nov 5 00:06:07.072162 sshd-session[3362]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 00:06:07.074615 containerd[1616]: time="2025-11-05T00:06:07.074548428Z" level=info msg="Container 068b9e0c35cd2ab9338698eaf3775a5e203b28a64a9a99c40f330fff3b91140c: CDI devices from CRI Config.CDIDevices: []" Nov 5 00:06:07.075982 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount661238996.mount: Deactivated successfully. Nov 5 00:06:07.080598 systemd-logind[1590]: New session 10 of user core. Nov 5 00:06:07.082522 containerd[1616]: time="2025-11-05T00:06:07.082486362Z" level=info msg="CreateContainer within sandbox \"54233661c7fa1220917b520c26abf3a10f292c738d79975fc391f8c01fb591b3\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"068b9e0c35cd2ab9338698eaf3775a5e203b28a64a9a99c40f330fff3b91140c\"" Nov 5 00:06:07.083012 containerd[1616]: time="2025-11-05T00:06:07.082972798Z" level=info msg="StartContainer for \"068b9e0c35cd2ab9338698eaf3775a5e203b28a64a9a99c40f330fff3b91140c\"" Nov 5 00:06:07.084199 containerd[1616]: time="2025-11-05T00:06:07.084176476Z" level=info msg="connecting to shim 068b9e0c35cd2ab9338698eaf3775a5e203b28a64a9a99c40f330fff3b91140c" address="unix:///run/containerd/s/7b8f4405095ba5fc2727840f955c60811df3d8719b35f2319bdcac8e19868079" protocol=ttrpc version=3 Nov 5 00:06:07.086079 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 5 00:06:07.107007 systemd[1]: Started cri-containerd-068b9e0c35cd2ab9338698eaf3775a5e203b28a64a9a99c40f330fff3b91140c.scope - libcontainer container 068b9e0c35cd2ab9338698eaf3775a5e203b28a64a9a99c40f330fff3b91140c. Nov 5 00:06:07.150321 systemd[1]: cri-containerd-068b9e0c35cd2ab9338698eaf3775a5e203b28a64a9a99c40f330fff3b91140c.scope: Deactivated successfully. Nov 5 00:06:07.151329 containerd[1616]: time="2025-11-05T00:06:07.151290402Z" level=info msg="TaskExit event in podsandbox handler container_id:\"068b9e0c35cd2ab9338698eaf3775a5e203b28a64a9a99c40f330fff3b91140c\" id:\"068b9e0c35cd2ab9338698eaf3775a5e203b28a64a9a99c40f330fff3b91140c\" pid:3378 exited_at:{seconds:1762301167 nanos:151038858}" Nov 5 00:06:07.195777 containerd[1616]: time="2025-11-05T00:06:07.195735653Z" level=info msg="received exit event container_id:\"068b9e0c35cd2ab9338698eaf3775a5e203b28a64a9a99c40f330fff3b91140c\" id:\"068b9e0c35cd2ab9338698eaf3775a5e203b28a64a9a99c40f330fff3b91140c\" pid:3378 exited_at:{seconds:1762301167 nanos:151038858}" Nov 5 00:06:07.198186 containerd[1616]: time="2025-11-05T00:06:07.198097031Z" level=info msg="StartContainer for \"068b9e0c35cd2ab9338698eaf3775a5e203b28a64a9a99c40f330fff3b91140c\" returns successfully" Nov 5 00:06:07.219689 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-068b9e0c35cd2ab9338698eaf3775a5e203b28a64a9a99c40f330fff3b91140c-rootfs.mount: Deactivated successfully. 
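The mount-bpf-fs init container created above ensures a BPF filesystem is mounted at /sys/fs/bpf so the agent's maps survive restarts. A sketch of the corresponding check, scanning /proc/mounts and assuming the conventional mountpoint:

    package main

    import (
        "bufio"
        "fmt"
        "log"
        "os"
        "strings"
    )

    // bpfMounted reports whether a "bpf" filesystem is already mounted at
    // /sys/fs/bpf, which is what Cilium's mount-bpf-fs step ensures before
    // the agent container starts.
    func bpfMounted() (bool, error) {
        f, err := os.Open("/proc/mounts")
        if err != nil {
            return false, err
        }
        defer f.Close()

        sc := bufio.NewScanner(f)
        for sc.Scan() {
            // Each line: <source> <mountpoint> <fstype> <options> ...
            fields := strings.Fields(sc.Text())
            if len(fields) >= 3 && fields[1] == "/sys/fs/bpf" && fields[2] == "bpf" {
                return true, nil
            }
        }
        return false, sc.Err()
    }

    func main() {
        ok, err := bpfMounted()
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("/sys/fs/bpf mounted as bpf:", ok)
    }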
Nov 5 00:06:07.236769 sshd[3373]: Connection closed by 10.0.0.1 port 58500 Nov 5 00:06:07.237104 sshd-session[3362]: pam_unix(sshd:session): session closed for user core Nov 5 00:06:07.240834 systemd[1]: sshd@9-10.0.0.3:22-10.0.0.1:58500.service: Deactivated successfully. Nov 5 00:06:07.242728 systemd[1]: session-10.scope: Deactivated successfully. Nov 5 00:06:07.244090 systemd-logind[1590]: Session 10 logged out. Waiting for processes to exit. Nov 5 00:06:07.245110 systemd-logind[1590]: Removed session 10. Nov 5 00:06:08.061098 kubelet[2749]: E1105 00:06:08.061066 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 00:06:08.064065 containerd[1616]: time="2025-11-05T00:06:08.062787753Z" level=info msg="CreateContainer within sandbox \"54233661c7fa1220917b520c26abf3a10f292c738d79975fc391f8c01fb591b3\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Nov 5 00:06:08.071596 containerd[1616]: time="2025-11-05T00:06:08.071548141Z" level=info msg="Container a6ef7b3c77ab5b988d421714de8e388903972bb2af9f0f5edbf2c49be3d71655: CDI devices from CRI Config.CDIDevices: []" Nov 5 00:06:08.084445 containerd[1616]: time="2025-11-05T00:06:08.084404996Z" level=info msg="CreateContainer within sandbox \"54233661c7fa1220917b520c26abf3a10f292c738d79975fc391f8c01fb591b3\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"a6ef7b3c77ab5b988d421714de8e388903972bb2af9f0f5edbf2c49be3d71655\"" Nov 5 00:06:08.084827 containerd[1616]: time="2025-11-05T00:06:08.084780874Z" level=info msg="StartContainer for \"a6ef7b3c77ab5b988d421714de8e388903972bb2af9f0f5edbf2c49be3d71655\"" Nov 5 00:06:08.085586 containerd[1616]: time="2025-11-05T00:06:08.085563567Z" level=info msg="connecting to shim a6ef7b3c77ab5b988d421714de8e388903972bb2af9f0f5edbf2c49be3d71655" address="unix:///run/containerd/s/7b8f4405095ba5fc2727840f955c60811df3d8719b35f2319bdcac8e19868079" protocol=ttrpc version=3 Nov 5 00:06:08.107016 systemd[1]: Started cri-containerd-a6ef7b3c77ab5b988d421714de8e388903972bb2af9f0f5edbf2c49be3d71655.scope - libcontainer container a6ef7b3c77ab5b988d421714de8e388903972bb2af9f0f5edbf2c49be3d71655. Nov 5 00:06:08.133636 systemd[1]: cri-containerd-a6ef7b3c77ab5b988d421714de8e388903972bb2af9f0f5edbf2c49be3d71655.scope: Deactivated successfully. Nov 5 00:06:08.135173 containerd[1616]: time="2025-11-05T00:06:08.135096315Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a6ef7b3c77ab5b988d421714de8e388903972bb2af9f0f5edbf2c49be3d71655\" id:\"a6ef7b3c77ab5b988d421714de8e388903972bb2af9f0f5edbf2c49be3d71655\" pid:3433 exited_at:{seconds:1762301168 nanos:134456409}" Nov 5 00:06:08.135173 containerd[1616]: time="2025-11-05T00:06:08.135152751Z" level=info msg="received exit event container_id:\"a6ef7b3c77ab5b988d421714de8e388903972bb2af9f0f5edbf2c49be3d71655\" id:\"a6ef7b3c77ab5b988d421714de8e388903972bb2af9f0f5edbf2c49be3d71655\" pid:3433 exited_at:{seconds:1762301168 nanos:134456409}" Nov 5 00:06:08.142195 containerd[1616]: time="2025-11-05T00:06:08.142153196Z" level=info msg="StartContainer for \"a6ef7b3c77ab5b988d421714de8e388903972bb2af9f0f5edbf2c49be3d71655\" returns successfully" Nov 5 00:06:08.153486 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a6ef7b3c77ab5b988d421714de8e388903972bb2af9f0f5edbf2c49be3d71655-rootfs.mount: Deactivated successfully. 
Nov 5 00:06:09.066112 kubelet[2749]: E1105 00:06:09.066081 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 00:06:09.067953 containerd[1616]: time="2025-11-05T00:06:09.067833809Z" level=info msg="CreateContainer within sandbox \"54233661c7fa1220917b520c26abf3a10f292c738d79975fc391f8c01fb591b3\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Nov 5 00:06:09.088907 containerd[1616]: time="2025-11-05T00:06:09.088849859Z" level=info msg="Container 10190723ac1220f8bc3a77c75879dad90b1db2c7033e9dc2fbd1597e157886c8: CDI devices from CRI Config.CDIDevices: []" Nov 5 00:06:09.092414 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3588047536.mount: Deactivated successfully. Nov 5 00:06:09.095050 containerd[1616]: time="2025-11-05T00:06:09.095008395Z" level=info msg="CreateContainer within sandbox \"54233661c7fa1220917b520c26abf3a10f292c738d79975fc391f8c01fb591b3\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"10190723ac1220f8bc3a77c75879dad90b1db2c7033e9dc2fbd1597e157886c8\"" Nov 5 00:06:09.095439 containerd[1616]: time="2025-11-05T00:06:09.095403569Z" level=info msg="StartContainer for \"10190723ac1220f8bc3a77c75879dad90b1db2c7033e9dc2fbd1597e157886c8\"" Nov 5 00:06:09.096155 containerd[1616]: time="2025-11-05T00:06:09.096132822Z" level=info msg="connecting to shim 10190723ac1220f8bc3a77c75879dad90b1db2c7033e9dc2fbd1597e157886c8" address="unix:///run/containerd/s/7b8f4405095ba5fc2727840f955c60811df3d8719b35f2319bdcac8e19868079" protocol=ttrpc version=3 Nov 5 00:06:09.119013 systemd[1]: Started cri-containerd-10190723ac1220f8bc3a77c75879dad90b1db2c7033e9dc2fbd1597e157886c8.scope - libcontainer container 10190723ac1220f8bc3a77c75879dad90b1db2c7033e9dc2fbd1597e157886c8. Nov 5 00:06:09.151315 containerd[1616]: time="2025-11-05T00:06:09.151266429Z" level=info msg="StartContainer for \"10190723ac1220f8bc3a77c75879dad90b1db2c7033e9dc2fbd1597e157886c8\" returns successfully" Nov 5 00:06:09.215496 containerd[1616]: time="2025-11-05T00:06:09.215447511Z" level=info msg="TaskExit event in podsandbox handler container_id:\"10190723ac1220f8bc3a77c75879dad90b1db2c7033e9dc2fbd1597e157886c8\" id:\"13558edce63da3844d9693e8192aaadd685e4e094f2abfde7d037621950855f6\" pid:3501 exited_at:{seconds:1762301169 nanos:215044903}" Nov 5 00:06:09.239933 kubelet[2749]: I1105 00:06:09.239897 2749 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Nov 5 00:06:09.272753 systemd[1]: Created slice kubepods-burstable-pod4f5da3d9_d59d_4866_8bdc_7941774c154d.slice - libcontainer container kubepods-burstable-pod4f5da3d9_d59d_4866_8bdc_7941774c154d.slice. Nov 5 00:06:09.280736 systemd[1]: Created slice kubepods-burstable-pod9f0c3eef_5621_467e_812b_8a22483b5582.slice - libcontainer container kubepods-burstable-pod9f0c3eef_5621_467e_812b_8a22483b5582.slice. 
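The kubepods-*-pod*.slice units systemd creates above are named from the pod's QoS class and UID, with the UID's dashes turned into underscores because "-" is the hierarchy separator in systemd unit names. A sketch of that mapping for the burstable/besteffort forms seen in this log, using the coredns pod's UID:

    package main

    import (
        "fmt"
        "strings"
    )

    // podSliceName reproduces the naming visible in the log: the QoS class
    // prefixes the slice, and dashes in the UID become underscores because
    // "-" separates parent/child units in systemd slice names.
    func podSliceName(qosClass, uid string) string {
        return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, strings.ReplaceAll(uid, "-", "_"))
    }

    func main() {
        // UID of the coredns-668d6bf9bc-mszg7 pod created above.
        fmt.Println(podSliceName("burstable", "9f0c3eef-5621-467e-812b-8a22483b5582"))
        // Output: kubepods-burstable-pod9f0c3eef_5621_467e_812b_8a22483b5582.slice
    }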
Nov 5 00:06:09.325770 kubelet[2749]: I1105 00:06:09.325675 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9f0c3eef-5621-467e-812b-8a22483b5582-config-volume\") pod \"coredns-668d6bf9bc-mszg7\" (UID: \"9f0c3eef-5621-467e-812b-8a22483b5582\") " pod="kube-system/coredns-668d6bf9bc-mszg7" Nov 5 00:06:09.325770 kubelet[2749]: I1105 00:06:09.325704 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4f5da3d9-d59d-4866-8bdc-7941774c154d-config-volume\") pod \"coredns-668d6bf9bc-vzb5w\" (UID: \"4f5da3d9-d59d-4866-8bdc-7941774c154d\") " pod="kube-system/coredns-668d6bf9bc-vzb5w" Nov 5 00:06:09.325770 kubelet[2749]: I1105 00:06:09.325724 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ls59\" (UniqueName: \"kubernetes.io/projected/9f0c3eef-5621-467e-812b-8a22483b5582-kube-api-access-7ls59\") pod \"coredns-668d6bf9bc-mszg7\" (UID: \"9f0c3eef-5621-467e-812b-8a22483b5582\") " pod="kube-system/coredns-668d6bf9bc-mszg7" Nov 5 00:06:09.325770 kubelet[2749]: I1105 00:06:09.325742 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtj87\" (UniqueName: \"kubernetes.io/projected/4f5da3d9-d59d-4866-8bdc-7941774c154d-kube-api-access-jtj87\") pod \"coredns-668d6bf9bc-vzb5w\" (UID: \"4f5da3d9-d59d-4866-8bdc-7941774c154d\") " pod="kube-system/coredns-668d6bf9bc-vzb5w" Nov 5 00:06:09.576348 kubelet[2749]: E1105 00:06:09.576234 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 00:06:09.577150 containerd[1616]: time="2025-11-05T00:06:09.577107083Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-vzb5w,Uid:4f5da3d9-d59d-4866-8bdc-7941774c154d,Namespace:kube-system,Attempt:0,}" Nov 5 00:06:09.584181 kubelet[2749]: E1105 00:06:09.584148 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 00:06:09.584653 containerd[1616]: time="2025-11-05T00:06:09.584610841Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-mszg7,Uid:9f0c3eef-5621-467e-812b-8a22483b5582,Namespace:kube-system,Attempt:0,}" Nov 5 00:06:10.072742 kubelet[2749]: E1105 00:06:10.072710 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 00:06:10.087295 kubelet[2749]: I1105 00:06:10.086946 2749 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-p7wtc" podStartSLOduration=7.482850696 podStartE2EDuration="34.086857563s" podCreationTimestamp="2025-11-05 00:05:36 +0000 UTC" firstStartedPulling="2025-11-05 00:05:37.46872165 +0000 UTC m=+6.562982190" lastFinishedPulling="2025-11-05 00:06:04.072728517 +0000 UTC m=+33.166989057" observedRunningTime="2025-11-05 00:06:10.086285886 +0000 UTC m=+39.180546426" watchObservedRunningTime="2025-11-05 00:06:10.086857563 +0000 UTC m=+39.181118103" Nov 5 00:06:11.073896 kubelet[2749]: E1105 00:06:11.073840 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 00:06:12.075543 kubelet[2749]: E1105 00:06:12.075502 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 00:06:12.105252 systemd-networkd[1511]: cilium_host: Link UP Nov 5 00:06:12.105784 systemd-networkd[1511]: cilium_net: Link UP Nov 5 00:06:12.105983 systemd-networkd[1511]: cilium_net: Gained carrier Nov 5 00:06:12.106161 systemd-networkd[1511]: cilium_host: Gained carrier Nov 5 00:06:12.202317 systemd-networkd[1511]: cilium_vxlan: Link UP Nov 5 00:06:12.202325 systemd-networkd[1511]: cilium_vxlan: Gained carrier Nov 5 00:06:12.258398 systemd[1]: Started sshd@10-10.0.0.3:22-10.0.0.1:58508.service - OpenSSH per-connection server daemon (10.0.0.1:58508). Nov 5 00:06:12.315044 sshd[3685]: Accepted publickey for core from 10.0.0.1 port 58508 ssh2: RSA SHA256:76q7OxBnofV7CHovkMIRFzt50oWq5VGuNm0JMuzHE4A Nov 5 00:06:12.316341 sshd-session[3685]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 00:06:12.320424 systemd-logind[1590]: New session 11 of user core. Nov 5 00:06:12.330040 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 5 00:06:12.408908 kernel: NET: Registered PF_ALG protocol family Nov 5 00:06:12.445405 sshd[3688]: Connection closed by 10.0.0.1 port 58508 Nov 5 00:06:12.445707 sshd-session[3685]: pam_unix(sshd:session): session closed for user core Nov 5 00:06:12.458539 systemd[1]: sshd@10-10.0.0.3:22-10.0.0.1:58508.service: Deactivated successfully. Nov 5 00:06:12.460399 systemd[1]: session-11.scope: Deactivated successfully. Nov 5 00:06:12.461099 systemd-logind[1590]: Session 11 logged out. Waiting for processes to exit. Nov 5 00:06:12.464049 systemd[1]: Started sshd@11-10.0.0.3:22-10.0.0.1:58510.service - OpenSSH per-connection server daemon (10.0.0.1:58510). Nov 5 00:06:12.465003 systemd-logind[1590]: Removed session 11. Nov 5 00:06:12.514130 sshd[3725]: Accepted publickey for core from 10.0.0.1 port 58510 ssh2: RSA SHA256:76q7OxBnofV7CHovkMIRFzt50oWq5VGuNm0JMuzHE4A Nov 5 00:06:12.515506 sshd-session[3725]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 00:06:12.519695 systemd-logind[1590]: New session 12 of user core. Nov 5 00:06:12.530023 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 5 00:06:12.680551 sshd[3728]: Connection closed by 10.0.0.1 port 58510 Nov 5 00:06:12.681423 sshd-session[3725]: pam_unix(sshd:session): session closed for user core Nov 5 00:06:12.690925 systemd[1]: sshd@11-10.0.0.3:22-10.0.0.1:58510.service: Deactivated successfully. Nov 5 00:06:12.693624 systemd[1]: session-12.scope: Deactivated successfully. Nov 5 00:06:12.696246 systemd-logind[1590]: Session 12 logged out. Waiting for processes to exit. Nov 5 00:06:12.699491 systemd-logind[1590]: Removed session 12. Nov 5 00:06:12.703119 systemd[1]: Started sshd@12-10.0.0.3:22-10.0.0.1:58526.service - OpenSSH per-connection server daemon (10.0.0.1:58526). Nov 5 00:06:12.751849 sshd[3810]: Accepted publickey for core from 10.0.0.1 port 58526 ssh2: RSA SHA256:76q7OxBnofV7CHovkMIRFzt50oWq5VGuNm0JMuzHE4A Nov 5 00:06:12.753092 sshd-session[3810]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 00:06:12.757398 systemd-logind[1590]: New session 13 of user core. Nov 5 00:06:12.764997 systemd[1]: Started session-13.scope - Session 13 of User core. 
Nov 5 00:06:12.845133 systemd-networkd[1511]: cilium_net: Gained IPv6LL Nov 5 00:06:12.845419 systemd-networkd[1511]: cilium_host: Gained IPv6LL Nov 5 00:06:12.878718 sshd[3844]: Connection closed by 10.0.0.1 port 58526 Nov 5 00:06:12.879033 sshd-session[3810]: pam_unix(sshd:session): session closed for user core Nov 5 00:06:12.883595 systemd[1]: sshd@12-10.0.0.3:22-10.0.0.1:58526.service: Deactivated successfully. Nov 5 00:06:12.885533 systemd[1]: session-13.scope: Deactivated successfully. Nov 5 00:06:12.886415 systemd-logind[1590]: Session 13 logged out. Waiting for processes to exit. Nov 5 00:06:12.888016 systemd-logind[1590]: Removed session 13. Nov 5 00:06:13.056993 systemd-networkd[1511]: lxc_health: Link UP Nov 5 00:06:13.058780 systemd-networkd[1511]: lxc_health: Gained carrier Nov 5 00:06:13.163910 kernel: eth0: renamed from tmp53674 Nov 5 00:06:13.168850 systemd-networkd[1511]: lxc23e178652273: Link UP Nov 5 00:06:13.171920 systemd-networkd[1511]: lxc23e178652273: Gained carrier Nov 5 00:06:13.172127 systemd-networkd[1511]: lxc294060c0d4be: Link UP Nov 5 00:06:13.181958 kernel: eth0: renamed from tmpd0688 Nov 5 00:06:13.183794 systemd-networkd[1511]: lxc294060c0d4be: Gained carrier Nov 5 00:06:13.357409 kubelet[2749]: E1105 00:06:13.357289 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 00:06:13.423003 systemd-networkd[1511]: cilium_vxlan: Gained IPv6LL Nov 5 00:06:14.077967 kubelet[2749]: E1105 00:06:14.077931 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 00:06:14.189992 systemd-networkd[1511]: lxc_health: Gained IPv6LL Nov 5 00:06:14.381168 systemd-networkd[1511]: lxc294060c0d4be: Gained IPv6LL Nov 5 00:06:15.079909 kubelet[2749]: E1105 00:06:15.079087 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 00:06:15.213022 systemd-networkd[1511]: lxc23e178652273: Gained IPv6LL Nov 5 00:06:16.356015 containerd[1616]: time="2025-11-05T00:06:16.355973115Z" level=info msg="connecting to shim d068840eba5cfdfcd83e3f4e1a48b0ad1a47ec4752d3b9ee1d65b071ff10bcb1" address="unix:///run/containerd/s/4200f817f5fe0fd0be7cf8a55cb333337bcdad7e29bbee562ac5c0a1eb2459ba" namespace=k8s.io protocol=ttrpc version=3 Nov 5 00:06:16.357943 containerd[1616]: time="2025-11-05T00:06:16.357859691Z" level=info msg="connecting to shim 536743f46788cd4a49db353712a603233369f1e795698b17529fa3428b1afbc6" address="unix:///run/containerd/s/afffb35c311f4b3c803a3bb95e5039b1c07c17d08f20ef825d4c2256585a7160" namespace=k8s.io protocol=ttrpc version=3 Nov 5 00:06:16.384995 systemd[1]: Started cri-containerd-d068840eba5cfdfcd83e3f4e1a48b0ad1a47ec4752d3b9ee1d65b071ff10bcb1.scope - libcontainer container d068840eba5cfdfcd83e3f4e1a48b0ad1a47ec4752d3b9ee1d65b071ff10bcb1. Nov 5 00:06:16.393440 systemd[1]: Started cri-containerd-536743f46788cd4a49db353712a603233369f1e795698b17529fa3428b1afbc6.scope - libcontainer container 536743f46788cd4a49db353712a603233369f1e795698b17529fa3428b1afbc6. 
Nov 5 00:06:16.401837 systemd-resolved[1303]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 5 00:06:16.407680 systemd-resolved[1303]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 5 00:06:16.432728 containerd[1616]: time="2025-11-05T00:06:16.432690843Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-vzb5w,Uid:4f5da3d9-d59d-4866-8bdc-7941774c154d,Namespace:kube-system,Attempt:0,} returns sandbox id \"d068840eba5cfdfcd83e3f4e1a48b0ad1a47ec4752d3b9ee1d65b071ff10bcb1\"" Nov 5 00:06:16.433391 kubelet[2749]: E1105 00:06:16.433360 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 00:06:16.445433 containerd[1616]: time="2025-11-05T00:06:16.445396799Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-mszg7,Uid:9f0c3eef-5621-467e-812b-8a22483b5582,Namespace:kube-system,Attempt:0,} returns sandbox id \"536743f46788cd4a49db353712a603233369f1e795698b17529fa3428b1afbc6\"" Nov 5 00:06:16.446184 kubelet[2749]: E1105 00:06:16.446164 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 00:06:16.447756 containerd[1616]: time="2025-11-05T00:06:16.447734174Z" level=info msg="CreateContainer within sandbox \"536743f46788cd4a49db353712a603233369f1e795698b17529fa3428b1afbc6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 5 00:06:16.448681 containerd[1616]: time="2025-11-05T00:06:16.448479474Z" level=info msg="CreateContainer within sandbox \"d068840eba5cfdfcd83e3f4e1a48b0ad1a47ec4752d3b9ee1d65b071ff10bcb1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 5 00:06:16.463991 containerd[1616]: time="2025-11-05T00:06:16.463955710Z" level=info msg="Container 5d9f6ddc5a0d7089053cd8c01e461c6ad04b7d10993a8304c8d5c4b22c496d30: CDI devices from CRI Config.CDIDevices: []" Nov 5 00:06:16.467424 containerd[1616]: time="2025-11-05T00:06:16.467384215Z" level=info msg="Container 62f5ef6d59fe7698d3def047de8a19ef32698c9187f2778fa05dca73e9f445a8: CDI devices from CRI Config.CDIDevices: []" Nov 5 00:06:16.470528 containerd[1616]: time="2025-11-05T00:06:16.470490986Z" level=info msg="CreateContainer within sandbox \"d068840eba5cfdfcd83e3f4e1a48b0ad1a47ec4752d3b9ee1d65b071ff10bcb1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5d9f6ddc5a0d7089053cd8c01e461c6ad04b7d10993a8304c8d5c4b22c496d30\"" Nov 5 00:06:16.470969 containerd[1616]: time="2025-11-05T00:06:16.470939209Z" level=info msg="StartContainer for \"5d9f6ddc5a0d7089053cd8c01e461c6ad04b7d10993a8304c8d5c4b22c496d30\"" Nov 5 00:06:16.471662 containerd[1616]: time="2025-11-05T00:06:16.471634846Z" level=info msg="connecting to shim 5d9f6ddc5a0d7089053cd8c01e461c6ad04b7d10993a8304c8d5c4b22c496d30" address="unix:///run/containerd/s/4200f817f5fe0fd0be7cf8a55cb333337bcdad7e29bbee562ac5c0a1eb2459ba" protocol=ttrpc version=3 Nov 5 00:06:16.473743 containerd[1616]: time="2025-11-05T00:06:16.473708665Z" level=info msg="CreateContainer within sandbox \"536743f46788cd4a49db353712a603233369f1e795698b17529fa3428b1afbc6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"62f5ef6d59fe7698d3def047de8a19ef32698c9187f2778fa05dca73e9f445a8\"" Nov 5 00:06:16.474123 containerd[1616]: 
time="2025-11-05T00:06:16.474089882Z" level=info msg="StartContainer for \"62f5ef6d59fe7698d3def047de8a19ef32698c9187f2778fa05dca73e9f445a8\"" Nov 5 00:06:16.474798 containerd[1616]: time="2025-11-05T00:06:16.474760383Z" level=info msg="connecting to shim 62f5ef6d59fe7698d3def047de8a19ef32698c9187f2778fa05dca73e9f445a8" address="unix:///run/containerd/s/afffb35c311f4b3c803a3bb95e5039b1c07c17d08f20ef825d4c2256585a7160" protocol=ttrpc version=3 Nov 5 00:06:16.489011 systemd[1]: Started cri-containerd-5d9f6ddc5a0d7089053cd8c01e461c6ad04b7d10993a8304c8d5c4b22c496d30.scope - libcontainer container 5d9f6ddc5a0d7089053cd8c01e461c6ad04b7d10993a8304c8d5c4b22c496d30. Nov 5 00:06:16.492694 systemd[1]: Started cri-containerd-62f5ef6d59fe7698d3def047de8a19ef32698c9187f2778fa05dca73e9f445a8.scope - libcontainer container 62f5ef6d59fe7698d3def047de8a19ef32698c9187f2778fa05dca73e9f445a8. Nov 5 00:06:16.523135 containerd[1616]: time="2025-11-05T00:06:16.523082637Z" level=info msg="StartContainer for \"5d9f6ddc5a0d7089053cd8c01e461c6ad04b7d10993a8304c8d5c4b22c496d30\" returns successfully" Nov 5 00:06:16.529783 containerd[1616]: time="2025-11-05T00:06:16.529754791Z" level=info msg="StartContainer for \"62f5ef6d59fe7698d3def047de8a19ef32698c9187f2778fa05dca73e9f445a8\" returns successfully" Nov 5 00:06:17.086980 kubelet[2749]: E1105 00:06:17.086941 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 00:06:17.089136 kubelet[2749]: E1105 00:06:17.089043 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 00:06:17.097938 kubelet[2749]: I1105 00:06:17.097843 2749 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-vzb5w" podStartSLOduration=41.097829811 podStartE2EDuration="41.097829811s" podCreationTimestamp="2025-11-05 00:05:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 00:06:17.097413919 +0000 UTC m=+46.191674459" watchObservedRunningTime="2025-11-05 00:06:17.097829811 +0000 UTC m=+46.192090351" Nov 5 00:06:17.115478 kubelet[2749]: I1105 00:06:17.115410 2749 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-mszg7" podStartSLOduration=41.11539087 podStartE2EDuration="41.11539087s" podCreationTimestamp="2025-11-05 00:05:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 00:06:17.10591623 +0000 UTC m=+46.200176770" watchObservedRunningTime="2025-11-05 00:06:17.11539087 +0000 UTC m=+46.209651410" Nov 5 00:06:17.893632 systemd[1]: Started sshd@13-10.0.0.3:22-10.0.0.1:33876.service - OpenSSH per-connection server daemon (10.0.0.1:33876). Nov 5 00:06:17.943288 sshd[4192]: Accepted publickey for core from 10.0.0.1 port 33876 ssh2: RSA SHA256:76q7OxBnofV7CHovkMIRFzt50oWq5VGuNm0JMuzHE4A Nov 5 00:06:17.944669 sshd-session[4192]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 00:06:17.949501 systemd-logind[1590]: New session 14 of user core. Nov 5 00:06:17.964004 systemd[1]: Started session-14.scope - Session 14 of User core. 
Nov 5 00:06:18.082443 sshd[4195]: Connection closed by 10.0.0.1 port 33876 Nov 5 00:06:18.082773 sshd-session[4192]: pam_unix(sshd:session): session closed for user core Nov 5 00:06:18.087378 systemd[1]: sshd@13-10.0.0.3:22-10.0.0.1:33876.service: Deactivated successfully. Nov 5 00:06:18.089385 systemd[1]: session-14.scope: Deactivated successfully. Nov 5 00:06:18.091038 systemd-logind[1590]: Session 14 logged out. Waiting for processes to exit. Nov 5 00:06:18.091322 kubelet[2749]: E1105 00:06:18.091273 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 00:06:18.091954 kubelet[2749]: E1105 00:06:18.091924 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 00:06:18.092307 systemd-logind[1590]: Removed session 14. Nov 5 00:06:19.093240 kubelet[2749]: E1105 00:06:19.092969 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 00:06:19.093240 kubelet[2749]: E1105 00:06:19.093101 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 00:06:23.094520 systemd[1]: Started sshd@14-10.0.0.3:22-10.0.0.1:46840.service - OpenSSH per-connection server daemon (10.0.0.1:46840). Nov 5 00:06:23.149501 sshd[4211]: Accepted publickey for core from 10.0.0.1 port 46840 ssh2: RSA SHA256:76q7OxBnofV7CHovkMIRFzt50oWq5VGuNm0JMuzHE4A Nov 5 00:06:23.150652 sshd-session[4211]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 00:06:23.154534 systemd-logind[1590]: New session 15 of user core. Nov 5 00:06:23.164004 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 5 00:06:23.271130 sshd[4214]: Connection closed by 10.0.0.1 port 46840 Nov 5 00:06:23.271444 sshd-session[4211]: pam_unix(sshd:session): session closed for user core Nov 5 00:06:23.280561 systemd[1]: sshd@14-10.0.0.3:22-10.0.0.1:46840.service: Deactivated successfully. Nov 5 00:06:23.282422 systemd[1]: session-15.scope: Deactivated successfully. Nov 5 00:06:23.283194 systemd-logind[1590]: Session 15 logged out. Waiting for processes to exit. Nov 5 00:06:23.286403 systemd[1]: Started sshd@15-10.0.0.3:22-10.0.0.1:46854.service - OpenSSH per-connection server daemon (10.0.0.1:46854). Nov 5 00:06:23.287078 systemd-logind[1590]: Removed session 15. Nov 5 00:06:23.345116 sshd[4227]: Accepted publickey for core from 10.0.0.1 port 46854 ssh2: RSA SHA256:76q7OxBnofV7CHovkMIRFzt50oWq5VGuNm0JMuzHE4A Nov 5 00:06:23.346221 sshd-session[4227]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 00:06:23.350253 systemd-logind[1590]: New session 16 of user core. Nov 5 00:06:23.365037 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 5 00:06:23.545335 sshd[4230]: Connection closed by 10.0.0.1 port 46854 Nov 5 00:06:23.545808 sshd-session[4227]: pam_unix(sshd:session): session closed for user core Nov 5 00:06:23.559659 systemd[1]: sshd@15-10.0.0.3:22-10.0.0.1:46854.service: Deactivated successfully. Nov 5 00:06:23.561551 systemd[1]: session-16.scope: Deactivated successfully. Nov 5 00:06:23.562359 systemd-logind[1590]: Session 16 logged out. 
Waiting for processes to exit. Nov 5 00:06:23.565339 systemd[1]: Started sshd@16-10.0.0.3:22-10.0.0.1:46868.service - OpenSSH per-connection server daemon (10.0.0.1:46868). Nov 5 00:06:23.566122 systemd-logind[1590]: Removed session 16. Nov 5 00:06:23.627191 sshd[4242]: Accepted publickey for core from 10.0.0.1 port 46868 ssh2: RSA SHA256:76q7OxBnofV7CHovkMIRFzt50oWq5VGuNm0JMuzHE4A Nov 5 00:06:23.628682 sshd-session[4242]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 00:06:23.632897 systemd-logind[1590]: New session 17 of user core. Nov 5 00:06:23.651005 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 5 00:06:24.264604 sshd[4245]: Connection closed by 10.0.0.1 port 46868 Nov 5 00:06:24.264981 sshd-session[4242]: pam_unix(sshd:session): session closed for user core Nov 5 00:06:24.275801 systemd[1]: sshd@16-10.0.0.3:22-10.0.0.1:46868.service: Deactivated successfully. Nov 5 00:06:24.278557 systemd[1]: session-17.scope: Deactivated successfully. Nov 5 00:06:24.281052 systemd-logind[1590]: Session 17 logged out. Waiting for processes to exit. Nov 5 00:06:24.285648 systemd[1]: Started sshd@17-10.0.0.3:22-10.0.0.1:46880.service - OpenSSH per-connection server daemon (10.0.0.1:46880). Nov 5 00:06:24.287048 systemd-logind[1590]: Removed session 17. Nov 5 00:06:24.336745 sshd[4266]: Accepted publickey for core from 10.0.0.1 port 46880 ssh2: RSA SHA256:76q7OxBnofV7CHovkMIRFzt50oWq5VGuNm0JMuzHE4A Nov 5 00:06:24.337970 sshd-session[4266]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 00:06:24.342190 systemd-logind[1590]: New session 18 of user core. Nov 5 00:06:24.357024 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 5 00:06:24.559981 sshd[4269]: Connection closed by 10.0.0.1 port 46880 Nov 5 00:06:24.560498 sshd-session[4266]: pam_unix(sshd:session): session closed for user core Nov 5 00:06:24.570577 systemd[1]: sshd@17-10.0.0.3:22-10.0.0.1:46880.service: Deactivated successfully. Nov 5 00:06:24.572526 systemd[1]: session-18.scope: Deactivated successfully. Nov 5 00:06:24.573388 systemd-logind[1590]: Session 18 logged out. Waiting for processes to exit. Nov 5 00:06:24.576230 systemd[1]: Started sshd@18-10.0.0.3:22-10.0.0.1:46892.service - OpenSSH per-connection server daemon (10.0.0.1:46892). Nov 5 00:06:24.576941 systemd-logind[1590]: Removed session 18. Nov 5 00:06:24.625493 sshd[4280]: Accepted publickey for core from 10.0.0.1 port 46892 ssh2: RSA SHA256:76q7OxBnofV7CHovkMIRFzt50oWq5VGuNm0JMuzHE4A Nov 5 00:06:24.626689 sshd-session[4280]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 00:06:24.630903 systemd-logind[1590]: New session 19 of user core. Nov 5 00:06:24.638003 systemd[1]: Started session-19.scope - Session 19 of User core. Nov 5 00:06:24.745008 sshd[4283]: Connection closed by 10.0.0.1 port 46892 Nov 5 00:06:24.745324 sshd-session[4280]: pam_unix(sshd:session): session closed for user core Nov 5 00:06:24.749747 systemd[1]: sshd@18-10.0.0.3:22-10.0.0.1:46892.service: Deactivated successfully. Nov 5 00:06:24.751714 systemd[1]: session-19.scope: Deactivated successfully. Nov 5 00:06:24.752633 systemd-logind[1590]: Session 19 logged out. Waiting for processes to exit. Nov 5 00:06:24.753642 systemd-logind[1590]: Removed session 19. Nov 5 00:06:29.761790 systemd[1]: Started sshd@19-10.0.0.3:22-10.0.0.1:46898.service - OpenSSH per-connection server daemon (10.0.0.1:46898). 
Nov 5 00:06:29.814594 sshd[4298]: Accepted publickey for core from 10.0.0.1 port 46898 ssh2: RSA SHA256:76q7OxBnofV7CHovkMIRFzt50oWq5VGuNm0JMuzHE4A Nov 5 00:06:29.815806 sshd-session[4298]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 00:06:29.819922 systemd-logind[1590]: New session 20 of user core. Nov 5 00:06:29.831999 systemd[1]: Started session-20.scope - Session 20 of User core. Nov 5 00:06:29.934918 sshd[4301]: Connection closed by 10.0.0.1 port 46898 Nov 5 00:06:29.935249 sshd-session[4298]: pam_unix(sshd:session): session closed for user core Nov 5 00:06:29.939252 systemd[1]: sshd@19-10.0.0.3:22-10.0.0.1:46898.service: Deactivated successfully. Nov 5 00:06:29.942318 systemd[1]: session-20.scope: Deactivated successfully. Nov 5 00:06:29.943265 systemd-logind[1590]: Session 20 logged out. Waiting for processes to exit. Nov 5 00:06:29.944373 systemd-logind[1590]: Removed session 20. Nov 5 00:06:34.953337 systemd[1]: Started sshd@20-10.0.0.3:22-10.0.0.1:51468.service - OpenSSH per-connection server daemon (10.0.0.1:51468). Nov 5 00:06:35.003926 sshd[4316]: Accepted publickey for core from 10.0.0.1 port 51468 ssh2: RSA SHA256:76q7OxBnofV7CHovkMIRFzt50oWq5VGuNm0JMuzHE4A Nov 5 00:06:35.005418 sshd-session[4316]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 00:06:35.009576 systemd-logind[1590]: New session 21 of user core. Nov 5 00:06:35.016000 systemd[1]: Started session-21.scope - Session 21 of User core. Nov 5 00:06:35.118600 sshd[4319]: Connection closed by 10.0.0.1 port 51468 Nov 5 00:06:35.118983 sshd-session[4316]: pam_unix(sshd:session): session closed for user core Nov 5 00:06:35.123252 systemd[1]: sshd@20-10.0.0.3:22-10.0.0.1:51468.service: Deactivated successfully. Nov 5 00:06:35.125339 systemd[1]: session-21.scope: Deactivated successfully. Nov 5 00:06:35.127186 systemd-logind[1590]: Session 21 logged out. Waiting for processes to exit. Nov 5 00:06:35.128524 systemd-logind[1590]: Removed session 21. Nov 5 00:06:40.134584 systemd[1]: Started sshd@21-10.0.0.3:22-10.0.0.1:51472.service - OpenSSH per-connection server daemon (10.0.0.1:51472). Nov 5 00:06:40.185986 sshd[4334]: Accepted publickey for core from 10.0.0.1 port 51472 ssh2: RSA SHA256:76q7OxBnofV7CHovkMIRFzt50oWq5VGuNm0JMuzHE4A Nov 5 00:06:40.187139 sshd-session[4334]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 00:06:40.191275 systemd-logind[1590]: New session 22 of user core. Nov 5 00:06:40.201011 systemd[1]: Started session-22.scope - Session 22 of User core. Nov 5 00:06:40.306985 sshd[4337]: Connection closed by 10.0.0.1 port 51472 Nov 5 00:06:40.307301 sshd-session[4334]: pam_unix(sshd:session): session closed for user core Nov 5 00:06:40.311978 systemd[1]: sshd@21-10.0.0.3:22-10.0.0.1:51472.service: Deactivated successfully. Nov 5 00:06:40.313987 systemd[1]: session-22.scope: Deactivated successfully. Nov 5 00:06:40.314752 systemd-logind[1590]: Session 22 logged out. Waiting for processes to exit. Nov 5 00:06:40.315995 systemd-logind[1590]: Removed session 22. Nov 5 00:06:45.323585 systemd[1]: Started sshd@22-10.0.0.3:22-10.0.0.1:36526.service - OpenSSH per-connection server daemon (10.0.0.1:36526). 
Nov 5 00:06:45.376825 sshd[4351]: Accepted publickey for core from 10.0.0.1 port 36526 ssh2: RSA SHA256:76q7OxBnofV7CHovkMIRFzt50oWq5VGuNm0JMuzHE4A Nov 5 00:06:45.378529 sshd-session[4351]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 00:06:45.382638 systemd-logind[1590]: New session 23 of user core. Nov 5 00:06:45.392003 systemd[1]: Started session-23.scope - Session 23 of User core. Nov 5 00:06:45.497763 sshd[4354]: Connection closed by 10.0.0.1 port 36526 Nov 5 00:06:45.498115 sshd-session[4351]: pam_unix(sshd:session): session closed for user core Nov 5 00:06:45.512548 systemd[1]: sshd@22-10.0.0.3:22-10.0.0.1:36526.service: Deactivated successfully. Nov 5 00:06:45.514423 systemd[1]: session-23.scope: Deactivated successfully. Nov 5 00:06:45.515225 systemd-logind[1590]: Session 23 logged out. Waiting for processes to exit. Nov 5 00:06:45.518437 systemd[1]: Started sshd@23-10.0.0.3:22-10.0.0.1:36538.service - OpenSSH per-connection server daemon (10.0.0.1:36538). Nov 5 00:06:45.519070 systemd-logind[1590]: Removed session 23. Nov 5 00:06:45.578921 sshd[4367]: Accepted publickey for core from 10.0.0.1 port 36538 ssh2: RSA SHA256:76q7OxBnofV7CHovkMIRFzt50oWq5VGuNm0JMuzHE4A Nov 5 00:06:45.580510 sshd-session[4367]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 00:06:45.584683 systemd-logind[1590]: New session 24 of user core. Nov 5 00:06:45.597988 systemd[1]: Started session-24.scope - Session 24 of User core. Nov 5 00:06:46.914802 containerd[1616]: time="2025-11-05T00:06:46.914744368Z" level=info msg="StopContainer for \"101ca1e0ced8c982d5e1c0bb6173f0b73975d9397dab99f025600143dc61125e\" with timeout 30 (s)" Nov 5 00:06:46.922376 containerd[1616]: time="2025-11-05T00:06:46.922342690Z" level=info msg="Stop container \"101ca1e0ced8c982d5e1c0bb6173f0b73975d9397dab99f025600143dc61125e\" with signal terminated" Nov 5 00:06:46.934748 systemd[1]: cri-containerd-101ca1e0ced8c982d5e1c0bb6173f0b73975d9397dab99f025600143dc61125e.scope: Deactivated successfully. 
Nov 5 00:06:46.936740 containerd[1616]: time="2025-11-05T00:06:46.936548937Z" level=info msg="received exit event container_id:\"101ca1e0ced8c982d5e1c0bb6173f0b73975d9397dab99f025600143dc61125e\" id:\"101ca1e0ced8c982d5e1c0bb6173f0b73975d9397dab99f025600143dc61125e\" pid:3161 exited_at:{seconds:1762301206 nanos:936238256}" Nov 5 00:06:46.937648 containerd[1616]: time="2025-11-05T00:06:46.937619300Z" level=info msg="TaskExit event in podsandbox handler container_id:\"101ca1e0ced8c982d5e1c0bb6173f0b73975d9397dab99f025600143dc61125e\" id:\"101ca1e0ced8c982d5e1c0bb6173f0b73975d9397dab99f025600143dc61125e\" pid:3161 exited_at:{seconds:1762301206 nanos:936238256}" Nov 5 00:06:46.956522 containerd[1616]: time="2025-11-05T00:06:46.956470881Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 5 00:06:46.960324 containerd[1616]: time="2025-11-05T00:06:46.960162790Z" level=info msg="StopContainer for \"10190723ac1220f8bc3a77c75879dad90b1db2c7033e9dc2fbd1597e157886c8\" with timeout 2 (s)" Nov 5 00:06:46.961186 containerd[1616]: time="2025-11-05T00:06:46.961153458Z" level=info msg="Stop container \"10190723ac1220f8bc3a77c75879dad90b1db2c7033e9dc2fbd1597e157886c8\" with signal terminated" Nov 5 00:06:46.961765 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-101ca1e0ced8c982d5e1c0bb6173f0b73975d9397dab99f025600143dc61125e-rootfs.mount: Deactivated successfully. Nov 5 00:06:46.967893 systemd-networkd[1511]: lxc_health: Link DOWN Nov 5 00:06:46.967903 systemd-networkd[1511]: lxc_health: Lost carrier Nov 5 00:06:46.975185 containerd[1616]: time="2025-11-05T00:06:46.975152274Z" level=info msg="TaskExit event in podsandbox handler container_id:\"10190723ac1220f8bc3a77c75879dad90b1db2c7033e9dc2fbd1597e157886c8\" id:\"81d161ef122f2361ccf24a5951fce75e28127de0ffadce3555251f1f5bc963ed\" pid:4396 exited_at:{seconds:1762301206 nanos:957706694}" Nov 5 00:06:46.984616 containerd[1616]: time="2025-11-05T00:06:46.984559178Z" level=info msg="StopContainer for \"101ca1e0ced8c982d5e1c0bb6173f0b73975d9397dab99f025600143dc61125e\" returns successfully" Nov 5 00:06:46.986991 systemd[1]: cri-containerd-10190723ac1220f8bc3a77c75879dad90b1db2c7033e9dc2fbd1597e157886c8.scope: Deactivated successfully. Nov 5 00:06:46.987344 systemd[1]: cri-containerd-10190723ac1220f8bc3a77c75879dad90b1db2c7033e9dc2fbd1597e157886c8.scope: Consumed 6.033s CPU time, 127.2M memory peak, 144K read from disk, 13.3M written to disk. 
Nov 5 00:06:46.990087 containerd[1616]: time="2025-11-05T00:06:46.990055850Z" level=info msg="StopPodSandbox for \"5833d8798378b70dd852e137161dbc03e5366be14172651e85471179576f64a6\"" Nov 5 00:06:46.990145 containerd[1616]: time="2025-11-05T00:06:46.990113202Z" level=info msg="Container to stop \"101ca1e0ced8c982d5e1c0bb6173f0b73975d9397dab99f025600143dc61125e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 5 00:06:46.990367 containerd[1616]: time="2025-11-05T00:06:46.990346874Z" level=info msg="received exit event container_id:\"10190723ac1220f8bc3a77c75879dad90b1db2c7033e9dc2fbd1597e157886c8\" id:\"10190723ac1220f8bc3a77c75879dad90b1db2c7033e9dc2fbd1597e157886c8\" pid:3470 exited_at:{seconds:1762301206 nanos:989979042}" Nov 5 00:06:46.990466 containerd[1616]: time="2025-11-05T00:06:46.990442539Z" level=info msg="TaskExit event in podsandbox handler container_id:\"10190723ac1220f8bc3a77c75879dad90b1db2c7033e9dc2fbd1597e157886c8\" id:\"10190723ac1220f8bc3a77c75879dad90b1db2c7033e9dc2fbd1597e157886c8\" pid:3470 exited_at:{seconds:1762301206 nanos:989979042}" Nov 5 00:06:46.996975 systemd[1]: cri-containerd-5833d8798378b70dd852e137161dbc03e5366be14172651e85471179576f64a6.scope: Deactivated successfully. Nov 5 00:06:46.999163 containerd[1616]: time="2025-11-05T00:06:46.999125822Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5833d8798378b70dd852e137161dbc03e5366be14172651e85471179576f64a6\" id:\"5833d8798378b70dd852e137161dbc03e5366be14172651e85471179576f64a6\" pid:2910 exit_status:137 exited_at:{seconds:1762301206 nanos:998736208}" Nov 5 00:06:47.010973 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-10190723ac1220f8bc3a77c75879dad90b1db2c7033e9dc2fbd1597e157886c8-rootfs.mount: Deactivated successfully. 
Nov 5 00:06:47.021251 containerd[1616]: time="2025-11-05T00:06:47.021214056Z" level=info msg="StopContainer for \"10190723ac1220f8bc3a77c75879dad90b1db2c7033e9dc2fbd1597e157886c8\" returns successfully" Nov 5 00:06:47.021796 containerd[1616]: time="2025-11-05T00:06:47.021700548Z" level=info msg="StopPodSandbox for \"54233661c7fa1220917b520c26abf3a10f292c738d79975fc391f8c01fb591b3\"" Nov 5 00:06:47.021796 containerd[1616]: time="2025-11-05T00:06:47.021774872Z" level=info msg="Container to stop \"7e93bee5a254c9c4de9888652c40d878f972af256e6ec00e1b9982f3cdaa2fed\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 5 00:06:47.021904 containerd[1616]: time="2025-11-05T00:06:47.021890686Z" level=info msg="Container to stop \"068b9e0c35cd2ab9338698eaf3775a5e203b28a64a9a99c40f330fff3b91140c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 5 00:06:47.021984 containerd[1616]: time="2025-11-05T00:06:47.021971122Z" level=info msg="Container to stop \"a6ef7b3c77ab5b988d421714de8e388903972bb2af9f0f5edbf2c49be3d71655\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 5 00:06:47.022057 containerd[1616]: time="2025-11-05T00:06:47.022043642Z" level=info msg="Container to stop \"cb94a9f014c00a7deb50ae755062b4130c50cd2da08a9d7e47e7d2c1d8d81ee6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 5 00:06:47.022110 containerd[1616]: time="2025-11-05T00:06:47.022096634Z" level=info msg="Container to stop \"10190723ac1220f8bc3a77c75879dad90b1db2c7033e9dc2fbd1597e157886c8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 5 00:06:47.028777 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5833d8798378b70dd852e137161dbc03e5366be14172651e85471179576f64a6-rootfs.mount: Deactivated successfully. Nov 5 00:06:47.029403 systemd[1]: cri-containerd-54233661c7fa1220917b520c26abf3a10f292c738d79975fc391f8c01fb591b3.scope: Deactivated successfully. Nov 5 00:06:47.031349 containerd[1616]: time="2025-11-05T00:06:47.031313885Z" level=info msg="shim disconnected" id=5833d8798378b70dd852e137161dbc03e5366be14172651e85471179576f64a6 namespace=k8s.io Nov 5 00:06:47.031349 containerd[1616]: time="2025-11-05T00:06:47.031343603Z" level=warning msg="cleaning up after shim disconnected" id=5833d8798378b70dd852e137161dbc03e5366be14172651e85471179576f64a6 namespace=k8s.io Nov 5 00:06:47.041772 containerd[1616]: time="2025-11-05T00:06:47.031351890Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 5 00:06:47.053169 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-54233661c7fa1220917b520c26abf3a10f292c738d79975fc391f8c01fb591b3-rootfs.mount: Deactivated successfully. 
Nov 5 00:06:47.057069 containerd[1616]: time="2025-11-05T00:06:47.057032230Z" level=info msg="shim disconnected" id=54233661c7fa1220917b520c26abf3a10f292c738d79975fc391f8c01fb591b3 namespace=k8s.io Nov 5 00:06:47.057069 containerd[1616]: time="2025-11-05T00:06:47.057064563Z" level=warning msg="cleaning up after shim disconnected" id=54233661c7fa1220917b520c26abf3a10f292c738d79975fc391f8c01fb591b3 namespace=k8s.io Nov 5 00:06:47.057253 containerd[1616]: time="2025-11-05T00:06:47.057073249Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 5 00:06:47.069142 containerd[1616]: time="2025-11-05T00:06:47.069102771Z" level=info msg="TaskExit event in podsandbox handler container_id:\"54233661c7fa1220917b520c26abf3a10f292c738d79975fc391f8c01fb591b3\" id:\"54233661c7fa1220917b520c26abf3a10f292c738d79975fc391f8c01fb591b3\" pid:2953 exit_status:137 exited_at:{seconds:1762301207 nanos:28647157}" Nov 5 00:06:47.069291 containerd[1616]: time="2025-11-05T00:06:47.069272530Z" level=info msg="received exit event sandbox_id:\"54233661c7fa1220917b520c26abf3a10f292c738d79975fc391f8c01fb591b3\" exit_status:137 exited_at:{seconds:1762301207 nanos:28647157}" Nov 5 00:06:47.070905 containerd[1616]: time="2025-11-05T00:06:47.070640036Z" level=info msg="received exit event sandbox_id:\"5833d8798378b70dd852e137161dbc03e5366be14172651e85471179576f64a6\" exit_status:137 exited_at:{seconds:1762301206 nanos:998736208}" Nov 5 00:06:47.072113 containerd[1616]: time="2025-11-05T00:06:47.072093417Z" level=info msg="TearDown network for sandbox \"54233661c7fa1220917b520c26abf3a10f292c738d79975fc391f8c01fb591b3\" successfully" Nov 5 00:06:47.072188 containerd[1616]: time="2025-11-05T00:06:47.072175014Z" level=info msg="StopPodSandbox for \"54233661c7fa1220917b520c26abf3a10f292c738d79975fc391f8c01fb591b3\" returns successfully" Nov 5 00:06:47.072316 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-54233661c7fa1220917b520c26abf3a10f292c738d79975fc391f8c01fb591b3-shm.mount: Deactivated successfully. Nov 5 00:06:47.072499 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5833d8798378b70dd852e137161dbc03e5366be14172651e85471179576f64a6-shm.mount: Deactivated successfully. 
Nov 5 00:06:47.076148 containerd[1616]: time="2025-11-05T00:06:47.076086893Z" level=info msg="TearDown network for sandbox \"5833d8798378b70dd852e137161dbc03e5366be14172651e85471179576f64a6\" successfully" Nov 5 00:06:47.076148 containerd[1616]: time="2025-11-05T00:06:47.076105879Z" level=info msg="StopPodSandbox for \"5833d8798378b70dd852e137161dbc03e5366be14172651e85471179576f64a6\" returns successfully" Nov 5 00:06:47.134630 kubelet[2749]: I1105 00:06:47.134560 2749 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9e082060-ad3b-46f6-a953-f94bd98790b3-host-proc-sys-net\") pod \"9e082060-ad3b-46f6-a953-f94bd98790b3\" (UID: \"9e082060-ad3b-46f6-a953-f94bd98790b3\") " Nov 5 00:06:47.134630 kubelet[2749]: I1105 00:06:47.134615 2749 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d113c0dc-7aeb-4225-9ae0-77ff41813a7a-cilium-config-path\") pod \"d113c0dc-7aeb-4225-9ae0-77ff41813a7a\" (UID: \"d113c0dc-7aeb-4225-9ae0-77ff41813a7a\") " Nov 5 00:06:47.135150 kubelet[2749]: I1105 00:06:47.134657 2749 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9e082060-ad3b-46f6-a953-f94bd98790b3-bpf-maps\") pod \"9e082060-ad3b-46f6-a953-f94bd98790b3\" (UID: \"9e082060-ad3b-46f6-a953-f94bd98790b3\") " Nov 5 00:06:47.135150 kubelet[2749]: I1105 00:06:47.134680 2749 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fs9js\" (UniqueName: \"kubernetes.io/projected/9e082060-ad3b-46f6-a953-f94bd98790b3-kube-api-access-fs9js\") pod \"9e082060-ad3b-46f6-a953-f94bd98790b3\" (UID: \"9e082060-ad3b-46f6-a953-f94bd98790b3\") " Nov 5 00:06:47.135150 kubelet[2749]: I1105 00:06:47.134699 2749 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9e082060-ad3b-46f6-a953-f94bd98790b3-cilium-cgroup\") pod \"9e082060-ad3b-46f6-a953-f94bd98790b3\" (UID: \"9e082060-ad3b-46f6-a953-f94bd98790b3\") " Nov 5 00:06:47.135150 kubelet[2749]: I1105 00:06:47.134717 2749 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9e082060-ad3b-46f6-a953-f94bd98790b3-host-proc-sys-kernel\") pod \"9e082060-ad3b-46f6-a953-f94bd98790b3\" (UID: \"9e082060-ad3b-46f6-a953-f94bd98790b3\") " Nov 5 00:06:47.135150 kubelet[2749]: I1105 00:06:47.134713 2749 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9e082060-ad3b-46f6-a953-f94bd98790b3-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "9e082060-ad3b-46f6-a953-f94bd98790b3" (UID: "9e082060-ad3b-46f6-a953-f94bd98790b3"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 5 00:06:47.135150 kubelet[2749]: I1105 00:06:47.134735 2749 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9e082060-ad3b-46f6-a953-f94bd98790b3-cilium-config-path\") pod \"9e082060-ad3b-46f6-a953-f94bd98790b3\" (UID: \"9e082060-ad3b-46f6-a953-f94bd98790b3\") " Nov 5 00:06:47.135295 kubelet[2749]: I1105 00:06:47.134802 2749 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9e082060-ad3b-46f6-a953-f94bd98790b3-cni-path\") pod \"9e082060-ad3b-46f6-a953-f94bd98790b3\" (UID: \"9e082060-ad3b-46f6-a953-f94bd98790b3\") " Nov 5 00:06:47.135295 kubelet[2749]: I1105 00:06:47.134826 2749 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9e082060-ad3b-46f6-a953-f94bd98790b3-hubble-tls\") pod \"9e082060-ad3b-46f6-a953-f94bd98790b3\" (UID: \"9e082060-ad3b-46f6-a953-f94bd98790b3\") " Nov 5 00:06:47.135295 kubelet[2749]: I1105 00:06:47.134847 2749 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9e082060-ad3b-46f6-a953-f94bd98790b3-clustermesh-secrets\") pod \"9e082060-ad3b-46f6-a953-f94bd98790b3\" (UID: \"9e082060-ad3b-46f6-a953-f94bd98790b3\") " Nov 5 00:06:47.135295 kubelet[2749]: I1105 00:06:47.134865 2749 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9e082060-ad3b-46f6-a953-f94bd98790b3-xtables-lock\") pod \"9e082060-ad3b-46f6-a953-f94bd98790b3\" (UID: \"9e082060-ad3b-46f6-a953-f94bd98790b3\") " Nov 5 00:06:47.135295 kubelet[2749]: I1105 00:06:47.134904 2749 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9e082060-ad3b-46f6-a953-f94bd98790b3-hostproc\") pod \"9e082060-ad3b-46f6-a953-f94bd98790b3\" (UID: \"9e082060-ad3b-46f6-a953-f94bd98790b3\") " Nov 5 00:06:47.135295 kubelet[2749]: I1105 00:06:47.134918 2749 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9e082060-ad3b-46f6-a953-f94bd98790b3-etc-cni-netd\") pod \"9e082060-ad3b-46f6-a953-f94bd98790b3\" (UID: \"9e082060-ad3b-46f6-a953-f94bd98790b3\") " Nov 5 00:06:47.135432 kubelet[2749]: I1105 00:06:47.134931 2749 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9e082060-ad3b-46f6-a953-f94bd98790b3-lib-modules\") pod \"9e082060-ad3b-46f6-a953-f94bd98790b3\" (UID: \"9e082060-ad3b-46f6-a953-f94bd98790b3\") " Nov 5 00:06:47.135432 kubelet[2749]: I1105 00:06:47.134945 2749 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9e082060-ad3b-46f6-a953-f94bd98790b3-cilium-run\") pod \"9e082060-ad3b-46f6-a953-f94bd98790b3\" (UID: \"9e082060-ad3b-46f6-a953-f94bd98790b3\") " Nov 5 00:06:47.135432 kubelet[2749]: I1105 00:06:47.134963 2749 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l54c5\" (UniqueName: \"kubernetes.io/projected/d113c0dc-7aeb-4225-9ae0-77ff41813a7a-kube-api-access-l54c5\") pod \"d113c0dc-7aeb-4225-9ae0-77ff41813a7a\" (UID: \"d113c0dc-7aeb-4225-9ae0-77ff41813a7a\") " Nov 5 00:06:47.135432 
kubelet[2749]: I1105 00:06:47.135010 2749 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9e082060-ad3b-46f6-a953-f94bd98790b3-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Nov 5 00:06:47.135650 kubelet[2749]: I1105 00:06:47.135615 2749 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9e082060-ad3b-46f6-a953-f94bd98790b3-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "9e082060-ad3b-46f6-a953-f94bd98790b3" (UID: "9e082060-ad3b-46f6-a953-f94bd98790b3"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 5 00:06:47.135901 kubelet[2749]: I1105 00:06:47.135720 2749 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9e082060-ad3b-46f6-a953-f94bd98790b3-cni-path" (OuterVolumeSpecName: "cni-path") pod "9e082060-ad3b-46f6-a953-f94bd98790b3" (UID: "9e082060-ad3b-46f6-a953-f94bd98790b3"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 5 00:06:47.138120 kubelet[2749]: I1105 00:06:47.138091 2749 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e082060-ad3b-46f6-a953-f94bd98790b3-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "9e082060-ad3b-46f6-a953-f94bd98790b3" (UID: "9e082060-ad3b-46f6-a953-f94bd98790b3"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 5 00:06:47.138356 kubelet[2749]: I1105 00:06:47.138134 2749 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d113c0dc-7aeb-4225-9ae0-77ff41813a7a-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "d113c0dc-7aeb-4225-9ae0-77ff41813a7a" (UID: "d113c0dc-7aeb-4225-9ae0-77ff41813a7a"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 5 00:06:47.138421 kubelet[2749]: I1105 00:06:47.138408 2749 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9e082060-ad3b-46f6-a953-f94bd98790b3-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "9e082060-ad3b-46f6-a953-f94bd98790b3" (UID: "9e082060-ad3b-46f6-a953-f94bd98790b3"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 5 00:06:47.138528 kubelet[2749]: I1105 00:06:47.138513 2749 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9e082060-ad3b-46f6-a953-f94bd98790b3-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "9e082060-ad3b-46f6-a953-f94bd98790b3" (UID: "9e082060-ad3b-46f6-a953-f94bd98790b3"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 5 00:06:47.138588 kubelet[2749]: I1105 00:06:47.138535 2749 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9e082060-ad3b-46f6-a953-f94bd98790b3-hostproc" (OuterVolumeSpecName: "hostproc") pod "9e082060-ad3b-46f6-a953-f94bd98790b3" (UID: "9e082060-ad3b-46f6-a953-f94bd98790b3"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 5 00:06:47.138652 kubelet[2749]: I1105 00:06:47.138548 2749 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9e082060-ad3b-46f6-a953-f94bd98790b3-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "9e082060-ad3b-46f6-a953-f94bd98790b3" (UID: "9e082060-ad3b-46f6-a953-f94bd98790b3"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 5 00:06:47.138714 kubelet[2749]: I1105 00:06:47.138558 2749 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9e082060-ad3b-46f6-a953-f94bd98790b3-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "9e082060-ad3b-46f6-a953-f94bd98790b3" (UID: "9e082060-ad3b-46f6-a953-f94bd98790b3"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 5 00:06:47.138774 kubelet[2749]: I1105 00:06:47.138568 2749 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9e082060-ad3b-46f6-a953-f94bd98790b3-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "9e082060-ad3b-46f6-a953-f94bd98790b3" (UID: "9e082060-ad3b-46f6-a953-f94bd98790b3"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 5 00:06:47.138828 kubelet[2749]: I1105 00:06:47.138613 2749 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9e082060-ad3b-46f6-a953-f94bd98790b3-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "9e082060-ad3b-46f6-a953-f94bd98790b3" (UID: "9e082060-ad3b-46f6-a953-f94bd98790b3"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 5 00:06:47.140034 kubelet[2749]: I1105 00:06:47.139980 2749 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d113c0dc-7aeb-4225-9ae0-77ff41813a7a-kube-api-access-l54c5" (OuterVolumeSpecName: "kube-api-access-l54c5") pod "d113c0dc-7aeb-4225-9ae0-77ff41813a7a" (UID: "d113c0dc-7aeb-4225-9ae0-77ff41813a7a"). InnerVolumeSpecName "kube-api-access-l54c5". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 5 00:06:47.140168 kubelet[2749]: I1105 00:06:47.140103 2749 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e082060-ad3b-46f6-a953-f94bd98790b3-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "9e082060-ad3b-46f6-a953-f94bd98790b3" (UID: "9e082060-ad3b-46f6-a953-f94bd98790b3"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 5 00:06:47.140490 kubelet[2749]: I1105 00:06:47.140452 2749 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e082060-ad3b-46f6-a953-f94bd98790b3-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "9e082060-ad3b-46f6-a953-f94bd98790b3" (UID: "9e082060-ad3b-46f6-a953-f94bd98790b3"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 5 00:06:47.141717 kubelet[2749]: I1105 00:06:47.141685 2749 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e082060-ad3b-46f6-a953-f94bd98790b3-kube-api-access-fs9js" (OuterVolumeSpecName: "kube-api-access-fs9js") pod "9e082060-ad3b-46f6-a953-f94bd98790b3" (UID: "9e082060-ad3b-46f6-a953-f94bd98790b3"). InnerVolumeSpecName "kube-api-access-fs9js". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 5 00:06:47.163236 kubelet[2749]: I1105 00:06:47.163160 2749 scope.go:117] "RemoveContainer" containerID="101ca1e0ced8c982d5e1c0bb6173f0b73975d9397dab99f025600143dc61125e" Nov 5 00:06:47.166535 containerd[1616]: time="2025-11-05T00:06:47.166460278Z" level=info msg="RemoveContainer for \"101ca1e0ced8c982d5e1c0bb6173f0b73975d9397dab99f025600143dc61125e\"" Nov 5 00:06:47.169203 systemd[1]: Removed slice kubepods-besteffort-podd113c0dc_7aeb_4225_9ae0_77ff41813a7a.slice - libcontainer container kubepods-besteffort-podd113c0dc_7aeb_4225_9ae0_77ff41813a7a.slice. Nov 5 00:06:47.177503 systemd[1]: Removed slice kubepods-burstable-pod9e082060_ad3b_46f6_a953_f94bd98790b3.slice - libcontainer container kubepods-burstable-pod9e082060_ad3b_46f6_a953_f94bd98790b3.slice. Nov 5 00:06:47.178074 systemd[1]: kubepods-burstable-pod9e082060_ad3b_46f6_a953_f94bd98790b3.slice: Consumed 6.137s CPU time, 127.6M memory peak, 156K read from disk, 16.3M written to disk. Nov 5 00:06:47.181774 containerd[1616]: time="2025-11-05T00:06:47.181736200Z" level=info msg="RemoveContainer for \"101ca1e0ced8c982d5e1c0bb6173f0b73975d9397dab99f025600143dc61125e\" returns successfully" Nov 5 00:06:47.182024 kubelet[2749]: I1105 00:06:47.181999 2749 scope.go:117] "RemoveContainer" containerID="101ca1e0ced8c982d5e1c0bb6173f0b73975d9397dab99f025600143dc61125e" Nov 5 00:06:47.182214 containerd[1616]: time="2025-11-05T00:06:47.182177864Z" level=error msg="ContainerStatus for \"101ca1e0ced8c982d5e1c0bb6173f0b73975d9397dab99f025600143dc61125e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"101ca1e0ced8c982d5e1c0bb6173f0b73975d9397dab99f025600143dc61125e\": not found" Nov 5 00:06:47.182326 kubelet[2749]: E1105 00:06:47.182303 2749 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"101ca1e0ced8c982d5e1c0bb6173f0b73975d9397dab99f025600143dc61125e\": not found" containerID="101ca1e0ced8c982d5e1c0bb6173f0b73975d9397dab99f025600143dc61125e" Nov 5 00:06:47.182395 kubelet[2749]: I1105 00:06:47.182331 2749 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"101ca1e0ced8c982d5e1c0bb6173f0b73975d9397dab99f025600143dc61125e"} err="failed to get container status \"101ca1e0ced8c982d5e1c0bb6173f0b73975d9397dab99f025600143dc61125e\": rpc error: code = NotFound desc = an error occurred when try to find container \"101ca1e0ced8c982d5e1c0bb6173f0b73975d9397dab99f025600143dc61125e\": not found" Nov 5 00:06:47.182395 kubelet[2749]: I1105 00:06:47.182392 2749 scope.go:117] "RemoveContainer" containerID="10190723ac1220f8bc3a77c75879dad90b1db2c7033e9dc2fbd1597e157886c8" Nov 5 00:06:47.184292 containerd[1616]: time="2025-11-05T00:06:47.183764665Z" level=info msg="RemoveContainer for \"10190723ac1220f8bc3a77c75879dad90b1db2c7033e9dc2fbd1597e157886c8\"" Nov 5 00:06:47.188711 containerd[1616]: time="2025-11-05T00:06:47.188690965Z" level=info msg="RemoveContainer for \"10190723ac1220f8bc3a77c75879dad90b1db2c7033e9dc2fbd1597e157886c8\" returns successfully" Nov 5 00:06:47.189063 kubelet[2749]: I1105 00:06:47.189010 2749 scope.go:117] "RemoveContainer" containerID="a6ef7b3c77ab5b988d421714de8e388903972bb2af9f0f5edbf2c49be3d71655" Nov 5 00:06:47.190490 containerd[1616]: time="2025-11-05T00:06:47.190456139Z" level=info msg="RemoveContainer for \"a6ef7b3c77ab5b988d421714de8e388903972bb2af9f0f5edbf2c49be3d71655\"" Nov 5 
00:06:47.195120 containerd[1616]: time="2025-11-05T00:06:47.195082048Z" level=info msg="RemoveContainer for \"a6ef7b3c77ab5b988d421714de8e388903972bb2af9f0f5edbf2c49be3d71655\" returns successfully" Nov 5 00:06:47.195311 kubelet[2749]: I1105 00:06:47.195290 2749 scope.go:117] "RemoveContainer" containerID="068b9e0c35cd2ab9338698eaf3775a5e203b28a64a9a99c40f330fff3b91140c" Nov 5 00:06:47.198145 containerd[1616]: time="2025-11-05T00:06:47.198117801Z" level=info msg="RemoveContainer for \"068b9e0c35cd2ab9338698eaf3775a5e203b28a64a9a99c40f330fff3b91140c\"" Nov 5 00:06:47.202020 containerd[1616]: time="2025-11-05T00:06:47.201992077Z" level=info msg="RemoveContainer for \"068b9e0c35cd2ab9338698eaf3775a5e203b28a64a9a99c40f330fff3b91140c\" returns successfully" Nov 5 00:06:47.202155 kubelet[2749]: I1105 00:06:47.202125 2749 scope.go:117] "RemoveContainer" containerID="cb94a9f014c00a7deb50ae755062b4130c50cd2da08a9d7e47e7d2c1d8d81ee6" Nov 5 00:06:47.203272 containerd[1616]: time="2025-11-05T00:06:47.203249269Z" level=info msg="RemoveContainer for \"cb94a9f014c00a7deb50ae755062b4130c50cd2da08a9d7e47e7d2c1d8d81ee6\"" Nov 5 00:06:47.206479 containerd[1616]: time="2025-11-05T00:06:47.206446805Z" level=info msg="RemoveContainer for \"cb94a9f014c00a7deb50ae755062b4130c50cd2da08a9d7e47e7d2c1d8d81ee6\" returns successfully" Nov 5 00:06:47.206608 kubelet[2749]: I1105 00:06:47.206570 2749 scope.go:117] "RemoveContainer" containerID="7e93bee5a254c9c4de9888652c40d878f972af256e6ec00e1b9982f3cdaa2fed" Nov 5 00:06:47.207720 containerd[1616]: time="2025-11-05T00:06:47.207693486Z" level=info msg="RemoveContainer for \"7e93bee5a254c9c4de9888652c40d878f972af256e6ec00e1b9982f3cdaa2fed\"" Nov 5 00:06:47.210813 containerd[1616]: time="2025-11-05T00:06:47.210785889Z" level=info msg="RemoveContainer for \"7e93bee5a254c9c4de9888652c40d878f972af256e6ec00e1b9982f3cdaa2fed\" returns successfully" Nov 5 00:06:47.210967 kubelet[2749]: I1105 00:06:47.210928 2749 scope.go:117] "RemoveContainer" containerID="10190723ac1220f8bc3a77c75879dad90b1db2c7033e9dc2fbd1597e157886c8" Nov 5 00:06:47.211135 containerd[1616]: time="2025-11-05T00:06:47.211092262Z" level=error msg="ContainerStatus for \"10190723ac1220f8bc3a77c75879dad90b1db2c7033e9dc2fbd1597e157886c8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"10190723ac1220f8bc3a77c75879dad90b1db2c7033e9dc2fbd1597e157886c8\": not found" Nov 5 00:06:47.211225 kubelet[2749]: E1105 00:06:47.211201 2749 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"10190723ac1220f8bc3a77c75879dad90b1db2c7033e9dc2fbd1597e157886c8\": not found" containerID="10190723ac1220f8bc3a77c75879dad90b1db2c7033e9dc2fbd1597e157886c8" Nov 5 00:06:47.211254 kubelet[2749]: I1105 00:06:47.211230 2749 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"10190723ac1220f8bc3a77c75879dad90b1db2c7033e9dc2fbd1597e157886c8"} err="failed to get container status \"10190723ac1220f8bc3a77c75879dad90b1db2c7033e9dc2fbd1597e157886c8\": rpc error: code = NotFound desc = an error occurred when try to find container \"10190723ac1220f8bc3a77c75879dad90b1db2c7033e9dc2fbd1597e157886c8\": not found" Nov 5 00:06:47.211254 kubelet[2749]: I1105 00:06:47.211250 2749 scope.go:117] "RemoveContainer" containerID="a6ef7b3c77ab5b988d421714de8e388903972bb2af9f0f5edbf2c49be3d71655" Nov 5 00:06:47.211443 containerd[1616]: time="2025-11-05T00:06:47.211409606Z" level=error 
msg="ContainerStatus for \"a6ef7b3c77ab5b988d421714de8e388903972bb2af9f0f5edbf2c49be3d71655\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a6ef7b3c77ab5b988d421714de8e388903972bb2af9f0f5edbf2c49be3d71655\": not found" Nov 5 00:06:47.211567 kubelet[2749]: E1105 00:06:47.211542 2749 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a6ef7b3c77ab5b988d421714de8e388903972bb2af9f0f5edbf2c49be3d71655\": not found" containerID="a6ef7b3c77ab5b988d421714de8e388903972bb2af9f0f5edbf2c49be3d71655" Nov 5 00:06:47.211612 kubelet[2749]: I1105 00:06:47.211573 2749 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a6ef7b3c77ab5b988d421714de8e388903972bb2af9f0f5edbf2c49be3d71655"} err="failed to get container status \"a6ef7b3c77ab5b988d421714de8e388903972bb2af9f0f5edbf2c49be3d71655\": rpc error: code = NotFound desc = an error occurred when try to find container \"a6ef7b3c77ab5b988d421714de8e388903972bb2af9f0f5edbf2c49be3d71655\": not found" Nov 5 00:06:47.211612 kubelet[2749]: I1105 00:06:47.211603 2749 scope.go:117] "RemoveContainer" containerID="068b9e0c35cd2ab9338698eaf3775a5e203b28a64a9a99c40f330fff3b91140c" Nov 5 00:06:47.211760 containerd[1616]: time="2025-11-05T00:06:47.211733833Z" level=error msg="ContainerStatus for \"068b9e0c35cd2ab9338698eaf3775a5e203b28a64a9a99c40f330fff3b91140c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"068b9e0c35cd2ab9338698eaf3775a5e203b28a64a9a99c40f330fff3b91140c\": not found" Nov 5 00:06:47.211844 kubelet[2749]: E1105 00:06:47.211825 2749 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"068b9e0c35cd2ab9338698eaf3775a5e203b28a64a9a99c40f330fff3b91140c\": not found" containerID="068b9e0c35cd2ab9338698eaf3775a5e203b28a64a9a99c40f330fff3b91140c" Nov 5 00:06:47.211894 kubelet[2749]: I1105 00:06:47.211844 2749 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"068b9e0c35cd2ab9338698eaf3775a5e203b28a64a9a99c40f330fff3b91140c"} err="failed to get container status \"068b9e0c35cd2ab9338698eaf3775a5e203b28a64a9a99c40f330fff3b91140c\": rpc error: code = NotFound desc = an error occurred when try to find container \"068b9e0c35cd2ab9338698eaf3775a5e203b28a64a9a99c40f330fff3b91140c\": not found" Nov 5 00:06:47.211894 kubelet[2749]: I1105 00:06:47.211856 2749 scope.go:117] "RemoveContainer" containerID="cb94a9f014c00a7deb50ae755062b4130c50cd2da08a9d7e47e7d2c1d8d81ee6" Nov 5 00:06:47.212049 containerd[1616]: time="2025-11-05T00:06:47.212005459Z" level=error msg="ContainerStatus for \"cb94a9f014c00a7deb50ae755062b4130c50cd2da08a9d7e47e7d2c1d8d81ee6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cb94a9f014c00a7deb50ae755062b4130c50cd2da08a9d7e47e7d2c1d8d81ee6\": not found" Nov 5 00:06:47.212178 kubelet[2749]: E1105 00:06:47.212153 2749 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cb94a9f014c00a7deb50ae755062b4130c50cd2da08a9d7e47e7d2c1d8d81ee6\": not found" containerID="cb94a9f014c00a7deb50ae755062b4130c50cd2da08a9d7e47e7d2c1d8d81ee6" Nov 5 00:06:47.212232 kubelet[2749]: I1105 00:06:47.212184 2749 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"cb94a9f014c00a7deb50ae755062b4130c50cd2da08a9d7e47e7d2c1d8d81ee6"} err="failed to get container status \"cb94a9f014c00a7deb50ae755062b4130c50cd2da08a9d7e47e7d2c1d8d81ee6\": rpc error: code = NotFound desc = an error occurred when try to find container \"cb94a9f014c00a7deb50ae755062b4130c50cd2da08a9d7e47e7d2c1d8d81ee6\": not found" Nov 5 00:06:47.212232 kubelet[2749]: I1105 00:06:47.212207 2749 scope.go:117] "RemoveContainer" containerID="7e93bee5a254c9c4de9888652c40d878f972af256e6ec00e1b9982f3cdaa2fed" Nov 5 00:06:47.212432 containerd[1616]: time="2025-11-05T00:06:47.212384451Z" level=error msg="ContainerStatus for \"7e93bee5a254c9c4de9888652c40d878f972af256e6ec00e1b9982f3cdaa2fed\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7e93bee5a254c9c4de9888652c40d878f972af256e6ec00e1b9982f3cdaa2fed\": not found" Nov 5 00:06:47.212560 kubelet[2749]: E1105 00:06:47.212501 2749 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7e93bee5a254c9c4de9888652c40d878f972af256e6ec00e1b9982f3cdaa2fed\": not found" containerID="7e93bee5a254c9c4de9888652c40d878f972af256e6ec00e1b9982f3cdaa2fed" Nov 5 00:06:47.212560 kubelet[2749]: I1105 00:06:47.212526 2749 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7e93bee5a254c9c4de9888652c40d878f972af256e6ec00e1b9982f3cdaa2fed"} err="failed to get container status \"7e93bee5a254c9c4de9888652c40d878f972af256e6ec00e1b9982f3cdaa2fed\": rpc error: code = NotFound desc = an error occurred when try to find container \"7e93bee5a254c9c4de9888652c40d878f972af256e6ec00e1b9982f3cdaa2fed\": not found" Nov 5 00:06:47.235817 kubelet[2749]: I1105 00:06:47.235784 2749 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9e082060-ad3b-46f6-a953-f94bd98790b3-hubble-tls\") on node \"localhost\" DevicePath \"\"" Nov 5 00:06:47.235817 kubelet[2749]: I1105 00:06:47.235802 2749 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9e082060-ad3b-46f6-a953-f94bd98790b3-xtables-lock\") on node \"localhost\" DevicePath \"\"" Nov 5 00:06:47.235817 kubelet[2749]: I1105 00:06:47.235811 2749 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9e082060-ad3b-46f6-a953-f94bd98790b3-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Nov 5 00:06:47.235817 kubelet[2749]: I1105 00:06:47.235820 2749 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9e082060-ad3b-46f6-a953-f94bd98790b3-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Nov 5 00:06:47.235967 kubelet[2749]: I1105 00:06:47.235828 2749 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9e082060-ad3b-46f6-a953-f94bd98790b3-lib-modules\") on node \"localhost\" DevicePath \"\"" Nov 5 00:06:47.235967 kubelet[2749]: I1105 00:06:47.235836 2749 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9e082060-ad3b-46f6-a953-f94bd98790b3-cilium-run\") on node \"localhost\" DevicePath \"\"" Nov 5 00:06:47.235967 kubelet[2749]: I1105 00:06:47.235845 2749 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: 
\"kubernetes.io/host-path/9e082060-ad3b-46f6-a953-f94bd98790b3-hostproc\") on node \"localhost\" DevicePath \"\"" Nov 5 00:06:47.235967 kubelet[2749]: I1105 00:06:47.235853 2749 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l54c5\" (UniqueName: \"kubernetes.io/projected/d113c0dc-7aeb-4225-9ae0-77ff41813a7a-kube-api-access-l54c5\") on node \"localhost\" DevicePath \"\"" Nov 5 00:06:47.235967 kubelet[2749]: I1105 00:06:47.235861 2749 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d113c0dc-7aeb-4225-9ae0-77ff41813a7a-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Nov 5 00:06:47.235967 kubelet[2749]: I1105 00:06:47.235870 2749 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9e082060-ad3b-46f6-a953-f94bd98790b3-bpf-maps\") on node \"localhost\" DevicePath \"\"" Nov 5 00:06:47.235967 kubelet[2749]: I1105 00:06:47.235897 2749 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-fs9js\" (UniqueName: \"kubernetes.io/projected/9e082060-ad3b-46f6-a953-f94bd98790b3-kube-api-access-fs9js\") on node \"localhost\" DevicePath \"\"" Nov 5 00:06:47.235967 kubelet[2749]: I1105 00:06:47.235905 2749 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9e082060-ad3b-46f6-a953-f94bd98790b3-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Nov 5 00:06:47.236144 kubelet[2749]: I1105 00:06:47.235914 2749 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9e082060-ad3b-46f6-a953-f94bd98790b3-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Nov 5 00:06:47.236144 kubelet[2749]: I1105 00:06:47.235922 2749 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9e082060-ad3b-46f6-a953-f94bd98790b3-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Nov 5 00:06:47.236144 kubelet[2749]: I1105 00:06:47.235930 2749 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9e082060-ad3b-46f6-a953-f94bd98790b3-cni-path\") on node \"localhost\" DevicePath \"\"" Nov 5 00:06:47.960839 systemd[1]: var-lib-kubelet-pods-9e082060\x2dad3b\x2d46f6\x2da953\x2df94bd98790b3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfs9js.mount: Deactivated successfully. Nov 5 00:06:47.960978 systemd[1]: var-lib-kubelet-pods-d113c0dc\x2d7aeb\x2d4225\x2d9ae0\x2d77ff41813a7a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dl54c5.mount: Deactivated successfully. Nov 5 00:06:47.961060 systemd[1]: var-lib-kubelet-pods-9e082060\x2dad3b\x2d46f6\x2da953\x2df94bd98790b3-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Nov 5 00:06:47.961146 systemd[1]: var-lib-kubelet-pods-9e082060\x2dad3b\x2d46f6\x2da953\x2df94bd98790b3-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Nov 5 00:06:48.888795 sshd[4370]: Connection closed by 10.0.0.1 port 36538 Nov 5 00:06:48.889259 sshd-session[4367]: pam_unix(sshd:session): session closed for user core Nov 5 00:06:48.897551 systemd[1]: sshd@23-10.0.0.3:22-10.0.0.1:36538.service: Deactivated successfully. Nov 5 00:06:48.899591 systemd[1]: session-24.scope: Deactivated successfully. Nov 5 00:06:48.900417 systemd-logind[1590]: Session 24 logged out. Waiting for processes to exit. 
Nov 5 00:06:48.903601 systemd[1]: Started sshd@24-10.0.0.3:22-10.0.0.1:36550.service - OpenSSH per-connection server daemon (10.0.0.1:36550). Nov 5 00:06:48.904982 systemd-logind[1590]: Removed session 24. Nov 5 00:06:48.968571 sshd[4522]: Accepted publickey for core from 10.0.0.1 port 36550 ssh2: RSA SHA256:76q7OxBnofV7CHovkMIRFzt50oWq5VGuNm0JMuzHE4A Nov 5 00:06:48.970239 sshd-session[4522]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 00:06:48.974604 systemd-logind[1590]: New session 25 of user core. Nov 5 00:06:48.983946 kubelet[2749]: I1105 00:06:48.983910 2749 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e082060-ad3b-46f6-a953-f94bd98790b3" path="/var/lib/kubelet/pods/9e082060-ad3b-46f6-a953-f94bd98790b3/volumes" Nov 5 00:06:48.984016 systemd[1]: Started session-25.scope - Session 25 of User core. Nov 5 00:06:48.984671 kubelet[2749]: I1105 00:06:48.984653 2749 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d113c0dc-7aeb-4225-9ae0-77ff41813a7a" path="/var/lib/kubelet/pods/d113c0dc-7aeb-4225-9ae0-77ff41813a7a/volumes" Nov 5 00:06:49.533662 sshd[4525]: Connection closed by 10.0.0.1 port 36550 Nov 5 00:06:49.532004 sshd-session[4522]: pam_unix(sshd:session): session closed for user core Nov 5 00:06:49.541600 systemd[1]: sshd@24-10.0.0.3:22-10.0.0.1:36550.service: Deactivated successfully. Nov 5 00:06:49.544329 systemd[1]: session-25.scope: Deactivated successfully. Nov 5 00:06:49.546756 systemd-logind[1590]: Session 25 logged out. Waiting for processes to exit. Nov 5 00:06:49.548012 systemd-logind[1590]: Removed session 25. Nov 5 00:06:49.552112 systemd[1]: Started sshd@25-10.0.0.3:22-10.0.0.1:36562.service - OpenSSH per-connection server daemon (10.0.0.1:36562). Nov 5 00:06:49.557732 kubelet[2749]: I1105 00:06:49.557689 2749 memory_manager.go:355] "RemoveStaleState removing state" podUID="d113c0dc-7aeb-4225-9ae0-77ff41813a7a" containerName="cilium-operator" Nov 5 00:06:49.557732 kubelet[2749]: I1105 00:06:49.557718 2749 memory_manager.go:355] "RemoveStaleState removing state" podUID="9e082060-ad3b-46f6-a953-f94bd98790b3" containerName="cilium-agent" Nov 5 00:06:49.572992 systemd[1]: Created slice kubepods-burstable-pod06795fc1_4709_4fcc_91ac_4701a299e396.slice - libcontainer container kubepods-burstable-pod06795fc1_4709_4fcc_91ac_4701a299e396.slice. Nov 5 00:06:49.601173 sshd[4537]: Accepted publickey for core from 10.0.0.1 port 36562 ssh2: RSA SHA256:76q7OxBnofV7CHovkMIRFzt50oWq5VGuNm0JMuzHE4A Nov 5 00:06:49.602813 sshd-session[4537]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 00:06:49.607225 systemd-logind[1590]: New session 26 of user core. Nov 5 00:06:49.621026 systemd[1]: Started session-26.scope - Session 26 of User core. 
Nov 5 00:06:49.648063 kubelet[2749]: I1105 00:06:49.648025 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/06795fc1-4709-4fcc-91ac-4701a299e396-hostproc\") pod \"cilium-4xt8q\" (UID: \"06795fc1-4709-4fcc-91ac-4701a299e396\") " pod="kube-system/cilium-4xt8q" Nov 5 00:06:49.648063 kubelet[2749]: I1105 00:06:49.648060 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/06795fc1-4709-4fcc-91ac-4701a299e396-cilium-ipsec-secrets\") pod \"cilium-4xt8q\" (UID: \"06795fc1-4709-4fcc-91ac-4701a299e396\") " pod="kube-system/cilium-4xt8q" Nov 5 00:06:49.648141 kubelet[2749]: I1105 00:06:49.648082 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/06795fc1-4709-4fcc-91ac-4701a299e396-host-proc-sys-kernel\") pod \"cilium-4xt8q\" (UID: \"06795fc1-4709-4fcc-91ac-4701a299e396\") " pod="kube-system/cilium-4xt8q" Nov 5 00:06:49.648141 kubelet[2749]: I1105 00:06:49.648097 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/06795fc1-4709-4fcc-91ac-4701a299e396-hubble-tls\") pod \"cilium-4xt8q\" (UID: \"06795fc1-4709-4fcc-91ac-4701a299e396\") " pod="kube-system/cilium-4xt8q" Nov 5 00:06:49.648141 kubelet[2749]: I1105 00:06:49.648118 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/06795fc1-4709-4fcc-91ac-4701a299e396-etc-cni-netd\") pod \"cilium-4xt8q\" (UID: \"06795fc1-4709-4fcc-91ac-4701a299e396\") " pod="kube-system/cilium-4xt8q" Nov 5 00:06:49.648221 kubelet[2749]: I1105 00:06:49.648146 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/06795fc1-4709-4fcc-91ac-4701a299e396-cilium-config-path\") pod \"cilium-4xt8q\" (UID: \"06795fc1-4709-4fcc-91ac-4701a299e396\") " pod="kube-system/cilium-4xt8q" Nov 5 00:06:49.648221 kubelet[2749]: I1105 00:06:49.648201 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhfnb\" (UniqueName: \"kubernetes.io/projected/06795fc1-4709-4fcc-91ac-4701a299e396-kube-api-access-fhfnb\") pod \"cilium-4xt8q\" (UID: \"06795fc1-4709-4fcc-91ac-4701a299e396\") " pod="kube-system/cilium-4xt8q" Nov 5 00:06:49.648274 kubelet[2749]: I1105 00:06:49.648251 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/06795fc1-4709-4fcc-91ac-4701a299e396-cilium-cgroup\") pod \"cilium-4xt8q\" (UID: \"06795fc1-4709-4fcc-91ac-4701a299e396\") " pod="kube-system/cilium-4xt8q" Nov 5 00:06:49.648299 kubelet[2749]: I1105 00:06:49.648282 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/06795fc1-4709-4fcc-91ac-4701a299e396-xtables-lock\") pod \"cilium-4xt8q\" (UID: \"06795fc1-4709-4fcc-91ac-4701a299e396\") " pod="kube-system/cilium-4xt8q" Nov 5 00:06:49.648324 kubelet[2749]: I1105 00:06:49.648299 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/06795fc1-4709-4fcc-91ac-4701a299e396-clustermesh-secrets\") pod \"cilium-4xt8q\" (UID: \"06795fc1-4709-4fcc-91ac-4701a299e396\") " pod="kube-system/cilium-4xt8q" Nov 5 00:06:49.648324 kubelet[2749]: I1105 00:06:49.648318 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/06795fc1-4709-4fcc-91ac-4701a299e396-cni-path\") pod \"cilium-4xt8q\" (UID: \"06795fc1-4709-4fcc-91ac-4701a299e396\") " pod="kube-system/cilium-4xt8q" Nov 5 00:06:49.648377 kubelet[2749]: I1105 00:06:49.648332 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/06795fc1-4709-4fcc-91ac-4701a299e396-lib-modules\") pod \"cilium-4xt8q\" (UID: \"06795fc1-4709-4fcc-91ac-4701a299e396\") " pod="kube-system/cilium-4xt8q" Nov 5 00:06:49.648377 kubelet[2749]: I1105 00:06:49.648374 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/06795fc1-4709-4fcc-91ac-4701a299e396-cilium-run\") pod \"cilium-4xt8q\" (UID: \"06795fc1-4709-4fcc-91ac-4701a299e396\") " pod="kube-system/cilium-4xt8q" Nov 5 00:06:49.648421 kubelet[2749]: I1105 00:06:49.648389 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/06795fc1-4709-4fcc-91ac-4701a299e396-bpf-maps\") pod \"cilium-4xt8q\" (UID: \"06795fc1-4709-4fcc-91ac-4701a299e396\") " pod="kube-system/cilium-4xt8q" Nov 5 00:06:49.648421 kubelet[2749]: I1105 00:06:49.648412 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/06795fc1-4709-4fcc-91ac-4701a299e396-host-proc-sys-net\") pod \"cilium-4xt8q\" (UID: \"06795fc1-4709-4fcc-91ac-4701a299e396\") " pod="kube-system/cilium-4xt8q" Nov 5 00:06:49.670995 sshd[4540]: Connection closed by 10.0.0.1 port 36562 Nov 5 00:06:49.671232 sshd-session[4537]: pam_unix(sshd:session): session closed for user core Nov 5 00:06:49.684471 systemd[1]: sshd@25-10.0.0.3:22-10.0.0.1:36562.service: Deactivated successfully. Nov 5 00:06:49.686283 systemd[1]: session-26.scope: Deactivated successfully. Nov 5 00:06:49.687130 systemd-logind[1590]: Session 26 logged out. Waiting for processes to exit. Nov 5 00:06:49.689974 systemd[1]: Started sshd@26-10.0.0.3:22-10.0.0.1:36570.service - OpenSSH per-connection server daemon (10.0.0.1:36570). Nov 5 00:06:49.690616 systemd-logind[1590]: Removed session 26. Nov 5 00:06:49.744281 sshd[4547]: Accepted publickey for core from 10.0.0.1 port 36570 ssh2: RSA SHA256:76q7OxBnofV7CHovkMIRFzt50oWq5VGuNm0JMuzHE4A Nov 5 00:06:49.745456 sshd-session[4547]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 00:06:49.750971 systemd-logind[1590]: New session 27 of user core. Nov 5 00:06:49.757004 systemd[1]: Started session-27.scope - Session 27 of User core. 
Nov 5 00:06:49.879565 kubelet[2749]: E1105 00:06:49.878803 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 00:06:49.882456 containerd[1616]: time="2025-11-05T00:06:49.882421900Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4xt8q,Uid:06795fc1-4709-4fcc-91ac-4701a299e396,Namespace:kube-system,Attempt:0,}" Nov 5 00:06:49.898204 containerd[1616]: time="2025-11-05T00:06:49.898160708Z" level=info msg="connecting to shim 727a1a5b704e8300e97311ef9629dca8b66bc1a9bdc4162df00155333e682875" address="unix:///run/containerd/s/3f6a47b082bfb260961604f053b9c0f5c34a71fd2649309d6476fbc9276cfbe6" namespace=k8s.io protocol=ttrpc version=3 Nov 5 00:06:49.932012 systemd[1]: Started cri-containerd-727a1a5b704e8300e97311ef9629dca8b66bc1a9bdc4162df00155333e682875.scope - libcontainer container 727a1a5b704e8300e97311ef9629dca8b66bc1a9bdc4162df00155333e682875. Nov 5 00:06:49.962475 containerd[1616]: time="2025-11-05T00:06:49.962430708Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4xt8q,Uid:06795fc1-4709-4fcc-91ac-4701a299e396,Namespace:kube-system,Attempt:0,} returns sandbox id \"727a1a5b704e8300e97311ef9629dca8b66bc1a9bdc4162df00155333e682875\"" Nov 5 00:06:49.963167 kubelet[2749]: E1105 00:06:49.963142 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 00:06:49.966104 containerd[1616]: time="2025-11-05T00:06:49.965936790Z" level=info msg="CreateContainer within sandbox \"727a1a5b704e8300e97311ef9629dca8b66bc1a9bdc4162df00155333e682875\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Nov 5 00:06:49.976310 containerd[1616]: time="2025-11-05T00:06:49.976278914Z" level=info msg="Container e65e618b86a3e5b602d2c349ec2806f6a8df6da1c4e2b2efed8cfaa689fcd925: CDI devices from CRI Config.CDIDevices: []" Nov 5 00:06:49.979856 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3752996890.mount: Deactivated successfully. Nov 5 00:06:49.982272 containerd[1616]: time="2025-11-05T00:06:49.982228595Z" level=info msg="CreateContainer within sandbox \"727a1a5b704e8300e97311ef9629dca8b66bc1a9bdc4162df00155333e682875\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e65e618b86a3e5b602d2c349ec2806f6a8df6da1c4e2b2efed8cfaa689fcd925\"" Nov 5 00:06:49.982641 containerd[1616]: time="2025-11-05T00:06:49.982611135Z" level=info msg="StartContainer for \"e65e618b86a3e5b602d2c349ec2806f6a8df6da1c4e2b2efed8cfaa689fcd925\"" Nov 5 00:06:49.983329 containerd[1616]: time="2025-11-05T00:06:49.983304544Z" level=info msg="connecting to shim e65e618b86a3e5b602d2c349ec2806f6a8df6da1c4e2b2efed8cfaa689fcd925" address="unix:///run/containerd/s/3f6a47b082bfb260961604f053b9c0f5c34a71fd2649309d6476fbc9276cfbe6" protocol=ttrpc version=3 Nov 5 00:06:50.003014 systemd[1]: Started cri-containerd-e65e618b86a3e5b602d2c349ec2806f6a8df6da1c4e2b2efed8cfaa689fcd925.scope - libcontainer container e65e618b86a3e5b602d2c349ec2806f6a8df6da1c4e2b2efed8cfaa689fcd925. Nov 5 00:06:50.030418 containerd[1616]: time="2025-11-05T00:06:50.030373597Z" level=info msg="StartContainer for \"e65e618b86a3e5b602d2c349ec2806f6a8df6da1c4e2b2efed8cfaa689fcd925\" returns successfully" Nov 5 00:06:50.039262 systemd[1]: cri-containerd-e65e618b86a3e5b602d2c349ec2806f6a8df6da1c4e2b2efed8cfaa689fcd925.scope: Deactivated successfully. 
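At this point containerd has created the cilium-4xt8q pod sandbox (727a1a5b…) and run its first init container, mount-cgroup, which does its setup and exits immediately, hence the .scope deactivation that follows the successful StartContainer. A hedged way to inspect the same sandbox and its exited containers from the node, assuming crictl is installed and pointed at this containerd socket:

#!/usr/bin/env python3
"""Inspect the sandbox and containers created in the containerd entries above.

Sketch only: the sandbox id is the one returned by RunPodSandbox in the log.
"""
import subprocess

SANDBOX = "727a1a5b704e8300e97311ef9629dca8b66bc1a9bdc4162df00155333e682875"

def run(*args: str) -> str:
    return subprocess.run(args, check=True, capture_output=True, text=True).stdout

# The pod sandbox itself.
print(run("crictl", "inspectp", SANDBOX))
# All containers in it, including exited init containers such as mount-cgroup.
print(run("crictl", "ps", "-a", "--pod", SANDBOX))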
Nov 5 00:06:50.041634 containerd[1616]: time="2025-11-05T00:06:50.041607277Z" level=info msg="received exit event container_id:\"e65e618b86a3e5b602d2c349ec2806f6a8df6da1c4e2b2efed8cfaa689fcd925\" id:\"e65e618b86a3e5b602d2c349ec2806f6a8df6da1c4e2b2efed8cfaa689fcd925\" pid:4618 exited_at:{seconds:1762301210 nanos:40908027}" Nov 5 00:06:50.041773 containerd[1616]: time="2025-11-05T00:06:50.041739352Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e65e618b86a3e5b602d2c349ec2806f6a8df6da1c4e2b2efed8cfaa689fcd925\" id:\"e65e618b86a3e5b602d2c349ec2806f6a8df6da1c4e2b2efed8cfaa689fcd925\" pid:4618 exited_at:{seconds:1762301210 nanos:40908027}" Nov 5 00:06:50.179920 kubelet[2749]: E1105 00:06:50.179773 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 00:06:50.182888 containerd[1616]: time="2025-11-05T00:06:50.182833554Z" level=info msg="CreateContainer within sandbox \"727a1a5b704e8300e97311ef9629dca8b66bc1a9bdc4162df00155333e682875\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Nov 5 00:06:50.189461 containerd[1616]: time="2025-11-05T00:06:50.189424062Z" level=info msg="Container 299ea71a4df2894a96274c95ae2385e40c967dbc090af1c2309e72003251146b: CDI devices from CRI Config.CDIDevices: []" Nov 5 00:06:50.200324 containerd[1616]: time="2025-11-05T00:06:50.200280143Z" level=info msg="CreateContainer within sandbox \"727a1a5b704e8300e97311ef9629dca8b66bc1a9bdc4162df00155333e682875\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"299ea71a4df2894a96274c95ae2385e40c967dbc090af1c2309e72003251146b\"" Nov 5 00:06:50.200785 containerd[1616]: time="2025-11-05T00:06:50.200747045Z" level=info msg="StartContainer for \"299ea71a4df2894a96274c95ae2385e40c967dbc090af1c2309e72003251146b\"" Nov 5 00:06:50.201525 containerd[1616]: time="2025-11-05T00:06:50.201501661Z" level=info msg="connecting to shim 299ea71a4df2894a96274c95ae2385e40c967dbc090af1c2309e72003251146b" address="unix:///run/containerd/s/3f6a47b082bfb260961604f053b9c0f5c34a71fd2649309d6476fbc9276cfbe6" protocol=ttrpc version=3 Nov 5 00:06:50.225011 systemd[1]: Started cri-containerd-299ea71a4df2894a96274c95ae2385e40c967dbc090af1c2309e72003251146b.scope - libcontainer container 299ea71a4df2894a96274c95ae2385e40c967dbc090af1c2309e72003251146b. Nov 5 00:06:50.251247 containerd[1616]: time="2025-11-05T00:06:50.251211845Z" level=info msg="StartContainer for \"299ea71a4df2894a96274c95ae2385e40c967dbc090af1c2309e72003251146b\" returns successfully" Nov 5 00:06:50.258049 systemd[1]: cri-containerd-299ea71a4df2894a96274c95ae2385e40c967dbc090af1c2309e72003251146b.scope: Deactivated successfully. 
Nov 5 00:06:50.258433 containerd[1616]: time="2025-11-05T00:06:50.258389657Z" level=info msg="received exit event container_id:\"299ea71a4df2894a96274c95ae2385e40c967dbc090af1c2309e72003251146b\" id:\"299ea71a4df2894a96274c95ae2385e40c967dbc090af1c2309e72003251146b\" pid:4663 exited_at:{seconds:1762301210 nanos:258236912}" Nov 5 00:06:50.258627 containerd[1616]: time="2025-11-05T00:06:50.258567951Z" level=info msg="TaskExit event in podsandbox handler container_id:\"299ea71a4df2894a96274c95ae2385e40c967dbc090af1c2309e72003251146b\" id:\"299ea71a4df2894a96274c95ae2385e40c967dbc090af1c2309e72003251146b\" pid:4663 exited_at:{seconds:1762301210 nanos:258236912}" Nov 5 00:06:51.035810 kubelet[2749]: E1105 00:06:51.035768 2749 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Nov 5 00:06:51.184305 kubelet[2749]: E1105 00:06:51.184273 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 00:06:51.186183 containerd[1616]: time="2025-11-05T00:06:51.186137821Z" level=info msg="CreateContainer within sandbox \"727a1a5b704e8300e97311ef9629dca8b66bc1a9bdc4162df00155333e682875\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Nov 5 00:06:51.208201 containerd[1616]: time="2025-11-05T00:06:51.208146011Z" level=info msg="Container 245e3a5dffe103e4fbd38529a2092d55a6ff75c8bfaeb2308810f87c6bf6f386: CDI devices from CRI Config.CDIDevices: []" Nov 5 00:06:51.214715 containerd[1616]: time="2025-11-05T00:06:51.214668029Z" level=info msg="CreateContainer within sandbox \"727a1a5b704e8300e97311ef9629dca8b66bc1a9bdc4162df00155333e682875\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"245e3a5dffe103e4fbd38529a2092d55a6ff75c8bfaeb2308810f87c6bf6f386\"" Nov 5 00:06:51.215241 containerd[1616]: time="2025-11-05T00:06:51.215187270Z" level=info msg="StartContainer for \"245e3a5dffe103e4fbd38529a2092d55a6ff75c8bfaeb2308810f87c6bf6f386\"" Nov 5 00:06:51.216574 containerd[1616]: time="2025-11-05T00:06:51.216532564Z" level=info msg="connecting to shim 245e3a5dffe103e4fbd38529a2092d55a6ff75c8bfaeb2308810f87c6bf6f386" address="unix:///run/containerd/s/3f6a47b082bfb260961604f053b9c0f5c34a71fd2649309d6476fbc9276cfbe6" protocol=ttrpc version=3 Nov 5 00:06:51.242004 systemd[1]: Started cri-containerd-245e3a5dffe103e4fbd38529a2092d55a6ff75c8bfaeb2308810f87c6bf6f386.scope - libcontainer container 245e3a5dffe103e4fbd38529a2092d55a6ff75c8bfaeb2308810f87c6bf6f386. Nov 5 00:06:51.279202 containerd[1616]: time="2025-11-05T00:06:51.279166016Z" level=info msg="StartContainer for \"245e3a5dffe103e4fbd38529a2092d55a6ff75c8bfaeb2308810f87c6bf6f386\" returns successfully" Nov 5 00:06:51.279721 systemd[1]: cri-containerd-245e3a5dffe103e4fbd38529a2092d55a6ff75c8bfaeb2308810f87c6bf6f386.scope: Deactivated successfully. 
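The recurring dns.go:153 entries mean the resolv.conf that kubelet hands to pods lists more nameservers than the Linux limit of three, so only 1.1.1.1, 1.0.0.1 and 8.8.8.8 are kept and the rest are dropped with a warning. A small check of the file kubelet consumes; the path is an assumption (systemd-resolved hosts typically point kubelet at /run/systemd/resolve/resolv.conf via --resolv-conf):

#!/usr/bin/env python3
"""Count nameserver lines in the resolv.conf kubelet is assumed to consume."""
RESOLV_CONF = "/run/systemd/resolve/resolv.conf"  # assumed --resolv-conf value
MAX_NAMESERVERS = 3  # Linux limit kubelet warns about in the log above

nameservers = []
with open(RESOLV_CONF) as fh:
    for line in fh:
        parts = line.split()
        if len(parts) >= 2 and parts[0] == "nameserver":
            nameservers.append(parts[1])

print(f"{len(nameservers)} nameservers: {nameservers}")
if len(nameservers) > MAX_NAMESERVERS:
    print(f"kubelet keeps only the first {MAX_NAMESERVERS} and logs "
          f"'Nameserver limits exceeded' for the rest")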
Nov 5 00:06:51.280601 containerd[1616]: time="2025-11-05T00:06:51.280558701Z" level=info msg="received exit event container_id:\"245e3a5dffe103e4fbd38529a2092d55a6ff75c8bfaeb2308810f87c6bf6f386\" id:\"245e3a5dffe103e4fbd38529a2092d55a6ff75c8bfaeb2308810f87c6bf6f386\" pid:4708 exited_at:{seconds:1762301211 nanos:280296285}" Nov 5 00:06:51.280810 containerd[1616]: time="2025-11-05T00:06:51.280587828Z" level=info msg="TaskExit event in podsandbox handler container_id:\"245e3a5dffe103e4fbd38529a2092d55a6ff75c8bfaeb2308810f87c6bf6f386\" id:\"245e3a5dffe103e4fbd38529a2092d55a6ff75c8bfaeb2308810f87c6bf6f386\" pid:4708 exited_at:{seconds:1762301211 nanos:280296285}" Nov 5 00:06:51.300964 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-245e3a5dffe103e4fbd38529a2092d55a6ff75c8bfaeb2308810f87c6bf6f386-rootfs.mount: Deactivated successfully. Nov 5 00:06:52.189082 kubelet[2749]: E1105 00:06:52.189051 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 00:06:52.198310 containerd[1616]: time="2025-11-05T00:06:52.198251525Z" level=info msg="CreateContainer within sandbox \"727a1a5b704e8300e97311ef9629dca8b66bc1a9bdc4162df00155333e682875\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Nov 5 00:06:52.207663 containerd[1616]: time="2025-11-05T00:06:52.207610900Z" level=info msg="Container 651f57d924ccd13b04298099cdc9cef69b5f29e9c5d6af0c6cca3881c2eff493: CDI devices from CRI Config.CDIDevices: []" Nov 5 00:06:52.215602 containerd[1616]: time="2025-11-05T00:06:52.215557573Z" level=info msg="CreateContainer within sandbox \"727a1a5b704e8300e97311ef9629dca8b66bc1a9bdc4162df00155333e682875\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"651f57d924ccd13b04298099cdc9cef69b5f29e9c5d6af0c6cca3881c2eff493\"" Nov 5 00:06:52.216212 containerd[1616]: time="2025-11-05T00:06:52.216179221Z" level=info msg="StartContainer for \"651f57d924ccd13b04298099cdc9cef69b5f29e9c5d6af0c6cca3881c2eff493\"" Nov 5 00:06:52.217040 containerd[1616]: time="2025-11-05T00:06:52.217010213Z" level=info msg="connecting to shim 651f57d924ccd13b04298099cdc9cef69b5f29e9c5d6af0c6cca3881c2eff493" address="unix:///run/containerd/s/3f6a47b082bfb260961604f053b9c0f5c34a71fd2649309d6476fbc9276cfbe6" protocol=ttrpc version=3 Nov 5 00:06:52.251055 systemd[1]: Started cri-containerd-651f57d924ccd13b04298099cdc9cef69b5f29e9c5d6af0c6cca3881c2eff493.scope - libcontainer container 651f57d924ccd13b04298099cdc9cef69b5f29e9c5d6af0c6cca3881c2eff493. Nov 5 00:06:52.276312 systemd[1]: cri-containerd-651f57d924ccd13b04298099cdc9cef69b5f29e9c5d6af0c6cca3881c2eff493.scope: Deactivated successfully. 
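The mount-bpf-fs init container that just ran and exited is the step responsible for mounting the BPF filesystem on the host. A quick confirmation from the node that the mount is in place; /sys/fs/bpf is the conventional mountpoint and an assumption here:

#!/usr/bin/env python3
"""Confirm the BPF filesystem the mount-bpf-fs step above is responsible for."""
import subprocess

result = subprocess.run(
    ["findmnt", "--types", "bpf", "/sys/fs/bpf"],
    capture_output=True, text=True,
)
print(result.stdout or "no bpf filesystem mounted at /sys/fs/bpf")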
Nov 5 00:06:52.276567 containerd[1616]: time="2025-11-05T00:06:52.276537629Z" level=info msg="TaskExit event in podsandbox handler container_id:\"651f57d924ccd13b04298099cdc9cef69b5f29e9c5d6af0c6cca3881c2eff493\" id:\"651f57d924ccd13b04298099cdc9cef69b5f29e9c5d6af0c6cca3881c2eff493\" pid:4748 exited_at:{seconds:1762301212 nanos:276099585}" Nov 5 00:06:52.431323 containerd[1616]: time="2025-11-05T00:06:52.431278067Z" level=info msg="received exit event container_id:\"651f57d924ccd13b04298099cdc9cef69b5f29e9c5d6af0c6cca3881c2eff493\" id:\"651f57d924ccd13b04298099cdc9cef69b5f29e9c5d6af0c6cca3881c2eff493\" pid:4748 exited_at:{seconds:1762301212 nanos:276099585}" Nov 5 00:06:52.438312 containerd[1616]: time="2025-11-05T00:06:52.438286983Z" level=info msg="StartContainer for \"651f57d924ccd13b04298099cdc9cef69b5f29e9c5d6af0c6cca3881c2eff493\" returns successfully" Nov 5 00:06:52.449999 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-651f57d924ccd13b04298099cdc9cef69b5f29e9c5d6af0c6cca3881c2eff493-rootfs.mount: Deactivated successfully. Nov 5 00:06:53.000045 kubelet[2749]: I1105 00:06:52.999967 2749 setters.go:602] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-05T00:06:52Z","lastTransitionTime":"2025-11-05T00:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Nov 5 00:06:53.194375 kubelet[2749]: E1105 00:06:53.194338 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 00:06:53.196080 containerd[1616]: time="2025-11-05T00:06:53.196044407Z" level=info msg="CreateContainer within sandbox \"727a1a5b704e8300e97311ef9629dca8b66bc1a9bdc4162df00155333e682875\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Nov 5 00:06:53.208371 containerd[1616]: time="2025-11-05T00:06:53.208305857Z" level=info msg="Container 6c6d13e23a28a79c7fc158a187bbcf41ee4e2598b784c032d0c9dac2335f301b: CDI devices from CRI Config.CDIDevices: []" Nov 5 00:06:53.210310 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1815437534.mount: Deactivated successfully. Nov 5 00:06:53.214774 containerd[1616]: time="2025-11-05T00:06:53.214723766Z" level=info msg="CreateContainer within sandbox \"727a1a5b704e8300e97311ef9629dca8b66bc1a9bdc4162df00155333e682875\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"6c6d13e23a28a79c7fc158a187bbcf41ee4e2598b784c032d0c9dac2335f301b\"" Nov 5 00:06:53.215309 containerd[1616]: time="2025-11-05T00:06:53.215271279Z" level=info msg="StartContainer for \"6c6d13e23a28a79c7fc158a187bbcf41ee4e2598b784c032d0c9dac2335f301b\"" Nov 5 00:06:53.216055 containerd[1616]: time="2025-11-05T00:06:53.216025502Z" level=info msg="connecting to shim 6c6d13e23a28a79c7fc158a187bbcf41ee4e2598b784c032d0c9dac2335f301b" address="unix:///run/containerd/s/3f6a47b082bfb260961604f053b9c0f5c34a71fd2649309d6476fbc9276cfbe6" protocol=ttrpc version=3 Nov 5 00:06:53.238005 systemd[1]: Started cri-containerd-6c6d13e23a28a79c7fc158a187bbcf41ee4e2598b784c032d0c9dac2335f301b.scope - libcontainer container 6c6d13e23a28a79c7fc158a187bbcf41ee4e2598b784c032d0c9dac2335f301b. 
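The setters.go entry records the node flipping to Ready=False because no CNI plugin is initialized yet; it clears once the cilium-agent container created next writes its CNI configuration. A sketch that reads the same condition back from the API server (the node name "localhost" is taken from the log; kubectl access is assumed):

#!/usr/bin/env python3
"""Read back the node Ready condition reported by the setters.go entry above."""
import subprocess

JSONPATH = ('{.status.conditions[?(@.type=="Ready")].status}{" "}'
            '{.status.conditions[?(@.type=="Ready")].message}')

out = subprocess.run(
    ["kubectl", "get", "node", "localhost", "-o", f"jsonpath={JSONPATH}"],
    check=True, capture_output=True, text=True,
).stdout
# Expected to read "False ... cni plugin not initialized" until the
# cilium agent finishes starting.
print(out)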
Nov 5 00:06:53.268327 containerd[1616]: time="2025-11-05T00:06:53.268240846Z" level=info msg="StartContainer for \"6c6d13e23a28a79c7fc158a187bbcf41ee4e2598b784c032d0c9dac2335f301b\" returns successfully" Nov 5 00:06:53.325425 containerd[1616]: time="2025-11-05T00:06:53.325369882Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6c6d13e23a28a79c7fc158a187bbcf41ee4e2598b784c032d0c9dac2335f301b\" id:\"fac938036372538f7e89fe6963ab195083f9aafd604db29c51ce3b3850f89f1e\" pid:4815 exited_at:{seconds:1762301213 nanos:325014567}" Nov 5 00:06:53.662911 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx)) Nov 5 00:06:54.199591 kubelet[2749]: E1105 00:06:54.199561 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 00:06:54.212569 kubelet[2749]: I1105 00:06:54.212501 2749 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-4xt8q" podStartSLOduration=5.212484931 podStartE2EDuration="5.212484931s" podCreationTimestamp="2025-11-05 00:06:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 00:06:54.212469111 +0000 UTC m=+83.306729651" watchObservedRunningTime="2025-11-05 00:06:54.212484931 +0000 UTC m=+83.306745461" Nov 5 00:06:55.879620 kubelet[2749]: E1105 00:06:55.879539 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 00:06:56.137199 containerd[1616]: time="2025-11-05T00:06:56.137044862Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6c6d13e23a28a79c7fc158a187bbcf41ee4e2598b784c032d0c9dac2335f301b\" id:\"93f2a58886d23fa278ec3c7bf14720b55c68fda1479e4e9e9256e861120f29fc\" pid:5235 exit_status:1 exited_at:{seconds:1762301216 nanos:136632199}" Nov 5 00:06:56.656778 systemd-networkd[1511]: lxc_health: Link UP Nov 5 00:06:56.657090 systemd-networkd[1511]: lxc_health: Gained carrier Nov 5 00:06:56.983712 kubelet[2749]: E1105 00:06:56.983674 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 00:06:57.880190 kubelet[2749]: E1105 00:06:57.879905 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 00:06:58.209486 kubelet[2749]: E1105 00:06:58.209266 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 00:06:58.236546 containerd[1616]: time="2025-11-05T00:06:58.236490681Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6c6d13e23a28a79c7fc158a187bbcf41ee4e2598b784c032d0c9dac2335f301b\" id:\"a260a46212efe261423f1215107f2a871faef49cd195dff05f66d755219c1f80\" pid:5383 exited_at:{seconds:1762301218 nanos:231490805}" Nov 5 00:06:58.669154 systemd-networkd[1511]: lxc_health: Gained IPv6LL Nov 5 00:06:59.210649 kubelet[2749]: E1105 00:06:59.210606 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 00:07:00.321134 
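With the cilium-agent container started, the TaskExit events above correspond to exec processes run inside the agent container (typically its health probes), the pod reports a startup duration of ~5.2 s, and the lxc_health interface comes up. A hedged way to run the agent's own health command by hand; whether this DaemonSet's probes use exactly "cilium status --brief" is an assumption:

#!/usr/bin/env python3
"""Probe the newly started cilium agent roughly the way its checks do."""
import subprocess

cmd = ["kubectl", "-n", "kube-system", "exec", "cilium-4xt8q", "--",
       "cilium", "status", "--brief"]
print(subprocess.run(cmd, check=True, capture_output=True, text=True).stdout)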
containerd[1616]: time="2025-11-05T00:07:00.320985049Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6c6d13e23a28a79c7fc158a187bbcf41ee4e2598b784c032d0c9dac2335f301b\" id:\"44fdc65c98e00aaeeb47998b975e0e339e8a0624ed03c4578a1ac1118d280b62\" pid:5416 exited_at:{seconds:1762301220 nanos:320772852}" Nov 5 00:07:02.409807 containerd[1616]: time="2025-11-05T00:07:02.409762196Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6c6d13e23a28a79c7fc158a187bbcf41ee4e2598b784c032d0c9dac2335f301b\" id:\"d371574425ae075fa5a77ea24aa123359393111c9da9ad27a92b4f1404433de2\" pid:5442 exited_at:{seconds:1762301222 nanos:409592520}" Nov 5 00:07:02.440514 sshd[4554]: Connection closed by 10.0.0.1 port 36570 Nov 5 00:07:02.440945 sshd-session[4547]: pam_unix(sshd:session): session closed for user core Nov 5 00:07:02.445708 systemd[1]: sshd@26-10.0.0.3:22-10.0.0.1:36570.service: Deactivated successfully. Nov 5 00:07:02.447686 systemd[1]: session-27.scope: Deactivated successfully. Nov 5 00:07:02.448447 systemd-logind[1590]: Session 27 logged out. Waiting for processes to exit. Nov 5 00:07:02.449777 systemd-logind[1590]: Removed session 27. Nov 5 00:07:03.982105 kubelet[2749]: E1105 00:07:03.982056 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 00:07:03.982514 kubelet[2749]: E1105 00:07:03.982208 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
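The log ends with session 27 (the last of the SSH sessions 24-27 opened above) closing and its per-connection sshd@26-… unit being deactivated. A sketch for reviewing those sessions from the node itself; the syslog identifiers used as journal filters are assumptions about this host:

#!/usr/bin/env python3
"""Review the SSH sessions opened and closed in the log above."""
import subprocess

def run(*args: str) -> str:
    return subprocess.run(args, check=True, capture_output=True, text=True).stdout

# Sessions systemd-logind still tracks (closed ones no longer appear).
print(run("loginctl", "list-sessions"))
# All sshd and sshd-session messages since boot, matching the entries above.
print(run("journalctl", "-b", "-t", "sshd", "-t", "sshd-session", "--no-pager"))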