Nov 6 23:59:44.213030 kernel: Linux version 6.12.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Thu Nov 6 22:10:46 -00 2025 Nov 6 23:59:44.213057 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=dfca907f387399f05a1f70f0a721c67729758750135d0f481fa9c4c0c2ff9c7e Nov 6 23:59:44.213071 kernel: BIOS-provided physical RAM map: Nov 6 23:59:44.213080 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Nov 6 23:59:44.213089 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Nov 6 23:59:44.213097 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Nov 6 23:59:44.213108 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable Nov 6 23:59:44.213126 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved Nov 6 23:59:44.213139 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Nov 6 23:59:44.213150 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Nov 6 23:59:44.213160 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Nov 6 23:59:44.213169 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Nov 6 23:59:44.213179 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Nov 6 23:59:44.213188 kernel: NX (Execute Disable) protection: active Nov 6 23:59:44.213202 kernel: APIC: Static calls initialized Nov 6 23:59:44.213212 kernel: SMBIOS 2.8 present. 
Nov 6 23:59:44.213226 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Nov 6 23:59:44.213235 kernel: DMI: Memory slots populated: 1/1 Nov 6 23:59:44.213245 kernel: Hypervisor detected: KVM Nov 6 23:59:44.213255 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Nov 6 23:59:44.213265 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Nov 6 23:59:44.213274 kernel: kvm-clock: using sched offset of 4059586194 cycles Nov 6 23:59:44.213284 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Nov 6 23:59:44.213295 kernel: tsc: Detected 2794.750 MHz processor Nov 6 23:59:44.213309 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Nov 6 23:59:44.213320 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Nov 6 23:59:44.213330 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Nov 6 23:59:44.213341 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Nov 6 23:59:44.213352 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Nov 6 23:59:44.213362 kernel: Using GB pages for direct mapping Nov 6 23:59:44.213373 kernel: ACPI: Early table checksum verification disabled Nov 6 23:59:44.213400 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) Nov 6 23:59:44.213411 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 6 23:59:44.213422 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Nov 6 23:59:44.213432 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 6 23:59:44.213443 kernel: ACPI: FACS 0x000000009CFE0000 000040 Nov 6 23:59:44.213453 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 6 23:59:44.213464 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 6 23:59:44.213478 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 
BOCHS BXPC 00000001 BXPC 00000001) Nov 6 23:59:44.213489 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 6 23:59:44.213505 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed] Nov 6 23:59:44.213515 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9] Nov 6 23:59:44.213526 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Nov 6 23:59:44.213540 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d] Nov 6 23:59:44.213550 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5] Nov 6 23:59:44.213561 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1] Nov 6 23:59:44.213572 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419] Nov 6 23:59:44.213582 kernel: No NUMA configuration found Nov 6 23:59:44.213593 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] Nov 6 23:59:44.213607 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff] Nov 6 23:59:44.213620 kernel: Zone ranges: Nov 6 23:59:44.213633 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Nov 6 23:59:44.213644 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] Nov 6 23:59:44.213655 kernel: Normal empty Nov 6 23:59:44.213665 kernel: Device empty Nov 6 23:59:44.213676 kernel: Movable zone start for each node Nov 6 23:59:44.213686 kernel: Early memory node ranges Nov 6 23:59:44.213701 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Nov 6 23:59:44.213713 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff] Nov 6 23:59:44.213725 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff] Nov 6 23:59:44.213737 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Nov 6 23:59:44.213749 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Nov 6 23:59:44.213761 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Nov 6 23:59:44.213777 kernel: ACPI: PM-Timer IO 
Port: 0x608 Nov 6 23:59:44.213788 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Nov 6 23:59:44.213804 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Nov 6 23:59:44.213816 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Nov 6 23:59:44.213831 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Nov 6 23:59:44.213842 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Nov 6 23:59:44.213853 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Nov 6 23:59:44.213864 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Nov 6 23:59:44.213876 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Nov 6 23:59:44.213890 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Nov 6 23:59:44.213901 kernel: TSC deadline timer available Nov 6 23:59:44.213913 kernel: CPU topo: Max. logical packages: 1 Nov 6 23:59:44.213924 kernel: CPU topo: Max. logical dies: 1 Nov 6 23:59:44.213936 kernel: CPU topo: Max. dies per package: 1 Nov 6 23:59:44.213947 kernel: CPU topo: Max. threads per core: 1 Nov 6 23:59:44.213958 kernel: CPU topo: Num. cores per package: 4 Nov 6 23:59:44.213973 kernel: CPU topo: Num. 
threads per package: 4 Nov 6 23:59:44.213984 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs Nov 6 23:59:44.213995 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Nov 6 23:59:44.214006 kernel: kvm-guest: KVM setup pv remote TLB flush Nov 6 23:59:44.214017 kernel: kvm-guest: setup PV sched yield Nov 6 23:59:44.214029 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Nov 6 23:59:44.214040 kernel: Booting paravirtualized kernel on KVM Nov 6 23:59:44.214051 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Nov 6 23:59:44.214066 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Nov 6 23:59:44.214077 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288 Nov 6 23:59:44.214088 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152 Nov 6 23:59:44.214099 kernel: pcpu-alloc: [0] 0 1 2 3 Nov 6 23:59:44.214111 kernel: kvm-guest: PV spinlocks enabled Nov 6 23:59:44.214131 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Nov 6 23:59:44.214146 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=dfca907f387399f05a1f70f0a721c67729758750135d0f481fa9c4c0c2ff9c7e Nov 6 23:59:44.214162 kernel: random: crng init done Nov 6 23:59:44.214174 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Nov 6 23:59:44.214186 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Nov 6 23:59:44.214198 kernel: Fallback order for Node 0: 0 Nov 6 23:59:44.214211 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 642938 Nov 6 23:59:44.214223 kernel: Policy zone: DMA32 Nov 6 23:59:44.214239 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Nov 6 23:59:44.214252 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Nov 6 23:59:44.214263 kernel: ftrace: allocating 40092 entries in 157 pages Nov 6 23:59:44.214275 kernel: ftrace: allocated 157 pages with 5 groups Nov 6 23:59:44.214287 kernel: Dynamic Preempt: voluntary Nov 6 23:59:44.214298 kernel: rcu: Preemptible hierarchical RCU implementation. Nov 6 23:59:44.214310 kernel: rcu: RCU event tracing is enabled. Nov 6 23:59:44.214323 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Nov 6 23:59:44.214337 kernel: Trampoline variant of Tasks RCU enabled. Nov 6 23:59:44.214354 kernel: Rude variant of Tasks RCU enabled. Nov 6 23:59:44.214367 kernel: Tracing variant of Tasks RCU enabled. Nov 6 23:59:44.214379 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Nov 6 23:59:44.214405 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Nov 6 23:59:44.214416 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Nov 6 23:59:44.214428 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Nov 6 23:59:44.214445 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Nov 6 23:59:44.214458 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Nov 6 23:59:44.214470 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Nov 6 23:59:44.214491 kernel: Console: colour VGA+ 80x25 Nov 6 23:59:44.214505 kernel: printk: legacy console [ttyS0] enabled Nov 6 23:59:44.214518 kernel: ACPI: Core revision 20240827 Nov 6 23:59:44.214531 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Nov 6 23:59:44.214545 kernel: APIC: Switch to symmetric I/O mode setup Nov 6 23:59:44.214557 kernel: x2apic enabled Nov 6 23:59:44.214574 kernel: APIC: Switched APIC routing to: physical x2apic Nov 6 23:59:44.214590 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Nov 6 23:59:44.214602 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Nov 6 23:59:44.214614 kernel: kvm-guest: setup PV IPIs Nov 6 23:59:44.214629 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Nov 6 23:59:44.214641 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848e100549, max_idle_ns: 440795215505 ns Nov 6 23:59:44.214654 kernel: Calibrating delay loop (skipped) preset value.. 
5589.50 BogoMIPS (lpj=2794750) Nov 6 23:59:44.214666 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Nov 6 23:59:44.214679 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Nov 6 23:59:44.214690 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Nov 6 23:59:44.214702 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Nov 6 23:59:44.214717 kernel: Spectre V2 : Mitigation: Retpolines Nov 6 23:59:44.214730 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Nov 6 23:59:44.214741 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Nov 6 23:59:44.214754 kernel: active return thunk: retbleed_return_thunk Nov 6 23:59:44.214766 kernel: RETBleed: Mitigation: untrained return thunk Nov 6 23:59:44.214778 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Nov 6 23:59:44.214791 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Nov 6 23:59:44.214807 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Nov 6 23:59:44.214819 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Nov 6 23:59:44.214832 kernel: active return thunk: srso_return_thunk Nov 6 23:59:44.214844 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Nov 6 23:59:44.214856 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Nov 6 23:59:44.214868 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Nov 6 23:59:44.214881 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Nov 6 23:59:44.214895 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Nov 6 23:59:44.214907 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. 
Nov 6 23:59:44.214919 kernel: Freeing SMP alternatives memory: 32K Nov 6 23:59:44.214931 kernel: pid_max: default: 32768 minimum: 301 Nov 6 23:59:44.214943 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Nov 6 23:59:44.214954 kernel: landlock: Up and running. Nov 6 23:59:44.214966 kernel: SELinux: Initializing. Nov 6 23:59:44.214984 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Nov 6 23:59:44.214997 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Nov 6 23:59:44.215010 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Nov 6 23:59:44.215022 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Nov 6 23:59:44.215034 kernel: ... version: 0 Nov 6 23:59:44.215046 kernel: ... bit width: 48 Nov 6 23:59:44.215059 kernel: ... generic registers: 6 Nov 6 23:59:44.215075 kernel: ... value mask: 0000ffffffffffff Nov 6 23:59:44.215088 kernel: ... max period: 00007fffffffffff Nov 6 23:59:44.215100 kernel: ... fixed-purpose events: 0 Nov 6 23:59:44.215112 kernel: ... event mask: 000000000000003f Nov 6 23:59:44.215132 kernel: signal: max sigframe size: 1776 Nov 6 23:59:44.215146 kernel: rcu: Hierarchical SRCU implementation. Nov 6 23:59:44.215158 kernel: rcu: Max phase no-delay instances is 400. Nov 6 23:59:44.215175 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Nov 6 23:59:44.215188 kernel: smp: Bringing up secondary CPUs ... Nov 6 23:59:44.215200 kernel: smpboot: x86: Booting SMP configuration: Nov 6 23:59:44.215213 kernel: .... 
node #0, CPUs: #1 #2 #3 Nov 6 23:59:44.215226 kernel: smp: Brought up 1 node, 4 CPUs Nov 6 23:59:44.215238 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS) Nov 6 23:59:44.215251 kernel: Memory: 2451440K/2571752K available (14336K kernel code, 2443K rwdata, 26064K rodata, 15956K init, 2088K bss, 114376K reserved, 0K cma-reserved) Nov 6 23:59:44.215265 kernel: devtmpfs: initialized Nov 6 23:59:44.215277 kernel: x86/mm: Memory block size: 128MB Nov 6 23:59:44.215289 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Nov 6 23:59:44.215301 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Nov 6 23:59:44.215314 kernel: pinctrl core: initialized pinctrl subsystem Nov 6 23:59:44.215325 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Nov 6 23:59:44.215337 kernel: audit: initializing netlink subsys (disabled) Nov 6 23:59:44.215349 kernel: audit: type=2000 audit(1762473582.102:1): state=initialized audit_enabled=0 res=1 Nov 6 23:59:44.215364 kernel: thermal_sys: Registered thermal governor 'step_wise' Nov 6 23:59:44.215376 kernel: thermal_sys: Registered thermal governor 'user_space' Nov 6 23:59:44.215403 kernel: cpuidle: using governor menu Nov 6 23:59:44.215416 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Nov 6 23:59:44.215429 kernel: dca service started, version 1.12.1 Nov 6 23:59:44.215441 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff] Nov 6 23:59:44.215456 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry Nov 6 23:59:44.215472 kernel: PCI: Using configuration type 1 for base access Nov 6 23:59:44.215485 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Nov 6 23:59:44.215498 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Nov 6 23:59:44.215510 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Nov 6 23:59:44.215522 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Nov 6 23:59:44.215534 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Nov 6 23:59:44.215545 kernel: ACPI: Added _OSI(Module Device) Nov 6 23:59:44.215560 kernel: ACPI: Added _OSI(Processor Device) Nov 6 23:59:44.215572 kernel: ACPI: Added _OSI(Processor Aggregator Device) Nov 6 23:59:44.215583 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Nov 6 23:59:44.215595 kernel: ACPI: Interpreter enabled Nov 6 23:59:44.215607 kernel: ACPI: PM: (supports S0 S3 S5) Nov 6 23:59:44.215618 kernel: ACPI: Using IOAPIC for interrupt routing Nov 6 23:59:44.215631 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Nov 6 23:59:44.215643 kernel: PCI: Using E820 reservations for host bridge windows Nov 6 23:59:44.215658 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Nov 6 23:59:44.215671 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Nov 6 23:59:44.215943 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Nov 6 23:59:44.216165 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Nov 6 23:59:44.216371 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Nov 6 23:59:44.216410 kernel: PCI host bridge to bus 0000:00 Nov 6 23:59:44.216618 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Nov 6 23:59:44.216807 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Nov 6 23:59:44.216991 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Nov 6 23:59:44.217186 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Nov 6 23:59:44.217370 kernel: pci_bus 0000:00: root 
bus resource [mem 0xc0000000-0xfebfffff window] Nov 6 23:59:44.217577 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Nov 6 23:59:44.217764 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Nov 6 23:59:44.217988 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint Nov 6 23:59:44.218213 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint Nov 6 23:59:44.218439 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref] Nov 6 23:59:44.218652 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff] Nov 6 23:59:44.218854 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref] Nov 6 23:59:44.219053 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Nov 6 23:59:44.219276 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Nov 6 23:59:44.219500 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df] Nov 6 23:59:44.219705 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff] Nov 6 23:59:44.219912 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref] Nov 6 23:59:44.220134 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint Nov 6 23:59:44.220339 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f] Nov 6 23:59:44.220561 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff] Nov 6 23:59:44.220765 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref] Nov 6 23:59:44.220978 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Nov 6 23:59:44.221195 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff] Nov 6 23:59:44.221410 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff] Nov 6 23:59:44.221611 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref] Nov 6 23:59:44.221812 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref] Nov 6 23:59:44.222006 kernel: pci 
0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint Nov 6 23:59:44.222210 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Nov 6 23:59:44.222440 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint Nov 6 23:59:44.222634 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f] Nov 6 23:59:44.222826 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff] Nov 6 23:59:44.223023 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint Nov 6 23:59:44.223224 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f] Nov 6 23:59:44.223244 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Nov 6 23:59:44.223255 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Nov 6 23:59:44.223265 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Nov 6 23:59:44.223277 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Nov 6 23:59:44.223287 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Nov 6 23:59:44.223298 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Nov 6 23:59:44.223308 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Nov 6 23:59:44.223322 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Nov 6 23:59:44.223332 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Nov 6 23:59:44.223342 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Nov 6 23:59:44.223352 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Nov 6 23:59:44.223362 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Nov 6 23:59:44.223372 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Nov 6 23:59:44.223382 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Nov 6 23:59:44.223409 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Nov 6 23:59:44.223419 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Nov 6 23:59:44.223429 kernel: iommu: Default 
domain type: Translated Nov 6 23:59:44.223439 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Nov 6 23:59:44.223450 kernel: PCI: Using ACPI for IRQ routing Nov 6 23:59:44.223460 kernel: PCI: pci_cache_line_size set to 64 bytes Nov 6 23:59:44.223470 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Nov 6 23:59:44.223483 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] Nov 6 23:59:44.223680 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Nov 6 23:59:44.223874 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Nov 6 23:59:44.224060 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Nov 6 23:59:44.224073 kernel: vgaarb: loaded Nov 6 23:59:44.224084 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Nov 6 23:59:44.224094 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Nov 6 23:59:44.224108 kernel: clocksource: Switched to clocksource kvm-clock Nov 6 23:59:44.224127 kernel: VFS: Disk quotas dquot_6.6.0 Nov 6 23:59:44.224137 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Nov 6 23:59:44.224147 kernel: pnp: PnP ACPI init Nov 6 23:59:44.224353 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Nov 6 23:59:44.224368 kernel: pnp: PnP ACPI: found 6 devices Nov 6 23:59:44.224382 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Nov 6 23:59:44.224406 kernel: NET: Registered PF_INET protocol family Nov 6 23:59:44.224417 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Nov 6 23:59:44.224428 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Nov 6 23:59:44.224438 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Nov 6 23:59:44.224448 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Nov 6 23:59:44.224458 kernel: TCP bind hash table entries: 32768 (order: 8, 
1048576 bytes, linear) Nov 6 23:59:44.224472 kernel: TCP: Hash tables configured (established 32768 bind 32768) Nov 6 23:59:44.224482 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Nov 6 23:59:44.224493 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Nov 6 23:59:44.224503 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Nov 6 23:59:44.224513 kernel: NET: Registered PF_XDP protocol family Nov 6 23:59:44.224698 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Nov 6 23:59:44.224895 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Nov 6 23:59:44.225173 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Nov 6 23:59:44.225358 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Nov 6 23:59:44.225557 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Nov 6 23:59:44.225746 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Nov 6 23:59:44.225762 kernel: PCI: CLS 0 bytes, default 64 Nov 6 23:59:44.225774 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848e100549, max_idle_ns: 440795215505 ns Nov 6 23:59:44.225785 kernel: Initialise system trusted keyrings Nov 6 23:59:44.225801 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Nov 6 23:59:44.225812 kernel: Key type asymmetric registered Nov 6 23:59:44.225823 kernel: Asymmetric key parser 'x509' registered Nov 6 23:59:44.225834 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Nov 6 23:59:44.225845 kernel: io scheduler mq-deadline registered Nov 6 23:59:44.225856 kernel: io scheduler kyber registered Nov 6 23:59:44.225867 kernel: io scheduler bfq registered Nov 6 23:59:44.225881 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Nov 6 23:59:44.225892 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Nov 6 23:59:44.225903 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Nov 6 23:59:44.225913 kernel: ACPI: 
\_SB_.GSIE: Enabled at IRQ 20 Nov 6 23:59:44.225923 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Nov 6 23:59:44.225933 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Nov 6 23:59:44.225944 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Nov 6 23:59:44.225956 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Nov 6 23:59:44.225966 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Nov 6 23:59:44.226224 kernel: rtc_cmos 00:04: RTC can wake from S4 Nov 6 23:59:44.226242 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Nov 6 23:59:44.226459 kernel: rtc_cmos 00:04: registered as rtc0 Nov 6 23:59:44.226655 kernel: rtc_cmos 00:04: setting system clock to 2025-11-06T23:59:42 UTC (1762473582) Nov 6 23:59:44.226857 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Nov 6 23:59:44.226873 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Nov 6 23:59:44.226886 kernel: NET: Registered PF_INET6 protocol family Nov 6 23:59:44.226898 kernel: Segment Routing with IPv6 Nov 6 23:59:44.226910 kernel: In-situ OAM (IOAM) with IPv6 Nov 6 23:59:44.226923 kernel: NET: Registered PF_PACKET protocol family Nov 6 23:59:44.226935 kernel: Key type dns_resolver registered Nov 6 23:59:44.226948 kernel: IPI shorthand broadcast: enabled Nov 6 23:59:44.226964 kernel: sched_clock: Marking stable (1165004037, 306124383)->(1558795030, -87666610) Nov 6 23:59:44.226976 kernel: registered taskstats version 1 Nov 6 23:59:44.226989 kernel: Loading compiled-in X.509 certificates Nov 6 23:59:44.227002 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.54-flatcar: a701a154daed2de4fe9459199e7b4f93a1f30f1e' Nov 6 23:59:44.227014 kernel: Demotion targets for Node 0: null Nov 6 23:59:44.227027 kernel: Key type .fscrypt registered Nov 6 23:59:44.227039 kernel: Key type fscrypt-provisioning registered Nov 6 23:59:44.227054 
kernel: ima: No TPM chip found, activating TPM-bypass! Nov 6 23:59:44.227066 kernel: ima: Allocated hash algorithm: sha1 Nov 6 23:59:44.227078 kernel: ima: No architecture policies found Nov 6 23:59:44.227090 kernel: clk: Disabling unused clocks Nov 6 23:59:44.227102 kernel: Freeing unused kernel image (initmem) memory: 15956K Nov 6 23:59:44.227114 kernel: Write protecting the kernel read-only data: 40960k Nov 6 23:59:44.227138 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K Nov 6 23:59:44.227153 kernel: Run /init as init process Nov 6 23:59:44.227165 kernel: with arguments: Nov 6 23:59:44.227178 kernel: /init Nov 6 23:59:44.227190 kernel: with environment: Nov 6 23:59:44.227202 kernel: HOME=/ Nov 6 23:59:44.227215 kernel: TERM=linux Nov 6 23:59:44.227227 kernel: SCSI subsystem initialized Nov 6 23:59:44.227243 kernel: libata version 3.00 loaded. Nov 6 23:59:44.227472 kernel: ahci 0000:00:1f.2: version 3.0 Nov 6 23:59:44.227511 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Nov 6 23:59:44.227719 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Nov 6 23:59:44.227927 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Nov 6 23:59:44.228149 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Nov 6 23:59:44.228386 kernel: scsi host0: ahci Nov 6 23:59:44.228639 kernel: scsi host1: ahci Nov 6 23:59:44.228862 kernel: scsi host2: ahci Nov 6 23:59:44.229053 kernel: scsi host3: ahci Nov 6 23:59:44.229250 kernel: scsi host4: ahci Nov 6 23:59:44.229474 kernel: scsi host5: ahci Nov 6 23:59:44.229492 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 26 lpm-pol 1 Nov 6 23:59:44.229504 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 26 lpm-pol 1 Nov 6 23:59:44.229516 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 26 lpm-pol 1 Nov 6 23:59:44.229528 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 26 lpm-pol 1 Nov 6 
23:59:44.229540 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 26 lpm-pol 1 Nov 6 23:59:44.229551 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 26 lpm-pol 1 Nov 6 23:59:44.229568 kernel: ata2: SATA link down (SStatus 0 SControl 300) Nov 6 23:59:44.229579 kernel: ata1: SATA link down (SStatus 0 SControl 300) Nov 6 23:59:44.229591 kernel: ata5: SATA link down (SStatus 0 SControl 300) Nov 6 23:59:44.229603 kernel: ata6: SATA link down (SStatus 0 SControl 300) Nov 6 23:59:44.229615 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Nov 6 23:59:44.229629 kernel: ata3.00: LPM support broken, forcing max_power Nov 6 23:59:44.229642 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Nov 6 23:59:44.229657 kernel: ata3.00: applying bridge limits Nov 6 23:59:44.229669 kernel: ata4: SATA link down (SStatus 0 SControl 300) Nov 6 23:59:44.229680 kernel: ata3.00: LPM support broken, forcing max_power Nov 6 23:59:44.229692 kernel: ata3.00: configured for UDMA/100 Nov 6 23:59:44.229931 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Nov 6 23:59:44.230161 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Nov 6 23:59:44.230370 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB) Nov 6 23:59:44.230386 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Nov 6 23:59:44.230413 kernel: GPT:16515071 != 27000831 Nov 6 23:59:44.230425 kernel: GPT:Alternate GPT header not at the end of the disk. Nov 6 23:59:44.230436 kernel: GPT:16515071 != 27000831 Nov 6 23:59:44.230448 kernel: GPT: Use GNU Parted to correct GPT errors. 
Nov 6 23:59:44.230459 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 6 23:59:44.230689 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Nov 6 23:59:44.230705 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Nov 6 23:59:44.230927 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Nov 6 23:59:44.230943 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 6 23:59:44.230955 kernel: device-mapper: uevent: version 1.0.3
Nov 6 23:59:44.230967 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Nov 6 23:59:44.230983 kernel: device-mapper: verity: sha256 using shash "sha256-generic"
Nov 6 23:59:44.230997 kernel: raid6: avx2x4 gen() 30058 MB/s
Nov 6 23:59:44.231009 kernel: raid6: avx2x2 gen() 30971 MB/s
Nov 6 23:59:44.231021 kernel: raid6: avx2x1 gen() 25697 MB/s
Nov 6 23:59:44.231033 kernel: raid6: using algorithm avx2x2 gen() 30971 MB/s
Nov 6 23:59:44.231047 kernel: raid6: .... xor() 19773 MB/s, rmw enabled
Nov 6 23:59:44.231059 kernel: raid6: using avx2x2 recovery algorithm
Nov 6 23:59:44.231071 kernel: xor: automatically using best checksumming function avx
Nov 6 23:59:44.231083 kernel: Btrfs loaded, zoned=no, fsverity=no
Nov 6 23:59:44.231095 kernel: BTRFS: device fsid e643e10b-d997-4333-8d60-30d1c22703fe devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (182)
Nov 6 23:59:44.231108 kernel: BTRFS info (device dm-0): first mount of filesystem e643e10b-d997-4333-8d60-30d1c22703fe
Nov 6 23:59:44.231128 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Nov 6 23:59:44.231143 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Nov 6 23:59:44.231154 kernel: BTRFS info (device dm-0): enabling free space tree
Nov 6 23:59:44.231166 kernel: loop: module loaded
Nov 6 23:59:44.231178 kernel: loop0: detected capacity change from 0 to 100120
Nov 6 23:59:44.231190 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Nov 6 23:59:44.231203 systemd[1]: Successfully made /usr/ read-only.
Nov 6 23:59:44.231219 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Nov 6 23:59:44.231235 systemd[1]: Detected virtualization kvm.
Nov 6 23:59:44.231247 systemd[1]: Detected architecture x86-64.
Nov 6 23:59:44.231259 systemd[1]: Running in initrd.
Nov 6 23:59:44.231271 systemd[1]: No hostname configured, using default hostname.
Nov 6 23:59:44.231284 systemd[1]: Hostname set to .
Nov 6 23:59:44.231296 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Nov 6 23:59:44.231311 systemd[1]: Queued start job for default target initrd.target.
Nov 6 23:59:44.231323 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Nov 6 23:59:44.231336 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 6 23:59:44.231348 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 6 23:59:44.231364 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Nov 6 23:59:44.231377 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 6 23:59:44.231419 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Nov 6 23:59:44.231449 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Nov 6 23:59:44.231464 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 6 23:59:44.231476 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 6 23:59:44.231489 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Nov 6 23:59:44.231501 systemd[1]: Reached target paths.target - Path Units.
Nov 6 23:59:44.231517 systemd[1]: Reached target slices.target - Slice Units.
Nov 6 23:59:44.231530 systemd[1]: Reached target swap.target - Swaps.
Nov 6 23:59:44.231542 systemd[1]: Reached target timers.target - Timer Units.
Nov 6 23:59:44.231554 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Nov 6 23:59:44.231567 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 6 23:59:44.231579 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Nov 6 23:59:44.231591 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Nov 6 23:59:44.231606 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 6 23:59:44.231619 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 6 23:59:44.231631 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 6 23:59:44.231643 systemd[1]: Reached target sockets.target - Socket Units.
Nov 6 23:59:44.231656 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Nov 6 23:59:44.231669 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Nov 6 23:59:44.231684 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 6 23:59:44.231697 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Nov 6 23:59:44.231710 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Nov 6 23:59:44.231723 systemd[1]: Starting systemd-fsck-usr.service...
Nov 6 23:59:44.231735 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 6 23:59:44.231748 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 6 23:59:44.231760 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 6 23:59:44.231777 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Nov 6 23:59:44.231819 systemd-journald[315]: Collecting audit messages is disabled.
Nov 6 23:59:44.231849 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 6 23:59:44.231862 systemd[1]: Finished systemd-fsck-usr.service.
Nov 6 23:59:44.231874 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 6 23:59:44.231887 systemd-journald[315]: Journal started
Nov 6 23:59:44.231913 systemd-journald[315]: Runtime Journal (/run/log/journal/268d3807a8864e999840704f297ea80a) is 6M, max 48.3M, 42.2M free.
Nov 6 23:59:44.236419 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 6 23:59:44.244531 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 6 23:59:44.256430 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 6 23:59:44.259619 systemd-modules-load[318]: Inserted module 'br_netfilter'
Nov 6 23:59:44.326794 kernel: Bridge firewalling registered
Nov 6 23:59:44.260376 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 6 23:59:44.268433 systemd-tmpfiles[333]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Nov 6 23:59:44.328426 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 6 23:59:44.335365 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 6 23:59:44.340627 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 6 23:59:44.346641 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 6 23:59:44.349399 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 6 23:59:44.365053 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 6 23:59:44.379592 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 6 23:59:44.381535 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 6 23:59:44.382728 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 6 23:59:44.388624 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Nov 6 23:59:44.392511 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 6 23:59:44.417578 dracut-cmdline[359]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=dfca907f387399f05a1f70f0a721c67729758750135d0f481fa9c4c0c2ff9c7e
Nov 6 23:59:44.452130 systemd-resolved[360]: Positive Trust Anchors:
Nov 6 23:59:44.452147 systemd-resolved[360]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 6 23:59:44.452152 systemd-resolved[360]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Nov 6 23:59:44.452184 systemd-resolved[360]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 6 23:59:44.475942 systemd-resolved[360]: Defaulting to hostname 'linux'.
Nov 6 23:59:44.477786 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 6 23:59:44.478828 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 6 23:59:44.552431 kernel: Loading iSCSI transport class v2.0-870.
Nov 6 23:59:44.566423 kernel: iscsi: registered transport (tcp)
Nov 6 23:59:44.590422 kernel: iscsi: registered transport (qla4xxx)
Nov 6 23:59:44.590480 kernel: QLogic iSCSI HBA Driver
Nov 6 23:59:44.618365 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 6 23:59:44.650099 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Nov 6 23:59:44.655742 systemd[1]: Reached target network-pre.target - Preparation for Network.
Nov 6 23:59:44.713705 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Nov 6 23:59:44.717236 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Nov 6 23:59:44.719603 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Nov 6 23:59:44.767074 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Nov 6 23:59:44.772447 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 6 23:59:44.813536 systemd-udevd[601]: Using default interface naming scheme 'v257'.
Nov 6 23:59:44.829560 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 6 23:59:44.835324 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Nov 6 23:59:44.862193 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 6 23:59:44.866491 dracut-pre-trigger[676]: rd.md=0: removing MD RAID activation
Nov 6 23:59:44.867513 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 6 23:59:44.900159 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 6 23:59:44.905164 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 6 23:59:44.929637 systemd-networkd[709]: lo: Link UP
Nov 6 23:59:44.929647 systemd-networkd[709]: lo: Gained carrier
Nov 6 23:59:44.930352 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 6 23:59:44.931164 systemd[1]: Reached target network.target - Network.
Nov 6 23:59:45.006524 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 6 23:59:45.009361 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Nov 6 23:59:45.071833 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Nov 6 23:59:45.090375 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Nov 6 23:59:45.103154 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Nov 6 23:59:45.113377 kernel: cryptd: max_cpu_qlen set to 1000
Nov 6 23:59:45.118327 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Nov 6 23:59:45.126043 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Nov 6 23:59:45.131580 kernel: AES CTR mode by8 optimization enabled
Nov 6 23:59:45.140736 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 6 23:59:45.148833 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Nov 6 23:59:45.140888 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 6 23:59:45.143589 systemd-networkd[709]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Nov 6 23:59:45.143594 systemd-networkd[709]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 6 23:59:45.150329 systemd-networkd[709]: eth0: Link UP
Nov 6 23:59:45.150668 systemd-networkd[709]: eth0: Gained carrier
Nov 6 23:59:45.150686 systemd-networkd[709]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Nov 6 23:59:45.157149 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Nov 6 23:59:45.163814 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 6 23:59:45.167023 systemd-networkd[709]: eth0: DHCPv4 address 10.0.0.46/16, gateway 10.0.0.1 acquired from 10.0.0.1
Nov 6 23:59:45.272576 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 6 23:59:45.279562 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Nov 6 23:59:45.280853 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 6 23:59:45.283920 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 6 23:59:45.290085 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 6 23:59:45.297327 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Nov 6 23:59:45.325249 disk-uuid[804]: Primary Header is updated.
Nov 6 23:59:45.325249 disk-uuid[804]: Secondary Entries is updated.
Nov 6 23:59:45.325249 disk-uuid[804]: Secondary Header is updated.
Nov 6 23:59:45.346019 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Nov 6 23:59:46.367929 disk-uuid[846]: Warning: The kernel is still using the old partition table.
Nov 6 23:59:46.367929 disk-uuid[846]: The new table will be used at the next reboot or after you
Nov 6 23:59:46.367929 disk-uuid[846]: run partprobe(8) or kpartx(8)
Nov 6 23:59:46.367929 disk-uuid[846]: The operation has completed successfully.
Nov 6 23:59:46.376217 systemd[1]: disk-uuid.service: Deactivated successfully.
Nov 6 23:59:46.376384 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Nov 6 23:59:46.381861 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Nov 6 23:59:46.426431 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (864)
Nov 6 23:59:46.426501 kernel: BTRFS info (device vda6): first mount of filesystem 2ac2db45-4534-4157-8998-4b59cd0cd819
Nov 6 23:59:46.430014 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 6 23:59:46.434357 kernel: BTRFS info (device vda6): turning on async discard
Nov 6 23:59:46.434380 kernel: BTRFS info (device vda6): enabling free space tree
Nov 6 23:59:46.443446 kernel: BTRFS info (device vda6): last unmount of filesystem 2ac2db45-4534-4157-8998-4b59cd0cd819
Nov 6 23:59:46.444654 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Nov 6 23:59:46.449035 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Nov 6 23:59:46.581809 ignition[883]: Ignition 2.22.0
Nov 6 23:59:46.581824 ignition[883]: Stage: fetch-offline
Nov 6 23:59:46.581874 ignition[883]: no configs at "/usr/lib/ignition/base.d"
Nov 6 23:59:46.581886 ignition[883]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 6 23:59:46.581964 ignition[883]: parsed url from cmdline: ""
Nov 6 23:59:46.581969 ignition[883]: no config URL provided
Nov 6 23:59:46.581974 ignition[883]: reading system config file "/usr/lib/ignition/user.ign"
Nov 6 23:59:46.581986 ignition[883]: no config at "/usr/lib/ignition/user.ign"
Nov 6 23:59:46.582035 ignition[883]: op(1): [started] loading QEMU firmware config module
Nov 6 23:59:46.582040 ignition[883]: op(1): executing: "modprobe" "qemu_fw_cfg"
Nov 6 23:59:46.598453 ignition[883]: op(1): [finished] loading QEMU firmware config module
Nov 6 23:59:46.610571 systemd-networkd[709]: eth0: Gained IPv6LL
Nov 6 23:59:46.680652 ignition[883]: parsing config with SHA512: 1e45bcbb3de034aef5fbc8defe0dab92ba87c205ae54d95c02974dfc01e1e18720104fd0cf368f0ddf0574f065734a92d9d4a3c9776e2266c631d021f317e711
Nov 6 23:59:46.687236 unknown[883]: fetched base config from "system"
Nov 6 23:59:46.687254 unknown[883]: fetched user config from "qemu"
Nov 6 23:59:46.687768 ignition[883]: fetch-offline: fetch-offline passed
Nov 6 23:59:46.687833 ignition[883]: Ignition finished successfully
Nov 6 23:59:46.691650 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 6 23:59:46.693199 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Nov 6 23:59:46.694252 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Nov 6 23:59:46.743771 ignition[894]: Ignition 2.22.0
Nov 6 23:59:46.743789 ignition[894]: Stage: kargs
Nov 6 23:59:46.743931 ignition[894]: no configs at "/usr/lib/ignition/base.d"
Nov 6 23:59:46.743942 ignition[894]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 6 23:59:46.744707 ignition[894]: kargs: kargs passed
Nov 6 23:59:46.744767 ignition[894]: Ignition finished successfully
Nov 6 23:59:46.756477 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Nov 6 23:59:46.761042 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Nov 6 23:59:46.800135 ignition[902]: Ignition 2.22.0
Nov 6 23:59:46.800151 ignition[902]: Stage: disks
Nov 6 23:59:46.800322 ignition[902]: no configs at "/usr/lib/ignition/base.d"
Nov 6 23:59:46.800334 ignition[902]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 6 23:59:46.801280 ignition[902]: disks: disks passed
Nov 6 23:59:46.801333 ignition[902]: Ignition finished successfully
Nov 6 23:59:46.809149 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Nov 6 23:59:46.810040 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Nov 6 23:59:46.810363 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Nov 6 23:59:46.816379 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 6 23:59:46.821191 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 6 23:59:46.824501 systemd[1]: Reached target basic.target - Basic System.
Nov 6 23:59:46.828526 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Nov 6 23:59:46.879106 systemd-fsck[912]: ROOT: clean, 15/456736 files, 38230/456704 blocks
Nov 6 23:59:47.208020 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Nov 6 23:59:47.209928 systemd[1]: Mounting sysroot.mount - /sysroot...
Nov 6 23:59:47.362430 kernel: EXT4-fs (vda9): mounted filesystem 9eac1486-40e9-4edf-8a17-71182690c138 r/w with ordered data mode. Quota mode: none.
Nov 6 23:59:47.363462 systemd[1]: Mounted sysroot.mount - /sysroot.
Nov 6 23:59:47.364766 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Nov 6 23:59:47.369259 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 6 23:59:47.373490 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Nov 6 23:59:47.374685 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Nov 6 23:59:47.374735 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Nov 6 23:59:47.374773 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 6 23:59:47.392755 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Nov 6 23:59:47.400493 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (920)
Nov 6 23:59:47.400513 kernel: BTRFS info (device vda6): first mount of filesystem 2ac2db45-4534-4157-8998-4b59cd0cd819
Nov 6 23:59:47.400525 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 6 23:59:47.401466 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Nov 6 23:59:47.407507 kernel: BTRFS info (device vda6): turning on async discard
Nov 6 23:59:47.407558 kernel: BTRFS info (device vda6): enabling free space tree
Nov 6 23:59:47.408702 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 6 23:59:47.453997 initrd-setup-root[944]: cut: /sysroot/etc/passwd: No such file or directory
Nov 6 23:59:47.459894 initrd-setup-root[951]: cut: /sysroot/etc/group: No such file or directory
Nov 6 23:59:47.465302 initrd-setup-root[958]: cut: /sysroot/etc/shadow: No such file or directory
Nov 6 23:59:47.469451 initrd-setup-root[965]: cut: /sysroot/etc/gshadow: No such file or directory
Nov 6 23:59:47.565767 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Nov 6 23:59:47.571309 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Nov 6 23:59:47.573861 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Nov 6 23:59:47.591905 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Nov 6 23:59:47.594473 kernel: BTRFS info (device vda6): last unmount of filesystem 2ac2db45-4534-4157-8998-4b59cd0cd819
Nov 6 23:59:47.626609 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Nov 6 23:59:47.632815 ignition[1034]: INFO : Ignition 2.22.0
Nov 6 23:59:47.632815 ignition[1034]: INFO : Stage: mount
Nov 6 23:59:47.635271 ignition[1034]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 6 23:59:47.635271 ignition[1034]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 6 23:59:47.635271 ignition[1034]: INFO : mount: mount passed
Nov 6 23:59:47.635271 ignition[1034]: INFO : Ignition finished successfully
Nov 6 23:59:47.641664 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Nov 6 23:59:47.643820 systemd[1]: Starting ignition-files.service - Ignition (files)...
Nov 6 23:59:47.667699 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 6 23:59:47.698572 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1046)
Nov 6 23:59:47.698624 kernel: BTRFS info (device vda6): first mount of filesystem 2ac2db45-4534-4157-8998-4b59cd0cd819
Nov 6 23:59:47.698649 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 6 23:59:47.703692 kernel: BTRFS info (device vda6): turning on async discard
Nov 6 23:59:47.703763 kernel: BTRFS info (device vda6): enabling free space tree
Nov 6 23:59:47.705414 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 6 23:59:47.740010 ignition[1063]: INFO : Ignition 2.22.0
Nov 6 23:59:47.740010 ignition[1063]: INFO : Stage: files
Nov 6 23:59:47.742759 ignition[1063]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 6 23:59:47.742759 ignition[1063]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 6 23:59:47.742759 ignition[1063]: DEBUG : files: compiled without relabeling support, skipping
Nov 6 23:59:47.742759 ignition[1063]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Nov 6 23:59:47.742759 ignition[1063]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Nov 6 23:59:47.753072 ignition[1063]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Nov 6 23:59:47.753072 ignition[1063]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Nov 6 23:59:47.753072 ignition[1063]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Nov 6 23:59:47.753072 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Nov 6 23:59:47.753072 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Nov 6 23:59:47.745983 unknown[1063]: wrote ssh authorized keys file for user: core
Nov 6 23:59:47.798274 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Nov 6 23:59:47.895660 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Nov 6 23:59:47.900085 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Nov 6 23:59:47.900085 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Nov 6 23:59:48.138557 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Nov 6 23:59:48.242518 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Nov 6 23:59:48.242518 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Nov 6 23:59:48.248435 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Nov 6 23:59:48.248435 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Nov 6 23:59:48.248435 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Nov 6 23:59:48.248435 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 6 23:59:48.248435 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 6 23:59:48.248435 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 6 23:59:48.248435 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 6 23:59:48.313649 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Nov 6 23:59:48.313649 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Nov 6 23:59:48.313649 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Nov 6 23:59:48.324374 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Nov 6 23:59:48.324374 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Nov 6 23:59:48.324374 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
Nov 6 23:59:48.709675 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Nov 6 23:59:49.104323 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Nov 6 23:59:49.104323 ignition[1063]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Nov 6 23:59:49.110219 ignition[1063]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 6 23:59:49.157665 ignition[1063]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 6 23:59:49.157665 ignition[1063]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Nov 6 23:59:49.157665 ignition[1063]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Nov 6 23:59:49.166583 ignition[1063]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Nov 6 23:59:49.166583 ignition[1063]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Nov 6 23:59:49.166583 ignition[1063]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Nov 6 23:59:49.166583 ignition[1063]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Nov 6 23:59:49.181217 ignition[1063]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Nov 6 23:59:49.185633 ignition[1063]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Nov 6 23:59:49.188568 ignition[1063]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Nov 6 23:59:49.188568 ignition[1063]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Nov 6 23:59:49.188568 ignition[1063]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Nov 6 23:59:49.188568 ignition[1063]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Nov 6 23:59:49.188568 ignition[1063]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Nov 6 23:59:49.188568 ignition[1063]: INFO : files: files passed
Nov 6 23:59:49.188568 ignition[1063]: INFO : Ignition finished successfully
Nov 6 23:59:49.192680 systemd[1]: Finished ignition-files.service - Ignition (files).
Nov 6 23:59:49.195947 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Nov 6 23:59:49.199141 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Nov 6 23:59:49.215645 systemd[1]: ignition-quench.service: Deactivated successfully.
Nov 6 23:59:49.215783 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Nov 6 23:59:49.225912 initrd-setup-root-after-ignition[1095]: grep: /sysroot/oem/oem-release: No such file or directory
Nov 6 23:59:49.231894 initrd-setup-root-after-ignition[1101]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 6 23:59:49.234460 initrd-setup-root-after-ignition[1097]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 6 23:59:49.234460 initrd-setup-root-after-ignition[1097]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Nov 6 23:59:49.240801 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 6 23:59:49.245480 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Nov 6 23:59:49.250051 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Nov 6 23:59:49.331263 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 6 23:59:49.331474 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Nov 6 23:59:49.333094 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Nov 6 23:59:49.337943 systemd[1]: Reached target initrd.target - Initrd Default Target.
Nov 6 23:59:49.339271 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Nov 6 23:59:49.343584 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Nov 6 23:59:49.378932 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 6 23:59:49.381332 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Nov 6 23:59:49.404803 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Nov 6 23:59:49.405017 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Nov 6 23:59:49.406261 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 6 23:59:49.412071 systemd[1]: Stopped target timers.target - Timer Units.
Nov 6 23:59:49.415325 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 6 23:59:49.415474 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 6 23:59:49.420048 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Nov 6 23:59:49.423443 systemd[1]: Stopped target basic.target - Basic System.
Nov 6 23:59:49.424350 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Nov 6 23:59:49.429328 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 6 23:59:49.430245 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Nov 6 23:59:49.430790 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Nov 6 23:59:49.431343 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Nov 6 23:59:49.441866 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 6 23:59:49.442449 systemd[1]: Stopped target sysinit.target - System Initialization.
Nov 6 23:59:49.448361 systemd[1]: Stopped target local-fs.target - Local File Systems.
Nov 6 23:59:49.449176 systemd[1]: Stopped target swap.target - Swaps.
Nov 6 23:59:49.454350 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 6 23:59:49.454483 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Nov 6 23:59:49.459279 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Nov 6 23:59:49.460192 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 6 23:59:49.460980 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Nov 6 23:59:49.467623 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 6 23:59:49.470974 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 6 23:59:49.471087 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Nov 6 23:59:49.476168 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Nov 6 23:59:49.476297 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 6 23:59:49.477181 systemd[1]: Stopped target paths.target - Path Units.
Nov 6 23:59:49.477883 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 6 23:59:49.487492 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 6 23:59:49.488281 systemd[1]: Stopped target slices.target - Slice Units.
Nov 6 23:59:49.492508 systemd[1]: Stopped target sockets.target - Socket Units.
Nov 6 23:59:49.493015 systemd[1]: iscsid.socket: Deactivated successfully.
Nov 6 23:59:49.493107 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Nov 6 23:59:49.497991 systemd[1]: iscsiuio.socket: Deactivated successfully.
Nov 6 23:59:49.498081 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 6 23:59:49.498876 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Nov 6 23:59:49.498990 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 6 23:59:49.503926 systemd[1]: ignition-files.service: Deactivated successfully.
Nov 6 23:59:49.504037 systemd[1]: Stopped ignition-files.service - Ignition (files).
Nov 6 23:59:49.505273 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Nov 6 23:59:49.509720 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 6 23:59:49.509844 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 6 23:59:49.527175 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Nov 6 23:59:49.527835 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 6 23:59:49.527965 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 6 23:59:49.530991 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 6 23:59:49.531105 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 6 23:59:49.534235 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 6 23:59:49.534420 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 6 23:59:49.547361 ignition[1121]: INFO : Ignition 2.22.0
Nov 6 23:59:49.547361 ignition[1121]: INFO : Stage: umount
Nov 6 23:59:49.547361 ignition[1121]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 6 23:59:49.547361 ignition[1121]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 6 23:59:49.547361 ignition[1121]: INFO : umount: umount passed
Nov 6 23:59:49.547361 ignition[1121]: INFO : Ignition finished successfully
Nov 6 23:59:49.544567 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 6 23:59:49.544773 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Nov 6 23:59:49.547803 systemd[1]: ignition-mount.service: Deactivated successfully.
Nov 6 23:59:49.547943 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Nov 6 23:59:49.553870 systemd[1]: Stopped target network.target - Network.
Nov 6 23:59:49.555064 systemd[1]: ignition-disks.service: Deactivated successfully.
Nov 6 23:59:49.555169 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Nov 6 23:59:49.561364 systemd[1]: ignition-kargs.service: Deactivated successfully.
Nov 6 23:59:49.561459 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Nov 6 23:59:49.562174 systemd[1]: ignition-setup.service: Deactivated successfully.
Nov 6 23:59:49.562228 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Nov 6 23:59:49.567520 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Nov 6 23:59:49.567578 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Nov 6 23:59:49.568951 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Nov 6 23:59:49.572942 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Nov 6 23:59:49.583558 systemd[1]: systemd-networkd.service: Deactivated successfully.
Nov 6 23:59:49.583699 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Nov 6 23:59:49.590165 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Nov 6 23:59:49.591894 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Nov 6 23:59:49.591942 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Nov 6 23:59:49.593340 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Nov 6 23:59:49.600132 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Nov 6 23:59:49.600196 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 6 23:59:49.601039 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 6 23:59:49.620428 systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 6 23:59:49.620613 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 6 23:59:49.623414 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 6 23:59:49.623500 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Nov 6 23:59:49.624013 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 6 23:59:49.624057 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 6 23:59:49.624298 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 6 23:59:49.624352 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Nov 6 23:59:49.640259 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 6 23:59:49.640348 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Nov 6 23:59:49.645053 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 6 23:59:49.645133 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 6 23:59:49.650182 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Nov 6 23:59:49.651795 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Nov 6 23:59:49.651864 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Nov 6 23:59:49.652431 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 6 23:59:49.652495 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 6 23:59:49.659654 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 6 23:59:49.659719 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 6 23:59:49.661571 systemd[1]: systemd-resolved.service: Deactivated successfully.
Nov 6 23:59:49.673488 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Nov 6 23:59:49.677859 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Nov 6 23:59:49.683315 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 6 23:59:49.683485 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Nov 6 23:59:49.684774 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 6 23:59:49.684839 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Nov 6 23:59:49.690781 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 6 23:59:49.698674 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Nov 6 23:59:49.704321 systemd[1]: network-cleanup.service: Deactivated successfully.
Nov 6 23:59:49.704501 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Nov 6 23:59:49.766848 systemd[1]: sysroot-boot.service: Deactivated successfully.
Nov 6 23:59:49.766999 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Nov 6 23:59:49.768511 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Nov 6 23:59:49.771808 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Nov 6 23:59:49.771874 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Nov 6 23:59:49.776010 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Nov 6 23:59:49.801890 systemd[1]: Switching root.
Nov 6 23:59:49.849074 systemd-journald[315]: Journal stopped
Nov 6 23:59:53.171439 systemd-journald[315]: Received SIGTERM from PID 1 (systemd).
Nov 6 23:59:53.171527 kernel: SELinux: policy capability network_peer_controls=1
Nov 6 23:59:53.171544 kernel: SELinux: policy capability open_perms=1
Nov 6 23:59:53.171560 kernel: SELinux: policy capability extended_socket_class=1
Nov 6 23:59:53.171576 kernel: SELinux: policy capability always_check_network=0
Nov 6 23:59:53.171596 kernel: SELinux: policy capability cgroup_seclabel=1
Nov 6 23:59:53.171614 kernel: SELinux: policy capability nnp_nosuid_transition=1
Nov 6 23:59:53.171630 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Nov 6 23:59:53.171644 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Nov 6 23:59:53.171660 kernel: SELinux: policy capability userspace_initial_context=0
Nov 6 23:59:53.171676 kernel: audit: type=1403 audit(1762473592.214:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Nov 6 23:59:53.171697 systemd[1]: Successfully loaded SELinux policy in 134.129ms.
Nov 6 23:59:53.171731 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 8.918ms.
Nov 6 23:59:53.171751 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Nov 6 23:59:53.171769 systemd[1]: Detected virtualization kvm.
Nov 6 23:59:53.171785 systemd[1]: Detected architecture x86-64.
Nov 6 23:59:53.171802 systemd[1]: Detected first boot.
Nov 6 23:59:53.171818 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Nov 6 23:59:53.171836 zram_generator::config[1166]: No configuration found.
Nov 6 23:59:53.171856 kernel: Guest personality initialized and is inactive
Nov 6 23:59:53.171872 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Nov 6 23:59:53.171888 kernel: Initialized host personality
Nov 6 23:59:53.171904 kernel: NET: Registered PF_VSOCK protocol family
Nov 6 23:59:53.171920 systemd[1]: Populated /etc with preset unit settings.
Nov 6 23:59:53.171937 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Nov 6 23:59:53.171953 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Nov 6 23:59:53.171978 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Nov 6 23:59:53.171996 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Nov 6 23:59:53.172012 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Nov 6 23:59:53.172035 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Nov 6 23:59:53.172051 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Nov 6 23:59:53.172069 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Nov 6 23:59:53.172090 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Nov 6 23:59:53.172118 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Nov 6 23:59:53.172134 systemd[1]: Created slice user.slice - User and Session Slice.
Nov 6 23:59:53.172151 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 6 23:59:53.172168 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 6 23:59:53.172186 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Nov 6 23:59:53.172203 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Nov 6 23:59:53.172223 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Nov 6 23:59:53.172242 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 6 23:59:53.172260 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Nov 6 23:59:53.172277 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 6 23:59:53.172293 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 6 23:59:53.172310 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Nov 6 23:59:53.172330 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Nov 6 23:59:53.172346 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Nov 6 23:59:53.172368 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Nov 6 23:59:53.172385 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 6 23:59:53.172421 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 6 23:59:53.172438 systemd[1]: Reached target slices.target - Slice Units.
Nov 6 23:59:53.172456 systemd[1]: Reached target swap.target - Swaps.
Nov 6 23:59:53.172473 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Nov 6 23:59:53.172494 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Nov 6 23:59:53.172510 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Nov 6 23:59:53.172527 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 6 23:59:53.172544 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 6 23:59:53.172561 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 6 23:59:53.172578 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Nov 6 23:59:53.172594 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Nov 6 23:59:53.172615 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Nov 6 23:59:53.172631 systemd[1]: Mounting media.mount - External Media Directory...
Nov 6 23:59:53.172649 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 6 23:59:53.172666 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Nov 6 23:59:53.172683 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Nov 6 23:59:53.172700 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Nov 6 23:59:53.172717 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Nov 6 23:59:53.172739 systemd[1]: Reached target machines.target - Containers.
Nov 6 23:59:53.172756 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Nov 6 23:59:53.172773 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 6 23:59:53.172790 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 6 23:59:53.172806 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Nov 6 23:59:53.172822 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 6 23:59:53.172843 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 6 23:59:53.172859 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 6 23:59:53.172875 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Nov 6 23:59:53.172892 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 6 23:59:53.172908 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Nov 6 23:59:53.172925 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Nov 6 23:59:53.172942 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Nov 6 23:59:53.172962 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Nov 6 23:59:53.172978 systemd[1]: Stopped systemd-fsck-usr.service.
Nov 6 23:59:53.172995 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 6 23:59:53.173012 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 6 23:59:53.173030 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 6 23:59:53.173046 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 6 23:59:53.173064 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Nov 6 23:59:53.173085 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Nov 6 23:59:53.173103 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 6 23:59:53.173133 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 6 23:59:53.173173 kernel: ACPI: bus type drm_connector registered
Nov 6 23:59:53.173195 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Nov 6 23:59:53.173212 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Nov 6 23:59:53.173229 kernel: fuse: init (API version 7.41)
Nov 6 23:59:53.173246 systemd[1]: Mounted media.mount - External Media Directory.
Nov 6 23:59:53.173263 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Nov 6 23:59:53.173280 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Nov 6 23:59:53.173320 systemd-journald[1244]: Collecting audit messages is disabled.
Nov 6 23:59:53.173354 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Nov 6 23:59:53.173372 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Nov 6 23:59:53.173405 systemd-journald[1244]: Journal started
Nov 6 23:59:53.173435 systemd-journald[1244]: Runtime Journal (/run/log/journal/268d3807a8864e999840704f297ea80a) is 6M, max 48.3M, 42.2M free.
Nov 6 23:59:52.844491 systemd[1]: Queued start job for default target multi-user.target.
Nov 6 23:59:52.862470 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Nov 6 23:59:52.863015 systemd[1]: systemd-journald.service: Deactivated successfully.
Nov 6 23:59:53.179430 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 6 23:59:53.182206 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 6 23:59:53.184681 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 6 23:59:53.184972 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Nov 6 23:59:53.187347 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 6 23:59:53.187653 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 6 23:59:53.189927 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 6 23:59:53.190230 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 6 23:59:53.192425 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 6 23:59:53.192714 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 6 23:59:53.195147 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Nov 6 23:59:53.195450 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Nov 6 23:59:53.197662 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 6 23:59:53.197943 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 6 23:59:53.200253 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 6 23:59:53.202885 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Nov 6 23:59:53.206675 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Nov 6 23:59:53.209463 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Nov 6 23:59:53.229353 systemd[1]: Reached target network-pre.target - Preparation for Network.
Nov 6 23:59:53.232088 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket.
Nov 6 23:59:53.235854 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Nov 6 23:59:53.239055 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Nov 6 23:59:53.240971 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Nov 6 23:59:53.241087 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 6 23:59:53.244094 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Nov 6 23:59:53.246499 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 6 23:59:53.258575 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Nov 6 23:59:53.262174 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Nov 6 23:59:53.264188 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 6 23:59:53.265606 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Nov 6 23:59:53.267506 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 6 23:59:53.279541 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 6 23:59:53.293779 systemd-journald[1244]: Time spent on flushing to /var/log/journal/268d3807a8864e999840704f297ea80a is 22.208ms for 966 entries.
Nov 6 23:59:53.293779 systemd-journald[1244]: System Journal (/var/log/journal/268d3807a8864e999840704f297ea80a) is 8M, max 163.5M, 155.5M free.
Nov 6 23:59:53.332781 systemd-journald[1244]: Received client request to flush runtime journal.
Nov 6 23:59:53.332872 kernel: loop1: detected capacity change from 0 to 128048
Nov 6 23:59:53.282597 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Nov 6 23:59:53.285589 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Nov 6 23:59:53.288483 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 6 23:59:53.294206 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Nov 6 23:59:53.298268 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Nov 6 23:59:53.302358 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Nov 6 23:59:53.313173 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Nov 6 23:59:53.316861 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Nov 6 23:59:53.327919 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 6 23:59:53.335725 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Nov 6 23:59:53.345422 kernel: loop2: detected capacity change from 0 to 110984
Nov 6 23:59:53.345531 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Nov 6 23:59:53.349812 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 6 23:59:53.353201 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 6 23:59:53.369570 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Nov 6 23:59:53.374870 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Nov 6 23:59:53.381421 kernel: loop3: detected capacity change from 0 to 229808
Nov 6 23:59:53.391513 systemd-tmpfiles[1301]: ACLs are not supported, ignoring.
Nov 6 23:59:53.391542 systemd-tmpfiles[1301]: ACLs are not supported, ignoring.
Nov 6 23:59:53.399032 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 6 23:59:53.407477 kernel: loop4: detected capacity change from 0 to 128048
Nov 6 23:59:53.418412 kernel: loop5: detected capacity change from 0 to 110984
Nov 6 23:59:53.423735 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Nov 6 23:59:53.432483 kernel: loop6: detected capacity change from 0 to 229808
Nov 6 23:59:53.438609 (sd-merge)[1309]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'.
Nov 6 23:59:53.443013 (sd-merge)[1309]: Merged extensions into '/usr'.
Nov 6 23:59:53.448598 systemd[1]: Reload requested from client PID 1285 ('systemd-sysext') (unit systemd-sysext.service)...
Nov 6 23:59:53.448625 systemd[1]: Reloading...
Nov 6 23:59:53.505173 systemd-resolved[1300]: Positive Trust Anchors:
Nov 6 23:59:53.505586 systemd-resolved[1300]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 6 23:59:53.505596 systemd-resolved[1300]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Nov 6 23:59:53.505634 systemd-resolved[1300]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 6 23:59:53.510819 systemd-resolved[1300]: Defaulting to hostname 'linux'.
Nov 6 23:59:53.530430 zram_generator::config[1349]: No configuration found.
Nov 6 23:59:53.713961 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Nov 6 23:59:53.714573 systemd[1]: Reloading finished in 265 ms.
Nov 6 23:59:53.746040 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 6 23:59:53.748649 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Nov 6 23:59:53.753808 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 6 23:59:53.767560 systemd[1]: Starting ensure-sysext.service...
Nov 6 23:59:53.770733 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 6 23:59:53.795679 systemd[1]: Reload requested from client PID 1379 ('systemctl') (unit ensure-sysext.service)...
Nov 6 23:59:53.795702 systemd[1]: Reloading...
Nov 6 23:59:53.799329 systemd-tmpfiles[1380]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Nov 6 23:59:53.799370 systemd-tmpfiles[1380]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Nov 6 23:59:53.799700 systemd-tmpfiles[1380]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Nov 6 23:59:53.799989 systemd-tmpfiles[1380]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Nov 6 23:59:53.800967 systemd-tmpfiles[1380]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Nov 6 23:59:53.801257 systemd-tmpfiles[1380]: ACLs are not supported, ignoring.
Nov 6 23:59:53.801330 systemd-tmpfiles[1380]: ACLs are not supported, ignoring.
Nov 6 23:59:53.807416 systemd-tmpfiles[1380]: Detected autofs mount point /boot during canonicalization of boot.
Nov 6 23:59:53.807562 systemd-tmpfiles[1380]: Skipping /boot
Nov 6 23:59:53.819190 systemd-tmpfiles[1380]: Detected autofs mount point /boot during canonicalization of boot.
Nov 6 23:59:53.819206 systemd-tmpfiles[1380]: Skipping /boot
Nov 6 23:59:53.855729 zram_generator::config[1413]: No configuration found.
Nov 6 23:59:54.043816 systemd[1]: Reloading finished in 247 ms.
Nov 6 23:59:54.067469 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Nov 6 23:59:54.103053 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 6 23:59:54.115447 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Nov 6 23:59:54.118790 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Nov 6 23:59:54.132963 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Nov 6 23:59:54.138407 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Nov 6 23:59:54.142738 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 6 23:59:54.148679 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Nov 6 23:59:54.154622 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 6 23:59:54.154910 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 6 23:59:54.159232 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 6 23:59:54.164748 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 6 23:59:54.168640 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 6 23:59:54.170465 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 6 23:59:54.170569 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 6 23:59:54.170663 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 6 23:59:54.179893 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 6 23:59:54.180292 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 6 23:59:54.180526 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 6 23:59:54.180657 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 6 23:59:54.180793 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 6 23:59:54.185265 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Nov 6 23:59:54.187986 systemd-udevd[1458]: Using default interface naming scheme 'v257'.
Nov 6 23:59:54.192551 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 6 23:59:54.192794 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 6 23:59:54.196034 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 6 23:59:54.196416 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 6 23:59:54.199207 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 6 23:59:54.199536 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 6 23:59:54.212623 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 6 23:59:54.212843 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 6 23:59:54.216621 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 6 23:59:54.223502 augenrules[1483]: No rules
Nov 6 23:59:54.219669 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 6 23:59:54.223779 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 6 23:59:54.235426 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 6 23:59:54.237517 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 6 23:59:54.237685 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 6 23:59:54.237814 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 6 23:59:54.238815 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 6 23:59:54.242900 systemd[1]: audit-rules.service: Deactivated successfully.
Nov 6 23:59:54.243189 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Nov 6 23:59:54.248213 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Nov 6 23:59:54.251805 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 6 23:59:54.252024 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 6 23:59:54.255014 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 6 23:59:54.255518 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 6 23:59:54.259803 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 6 23:59:54.260053 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 6 23:59:54.263327 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 6 23:59:54.263643 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 6 23:59:54.275809 systemd[1]: Finished ensure-sysext.service.
Nov 6 23:59:54.295944 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 6 23:59:54.316140 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 6 23:59:54.316215 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 6 23:59:54.318595 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Nov 6 23:59:54.323149 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Nov 6 23:59:54.327684 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Nov 6 23:59:54.327752 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Nov 6 23:59:54.343969 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Nov 6 23:59:54.356640 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Nov 6 23:59:54.386414 kernel: mousedev: PS/2 mouse device common for all mice
Nov 6 23:59:54.402180 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Nov 6 23:59:54.427845 systemd-networkd[1517]: lo: Link UP
Nov 6 23:59:54.428221 systemd-networkd[1517]: lo: Gained carrier
Nov 6 23:59:54.431996 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 6 23:59:54.435383 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Nov 6 23:59:54.434877 systemd-networkd[1517]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Nov 6 23:59:54.434882 systemd-networkd[1517]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 6 23:59:54.435668 systemd-networkd[1517]: eth0: Link UP
Nov 6 23:59:54.435986 systemd-networkd[1517]: eth0: Gained carrier
Nov 6 23:59:54.436008 systemd-networkd[1517]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Nov 6 23:59:54.436600 systemd[1]: Reached target network.target - Network.
Nov 6 23:59:54.440430 kernel: ACPI: button: Power Button [PWRF]
Nov 6 23:59:54.441052 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Nov 6 23:59:54.445289 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Nov 6 23:59:54.448450 systemd-networkd[1517]: eth0: DHCPv4 address 10.0.0.46/16, gateway 10.0.0.1 acquired from 10.0.0.1
Nov 6 23:59:54.459459 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Nov 6 23:59:55.472739 systemd-timesyncd[1520]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Nov 6 23:59:55.472797 systemd-timesyncd[1520]: Initial clock synchronization to Thu 2025-11-06 23:59:55.472627 UTC.
Nov 6 23:59:55.472840 systemd-resolved[1300]: Clock change detected. Flushing caches.
Nov 6 23:59:55.473965 systemd[1]: Reached target time-set.target - System Time Set.
Nov 6 23:59:55.492291 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Nov 6 23:59:55.502162 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Nov 6 23:59:55.502516 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Nov 6 23:59:55.637885 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 6 23:59:55.671204 kernel: kvm_amd: TSC scaling supported
Nov 6 23:59:55.671276 kernel: kvm_amd: Nested Virtualization enabled
Nov 6 23:59:55.671294 kernel: kvm_amd: Nested Paging enabled
Nov 6 23:59:55.672529 kernel: kvm_amd: LBR virtualization supported
Nov 6 23:59:55.672592 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Nov 6 23:59:55.672635 kernel: kvm_amd: Virtual GIF supported
Nov 6 23:59:55.707181 kernel: EDAC MC: Ver: 3.0.0
Nov 6 23:59:55.780617 ldconfig[1452]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Nov 6 23:59:55.805062 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 6 23:59:56.547443 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Nov 6 23:59:56.550909 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Nov 6 23:59:56.576031 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Nov 6 23:59:56.592875 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 6 23:59:56.594805 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Nov 6 23:59:56.596932 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Nov 6 23:59:56.599061 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Nov 6 23:59:56.601208 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Nov 6 23:59:56.603116 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Nov 6 23:59:56.605314 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Nov 6 23:59:56.607438 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Nov 6 23:59:56.607476 systemd[1]: Reached target paths.target - Path Units.
Nov 6 23:59:56.609030 systemd[1]: Reached target timers.target - Timer Units.
Nov 6 23:59:56.611729 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Nov 6 23:59:56.615185 systemd[1]: Starting docker.socket - Docker Socket for the API...
Nov 6 23:59:56.618946 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Nov 6 23:59:56.621266 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Nov 6 23:59:56.623406 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Nov 6 23:59:56.631449 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Nov 6 23:59:56.656463 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Nov 6 23:59:56.659086 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Nov 6 23:59:56.661602 systemd[1]: Reached target sockets.target - Socket Units.
Nov 6 23:59:56.663252 systemd[1]: Reached target basic.target - Basic System.
Nov 6 23:59:56.664880 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Nov 6 23:59:56.664912 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Nov 6 23:59:56.666010 systemd[1]: Starting containerd.service - containerd container runtime...
Nov 6 23:59:56.668824 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Nov 6 23:59:56.671474 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Nov 6 23:59:56.680835 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Nov 6 23:59:56.685236 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Nov 6 23:59:56.687001 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Nov 6 23:59:56.688248 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Nov 6 23:59:56.690863 jq[1571]: false
Nov 6 23:59:56.691267 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Nov 6 23:59:56.694802 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Nov 6 23:59:56.698327 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Nov 6 23:59:56.703210 oslogin_cache_refresh[1573]: Refreshing passwd entry cache
Nov 6 23:59:56.706816 google_oslogin_nss_cache[1573]: oslogin_cache_refresh[1573]: Refreshing passwd entry cache
Nov 6 23:59:56.703396 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Nov 6 23:59:56.707770 extend-filesystems[1572]: Found /dev/vda6
Nov 6 23:59:56.708865 systemd[1]: Starting systemd-logind.service - User Login Management...
Nov 6 23:59:56.710692 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Nov 6 23:59:56.711169 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Nov 6 23:59:56.712037 systemd[1]: Starting update-engine.service - Update Engine...
Nov 6 23:59:56.715362 extend-filesystems[1572]: Found /dev/vda9
Nov 6 23:59:56.717064 google_oslogin_nss_cache[1573]: oslogin_cache_refresh[1573]: Failure getting users, quitting
Nov 6 23:59:56.717064 google_oslogin_nss_cache[1573]: oslogin_cache_refresh[1573]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Nov 6 23:59:56.717064 google_oslogin_nss_cache[1573]: oslogin_cache_refresh[1573]: Refreshing group entry cache
Nov 6 23:59:56.716657 oslogin_cache_refresh[1573]: Failure getting users, quitting
Nov 6 23:59:56.716671 oslogin_cache_refresh[1573]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Nov 6 23:59:56.716715 oslogin_cache_refresh[1573]: Refreshing group entry cache
Nov 6 23:59:56.719015 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Nov 6 23:59:56.721287 extend-filesystems[1572]: Checking size of /dev/vda9
Nov 6 23:59:56.725965 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Nov 6 23:59:56.725228 oslogin_cache_refresh[1573]: Failure getting groups, quitting
Nov 6 23:59:56.728376 google_oslogin_nss_cache[1573]: oslogin_cache_refresh[1573]: Failure getting groups, quitting
Nov 6 23:59:56.728376 google_oslogin_nss_cache[1573]: oslogin_cache_refresh[1573]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Nov 6 23:59:56.725239 oslogin_cache_refresh[1573]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Nov 6 23:59:56.728394 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Nov 6 23:59:56.728648 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Nov 6 23:59:56.729130 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Nov 6 23:59:56.729794 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Nov 6 23:59:56.732550 systemd[1]: motdgen.service: Deactivated successfully.
Nov 6 23:59:56.732798 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Nov 6 23:59:56.733442 jq[1590]: true
Nov 6 23:59:56.735494 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Nov 6 23:59:56.735751 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Nov 6 23:59:56.742681 update_engine[1585]: I20251106 23:59:56.742599 1585 main.cc:92] Flatcar Update Engine starting
Nov 6 23:59:56.752813 (ntainerd)[1601]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Nov 6 23:59:56.754727 jq[1600]: true
Nov 6 23:59:56.764466 tar[1598]: linux-amd64/LICENSE
Nov 6 23:59:56.764925 tar[1598]: linux-amd64/helm
Nov 6 23:59:56.767104 extend-filesystems[1572]: Resized partition /dev/vda9
Nov 6 23:59:56.820167 extend-filesystems[1635]: resize2fs 1.47.3 (8-Jul-2025)
Nov 6 23:59:56.838530 systemd-logind[1583]: Watching system buttons on /dev/input/event2 (Power Button)
Nov 6 23:59:56.838557 systemd-logind[1583]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Nov 6 23:59:56.839067 systemd-logind[1583]: New seat seat0.
Nov 6 23:59:56.840578 systemd[1]: Started systemd-logind.service - User Login Management.
Nov 6 23:59:56.926269 dbus-daemon[1569]: [system] SELinux support is enabled
Nov 6 23:59:56.938248 update_engine[1585]: I20251106 23:59:56.931044 1585 update_check_scheduler.cc:74] Next update check in 7m17s
Nov 6 23:59:56.926533 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Nov 6 23:59:56.932387 dbus-daemon[1569]: [system] Successfully activated service 'org.freedesktop.systemd1'
Nov 6 23:59:56.931044 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Nov 6 23:59:56.931069 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Nov 6 23:59:56.933176 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Nov 6 23:59:56.933192 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Nov 6 23:59:56.935245 systemd[1]: Started update-engine.service - Update Engine.
Nov 6 23:59:56.939246 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Nov 6 23:59:56.959179 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks
Nov 6 23:59:57.093407 systemd-networkd[1517]: eth0: Gained IPv6LL
Nov 6 23:59:57.096452 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Nov 6 23:59:57.099053 systemd[1]: Reached target network-online.target - Network is Online.
Nov 6 23:59:57.155586 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Nov 6 23:59:57.328352 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 6 23:59:57.333329 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Nov 6 23:59:57.355838 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Nov 6 23:59:57.361409 systemd[1]: coreos-metadata.service: Deactivated successfully.
Nov 6 23:59:57.361734 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Nov 6 23:59:57.364093 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Nov 6 23:59:57.388893 tar[1598]: linux-amd64/README.md
Nov 6 23:59:57.412440 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Nov 6 23:59:57.524648 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Nov 7 00:00:00.031749 kernel: EXT4-fs (vda9): resized filesystem to 1784827
Nov 7 00:00:00.031872 containerd[1601]: time="2025-11-06T23:59:58Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Nov 6 23:59:57.843114 locksmithd[1637]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Nov 7 00:00:00.032534 containerd[1601]: time="2025-11-07T00:00:00.032422151Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5
Nov 7 00:00:00.035263 extend-filesystems[1635]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Nov 7 00:00:00.035263 extend-filesystems[1635]: old_desc_blocks = 1, new_desc_blocks = 1
Nov 7 00:00:00.035263 extend-filesystems[1635]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long.
Nov 7 00:00:00.044840 sshd_keygen[1613]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Nov 7 00:00:00.045004 bash[1636]: Updated "/home/core/.ssh/authorized_keys"
Nov 7 00:00:00.045291 containerd[1601]: time="2025-11-07T00:00:00.043472561Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="11.822µs"
Nov 7 00:00:00.045291 containerd[1601]: time="2025-11-07T00:00:00.043516213Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Nov 7 00:00:00.045291 containerd[1601]: time="2025-11-07T00:00:00.043534247Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Nov 7 00:00:00.045291 containerd[1601]: time="2025-11-07T00:00:00.043759559Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Nov 7 00:00:00.045291 containerd[1601]: time="2025-11-07T00:00:00.043774197Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Nov 7 00:00:00.045291 containerd[1601]: time="2025-11-07T00:00:00.043800536Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Nov 7 00:00:00.045291 containerd[1601]: time="2025-11-07T00:00:00.043876849Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Nov 7 00:00:00.045291 containerd[1601]: time="2025-11-07T00:00:00.043889733Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Nov 7 00:00:00.045291 containerd[1601]: time="2025-11-07T00:00:00.044202981Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Nov 7 00:00:00.045291 containerd[1601]: time="2025-11-07T00:00:00.044216756Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Nov 7 00:00:00.045291 containerd[1601]: time="2025-11-07T00:00:00.044227506Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Nov 7 00:00:00.045291 containerd[1601]: time="2025-11-07T00:00:00.044235481Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Nov 7 00:00:00.036367 systemd[1]: extend-filesystems.service: Deactivated successfully.
Nov 7 00:00:00.045804 extend-filesystems[1572]: Resized filesystem in /dev/vda9
Nov 7 00:00:00.050130 containerd[1601]: time="2025-11-07T00:00:00.044339466Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Nov 7 00:00:00.050130 containerd[1601]: time="2025-11-07T00:00:00.044634369Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Nov 7 00:00:00.050130 containerd[1601]: time="2025-11-07T00:00:00.044671309Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Nov 7 00:00:00.050130 containerd[1601]: time="2025-11-07T00:00:00.044681658Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Nov 7 00:00:00.050130 containerd[1601]: time="2025-11-07T00:00:00.044730970Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Nov 7 00:00:00.050130 containerd[1601]: time="2025-11-07T00:00:00.045102136Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Nov 7 00:00:00.050130 containerd[1601]: time="2025-11-07T00:00:00.045207494Z" level=info msg="metadata content store policy set" policy=shared
Nov 7 00:00:00.036684 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Nov 7 00:00:00.046641 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Nov 7 00:00:00.052502 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Nov 7 00:00:00.056301 containerd[1601]: time="2025-11-07T00:00:00.056248436Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Nov 7 00:00:00.056350 containerd[1601]: time="2025-11-07T00:00:00.056329808Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Nov 7 00:00:00.056350 containerd[1601]: time="2025-11-07T00:00:00.056344716Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Nov 7 00:00:00.056389 containerd[1601]: time="2025-11-07T00:00:00.056358001Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Nov 7 00:00:00.056389 containerd[1601]: time="2025-11-07T00:00:00.056371096Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Nov 7 00:00:00.056389 containerd[1601]: time="2025-11-07T00:00:00.056381415Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Nov 7 00:00:00.056464 containerd[1601]: time="2025-11-07T00:00:00.056404328Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Nov 7 00:00:00.056464 containerd[1601]: time="2025-11-07T00:00:00.056417623Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Nov 7 00:00:00.056464 containerd[1601]: time="2025-11-07T00:00:00.056428453Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Nov 7 00:00:00.056464 containerd[1601]: time="2025-11-07T00:00:00.056438492Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Nov 7 00:00:00.056532 containerd[1601]: time="2025-11-07T00:00:00.056447569Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Nov 7 00:00:00.056532 containerd[1601]: time="2025-11-07T00:00:00.056487915Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Nov 7 00:00:00.056673 containerd[1601]: time="2025-11-07T00:00:00.056642986Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Nov 7 00:00:00.056699 containerd[1601]: time="2025-11-07T00:00:00.056675927Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Nov 7 00:00:00.056699 containerd[1601]: time="2025-11-07T00:00:00.056695945Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Nov 7 00:00:00.056735 containerd[1601]: time="2025-11-07T00:00:00.056712476Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Nov 7 00:00:00.056735 containerd[1601]: time="2025-11-07T00:00:00.056724288Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Nov 7 00:00:00.056771 containerd[1601]: time="2025-11-07T00:00:00.056735078Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Nov 7 00:00:00.056771 containerd[1601]: time="2025-11-07T00:00:00.056749916Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Nov 7 00:00:00.056771 containerd[1601]: time="2025-11-07T00:00:00.056760676Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Nov 7 00:00:00.056836 containerd[1601]: time="2025-11-07T00:00:00.056775845Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Nov 7 00:00:00.056836 containerd[1601]: time="2025-11-07T00:00:00.056787677Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Nov 7 00:00:00.056836 containerd[1601]: time="2025-11-07T00:00:00.056805490Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Nov 7 00:00:00.056907 containerd[1601]: time="2025-11-07T00:00:00.056895108Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Nov 7 00:00:00.056929 containerd[1601]: time="2025-11-07T00:00:00.056910407Z" level=info msg="Start snapshots syncer"
Nov 7 00:00:00.058174 containerd[1601]: time="2025-11-07T00:00:00.056949200Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Nov 7 00:00:00.058174 containerd[1601]: time="2025-11-07T00:00:00.057254602Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Nov 7 00:00:00.058386 containerd[1601]: time="2025-11-07T00:00:00.057304616Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Nov 7 00:00:00.058386 containerd[1601]: time="2025-11-07T00:00:00.057383584Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Nov 7 00:00:00.058386 containerd[1601]: time="2025-11-07T00:00:00.057497307Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Nov 7 00:00:00.058386 containerd[1601]: time="2025-11-07T00:00:00.057521623Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Nov 7 00:00:00.058386 containerd[1601]: time="2025-11-07T00:00:00.057532553Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Nov 7 00:00:00.058386 containerd[1601]: time="2025-11-07T00:00:00.057550768Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Nov 7 00:00:00.058386 containerd[1601]: time="2025-11-07T00:00:00.057569883Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Nov 7 00:00:00.058386 containerd[1601]: time="2025-11-07T00:00:00.057581345Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Nov 7 00:00:00.058386 containerd[1601]: time="2025-11-07T00:00:00.057591063Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Nov 7 00:00:00.058386 containerd[1601]: time="2025-11-07T00:00:00.057611612Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Nov 7 00:00:00.058386 containerd[1601]: time="2025-11-07T00:00:00.057621540Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Nov 7 00:00:00.058386 containerd[1601]: time="2025-11-07T00:00:00.057631820Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Nov 7 00:00:00.058386 containerd[1601]: time="2025-11-07T00:00:00.057656336Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Nov 7 00:00:00.058386 containerd[1601]: time="2025-11-07T00:00:00.057682284Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Nov 7 00:00:00.058695 containerd[1601]: time="2025-11-07T00:00:00.057691241Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Nov 7 00:00:00.058695 containerd[1601]: time="2025-11-07T00:00:00.057701130Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Nov 7 00:00:00.058695 containerd[1601]: time="2025-11-07T00:00:00.057708934Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Nov 7 00:00:00.058695 containerd[1601]: time="2025-11-07T00:00:00.057717340Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Nov 7 00:00:00.058695 containerd[1601]: time="2025-11-07T00:00:00.057728441Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Nov 7 00:00:00.058695 containerd[1601]: time="2025-11-07T00:00:00.057745483Z" level=info msg="runtime interface created"
Nov 7 00:00:00.058695 containerd[1601]: time="2025-11-07T00:00:00.057761152Z" level=info msg="created NRI interface"
Nov 7 00:00:00.058695 containerd[1601]: time="2025-11-07T00:00:00.057773746Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Nov 7 00:00:00.058695 containerd[1601]: time="2025-11-07T00:00:00.057784105Z" level=info msg="Connect containerd service"
Nov 7 00:00:00.058695 containerd[1601]: time="2025-11-07T00:00:00.057803922Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Nov 7 00:00:00.059631 containerd[1601]: time="2025-11-07T00:00:00.059578429Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Nov 7 00:00:00.071994 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Nov 7 00:00:00.077504 systemd[1]: Starting issuegen.service - Generate /run/issue...
Nov 7 00:00:00.083363 systemd[1]: Started sshd@0-10.0.0.46:22-10.0.0.1:41584.service - OpenSSH per-connection server daemon (10.0.0.1:41584).
Nov 7 00:00:00.101326 systemd[1]: issuegen.service: Deactivated successfully.
Nov 7 00:00:00.101932 systemd[1]: Finished issuegen.service - Generate /run/issue.
Nov 7 00:00:00.107798 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Nov 7 00:00:00.139435 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Nov 7 00:00:00.144426 systemd[1]: Started getty@tty1.service - Getty on tty1.
Nov 7 00:00:00.149118 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Nov 7 00:00:00.156977 systemd[1]: Reached target getty.target - Login Prompts.
Nov 7 00:00:00.192158 sshd[1688]: Accepted publickey for core from 10.0.0.1 port 41584 ssh2: RSA SHA256:byrlmJ17egF8Tfblhd3C23XmFA3LkMTDn1Cz5Op8b3A
Nov 7 00:00:00.194300 sshd-session[1688]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 7 00:00:00.201884 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Nov 7 00:00:00.205329 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Nov 7 00:00:00.206999 containerd[1601]: time="2025-11-07T00:00:00.206965360Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Nov 7 00:00:00.207169 containerd[1601]: time="2025-11-07T00:00:00.207135749Z" level=info msg=serving...
address=/run/containerd/containerd.sock Nov 7 00:00:00.207333 containerd[1601]: time="2025-11-07T00:00:00.207263459Z" level=info msg="Start subscribing containerd event" Nov 7 00:00:00.209382 containerd[1601]: time="2025-11-07T00:00:00.209341455Z" level=info msg="Start recovering state" Nov 7 00:00:00.209588 containerd[1601]: time="2025-11-07T00:00:00.209568591Z" level=info msg="Start event monitor" Nov 7 00:00:00.209642 containerd[1601]: time="2025-11-07T00:00:00.209631138Z" level=info msg="Start cni network conf syncer for default" Nov 7 00:00:00.209683 containerd[1601]: time="2025-11-07T00:00:00.209673187Z" level=info msg="Start streaming server" Nov 7 00:00:00.209727 containerd[1601]: time="2025-11-07T00:00:00.209717530Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Nov 7 00:00:00.209769 containerd[1601]: time="2025-11-07T00:00:00.209759228Z" level=info msg="runtime interface starting up..." Nov 7 00:00:00.209807 containerd[1601]: time="2025-11-07T00:00:00.209797620Z" level=info msg="starting plugins..." Nov 7 00:00:00.209869 containerd[1601]: time="2025-11-07T00:00:00.209845170Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Nov 7 00:00:00.210059 containerd[1601]: time="2025-11-07T00:00:00.210042730Z" level=info msg="containerd successfully booted in 2.122767s" Nov 7 00:00:00.211837 systemd[1]: Started containerd.service - containerd container runtime. Nov 7 00:00:00.216693 systemd-logind[1583]: New session 1 of user core. Nov 7 00:00:00.246241 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 7 00:00:00.252042 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 7 00:00:00.269923 (systemd)[1711]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 7 00:00:00.272798 systemd-logind[1583]: New session c1 of user core. Nov 7 00:00:00.429269 systemd[1711]: Queued start job for default target default.target. 
Nov 7 00:00:00.441326 systemd[1711]: Created slice app.slice - User Application Slice. Nov 7 00:00:00.441358 systemd[1711]: Reached target paths.target - Paths. Nov 7 00:00:00.441445 systemd[1711]: Reached target timers.target - Timers. Nov 7 00:00:00.444085 systemd[1711]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 7 00:00:00.459273 systemd[1711]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 7 00:00:00.459571 systemd[1711]: Reached target sockets.target - Sockets. Nov 7 00:00:00.459615 systemd[1711]: Reached target basic.target - Basic System. Nov 7 00:00:00.459660 systemd[1711]: Reached target default.target - Main User Target. Nov 7 00:00:00.459702 systemd[1711]: Startup finished in 179ms. Nov 7 00:00:00.460185 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 7 00:00:00.463893 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 7 00:00:00.529624 systemd[1]: Started sshd@1-10.0.0.46:22-10.0.0.1:41596.service - OpenSSH per-connection server daemon (10.0.0.1:41596). Nov 7 00:00:00.590525 sshd[1722]: Accepted publickey for core from 10.0.0.1 port 41596 ssh2: RSA SHA256:byrlmJ17egF8Tfblhd3C23XmFA3LkMTDn1Cz5Op8b3A Nov 7 00:00:00.592300 sshd-session[1722]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 7 00:00:00.597523 systemd-logind[1583]: New session 2 of user core. Nov 7 00:00:00.608338 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 7 00:00:00.670220 sshd[1725]: Connection closed by 10.0.0.1 port 41596 Nov 7 00:00:00.671968 sshd-session[1722]: pam_unix(sshd:session): session closed for user core Nov 7 00:00:00.682228 systemd[1]: sshd@1-10.0.0.46:22-10.0.0.1:41596.service: Deactivated successfully. Nov 7 00:00:00.684956 systemd[1]: session-2.scope: Deactivated successfully. Nov 7 00:00:00.685883 systemd-logind[1583]: Session 2 logged out. Waiting for processes to exit. 
Nov 7 00:00:00.689218 systemd[1]: Started sshd@2-10.0.0.46:22-10.0.0.1:41600.service - OpenSSH per-connection server daemon (10.0.0.1:41600). Nov 7 00:00:00.693394 systemd-logind[1583]: Removed session 2. Nov 7 00:00:00.749864 sshd[1731]: Accepted publickey for core from 10.0.0.1 port 41600 ssh2: RSA SHA256:byrlmJ17egF8Tfblhd3C23XmFA3LkMTDn1Cz5Op8b3A Nov 7 00:00:00.752530 sshd-session[1731]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 7 00:00:00.757998 systemd-logind[1583]: New session 3 of user core. Nov 7 00:00:00.765372 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 7 00:00:00.830683 sshd[1734]: Connection closed by 10.0.0.1 port 41600 Nov 7 00:00:00.831179 sshd-session[1731]: pam_unix(sshd:session): session closed for user core Nov 7 00:00:00.838437 systemd[1]: sshd@2-10.0.0.46:22-10.0.0.1:41600.service: Deactivated successfully. Nov 7 00:00:00.841396 systemd[1]: session-3.scope: Deactivated successfully. Nov 7 00:00:00.842449 systemd-logind[1583]: Session 3 logged out. Waiting for processes to exit. Nov 7 00:00:00.845667 systemd-logind[1583]: Removed session 3. Nov 7 00:00:00.926294 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 7 00:00:00.929079 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 7 00:00:00.931895 systemd[1]: Startup finished in 2.445s (kernel) + 8.352s (initrd) + 7.772s (userspace) = 18.569s. 
Nov 7 00:00:00.941488 (kubelet)[1744]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 7 00:00:01.400459 kubelet[1744]: E1107 00:00:01.400369 1744 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 7 00:00:01.404742 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 7 00:00:01.404980 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 7 00:00:01.405459 systemd[1]: kubelet.service: Consumed 1.083s CPU time, 267.5M memory peak. Nov 7 00:00:10.846894 systemd[1]: Started sshd@3-10.0.0.46:22-10.0.0.1:40456.service - OpenSSH per-connection server daemon (10.0.0.1:40456). Nov 7 00:00:10.911011 sshd[1757]: Accepted publickey for core from 10.0.0.1 port 40456 ssh2: RSA SHA256:byrlmJ17egF8Tfblhd3C23XmFA3LkMTDn1Cz5Op8b3A Nov 7 00:00:10.912816 sshd-session[1757]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 7 00:00:10.918078 systemd-logind[1583]: New session 4 of user core. Nov 7 00:00:10.933462 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 7 00:00:10.991260 sshd[1760]: Connection closed by 10.0.0.1 port 40456 Nov 7 00:00:10.991617 sshd-session[1757]: pam_unix(sshd:session): session closed for user core Nov 7 00:00:11.002297 systemd[1]: sshd@3-10.0.0.46:22-10.0.0.1:40456.service: Deactivated successfully. Nov 7 00:00:11.004158 systemd[1]: session-4.scope: Deactivated successfully. Nov 7 00:00:11.005021 systemd-logind[1583]: Session 4 logged out. Waiting for processes to exit. Nov 7 00:00:11.008126 systemd[1]: Started sshd@4-10.0.0.46:22-10.0.0.1:40460.service - OpenSSH per-connection server daemon (10.0.0.1:40460). 
Nov 7 00:00:11.008760 systemd-logind[1583]: Removed session 4. Nov 7 00:00:11.066693 sshd[1766]: Accepted publickey for core from 10.0.0.1 port 40460 ssh2: RSA SHA256:byrlmJ17egF8Tfblhd3C23XmFA3LkMTDn1Cz5Op8b3A Nov 7 00:00:11.068375 sshd-session[1766]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 7 00:00:11.073250 systemd-logind[1583]: New session 5 of user core. Nov 7 00:00:11.083304 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 7 00:00:11.134218 sshd[1770]: Connection closed by 10.0.0.1 port 40460 Nov 7 00:00:11.135083 sshd-session[1766]: pam_unix(sshd:session): session closed for user core Nov 7 00:00:11.145003 systemd[1]: sshd@4-10.0.0.46:22-10.0.0.1:40460.service: Deactivated successfully. Nov 7 00:00:11.146939 systemd[1]: session-5.scope: Deactivated successfully. Nov 7 00:00:11.147810 systemd-logind[1583]: Session 5 logged out. Waiting for processes to exit. Nov 7 00:00:11.150935 systemd[1]: Started sshd@5-10.0.0.46:22-10.0.0.1:40462.service - OpenSSH per-connection server daemon (10.0.0.1:40462). Nov 7 00:00:11.151739 systemd-logind[1583]: Removed session 5. Nov 7 00:00:11.218641 sshd[1776]: Accepted publickey for core from 10.0.0.1 port 40462 ssh2: RSA SHA256:byrlmJ17egF8Tfblhd3C23XmFA3LkMTDn1Cz5Op8b3A Nov 7 00:00:11.220286 sshd-session[1776]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 7 00:00:11.224996 systemd-logind[1583]: New session 6 of user core. Nov 7 00:00:11.235299 systemd[1]: Started session-6.scope - Session 6 of User core. Nov 7 00:00:11.290607 sshd[1779]: Connection closed by 10.0.0.1 port 40462 Nov 7 00:00:11.290993 sshd-session[1776]: pam_unix(sshd:session): session closed for user core Nov 7 00:00:11.300208 systemd[1]: sshd@5-10.0.0.46:22-10.0.0.1:40462.service: Deactivated successfully. Nov 7 00:00:11.302264 systemd[1]: session-6.scope: Deactivated successfully. Nov 7 00:00:11.303198 systemd-logind[1583]: Session 6 logged out. 
Waiting for processes to exit. Nov 7 00:00:11.306108 systemd[1]: Started sshd@6-10.0.0.46:22-10.0.0.1:40468.service - OpenSSH per-connection server daemon (10.0.0.1:40468). Nov 7 00:00:11.306925 systemd-logind[1583]: Removed session 6. Nov 7 00:00:11.368646 sshd[1785]: Accepted publickey for core from 10.0.0.1 port 40468 ssh2: RSA SHA256:byrlmJ17egF8Tfblhd3C23XmFA3LkMTDn1Cz5Op8b3A Nov 7 00:00:11.370370 sshd-session[1785]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 7 00:00:11.374987 systemd-logind[1583]: New session 7 of user core. Nov 7 00:00:11.392358 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 7 00:00:11.457918 sudo[1789]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 7 00:00:11.458323 sudo[1789]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 7 00:00:11.459446 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 7 00:00:11.461387 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 7 00:00:11.480041 sudo[1789]: pam_unix(sudo:session): session closed for user root Nov 7 00:00:11.482211 sshd[1788]: Connection closed by 10.0.0.1 port 40468 Nov 7 00:00:11.482584 sshd-session[1785]: pam_unix(sshd:session): session closed for user core Nov 7 00:00:11.488513 systemd[1]: sshd@6-10.0.0.46:22-10.0.0.1:40468.service: Deactivated successfully. Nov 7 00:00:11.490621 systemd[1]: session-7.scope: Deactivated successfully. Nov 7 00:00:11.492223 systemd-logind[1583]: Session 7 logged out. Waiting for processes to exit. Nov 7 00:00:11.495655 systemd[1]: Started sshd@7-10.0.0.46:22-10.0.0.1:40476.service - OpenSSH per-connection server daemon (10.0.0.1:40476). Nov 7 00:00:11.496565 systemd-logind[1583]: Removed session 7. 
Nov 7 00:00:11.559120 sshd[1798]: Accepted publickey for core from 10.0.0.1 port 40476 ssh2: RSA SHA256:byrlmJ17egF8Tfblhd3C23XmFA3LkMTDn1Cz5Op8b3A Nov 7 00:00:11.560573 sshd-session[1798]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 7 00:00:11.565037 systemd-logind[1583]: New session 8 of user core. Nov 7 00:00:11.578272 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 7 00:00:11.633339 sudo[1803]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 7 00:00:11.633653 sudo[1803]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 7 00:00:11.851496 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 7 00:00:11.855931 (kubelet)[1810]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 7 00:00:11.948233 kubelet[1810]: E1107 00:00:11.948169 1810 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 7 00:00:11.956328 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 7 00:00:11.956526 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 7 00:00:11.956949 systemd[1]: kubelet.service: Consumed 249ms CPU time, 110.9M memory peak. 
Nov 7 00:00:12.001522 sudo[1803]: pam_unix(sudo:session): session closed for user root Nov 7 00:00:12.011728 sudo[1802]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Nov 7 00:00:12.012229 sudo[1802]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 7 00:00:12.024748 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 7 00:00:12.082169 augenrules[1839]: No rules Nov 7 00:00:12.084062 systemd[1]: audit-rules.service: Deactivated successfully. Nov 7 00:00:12.084383 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 7 00:00:12.085647 sudo[1802]: pam_unix(sudo:session): session closed for user root Nov 7 00:00:12.087521 sshd[1801]: Connection closed by 10.0.0.1 port 40476 Nov 7 00:00:12.087847 sshd-session[1798]: pam_unix(sshd:session): session closed for user core Nov 7 00:00:12.106559 systemd[1]: sshd@7-10.0.0.46:22-10.0.0.1:40476.service: Deactivated successfully. Nov 7 00:00:12.108421 systemd[1]: session-8.scope: Deactivated successfully. Nov 7 00:00:12.109467 systemd-logind[1583]: Session 8 logged out. Waiting for processes to exit. Nov 7 00:00:12.112623 systemd[1]: Started sshd@8-10.0.0.46:22-10.0.0.1:40490.service - OpenSSH per-connection server daemon (10.0.0.1:40490). Nov 7 00:00:12.113378 systemd-logind[1583]: Removed session 8. Nov 7 00:00:12.172338 sshd[1848]: Accepted publickey for core from 10.0.0.1 port 40490 ssh2: RSA SHA256:byrlmJ17egF8Tfblhd3C23XmFA3LkMTDn1Cz5Op8b3A Nov 7 00:00:12.173651 sshd-session[1848]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 7 00:00:12.178162 systemd-logind[1583]: New session 9 of user core. Nov 7 00:00:12.192278 systemd[1]: Started session-9.scope - Session 9 of User core. 
Nov 7 00:00:12.247050 sudo[1852]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 7 00:00:12.247368 sudo[1852]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 7 00:00:13.271222 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 7 00:00:13.359547 (dockerd)[1872]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 7 00:00:14.171405 dockerd[1872]: time="2025-11-07T00:00:14.171324930Z" level=info msg="Starting up" Nov 7 00:00:14.172244 dockerd[1872]: time="2025-11-07T00:00:14.172198689Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Nov 7 00:00:14.195194 dockerd[1872]: time="2025-11-07T00:00:14.195131137Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Nov 7 00:00:14.301708 dockerd[1872]: time="2025-11-07T00:00:14.300975175Z" level=info msg="Loading containers: start." Nov 7 00:00:14.316192 kernel: Initializing XFRM netlink socket Nov 7 00:00:14.616581 systemd-networkd[1517]: docker0: Link UP Nov 7 00:00:14.817282 dockerd[1872]: time="2025-11-07T00:00:14.817206398Z" level=info msg="Loading containers: done." Nov 7 00:00:14.834384 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2432764197-merged.mount: Deactivated successfully. 
Nov 7 00:00:14.836981 dockerd[1872]: time="2025-11-07T00:00:14.836905203Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 7 00:00:14.837169 dockerd[1872]: time="2025-11-07T00:00:14.837021591Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Nov 7 00:00:14.837169 dockerd[1872]: time="2025-11-07T00:00:14.837136507Z" level=info msg="Initializing buildkit" Nov 7 00:00:14.897187 dockerd[1872]: time="2025-11-07T00:00:14.897109637Z" level=info msg="Completed buildkit initialization" Nov 7 00:00:14.902018 dockerd[1872]: time="2025-11-07T00:00:14.901981782Z" level=info msg="Daemon has completed initialization" Nov 7 00:00:14.902133 dockerd[1872]: time="2025-11-07T00:00:14.902063635Z" level=info msg="API listen on /run/docker.sock" Nov 7 00:00:14.902372 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 7 00:00:15.918728 containerd[1601]: time="2025-11-07T00:00:15.918672256Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\"" Nov 7 00:00:16.617200 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3256524949.mount: Deactivated successfully. 
Nov 7 00:00:18.258860 containerd[1601]: time="2025-11-07T00:00:18.258773065Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 7 00:00:18.269222 containerd[1601]: time="2025-11-07T00:00:18.269128982Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.5: active requests=0, bytes read=30114893" Nov 7 00:00:18.299465 containerd[1601]: time="2025-11-07T00:00:18.299397265Z" level=info msg="ImageCreate event name:\"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 7 00:00:18.302117 containerd[1601]: time="2025-11-07T00:00:18.302051912Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 7 00:00:18.303014 containerd[1601]: time="2025-11-07T00:00:18.302957741Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.5\" with image id \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\", size \"30111492\" in 2.384226805s" Nov 7 00:00:18.303070 containerd[1601]: time="2025-11-07T00:00:18.303019487Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\"" Nov 7 00:00:18.303879 containerd[1601]: time="2025-11-07T00:00:18.303834926Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\"" Nov 7 00:00:19.886968 containerd[1601]: time="2025-11-07T00:00:19.886893109Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.5\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 7 00:00:19.887759 containerd[1601]: time="2025-11-07T00:00:19.887707246Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.5: active requests=0, bytes read=26020844" Nov 7 00:00:19.888864 containerd[1601]: time="2025-11-07T00:00:19.888817147Z" level=info msg="ImageCreate event name:\"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 7 00:00:19.891360 containerd[1601]: time="2025-11-07T00:00:19.891322945Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 7 00:00:19.892334 containerd[1601]: time="2025-11-07T00:00:19.892299687Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.5\" with image id \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\", size \"27681301\" in 1.588436127s" Nov 7 00:00:19.892375 containerd[1601]: time="2025-11-07T00:00:19.892338439Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\"" Nov 7 00:00:19.892884 containerd[1601]: time="2025-11-07T00:00:19.892840370Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\"" Nov 7 00:00:21.245643 containerd[1601]: time="2025-11-07T00:00:21.245565540Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 7 00:00:21.246423 containerd[1601]: time="2025-11-07T00:00:21.246378113Z" level=info 
msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.5: active requests=0, bytes read=20155568" Nov 7 00:00:21.247662 containerd[1601]: time="2025-11-07T00:00:21.247609192Z" level=info msg="ImageCreate event name:\"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 7 00:00:21.254885 containerd[1601]: time="2025-11-07T00:00:21.254824350Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 7 00:00:21.255776 containerd[1601]: time="2025-11-07T00:00:21.255705542Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.5\" with image id \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\", size \"21816043\" in 1.362829705s" Nov 7 00:00:21.255776 containerd[1601]: time="2025-11-07T00:00:21.255771225Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\"" Nov 7 00:00:21.256806 containerd[1601]: time="2025-11-07T00:00:21.256760340Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\"" Nov 7 00:00:22.005628 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Nov 7 00:00:22.007453 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 7 00:00:22.229862 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Nov 7 00:00:22.246499 (kubelet)[2169]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 7 00:00:22.340678 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1271600092.mount: Deactivated successfully. Nov 7 00:00:22.395667 kubelet[2169]: E1107 00:00:22.395572 2169 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 7 00:00:22.399982 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 7 00:00:22.400193 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 7 00:00:22.400580 systemd[1]: kubelet.service: Consumed 222ms CPU time, 109.3M memory peak. Nov 7 00:00:22.974636 containerd[1601]: time="2025-11-07T00:00:22.974567498Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 7 00:00:22.975525 containerd[1601]: time="2025-11-07T00:00:22.975485619Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.5: active requests=0, bytes read=31929469" Nov 7 00:00:22.976597 containerd[1601]: time="2025-11-07T00:00:22.976553582Z" level=info msg="ImageCreate event name:\"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 7 00:00:22.978454 containerd[1601]: time="2025-11-07T00:00:22.978408790Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 7 00:00:22.979011 containerd[1601]: time="2025-11-07T00:00:22.978970403Z" level=info msg="Pulled 
image \"registry.k8s.io/kube-proxy:v1.33.5\" with image id \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\", repo tag \"registry.k8s.io/kube-proxy:v1.33.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\", size \"31928488\" in 1.722182091s" Nov 7 00:00:22.979011 containerd[1601]: time="2025-11-07T00:00:22.979001221Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\"" Nov 7 00:00:22.979610 containerd[1601]: time="2025-11-07T00:00:22.979572122Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Nov 7 00:00:23.600386 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4004033133.mount: Deactivated successfully. Nov 7 00:00:24.867851 containerd[1601]: time="2025-11-07T00:00:24.867765604Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 7 00:00:24.868555 containerd[1601]: time="2025-11-07T00:00:24.868490894Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Nov 7 00:00:24.869758 containerd[1601]: time="2025-11-07T00:00:24.869720529Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 7 00:00:24.873331 containerd[1601]: time="2025-11-07T00:00:24.873281677Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 7 00:00:24.874215 containerd[1601]: time="2025-11-07T00:00:24.874176665Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id 
\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.894572643s" Nov 7 00:00:24.874215 containerd[1601]: time="2025-11-07T00:00:24.874212842Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Nov 7 00:00:24.874757 containerd[1601]: time="2025-11-07T00:00:24.874727928Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Nov 7 00:00:25.491970 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3883057255.mount: Deactivated successfully. Nov 7 00:00:25.498843 containerd[1601]: time="2025-11-07T00:00:25.498800715Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 7 00:00:25.499621 containerd[1601]: time="2025-11-07T00:00:25.499585386Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Nov 7 00:00:25.500983 containerd[1601]: time="2025-11-07T00:00:25.500887317Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 7 00:00:25.503086 containerd[1601]: time="2025-11-07T00:00:25.503051495Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 7 00:00:25.503878 containerd[1601]: time="2025-11-07T00:00:25.503817681Z" level=info msg="Pulled image 
\"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 629.061249ms" Nov 7 00:00:25.503878 containerd[1601]: time="2025-11-07T00:00:25.503852226Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Nov 7 00:00:25.504562 containerd[1601]: time="2025-11-07T00:00:25.504529296Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Nov 7 00:00:26.875307 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3691128315.mount: Deactivated successfully. Nov 7 00:00:28.429660 containerd[1601]: time="2025-11-07T00:00:28.429591849Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 7 00:00:28.430386 containerd[1601]: time="2025-11-07T00:00:28.430362726Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58378433" Nov 7 00:00:28.431575 containerd[1601]: time="2025-11-07T00:00:28.431519665Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 7 00:00:28.434097 containerd[1601]: time="2025-11-07T00:00:28.434060566Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 7 00:00:28.435353 containerd[1601]: time="2025-11-07T00:00:28.435316559Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag 
\"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 2.930754221s" Nov 7 00:00:28.435353 containerd[1601]: time="2025-11-07T00:00:28.435350131Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Nov 7 00:00:31.417304 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 7 00:00:31.417474 systemd[1]: kubelet.service: Consumed 222ms CPU time, 109.3M memory peak. Nov 7 00:00:31.419623 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 7 00:00:31.449123 systemd[1]: Reload requested from client PID 2323 ('systemctl') (unit session-9.scope)... Nov 7 00:00:31.449152 systemd[1]: Reloading... Nov 7 00:00:31.540194 zram_generator::config[2370]: No configuration found. Nov 7 00:00:31.867916 systemd[1]: Reloading finished in 418 ms. Nov 7 00:00:31.956017 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 7 00:00:31.956126 systemd[1]: kubelet.service: Failed with result 'signal'. Nov 7 00:00:31.956479 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 7 00:00:31.956542 systemd[1]: kubelet.service: Consumed 162ms CPU time, 98.4M memory peak. Nov 7 00:00:31.958414 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 7 00:00:32.184799 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 7 00:00:32.195508 (kubelet)[2415]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 7 00:00:32.236965 kubelet[2415]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 7 00:00:32.236965 kubelet[2415]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 7 00:00:32.236965 kubelet[2415]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 7 00:00:32.237426 kubelet[2415]: I1107 00:00:32.236988 2415 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 7 00:00:32.549914 kubelet[2415]: I1107 00:00:32.549799 2415 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Nov 7 00:00:32.549914 kubelet[2415]: I1107 00:00:32.549828 2415 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 7 00:00:32.550062 kubelet[2415]: I1107 00:00:32.550045 2415 server.go:956] "Client rotation is on, will bootstrap in background" Nov 7 00:00:32.575778 kubelet[2415]: E1107 00:00:32.575686 2415 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.46:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.46:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Nov 7 00:00:32.576213 kubelet[2415]: I1107 00:00:32.576185 2415 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 7 00:00:32.584072 kubelet[2415]: I1107 00:00:32.584045 2415 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 7 00:00:32.590049 kubelet[2415]: I1107 00:00:32.590010 2415 server.go:782] 
"--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Nov 7 00:00:32.590325 kubelet[2415]: I1107 00:00:32.590287 2415 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 7 00:00:32.590517 kubelet[2415]: I1107 00:00:32.590316 2415 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 7 00:00:32.590617 kubelet[2415]: I1107 00:00:32.590521 2415 topology_manager.go:138] 
"Creating topology manager with none policy" Nov 7 00:00:32.590617 kubelet[2415]: I1107 00:00:32.590533 2415 container_manager_linux.go:303] "Creating device plugin manager" Nov 7 00:00:32.591339 kubelet[2415]: I1107 00:00:32.591309 2415 state_mem.go:36] "Initialized new in-memory state store" Nov 7 00:00:32.593424 kubelet[2415]: I1107 00:00:32.593393 2415 kubelet.go:480] "Attempting to sync node with API server" Nov 7 00:00:32.593424 kubelet[2415]: I1107 00:00:32.593414 2415 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 7 00:00:32.593490 kubelet[2415]: I1107 00:00:32.593441 2415 kubelet.go:386] "Adding apiserver pod source" Nov 7 00:00:32.593490 kubelet[2415]: I1107 00:00:32.593459 2415 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 7 00:00:32.598241 kubelet[2415]: I1107 00:00:32.598214 2415 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Nov 7 00:00:32.598713 kubelet[2415]: I1107 00:00:32.598672 2415 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 7 00:00:32.599290 kubelet[2415]: W1107 00:00:32.599264 2415 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Nov 7 00:00:32.601970 kubelet[2415]: E1107 00:00:32.601883 2415 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.46:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.46:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 7 00:00:32.602064 kubelet[2415]: E1107 00:00:32.602046 2415 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.46:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.46:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 7 00:00:32.602582 kubelet[2415]: I1107 00:00:32.602560 2415 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 7 00:00:32.602643 kubelet[2415]: I1107 00:00:32.602617 2415 server.go:1289] "Started kubelet" Nov 7 00:00:32.602884 kubelet[2415]: I1107 00:00:32.602824 2415 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 7 00:00:32.603824 kubelet[2415]: I1107 00:00:32.603200 2415 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 7 00:00:32.603824 kubelet[2415]: I1107 00:00:32.603252 2415 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 7 00:00:32.604610 kubelet[2415]: I1107 00:00:32.604546 2415 server.go:317] "Adding debug handlers to kubelet server" Nov 7 00:00:32.605381 kubelet[2415]: I1107 00:00:32.605339 2415 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 7 00:00:32.606213 kubelet[2415]: I1107 00:00:32.606135 2415 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 7 00:00:32.607272 kubelet[2415]: E1107 
00:00:32.605737 2415 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.46:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.46:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1875906e4d44f517 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-11-07 00:00:32.602584343 +0000 UTC m=+0.401997529,LastTimestamp:2025-11-07 00:00:32.602584343 +0000 UTC m=+0.401997529,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Nov 7 00:00:32.608866 kubelet[2415]: E1107 00:00:32.607572 2415 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 7 00:00:32.608866 kubelet[2415]: I1107 00:00:32.607601 2415 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 7 00:00:32.608977 kubelet[2415]: I1107 00:00:32.608932 2415 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 7 00:00:32.609984 kubelet[2415]: E1107 00:00:32.609951 2415 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 7 00:00:32.610048 kubelet[2415]: I1107 00:00:32.610021 2415 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 7 00:00:32.610091 kubelet[2415]: I1107 00:00:32.610086 2415 reconciler.go:26] "Reconciler: start to sync state" Nov 7 00:00:32.610710 kubelet[2415]: E1107 00:00:32.610676 2415 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.46:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.46:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 7 00:00:32.611083 kubelet[2415]: E1107 00:00:32.611053 2415 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.46:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.46:6443: connect: connection refused" interval="200ms" Nov 7 00:00:32.611138 kubelet[2415]: I1107 00:00:32.611118 2415 factory.go:223] Registration of the containerd container factory successfully Nov 7 00:00:32.611138 kubelet[2415]: I1107 00:00:32.611132 2415 factory.go:223] Registration of the systemd container factory successfully Nov 7 00:00:32.626058 kubelet[2415]: I1107 00:00:32.626025 2415 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 7 00:00:32.626058 kubelet[2415]: I1107 00:00:32.626043 2415 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 7 00:00:32.626058 kubelet[2415]: I1107 00:00:32.626061 2415 state_mem.go:36] "Initialized new in-memory state store" Nov 7 00:00:32.629980 kubelet[2415]: I1107 00:00:32.629932 2415 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv4" Nov 7 00:00:32.631451 kubelet[2415]: I1107 00:00:32.631384 2415 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Nov 7 00:00:32.631451 kubelet[2415]: I1107 00:00:32.631420 2415 status_manager.go:230] "Starting to sync pod status with apiserver" Nov 7 00:00:32.631451 kubelet[2415]: I1107 00:00:32.631439 2415 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 7 00:00:32.631451 kubelet[2415]: I1107 00:00:32.631445 2415 kubelet.go:2436] "Starting kubelet main sync loop" Nov 7 00:00:32.631570 kubelet[2415]: E1107 00:00:32.631523 2415 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 7 00:00:32.633178 kubelet[2415]: E1107 00:00:32.632221 2415 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.46:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.46:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 7 00:00:32.708296 kubelet[2415]: E1107 00:00:32.708258 2415 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 7 00:00:32.731862 kubelet[2415]: E1107 00:00:32.731800 2415 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 7 00:00:32.809306 kubelet[2415]: E1107 00:00:32.809125 2415 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 7 00:00:32.811818 kubelet[2415]: E1107 00:00:32.811759 2415 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.46:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 
10.0.0.46:6443: connect: connection refused" interval="400ms" Nov 7 00:00:32.910012 kubelet[2415]: E1107 00:00:32.909958 2415 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 7 00:00:32.932515 kubelet[2415]: E1107 00:00:32.932452 2415 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 7 00:00:33.010980 kubelet[2415]: E1107 00:00:33.010904 2415 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 7 00:00:33.111350 kubelet[2415]: E1107 00:00:33.111190 2415 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 7 00:00:33.211939 kubelet[2415]: E1107 00:00:33.211889 2415 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 7 00:00:33.212216 kubelet[2415]: E1107 00:00:33.212165 2415 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.46:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.46:6443: connect: connection refused" interval="800ms" Nov 7 00:00:33.312971 kubelet[2415]: E1107 00:00:33.312892 2415 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 7 00:00:33.333320 kubelet[2415]: E1107 00:00:33.333269 2415 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 7 00:00:33.413868 kubelet[2415]: E1107 00:00:33.413819 2415 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 7 00:00:33.514924 kubelet[2415]: E1107 00:00:33.514853 2415 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 7 00:00:33.599120 kubelet[2415]: E1107 00:00:33.599076 
2415 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.46:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.46:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 7 00:00:33.615154 kubelet[2415]: E1107 00:00:33.615099 2415 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 7 00:00:33.643641 kubelet[2415]: E1107 00:00:33.643612 2415 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.46:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.46:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 7 00:00:33.715395 kubelet[2415]: E1107 00:00:33.715280 2415 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 7 00:00:33.734056 kubelet[2415]: E1107 00:00:33.734012 2415 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.46:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.46:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 7 00:00:33.815851 kubelet[2415]: E1107 00:00:33.815804 2415 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 7 00:00:33.916662 kubelet[2415]: E1107 00:00:33.916591 2415 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 7 00:00:34.012853 kubelet[2415]: E1107 00:00:34.012724 2415 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.46:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.46:6443: connect: connection refused" interval="1.6s" Nov 7 00:00:34.016688 kubelet[2415]: E1107 00:00:34.016640 2415 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 7 00:00:34.117056 kubelet[2415]: E1107 00:00:34.117008 2415 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 7 00:00:34.133466 kubelet[2415]: E1107 00:00:34.133411 2415 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 7 00:00:34.133997 kubelet[2415]: E1107 00:00:34.133964 2415 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.46:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.46:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 7 00:00:34.217688 kubelet[2415]: E1107 00:00:34.217639 2415 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 7 00:00:34.318554 kubelet[2415]: E1107 00:00:34.318431 2415 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 7 00:00:34.419440 kubelet[2415]: E1107 00:00:34.419377 2415 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 7 00:00:34.520408 kubelet[2415]: E1107 00:00:34.520340 2415 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 7 00:00:34.621336 kubelet[2415]: E1107 00:00:34.621193 2415 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 7 00:00:34.671786 kubelet[2415]: E1107 
00:00:34.671725 2415 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.46:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.46:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Nov 7 00:00:34.721547 kubelet[2415]: E1107 00:00:34.721490 2415 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 7 00:00:34.822731 kubelet[2415]: E1107 00:00:34.822603 2415 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 7 00:00:34.923735 kubelet[2415]: E1107 00:00:34.923678 2415 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 7 00:00:35.023893 kubelet[2415]: E1107 00:00:35.023787 2415 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 7 00:00:35.026697 kubelet[2415]: I1107 00:00:35.026603 2415 policy_none.go:49] "None policy: Start" Nov 7 00:00:35.026697 kubelet[2415]: I1107 00:00:35.026687 2415 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 7 00:00:35.026697 kubelet[2415]: I1107 00:00:35.026703 2415 state_mem.go:35] "Initializing new in-memory state store" Nov 7 00:00:35.102354 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Nov 7 00:00:35.113891 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Nov 7 00:00:35.117416 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Nov 7 00:00:35.124507 kubelet[2415]: E1107 00:00:35.124477 2415 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 7 00:00:35.137482 kubelet[2415]: E1107 00:00:35.137225 2415 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 7 00:00:35.137559 kubelet[2415]: I1107 00:00:35.137541 2415 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 7 00:00:35.137592 kubelet[2415]: I1107 00:00:35.137553 2415 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 7 00:00:35.137856 kubelet[2415]: I1107 00:00:35.137841 2415 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 7 00:00:35.138889 kubelet[2415]: E1107 00:00:35.138845 2415 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 7 00:00:35.138969 kubelet[2415]: E1107 00:00:35.138909 2415 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Nov 7 00:00:35.238880 kubelet[2415]: I1107 00:00:35.238776 2415 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 7 00:00:35.239287 kubelet[2415]: E1107 00:00:35.239241 2415 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.46:6443/api/v1/nodes\": dial tcp 10.0.0.46:6443: connect: connection refused" node="localhost" Nov 7 00:00:35.441416 kubelet[2415]: I1107 00:00:35.441367 2415 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 7 00:00:35.441895 kubelet[2415]: E1107 00:00:35.441791 2415 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.46:6443/api/v1/nodes\": dial tcp 10.0.0.46:6443: connect: connection refused" node="localhost" Nov 7 
00:00:35.613522 kubelet[2415]: E1107 00:00:35.613383 2415 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.46:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.46:6443: connect: connection refused" interval="3.2s" Nov 7 00:00:35.752216 systemd[1]: Created slice kubepods-burstable-pode950901d5795e61efa75774a4af3d5d9.slice - libcontainer container kubepods-burstable-pode950901d5795e61efa75774a4af3d5d9.slice. Nov 7 00:00:35.783524 kubelet[2415]: E1107 00:00:35.783463 2415 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 7 00:00:35.787822 systemd[1]: Created slice kubepods-burstable-pod20c890a246d840d308022312da9174cb.slice - libcontainer container kubepods-burstable-pod20c890a246d840d308022312da9174cb.slice. Nov 7 00:00:35.790329 kubelet[2415]: E1107 00:00:35.790285 2415 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 7 00:00:35.792728 systemd[1]: Created slice kubepods-burstable-podd13d96f639b65e57f439b4396b605564.slice - libcontainer container kubepods-burstable-podd13d96f639b65e57f439b4396b605564.slice. 
Nov 7 00:00:35.795450 kubelet[2415]: E1107 00:00:35.794707 2415 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 7 00:00:35.832372 kubelet[2415]: I1107 00:00:35.832303 2415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e950901d5795e61efa75774a4af3d5d9-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"e950901d5795e61efa75774a4af3d5d9\") " pod="kube-system/kube-apiserver-localhost" Nov 7 00:00:35.832543 kubelet[2415]: I1107 00:00:35.832420 2415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e950901d5795e61efa75774a4af3d5d9-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"e950901d5795e61efa75774a4af3d5d9\") " pod="kube-system/kube-apiserver-localhost" Nov 7 00:00:35.832543 kubelet[2415]: I1107 00:00:35.832488 2415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 7 00:00:35.832543 kubelet[2415]: I1107 00:00:35.832510 2415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 7 00:00:35.832543 kubelet[2415]: I1107 00:00:35.832532 2415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 7 00:00:35.832709 kubelet[2415]: I1107 00:00:35.832564 2415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 7 00:00:35.832709 kubelet[2415]: I1107 00:00:35.832586 2415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 7 00:00:35.832709 kubelet[2415]: I1107 00:00:35.832657 2415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d13d96f639b65e57f439b4396b605564-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d13d96f639b65e57f439b4396b605564\") " pod="kube-system/kube-scheduler-localhost" Nov 7 00:00:35.832709 kubelet[2415]: I1107 00:00:35.832703 2415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e950901d5795e61efa75774a4af3d5d9-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"e950901d5795e61efa75774a4af3d5d9\") " pod="kube-system/kube-apiserver-localhost" Nov 7 00:00:35.843890 kubelet[2415]: I1107 00:00:35.843839 2415 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 7 00:00:35.844354 kubelet[2415]: E1107 
00:00:35.844299 2415 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.46:6443/api/v1/nodes\": dial tcp 10.0.0.46:6443: connect: connection refused" node="localhost" Nov 7 00:00:35.995738 kubelet[2415]: E1107 00:00:35.995662 2415 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.46:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.46:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 7 00:00:36.044490 kubelet[2415]: E1107 00:00:36.044402 2415 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.46:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.46:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 7 00:00:36.084440 kubelet[2415]: E1107 00:00:36.084389 2415 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 7 00:00:36.085253 containerd[1601]: time="2025-11-07T00:00:36.085205780Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:e950901d5795e61efa75774a4af3d5d9,Namespace:kube-system,Attempt:0,}" Nov 7 00:00:36.091513 kubelet[2415]: E1107 00:00:36.091469 2415 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 7 00:00:36.092009 containerd[1601]: time="2025-11-07T00:00:36.091959511Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:20c890a246d840d308022312da9174cb,Namespace:kube-system,Attempt:0,}" Nov 7 00:00:36.095660 kubelet[2415]: E1107 00:00:36.095636 2415 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 7 00:00:36.096879 containerd[1601]: time="2025-11-07T00:00:36.096816445Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d13d96f639b65e57f439b4396b605564,Namespace:kube-system,Attempt:0,}" Nov 7 00:00:36.293798 kubelet[2415]: E1107 00:00:36.293662 2415 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.46:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.46:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 7 00:00:36.340737 containerd[1601]: time="2025-11-07T00:00:36.340281341Z" level=info msg="connecting to shim c5bcd24dd94c90f6734dc41c96a44ba9f0b40414b66bbdcd58a6d928c4d7a858" address="unix:///run/containerd/s/12474e900b4195c4c44014fb03a0c0362a6229fdec0ed266544d2f4960bd606b" namespace=k8s.io protocol=ttrpc version=3 Nov 7 00:00:36.401593 kubelet[2415]: E1107 00:00:36.398678 2415 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.46:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.46:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 7 00:00:36.419940 containerd[1601]: time="2025-11-07T00:00:36.419898893Z" level=info msg="connecting to shim bbfefbebbc79ddba31dfa5ac3b1b7d6bcb0b7b493d3da640f092848924d2697f" address="unix:///run/containerd/s/d7fc90f03c1cb488cccd732869ed458361abc2d91d8a96f24bbafa210042e1eb" namespace=k8s.io protocol=ttrpc version=3 Nov 7 00:00:36.428664 containerd[1601]: time="2025-11-07T00:00:36.428612150Z" level=info msg="connecting to shim 
894c270b54b57dd93bfe6af3683bfabcaaa98c01a167c148fafc5603060fdb7b" address="unix:///run/containerd/s/47ec9b66827c8b74706bff0c92b96e40bb97ac349b6466cef801b8becd2d1391" namespace=k8s.io protocol=ttrpc version=3 Nov 7 00:00:36.455371 systemd[1]: Started cri-containerd-bbfefbebbc79ddba31dfa5ac3b1b7d6bcb0b7b493d3da640f092848924d2697f.scope - libcontainer container bbfefbebbc79ddba31dfa5ac3b1b7d6bcb0b7b493d3da640f092848924d2697f. Nov 7 00:00:36.461166 systemd[1]: Started cri-containerd-c5bcd24dd94c90f6734dc41c96a44ba9f0b40414b66bbdcd58a6d928c4d7a858.scope - libcontainer container c5bcd24dd94c90f6734dc41c96a44ba9f0b40414b66bbdcd58a6d928c4d7a858. Nov 7 00:00:36.477293 systemd[1]: Started cri-containerd-894c270b54b57dd93bfe6af3683bfabcaaa98c01a167c148fafc5603060fdb7b.scope - libcontainer container 894c270b54b57dd93bfe6af3683bfabcaaa98c01a167c148fafc5603060fdb7b. Nov 7 00:00:36.540185 containerd[1601]: time="2025-11-07T00:00:36.540123051Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:e950901d5795e61efa75774a4af3d5d9,Namespace:kube-system,Attempt:0,} returns sandbox id \"bbfefbebbc79ddba31dfa5ac3b1b7d6bcb0b7b493d3da640f092848924d2697f\"" Nov 7 00:00:36.541196 kubelet[2415]: E1107 00:00:36.541133 2415 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 7 00:00:36.548112 containerd[1601]: time="2025-11-07T00:00:36.547996184Z" level=info msg="CreateContainer within sandbox \"bbfefbebbc79ddba31dfa5ac3b1b7d6bcb0b7b493d3da640f092848924d2697f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 7 00:00:36.548859 containerd[1601]: time="2025-11-07T00:00:36.548834002Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:20c890a246d840d308022312da9174cb,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"c5bcd24dd94c90f6734dc41c96a44ba9f0b40414b66bbdcd58a6d928c4d7a858\"" Nov 7 00:00:36.549735 kubelet[2415]: E1107 00:00:36.549697 2415 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 7 00:00:36.554462 containerd[1601]: time="2025-11-07T00:00:36.554434319Z" level=info msg="CreateContainer within sandbox \"c5bcd24dd94c90f6734dc41c96a44ba9f0b40414b66bbdcd58a6d928c4d7a858\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 7 00:00:36.588050 containerd[1601]: time="2025-11-07T00:00:36.588007031Z" level=info msg="Container 577ddd9ce81d55e4e5d72a92ac68886365d2895a5054f8bb4f3ff644850d531e: CDI devices from CRI Config.CDIDevices: []" Nov 7 00:00:36.591436 containerd[1601]: time="2025-11-07T00:00:36.591385185Z" level=info msg="Container 6a630c99e1bdb54d9587505d166aa80970d1fbcc16ae890538325525ab4db185: CDI devices from CRI Config.CDIDevices: []" Nov 7 00:00:36.598408 containerd[1601]: time="2025-11-07T00:00:36.598347193Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d13d96f639b65e57f439b4396b605564,Namespace:kube-system,Attempt:0,} returns sandbox id \"894c270b54b57dd93bfe6af3683bfabcaaa98c01a167c148fafc5603060fdb7b\"" Nov 7 00:00:36.599186 kubelet[2415]: E1107 00:00:36.599135 2415 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 7 00:00:36.600159 containerd[1601]: time="2025-11-07T00:00:36.599728662Z" level=info msg="CreateContainer within sandbox \"c5bcd24dd94c90f6734dc41c96a44ba9f0b40414b66bbdcd58a6d928c4d7a858\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"577ddd9ce81d55e4e5d72a92ac68886365d2895a5054f8bb4f3ff644850d531e\"" Nov 7 00:00:36.602769 containerd[1601]: time="2025-11-07T00:00:36.602719124Z" level=info 
msg="StartContainer for \"577ddd9ce81d55e4e5d72a92ac68886365d2895a5054f8bb4f3ff644850d531e\"" Nov 7 00:00:36.604167 containerd[1601]: time="2025-11-07T00:00:36.603985840Z" level=info msg="connecting to shim 577ddd9ce81d55e4e5d72a92ac68886365d2895a5054f8bb4f3ff644850d531e" address="unix:///run/containerd/s/12474e900b4195c4c44014fb03a0c0362a6229fdec0ed266544d2f4960bd606b" protocol=ttrpc version=3 Nov 7 00:00:36.606692 containerd[1601]: time="2025-11-07T00:00:36.606596536Z" level=info msg="CreateContainer within sandbox \"894c270b54b57dd93bfe6af3683bfabcaaa98c01a167c148fafc5603060fdb7b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 7 00:00:36.607472 containerd[1601]: time="2025-11-07T00:00:36.607440886Z" level=info msg="CreateContainer within sandbox \"bbfefbebbc79ddba31dfa5ac3b1b7d6bcb0b7b493d3da640f092848924d2697f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"6a630c99e1bdb54d9587505d166aa80970d1fbcc16ae890538325525ab4db185\"" Nov 7 00:00:36.607930 containerd[1601]: time="2025-11-07T00:00:36.607901132Z" level=info msg="StartContainer for \"6a630c99e1bdb54d9587505d166aa80970d1fbcc16ae890538325525ab4db185\"" Nov 7 00:00:36.608934 containerd[1601]: time="2025-11-07T00:00:36.608877208Z" level=info msg="connecting to shim 6a630c99e1bdb54d9587505d166aa80970d1fbcc16ae890538325525ab4db185" address="unix:///run/containerd/s/d7fc90f03c1cb488cccd732869ed458361abc2d91d8a96f24bbafa210042e1eb" protocol=ttrpc version=3 Nov 7 00:00:36.619798 containerd[1601]: time="2025-11-07T00:00:36.619746454Z" level=info msg="Container 7d76ad1714201fb513388a6895ab55ddad148731d7a43135b5a5d840fe58d1a8: CDI devices from CRI Config.CDIDevices: []" Nov 7 00:00:36.625365 systemd[1]: Started cri-containerd-577ddd9ce81d55e4e5d72a92ac68886365d2895a5054f8bb4f3ff644850d531e.scope - libcontainer container 577ddd9ce81d55e4e5d72a92ac68886365d2895a5054f8bb4f3ff644850d531e. 
Nov 7 00:00:36.629111 systemd[1]: Started cri-containerd-6a630c99e1bdb54d9587505d166aa80970d1fbcc16ae890538325525ab4db185.scope - libcontainer container 6a630c99e1bdb54d9587505d166aa80970d1fbcc16ae890538325525ab4db185. Nov 7 00:00:36.630110 containerd[1601]: time="2025-11-07T00:00:36.630072319Z" level=info msg="CreateContainer within sandbox \"894c270b54b57dd93bfe6af3683bfabcaaa98c01a167c148fafc5603060fdb7b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"7d76ad1714201fb513388a6895ab55ddad148731d7a43135b5a5d840fe58d1a8\"" Nov 7 00:00:36.631941 containerd[1601]: time="2025-11-07T00:00:36.630964287Z" level=info msg="StartContainer for \"7d76ad1714201fb513388a6895ab55ddad148731d7a43135b5a5d840fe58d1a8\"" Nov 7 00:00:36.632123 containerd[1601]: time="2025-11-07T00:00:36.632103977Z" level=info msg="connecting to shim 7d76ad1714201fb513388a6895ab55ddad148731d7a43135b5a5d840fe58d1a8" address="unix:///run/containerd/s/47ec9b66827c8b74706bff0c92b96e40bb97ac349b6466cef801b8becd2d1391" protocol=ttrpc version=3 Nov 7 00:00:36.645654 kubelet[2415]: I1107 00:00:36.645625 2415 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 7 00:00:36.646397 kubelet[2415]: E1107 00:00:36.646375 2415 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.46:6443/api/v1/nodes\": dial tcp 10.0.0.46:6443: connect: connection refused" node="localhost" Nov 7 00:00:36.663287 systemd[1]: Started cri-containerd-7d76ad1714201fb513388a6895ab55ddad148731d7a43135b5a5d840fe58d1a8.scope - libcontainer container 7d76ad1714201fb513388a6895ab55ddad148731d7a43135b5a5d840fe58d1a8. 
Nov 7 00:00:36.692528 containerd[1601]: time="2025-11-07T00:00:36.692441381Z" level=info msg="StartContainer for \"6a630c99e1bdb54d9587505d166aa80970d1fbcc16ae890538325525ab4db185\" returns successfully" Nov 7 00:00:36.702542 containerd[1601]: time="2025-11-07T00:00:36.702489199Z" level=info msg="StartContainer for \"577ddd9ce81d55e4e5d72a92ac68886365d2895a5054f8bb4f3ff644850d531e\" returns successfully" Nov 7 00:00:36.748163 containerd[1601]: time="2025-11-07T00:00:36.748105321Z" level=info msg="StartContainer for \"7d76ad1714201fb513388a6895ab55ddad148731d7a43135b5a5d840fe58d1a8\" returns successfully" Nov 7 00:00:37.651973 kubelet[2415]: E1107 00:00:37.651429 2415 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 7 00:00:37.651973 kubelet[2415]: E1107 00:00:37.651553 2415 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 7 00:00:37.660536 kubelet[2415]: E1107 00:00:37.659631 2415 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 7 00:00:37.661161 kubelet[2415]: E1107 00:00:37.661036 2415 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 7 00:00:37.661161 kubelet[2415]: E1107 00:00:37.660825 2415 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 7 00:00:37.661490 kubelet[2415]: E1107 00:00:37.661472 2415 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 7 00:00:38.248881 kubelet[2415]: 
I1107 00:00:38.248836 2415 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 7 00:00:38.668013 kubelet[2415]: E1107 00:00:38.667968 2415 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 7 00:00:38.668519 kubelet[2415]: E1107 00:00:38.668076 2415 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 7 00:00:38.668519 kubelet[2415]: E1107 00:00:38.668507 2415 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 7 00:00:38.670170 kubelet[2415]: E1107 00:00:38.668591 2415 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 7 00:00:38.670170 kubelet[2415]: E1107 00:00:38.668810 2415 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 7 00:00:38.670170 kubelet[2415]: E1107 00:00:38.668896 2415 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 7 00:00:38.789763 kubelet[2415]: I1107 00:00:38.789633 2415 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Nov 7 00:00:38.789763 kubelet[2415]: E1107 00:00:38.789682 2415 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Nov 7 00:00:38.807349 kubelet[2415]: E1107 00:00:38.807296 2415 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 7 00:00:38.908315 
kubelet[2415]: E1107 00:00:38.908186 2415 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 7 00:00:39.009896 kubelet[2415]: E1107 00:00:39.009378 2415 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 7 00:00:39.110284 kubelet[2415]: E1107 00:00:39.110216 2415 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 7 00:00:39.210549 kubelet[2415]: E1107 00:00:39.210487 2415 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 7 00:00:39.310056 kubelet[2415]: I1107 00:00:39.309894 2415 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 7 00:00:39.348704 kubelet[2415]: E1107 00:00:39.348643 2415 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Nov 7 00:00:39.348704 kubelet[2415]: I1107 00:00:39.348686 2415 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 7 00:00:39.350798 kubelet[2415]: E1107 00:00:39.350770 2415 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Nov 7 00:00:39.350798 kubelet[2415]: I1107 00:00:39.350792 2415 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 7 00:00:39.352061 kubelet[2415]: E1107 00:00:39.352033 2415 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Nov 7 
00:00:39.606085 kubelet[2415]: I1107 00:00:39.605912 2415 apiserver.go:52] "Watching apiserver" Nov 7 00:00:39.609960 kubelet[2415]: I1107 00:00:39.609798 2415 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 7 00:00:39.666517 kubelet[2415]: I1107 00:00:39.666484 2415 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 7 00:00:39.666517 kubelet[2415]: I1107 00:00:39.666515 2415 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 7 00:00:39.667178 kubelet[2415]: I1107 00:00:39.666801 2415 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 7 00:00:39.668548 kubelet[2415]: E1107 00:00:39.668491 2415 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Nov 7 00:00:39.668548 kubelet[2415]: E1107 00:00:39.668515 2415 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Nov 7 00:00:39.668967 kubelet[2415]: E1107 00:00:39.668500 2415 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Nov 7 00:00:39.668967 kubelet[2415]: E1107 00:00:39.668667 2415 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 7 00:00:39.668967 kubelet[2415]: E1107 00:00:39.668693 2415 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 7 00:00:39.668967 kubelet[2415]: E1107 00:00:39.668725 2415 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 7 00:00:41.136773 systemd[1]: Reload requested from client PID 2700 ('systemctl') (unit session-9.scope)... Nov 7 00:00:41.136790 systemd[1]: Reloading... Nov 7 00:00:41.226198 zram_generator::config[2747]: No configuration found. Nov 7 00:00:41.588014 systemd[1]: Reloading finished in 450 ms. Nov 7 00:00:41.618182 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 7 00:00:41.642920 systemd[1]: kubelet.service: Deactivated successfully. Nov 7 00:00:41.643316 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 7 00:00:41.643382 systemd[1]: kubelet.service: Consumed 962ms CPU time, 128.2M memory peak. Nov 7 00:00:41.645645 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 7 00:00:41.999473 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 7 00:00:42.014641 (kubelet)[2789]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 7 00:00:42.062868 kubelet[2789]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 7 00:00:42.062868 kubelet[2789]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 7 00:00:42.062868 kubelet[2789]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 7 00:00:42.063304 kubelet[2789]: I1107 00:00:42.062894 2789 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 7 00:00:42.069798 kubelet[2789]: I1107 00:00:42.069742 2789 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Nov 7 00:00:42.069798 kubelet[2789]: I1107 00:00:42.069773 2789 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 7 00:00:42.070048 kubelet[2789]: I1107 00:00:42.070024 2789 server.go:956] "Client rotation is on, will bootstrap in background" Nov 7 00:00:42.071290 kubelet[2789]: I1107 00:00:42.071253 2789 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Nov 7 00:00:42.073362 kubelet[2789]: I1107 00:00:42.073327 2789 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 7 00:00:42.077089 kubelet[2789]: I1107 00:00:42.077061 2789 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 7 00:00:42.083724 kubelet[2789]: I1107 00:00:42.083675 2789 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 7 00:00:42.083986 kubelet[2789]: I1107 00:00:42.083942 2789 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 7 00:00:42.084151 kubelet[2789]: I1107 00:00:42.083969 2789 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 7 00:00:42.084263 kubelet[2789]: I1107 00:00:42.084188 2789 topology_manager.go:138] "Creating topology manager with none policy" Nov 7 00:00:42.084263 
kubelet[2789]: I1107 00:00:42.084202 2789 container_manager_linux.go:303] "Creating device plugin manager" Nov 7 00:00:42.084263 kubelet[2789]: I1107 00:00:42.084258 2789 state_mem.go:36] "Initialized new in-memory state store" Nov 7 00:00:42.084476 kubelet[2789]: I1107 00:00:42.084453 2789 kubelet.go:480] "Attempting to sync node with API server" Nov 7 00:00:42.084476 kubelet[2789]: I1107 00:00:42.084470 2789 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 7 00:00:42.084564 kubelet[2789]: I1107 00:00:42.084514 2789 kubelet.go:386] "Adding apiserver pod source" Nov 7 00:00:42.084564 kubelet[2789]: I1107 00:00:42.084538 2789 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 7 00:00:42.085672 kubelet[2789]: I1107 00:00:42.085646 2789 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Nov 7 00:00:42.086117 kubelet[2789]: I1107 00:00:42.086084 2789 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 7 00:00:42.090624 kubelet[2789]: I1107 00:00:42.090594 2789 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 7 00:00:42.090692 kubelet[2789]: I1107 00:00:42.090644 2789 server.go:1289] "Started kubelet" Nov 7 00:00:42.091038 kubelet[2789]: I1107 00:00:42.090984 2789 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 7 00:00:42.092052 kubelet[2789]: I1107 00:00:42.092022 2789 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 7 00:00:42.092255 kubelet[2789]: I1107 00:00:42.092183 2789 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 7 00:00:42.092311 kubelet[2789]: I1107 00:00:42.092264 2789 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 7 00:00:42.095893 kubelet[2789]: I1107 
00:00:42.095860 2789 server.go:317] "Adding debug handlers to kubelet server" Nov 7 00:00:42.100634 kubelet[2789]: I1107 00:00:42.100494 2789 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 7 00:00:42.100634 kubelet[2789]: I1107 00:00:42.100551 2789 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 7 00:00:42.100634 kubelet[2789]: I1107 00:00:42.100619 2789 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 7 00:00:42.100757 kubelet[2789]: I1107 00:00:42.100727 2789 reconciler.go:26] "Reconciler: start to sync state" Nov 7 00:00:42.108165 kubelet[2789]: I1107 00:00:42.107970 2789 factory.go:223] Registration of the containerd container factory successfully Nov 7 00:00:42.108165 kubelet[2789]: I1107 00:00:42.107989 2789 factory.go:223] Registration of the systemd container factory successfully Nov 7 00:00:42.108165 kubelet[2789]: I1107 00:00:42.108066 2789 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 7 00:00:42.110805 kubelet[2789]: I1107 00:00:42.110744 2789 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Nov 7 00:00:42.116192 kubelet[2789]: E1107 00:00:42.116135 2789 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 7 00:00:42.116876 kubelet[2789]: I1107 00:00:42.116851 2789 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Nov 7 00:00:42.117527 kubelet[2789]: I1107 00:00:42.117218 2789 status_manager.go:230] "Starting to sync pod status with apiserver" Nov 7 00:00:42.117527 kubelet[2789]: I1107 00:00:42.117240 2789 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 7 00:00:42.117527 kubelet[2789]: I1107 00:00:42.117248 2789 kubelet.go:2436] "Starting kubelet main sync loop" Nov 7 00:00:42.117527 kubelet[2789]: E1107 00:00:42.117296 2789 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 7 00:00:42.148801 kubelet[2789]: I1107 00:00:42.148768 2789 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 7 00:00:42.148985 kubelet[2789]: I1107 00:00:42.148971 2789 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 7 00:00:42.149058 kubelet[2789]: I1107 00:00:42.149047 2789 state_mem.go:36] "Initialized new in-memory state store" Nov 7 00:00:42.149328 kubelet[2789]: I1107 00:00:42.149302 2789 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 7 00:00:42.149402 kubelet[2789]: I1107 00:00:42.149379 2789 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 7 00:00:42.149462 kubelet[2789]: I1107 00:00:42.149453 2789 policy_none.go:49] "None policy: Start" Nov 7 00:00:42.149532 kubelet[2789]: I1107 00:00:42.149522 2789 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 7 00:00:42.149593 kubelet[2789]: I1107 00:00:42.149583 2789 state_mem.go:35] "Initializing new in-memory state store" Nov 7 00:00:42.149749 kubelet[2789]: I1107 00:00:42.149736 2789 state_mem.go:75] "Updated machine memory state" Nov 7 00:00:42.151662 sudo[2825]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Nov 7 00:00:42.152002 sudo[2825]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Nov 7 
00:00:42.155180 kubelet[2789]: E1107 00:00:42.155069 2789 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 7 00:00:42.155922 kubelet[2789]: I1107 00:00:42.155534 2789 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 7 00:00:42.155922 kubelet[2789]: I1107 00:00:42.155549 2789 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 7 00:00:42.155922 kubelet[2789]: I1107 00:00:42.155792 2789 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 7 00:00:42.159173 kubelet[2789]: E1107 00:00:42.159118 2789 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 7 00:00:42.218246 kubelet[2789]: I1107 00:00:42.218177 2789 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 7 00:00:42.218409 kubelet[2789]: I1107 00:00:42.218323 2789 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 7 00:00:42.218897 kubelet[2789]: I1107 00:00:42.218750 2789 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 7 00:00:42.264627 kubelet[2789]: I1107 00:00:42.264495 2789 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 7 00:00:42.362492 update_engine[1585]: I20251107 00:00:42.362394 1585 update_attempter.cc:509] Updating boot flags... 
Nov 7 00:00:42.402323 kubelet[2789]: I1107 00:00:42.402264 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 7 00:00:42.402323 kubelet[2789]: I1107 00:00:42.402318 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 7 00:00:42.402470 kubelet[2789]: I1107 00:00:42.402338 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d13d96f639b65e57f439b4396b605564-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d13d96f639b65e57f439b4396b605564\") " pod="kube-system/kube-scheduler-localhost" Nov 7 00:00:42.402470 kubelet[2789]: I1107 00:00:42.402354 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e950901d5795e61efa75774a4af3d5d9-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"e950901d5795e61efa75774a4af3d5d9\") " pod="kube-system/kube-apiserver-localhost" Nov 7 00:00:42.402470 kubelet[2789]: I1107 00:00:42.402369 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " 
pod="kube-system/kube-controller-manager-localhost"
Nov 7 00:00:42.402470 kubelet[2789]: I1107 00:00:42.402384 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost"
Nov 7 00:00:42.402470 kubelet[2789]: I1107 00:00:42.402399 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost"
Nov 7 00:00:42.402597 kubelet[2789]: I1107 00:00:42.402414 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e950901d5795e61efa75774a4af3d5d9-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"e950901d5795e61efa75774a4af3d5d9\") " pod="kube-system/kube-apiserver-localhost"
Nov 7 00:00:42.402597 kubelet[2789]: I1107 00:00:42.402428 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e950901d5795e61efa75774a4af3d5d9-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"e950901d5795e61efa75774a4af3d5d9\") " pod="kube-system/kube-apiserver-localhost"
Nov 7 00:00:42.525040 kubelet[2789]: E1107 00:00:42.524908 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 7 00:00:42.525181 kubelet[2789]: E1107 00:00:42.525065 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 7 00:00:42.526035 kubelet[2789]: E1107 00:00:42.525983 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 7 00:00:42.728577 sudo[2825]: pam_unix(sudo:session): session closed for user root
Nov 7 00:00:42.916934 kubelet[2789]: I1107 00:00:42.916040 2789 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
Nov 7 00:00:42.916934 kubelet[2789]: I1107 00:00:42.916132 2789 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Nov 7 00:00:43.085135 kubelet[2789]: I1107 00:00:43.085093 2789 apiserver.go:52] "Watching apiserver"
Nov 7 00:00:43.101657 kubelet[2789]: I1107 00:00:43.101623 2789 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Nov 7 00:00:43.135502 kubelet[2789]: I1107 00:00:43.135332 2789 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Nov 7 00:00:43.135739 kubelet[2789]: E1107 00:00:43.135720 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 7 00:00:43.136619 kubelet[2789]: E1107 00:00:43.136593 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 7 00:00:43.285248 kubelet[2789]: E1107 00:00:43.285096 2789 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Nov 7 00:00:43.286594 kubelet[2789]: E1107 00:00:43.286539 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 7 00:00:43.801136 kubelet[2789]: I1107 00:00:43.800941 2789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.800925195 podStartE2EDuration="1.800925195s" podCreationTimestamp="2025-11-07 00:00:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-07 00:00:43.435835455 +0000 UTC m=+1.415593725" watchObservedRunningTime="2025-11-07 00:00:43.800925195 +0000 UTC m=+1.780683465"
Nov 7 00:00:44.137569 kubelet[2789]: E1107 00:00:44.137379 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 7 00:00:44.137569 kubelet[2789]: E1107 00:00:44.137467 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 7 00:00:44.137569 kubelet[2789]: E1107 00:00:44.137527 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 7 00:00:44.165426 kubelet[2789]: I1107 00:00:44.165302 2789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.1646442009999998 podStartE2EDuration="2.164644201s" podCreationTimestamp="2025-11-07 00:00:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-07 00:00:44.164527754 +0000 UTC m=+2.144286024" watchObservedRunningTime="2025-11-07 00:00:44.164644201 +0000 UTC m=+2.144402471"
Nov 7 00:00:44.165426 kubelet[2789]: I1107 00:00:44.165429 2789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.165424036 podStartE2EDuration="2.165424036s" podCreationTimestamp="2025-11-07 00:00:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-07 00:00:43.801242917 +0000 UTC m=+1.781001187" watchObservedRunningTime="2025-11-07 00:00:44.165424036 +0000 UTC m=+2.145182306"
Nov 7 00:00:45.082841 sudo[1852]: pam_unix(sudo:session): session closed for user root
Nov 7 00:00:45.084562 sshd[1851]: Connection closed by 10.0.0.1 port 40490
Nov 7 00:00:45.085006 sshd-session[1848]: pam_unix(sshd:session): session closed for user core
Nov 7 00:00:45.088929 systemd[1]: sshd@8-10.0.0.46:22-10.0.0.1:40490.service: Deactivated successfully.
Nov 7 00:00:45.091663 systemd[1]: session-9.scope: Deactivated successfully.
Nov 7 00:00:45.091891 systemd[1]: session-9.scope: Consumed 5.651s CPU time, 255.7M memory peak.
Nov 7 00:00:45.094845 systemd-logind[1583]: Session 9 logged out. Waiting for processes to exit.
Nov 7 00:00:45.095849 systemd-logind[1583]: Removed session 9.
Nov 7 00:00:48.245199 kubelet[2789]: E1107 00:00:48.245067 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 7 00:00:49.144258 kubelet[2789]: E1107 00:00:49.144209 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 7 00:00:49.287818 kubelet[2789]: I1107 00:00:49.287781 2789 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Nov 7 00:00:49.288392 containerd[1601]: time="2025-11-07T00:00:49.288182078Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Nov 7 00:00:49.288645 kubelet[2789]: I1107 00:00:49.288444 2789 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Nov 7 00:00:49.354311 systemd[1]: Created slice kubepods-besteffort-poddace02f0_f8cb_4812_91c4_8bfef2f5f5c7.slice - libcontainer container kubepods-besteffort-poddace02f0_f8cb_4812_91c4_8bfef2f5f5c7.slice.
Nov 7 00:00:49.366430 systemd[1]: Created slice kubepods-burstable-podf22b05e1_2865_4e95_8406_a34e7e1f0b4b.slice - libcontainer container kubepods-burstable-podf22b05e1_2865_4e95_8406_a34e7e1f0b4b.slice.
Nov 7 00:00:49.450927 kubelet[2789]: I1107 00:00:49.450792 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f22b05e1-2865-4e95-8406-a34e7e1f0b4b-xtables-lock\") pod \"cilium-zkz7l\" (UID: \"f22b05e1-2865-4e95-8406-a34e7e1f0b4b\") " pod="kube-system/cilium-zkz7l"
Nov 7 00:00:49.450927 kubelet[2789]: I1107 00:00:49.450834 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f22b05e1-2865-4e95-8406-a34e7e1f0b4b-host-proc-sys-net\") pod \"cilium-zkz7l\" (UID: \"f22b05e1-2865-4e95-8406-a34e7e1f0b4b\") " pod="kube-system/cilium-zkz7l"
Nov 7 00:00:49.450927 kubelet[2789]: I1107 00:00:49.450848 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f22b05e1-2865-4e95-8406-a34e7e1f0b4b-hubble-tls\") pod \"cilium-zkz7l\" (UID: \"f22b05e1-2865-4e95-8406-a34e7e1f0b4b\") " pod="kube-system/cilium-zkz7l"
Nov 7 00:00:49.450927 kubelet[2789]: I1107 00:00:49.450884 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fgdl\" (UniqueName: \"kubernetes.io/projected/f22b05e1-2865-4e95-8406-a34e7e1f0b4b-kube-api-access-8fgdl\") pod \"cilium-zkz7l\" (UID: \"f22b05e1-2865-4e95-8406-a34e7e1f0b4b\") " pod="kube-system/cilium-zkz7l"
Nov 7 00:00:49.450927 kubelet[2789]: I1107 00:00:49.450920 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/dace02f0-f8cb-4812-91c4-8bfef2f5f5c7-kube-proxy\") pod \"kube-proxy-5fhdh\" (UID: \"dace02f0-f8cb-4812-91c4-8bfef2f5f5c7\") " pod="kube-system/kube-proxy-5fhdh"
Nov 7 00:00:49.451217 kubelet[2789]: I1107 00:00:49.450956 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dace02f0-f8cb-4812-91c4-8bfef2f5f5c7-lib-modules\") pod \"kube-proxy-5fhdh\" (UID: \"dace02f0-f8cb-4812-91c4-8bfef2f5f5c7\") " pod="kube-system/kube-proxy-5fhdh"
Nov 7 00:00:49.451217 kubelet[2789]: I1107 00:00:49.450979 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fq8n8\" (UniqueName: \"kubernetes.io/projected/dace02f0-f8cb-4812-91c4-8bfef2f5f5c7-kube-api-access-fq8n8\") pod \"kube-proxy-5fhdh\" (UID: \"dace02f0-f8cb-4812-91c4-8bfef2f5f5c7\") " pod="kube-system/kube-proxy-5fhdh"
Nov 7 00:00:49.451217 kubelet[2789]: I1107 00:00:49.451014 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f22b05e1-2865-4e95-8406-a34e7e1f0b4b-bpf-maps\") pod \"cilium-zkz7l\" (UID: \"f22b05e1-2865-4e95-8406-a34e7e1f0b4b\") " pod="kube-system/cilium-zkz7l"
Nov 7 00:00:49.451217 kubelet[2789]: I1107 00:00:49.451032 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f22b05e1-2865-4e95-8406-a34e7e1f0b4b-cilium-config-path\") pod \"cilium-zkz7l\" (UID: \"f22b05e1-2865-4e95-8406-a34e7e1f0b4b\") " pod="kube-system/cilium-zkz7l"
Nov 7 00:00:49.451217 kubelet[2789]: I1107 00:00:49.451087 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dace02f0-f8cb-4812-91c4-8bfef2f5f5c7-xtables-lock\") pod \"kube-proxy-5fhdh\" (UID: \"dace02f0-f8cb-4812-91c4-8bfef2f5f5c7\") " pod="kube-system/kube-proxy-5fhdh"
Nov 7 00:00:49.451217 kubelet[2789]: I1107 00:00:49.451120 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f22b05e1-2865-4e95-8406-a34e7e1f0b4b-hostproc\") pod \"cilium-zkz7l\" (UID: \"f22b05e1-2865-4e95-8406-a34e7e1f0b4b\") " pod="kube-system/cilium-zkz7l"
Nov 7 00:00:49.451418 kubelet[2789]: I1107 00:00:49.451165 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f22b05e1-2865-4e95-8406-a34e7e1f0b4b-cni-path\") pod \"cilium-zkz7l\" (UID: \"f22b05e1-2865-4e95-8406-a34e7e1f0b4b\") " pod="kube-system/cilium-zkz7l"
Nov 7 00:00:49.451418 kubelet[2789]: I1107 00:00:49.451194 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f22b05e1-2865-4e95-8406-a34e7e1f0b4b-clustermesh-secrets\") pod \"cilium-zkz7l\" (UID: \"f22b05e1-2865-4e95-8406-a34e7e1f0b4b\") " pod="kube-system/cilium-zkz7l"
Nov 7 00:00:49.451418 kubelet[2789]: I1107 00:00:49.451214 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f22b05e1-2865-4e95-8406-a34e7e1f0b4b-cilium-run\") pod \"cilium-zkz7l\" (UID: \"f22b05e1-2865-4e95-8406-a34e7e1f0b4b\") " pod="kube-system/cilium-zkz7l"
Nov 7 00:00:49.451418 kubelet[2789]: I1107 00:00:49.451233 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f22b05e1-2865-4e95-8406-a34e7e1f0b4b-cilium-cgroup\") pod \"cilium-zkz7l\" (UID: \"f22b05e1-2865-4e95-8406-a34e7e1f0b4b\") " pod="kube-system/cilium-zkz7l"
Nov 7 00:00:49.451418 kubelet[2789]: I1107 00:00:49.451269 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f22b05e1-2865-4e95-8406-a34e7e1f0b4b-host-proc-sys-kernel\") pod \"cilium-zkz7l\" (UID: \"f22b05e1-2865-4e95-8406-a34e7e1f0b4b\") " pod="kube-system/cilium-zkz7l"
Nov 7 00:00:49.451418 kubelet[2789]: I1107 00:00:49.451289 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f22b05e1-2865-4e95-8406-a34e7e1f0b4b-etc-cni-netd\") pod \"cilium-zkz7l\" (UID: \"f22b05e1-2865-4e95-8406-a34e7e1f0b4b\") " pod="kube-system/cilium-zkz7l"
Nov 7 00:00:49.451607 kubelet[2789]: I1107 00:00:49.451308 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f22b05e1-2865-4e95-8406-a34e7e1f0b4b-lib-modules\") pod \"cilium-zkz7l\" (UID: \"f22b05e1-2865-4e95-8406-a34e7e1f0b4b\") " pod="kube-system/cilium-zkz7l"
Nov 7 00:00:49.561866 kubelet[2789]: E1107 00:00:49.561833 2789 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Nov 7 00:00:49.561866 kubelet[2789]: E1107 00:00:49.561866 2789 projected.go:194] Error preparing data for projected volume kube-api-access-8fgdl for pod kube-system/cilium-zkz7l: configmap "kube-root-ca.crt" not found
Nov 7 00:00:49.562217 kubelet[2789]: E1107 00:00:49.561917 2789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f22b05e1-2865-4e95-8406-a34e7e1f0b4b-kube-api-access-8fgdl podName:f22b05e1-2865-4e95-8406-a34e7e1f0b4b nodeName:}" failed. No retries permitted until 2025-11-07 00:00:50.061899664 +0000 UTC m=+8.041657934 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-8fgdl" (UniqueName: "kubernetes.io/projected/f22b05e1-2865-4e95-8406-a34e7e1f0b4b-kube-api-access-8fgdl") pod "cilium-zkz7l" (UID: "f22b05e1-2865-4e95-8406-a34e7e1f0b4b") : configmap "kube-root-ca.crt" not found
Nov 7 00:00:49.562217 kubelet[2789]: E1107 00:00:49.561834 2789 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Nov 7 00:00:49.562217 kubelet[2789]: E1107 00:00:49.562075 2789 projected.go:194] Error preparing data for projected volume kube-api-access-fq8n8 for pod kube-system/kube-proxy-5fhdh: configmap "kube-root-ca.crt" not found
Nov 7 00:00:49.562217 kubelet[2789]: E1107 00:00:49.562097 2789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/dace02f0-f8cb-4812-91c4-8bfef2f5f5c7-kube-api-access-fq8n8 podName:dace02f0-f8cb-4812-91c4-8bfef2f5f5c7 nodeName:}" failed. No retries permitted until 2025-11-07 00:00:50.06209017 +0000 UTC m=+8.041848441 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-fq8n8" (UniqueName: "kubernetes.io/projected/dace02f0-f8cb-4812-91c4-8bfef2f5f5c7-kube-api-access-fq8n8") pod "kube-proxy-5fhdh" (UID: "dace02f0-f8cb-4812-91c4-8bfef2f5f5c7") : configmap "kube-root-ca.crt" not found
Nov 7 00:00:50.145927 kubelet[2789]: E1107 00:00:50.145885 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 7 00:00:50.264600 kubelet[2789]: E1107 00:00:50.264522 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 7 00:00:50.265370 containerd[1601]: time="2025-11-07T00:00:50.265215676Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5fhdh,Uid:dace02f0-f8cb-4812-91c4-8bfef2f5f5c7,Namespace:kube-system,Attempt:0,}"
Nov 7 00:00:50.269597 kubelet[2789]: E1107 00:00:50.269572 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 7 00:00:50.269940 containerd[1601]: time="2025-11-07T00:00:50.269914357Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zkz7l,Uid:f22b05e1-2865-4e95-8406-a34e7e1f0b4b,Namespace:kube-system,Attempt:0,}"
Nov 7 00:00:50.882072 kubelet[2789]: E1107 00:00:50.882038 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 7 00:00:51.074604 systemd[1]: Created slice kubepods-besteffort-poda8f1149e_f58d_4354_bec5_ede0ad122c5a.slice - libcontainer container kubepods-besteffort-poda8f1149e_f58d_4354_bec5_ede0ad122c5a.slice.
Nov 7 00:00:51.149200 kubelet[2789]: E1107 00:00:51.148343 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 7 00:00:51.163282 kubelet[2789]: I1107 00:00:51.163234 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a8f1149e-f58d-4354-bec5-ede0ad122c5a-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-m7zjd\" (UID: \"a8f1149e-f58d-4354-bec5-ede0ad122c5a\") " pod="kube-system/cilium-operator-6c4d7847fc-m7zjd"
Nov 7 00:00:51.163282 kubelet[2789]: I1107 00:00:51.163276 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlbr7\" (UniqueName: \"kubernetes.io/projected/a8f1149e-f58d-4354-bec5-ede0ad122c5a-kube-api-access-nlbr7\") pod \"cilium-operator-6c4d7847fc-m7zjd\" (UID: \"a8f1149e-f58d-4354-bec5-ede0ad122c5a\") " pod="kube-system/cilium-operator-6c4d7847fc-m7zjd"
Nov 7 00:00:51.170097 containerd[1601]: time="2025-11-07T00:00:51.170025741Z" level=info msg="connecting to shim d51d7593ac12df5f580c77a42a4f922f221f0b22dacf96e388f706daa60c8d68" address="unix:///run/containerd/s/c6025a159e7a07df7648a95c5955eca1994d924ba3e5ba08ed0a4e12db12932e" namespace=k8s.io protocol=ttrpc version=3
Nov 7 00:00:51.235427 systemd[1]: Started cri-containerd-d51d7593ac12df5f580c77a42a4f922f221f0b22dacf96e388f706daa60c8d68.scope - libcontainer container d51d7593ac12df5f580c77a42a4f922f221f0b22dacf96e388f706daa60c8d68.
Nov 7 00:00:51.259422 containerd[1601]: time="2025-11-07T00:00:51.259347558Z" level=info msg="connecting to shim cca5ce73f71a5b051d443293d193eddb1f5b194c5f865032efeadf4cb144e3af" address="unix:///run/containerd/s/de130534b5d9e53bce69a19dbb69355b19383771c74bba5166de225095bca524" namespace=k8s.io protocol=ttrpc version=3
Nov 7 00:00:51.280267 containerd[1601]: time="2025-11-07T00:00:51.280182414Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5fhdh,Uid:dace02f0-f8cb-4812-91c4-8bfef2f5f5c7,Namespace:kube-system,Attempt:0,} returns sandbox id \"d51d7593ac12df5f580c77a42a4f922f221f0b22dacf96e388f706daa60c8d68\""
Nov 7 00:00:51.281660 kubelet[2789]: E1107 00:00:51.281622 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 7 00:00:51.290657 containerd[1601]: time="2025-11-07T00:00:51.290618361Z" level=info msg="CreateContainer within sandbox \"d51d7593ac12df5f580c77a42a4f922f221f0b22dacf96e388f706daa60c8d68\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Nov 7 00:00:51.303469 containerd[1601]: time="2025-11-07T00:00:51.303416946Z" level=info msg="Container e7744e44b946f38a7e7496afe1b66a7b0534956be29fec7800cc9411685339ad: CDI devices from CRI Config.CDIDevices: []"
Nov 7 00:00:51.306946 systemd[1]: Started cri-containerd-cca5ce73f71a5b051d443293d193eddb1f5b194c5f865032efeadf4cb144e3af.scope - libcontainer container cca5ce73f71a5b051d443293d193eddb1f5b194c5f865032efeadf4cb144e3af.
Nov 7 00:00:51.319984 containerd[1601]: time="2025-11-07T00:00:51.319942987Z" level=info msg="CreateContainer within sandbox \"d51d7593ac12df5f580c77a42a4f922f221f0b22dacf96e388f706daa60c8d68\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e7744e44b946f38a7e7496afe1b66a7b0534956be29fec7800cc9411685339ad\""
Nov 7 00:00:51.321082 containerd[1601]: time="2025-11-07T00:00:51.321033736Z" level=info msg="StartContainer for \"e7744e44b946f38a7e7496afe1b66a7b0534956be29fec7800cc9411685339ad\""
Nov 7 00:00:51.323537 containerd[1601]: time="2025-11-07T00:00:51.322530223Z" level=info msg="connecting to shim e7744e44b946f38a7e7496afe1b66a7b0534956be29fec7800cc9411685339ad" address="unix:///run/containerd/s/c6025a159e7a07df7648a95c5955eca1994d924ba3e5ba08ed0a4e12db12932e" protocol=ttrpc version=3
Nov 7 00:00:51.336803 containerd[1601]: time="2025-11-07T00:00:51.336747590Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zkz7l,Uid:f22b05e1-2865-4e95-8406-a34e7e1f0b4b,Namespace:kube-system,Attempt:0,} returns sandbox id \"cca5ce73f71a5b051d443293d193eddb1f5b194c5f865032efeadf4cb144e3af\""
Nov 7 00:00:51.337764 kubelet[2789]: E1107 00:00:51.337734 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 7 00:00:51.339432 containerd[1601]: time="2025-11-07T00:00:51.339386413Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Nov 7 00:00:51.354347 systemd[1]: Started cri-containerd-e7744e44b946f38a7e7496afe1b66a7b0534956be29fec7800cc9411685339ad.scope - libcontainer container e7744e44b946f38a7e7496afe1b66a7b0534956be29fec7800cc9411685339ad.
Nov 7 00:00:51.377682 kubelet[2789]: E1107 00:00:51.377639 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 7 00:00:51.378677 containerd[1601]: time="2025-11-07T00:00:51.378626813Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-m7zjd,Uid:a8f1149e-f58d-4354-bec5-ede0ad122c5a,Namespace:kube-system,Attempt:0,}"
Nov 7 00:00:51.400304 containerd[1601]: time="2025-11-07T00:00:51.400187847Z" level=info msg="connecting to shim 1d4f6a9f7068aaefcd276751e91d9c1b3061d295829eb70ed78f34ff85411c42" address="unix:///run/containerd/s/559eae0cc61970a59cf1ae6d5e3f26417f2093a6a70dd24e3652c035e35d63af" namespace=k8s.io protocol=ttrpc version=3
Nov 7 00:00:51.405073 containerd[1601]: time="2025-11-07T00:00:51.405033816Z" level=info msg="StartContainer for \"e7744e44b946f38a7e7496afe1b66a7b0534956be29fec7800cc9411685339ad\" returns successfully"
Nov 7 00:00:51.441333 systemd[1]: Started cri-containerd-1d4f6a9f7068aaefcd276751e91d9c1b3061d295829eb70ed78f34ff85411c42.scope - libcontainer container 1d4f6a9f7068aaefcd276751e91d9c1b3061d295829eb70ed78f34ff85411c42.
Nov 7 00:00:51.490470 containerd[1601]: time="2025-11-07T00:00:51.490420987Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-m7zjd,Uid:a8f1149e-f58d-4354-bec5-ede0ad122c5a,Namespace:kube-system,Attempt:0,} returns sandbox id \"1d4f6a9f7068aaefcd276751e91d9c1b3061d295829eb70ed78f34ff85411c42\""
Nov 7 00:00:51.491430 kubelet[2789]: E1107 00:00:51.491405 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 7 00:00:52.156529 kubelet[2789]: E1107 00:00:52.156468 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 7 00:00:52.157112 kubelet[2789]: E1107 00:00:52.156543 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 7 00:00:52.255961 kubelet[2789]: I1107 00:00:52.255828 2789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-5fhdh" podStartSLOduration=3.25579454 podStartE2EDuration="3.25579454s" podCreationTimestamp="2025-11-07 00:00:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-07 00:00:52.255699733 +0000 UTC m=+10.235457993" watchObservedRunningTime="2025-11-07 00:00:52.25579454 +0000 UTC m=+10.235552840"
Nov 7 00:00:53.585721 kubelet[2789]: E1107 00:00:53.585638 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 7 00:01:01.696030 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount804110554.mount: Deactivated successfully.
Nov 7 00:01:05.293101 containerd[1601]: time="2025-11-07T00:01:05.293050131Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 7 00:01:05.294049 containerd[1601]: time="2025-11-07T00:01:05.294012482Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503"
Nov 7 00:01:05.295385 containerd[1601]: time="2025-11-07T00:01:05.295355788Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 7 00:01:05.296871 containerd[1601]: time="2025-11-07T00:01:05.296818267Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 13.957383565s"
Nov 7 00:01:05.296871 containerd[1601]: time="2025-11-07T00:01:05.296863051Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Nov 7 00:01:05.297955 containerd[1601]: time="2025-11-07T00:01:05.297793973Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Nov 7 00:01:05.300686 containerd[1601]: time="2025-11-07T00:01:05.300643270Z" level=info msg="CreateContainer within sandbox \"cca5ce73f71a5b051d443293d193eddb1f5b194c5f865032efeadf4cb144e3af\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Nov 7 00:01:05.313674 containerd[1601]: time="2025-11-07T00:01:05.313623508Z" level=info msg="Container edc91404cebe882566b1fca8d20ca44c93e95f58a7e588681fbb9abf2ca58659: CDI devices from CRI Config.CDIDevices: []"
Nov 7 00:01:05.318349 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3022987278.mount: Deactivated successfully.
Nov 7 00:01:05.323269 containerd[1601]: time="2025-11-07T00:01:05.323224296Z" level=info msg="CreateContainer within sandbox \"cca5ce73f71a5b051d443293d193eddb1f5b194c5f865032efeadf4cb144e3af\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"edc91404cebe882566b1fca8d20ca44c93e95f58a7e588681fbb9abf2ca58659\""
Nov 7 00:01:05.323800 containerd[1601]: time="2025-11-07T00:01:05.323752396Z" level=info msg="StartContainer for \"edc91404cebe882566b1fca8d20ca44c93e95f58a7e588681fbb9abf2ca58659\""
Nov 7 00:01:05.324567 containerd[1601]: time="2025-11-07T00:01:05.324543858Z" level=info msg="connecting to shim edc91404cebe882566b1fca8d20ca44c93e95f58a7e588681fbb9abf2ca58659" address="unix:///run/containerd/s/de130534b5d9e53bce69a19dbb69355b19383771c74bba5166de225095bca524" protocol=ttrpc version=3
Nov 7 00:01:05.359336 systemd[1]: Started cri-containerd-edc91404cebe882566b1fca8d20ca44c93e95f58a7e588681fbb9abf2ca58659.scope - libcontainer container edc91404cebe882566b1fca8d20ca44c93e95f58a7e588681fbb9abf2ca58659.
Nov 7 00:01:05.398387 containerd[1601]: time="2025-11-07T00:01:05.398334965Z" level=info msg="StartContainer for \"edc91404cebe882566b1fca8d20ca44c93e95f58a7e588681fbb9abf2ca58659\" returns successfully"
Nov 7 00:01:05.409180 systemd[1]: cri-containerd-edc91404cebe882566b1fca8d20ca44c93e95f58a7e588681fbb9abf2ca58659.scope: Deactivated successfully.
Nov 7 00:01:05.410602 containerd[1601]: time="2025-11-07T00:01:05.410559006Z" level=info msg="received exit event container_id:\"edc91404cebe882566b1fca8d20ca44c93e95f58a7e588681fbb9abf2ca58659\" id:\"edc91404cebe882566b1fca8d20ca44c93e95f58a7e588681fbb9abf2ca58659\" pid:3237 exited_at:{seconds:1762473665 nanos:410236142}"
Nov 7 00:01:05.410805 containerd[1601]: time="2025-11-07T00:01:05.410773288Z" level=info msg="TaskExit event in podsandbox handler container_id:\"edc91404cebe882566b1fca8d20ca44c93e95f58a7e588681fbb9abf2ca58659\" id:\"edc91404cebe882566b1fca8d20ca44c93e95f58a7e588681fbb9abf2ca58659\" pid:3237 exited_at:{seconds:1762473665 nanos:410236142}"
Nov 7 00:01:05.433709 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-edc91404cebe882566b1fca8d20ca44c93e95f58a7e588681fbb9abf2ca58659-rootfs.mount: Deactivated successfully.
Nov 7 00:01:06.177724 kubelet[2789]: E1107 00:01:06.177663 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 7 00:01:06.186049 containerd[1601]: time="2025-11-07T00:01:06.186002460Z" level=info msg="CreateContainer within sandbox \"cca5ce73f71a5b051d443293d193eddb1f5b194c5f865032efeadf4cb144e3af\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Nov 7 00:01:06.194724 containerd[1601]: time="2025-11-07T00:01:06.194163033Z" level=info msg="Container 0328447fd234a087cd4a413c0e01c6b4c6e0e4d7bbca6a8d23af56f8393ad8f5: CDI devices from CRI Config.CDIDevices: []"
Nov 7 00:01:06.204277 containerd[1601]: time="2025-11-07T00:01:06.204232922Z" level=info msg="CreateContainer within sandbox \"cca5ce73f71a5b051d443293d193eddb1f5b194c5f865032efeadf4cb144e3af\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"0328447fd234a087cd4a413c0e01c6b4c6e0e4d7bbca6a8d23af56f8393ad8f5\""
Nov 7 00:01:06.204762 containerd[1601]: time="2025-11-07T00:01:06.204686642Z" level=info msg="StartContainer for \"0328447fd234a087cd4a413c0e01c6b4c6e0e4d7bbca6a8d23af56f8393ad8f5\""
Nov 7 00:01:06.205739 containerd[1601]: time="2025-11-07T00:01:06.205710269Z" level=info msg="connecting to shim 0328447fd234a087cd4a413c0e01c6b4c6e0e4d7bbca6a8d23af56f8393ad8f5" address="unix:///run/containerd/s/de130534b5d9e53bce69a19dbb69355b19383771c74bba5166de225095bca524" protocol=ttrpc version=3
Nov 7 00:01:06.231350 systemd[1]: Started cri-containerd-0328447fd234a087cd4a413c0e01c6b4c6e0e4d7bbca6a8d23af56f8393ad8f5.scope - libcontainer container 0328447fd234a087cd4a413c0e01c6b4c6e0e4d7bbca6a8d23af56f8393ad8f5.
Nov 7 00:01:06.262336 containerd[1601]: time="2025-11-07T00:01:06.262280125Z" level=info msg="StartContainer for \"0328447fd234a087cd4a413c0e01c6b4c6e0e4d7bbca6a8d23af56f8393ad8f5\" returns successfully"
Nov 7 00:01:06.278466 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 7 00:01:06.278769 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Nov 7 00:01:06.278850 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Nov 7 00:01:06.281625 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 7 00:01:06.281896 systemd[1]: cri-containerd-0328447fd234a087cd4a413c0e01c6b4c6e0e4d7bbca6a8d23af56f8393ad8f5.scope: Deactivated successfully.
Nov 7 00:01:06.282774 containerd[1601]: time="2025-11-07T00:01:06.282736065Z" level=info msg="received exit event container_id:\"0328447fd234a087cd4a413c0e01c6b4c6e0e4d7bbca6a8d23af56f8393ad8f5\" id:\"0328447fd234a087cd4a413c0e01c6b4c6e0e4d7bbca6a8d23af56f8393ad8f5\" pid:3284 exited_at:{seconds:1762473666 nanos:282531102}" Nov 7 00:01:06.282945 containerd[1601]: time="2025-11-07T00:01:06.282894873Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0328447fd234a087cd4a413c0e01c6b4c6e0e4d7bbca6a8d23af56f8393ad8f5\" id:\"0328447fd234a087cd4a413c0e01c6b4c6e0e4d7bbca6a8d23af56f8393ad8f5\" pid:3284 exited_at:{seconds:1762473666 nanos:282531102}" Nov 7 00:01:06.317582 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 7 00:01:07.180735 kubelet[2789]: E1107 00:01:07.180697 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 7 00:01:07.301486 containerd[1601]: time="2025-11-07T00:01:07.301413639Z" level=info msg="CreateContainer within sandbox \"cca5ce73f71a5b051d443293d193eddb1f5b194c5f865032efeadf4cb144e3af\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Nov 7 00:01:08.040447 containerd[1601]: time="2025-11-07T00:01:08.040394116Z" level=info msg="Container d1d6046ad770ee2f9797a091594bf903a0c25cfe68a4bbdb663e85598ae1a3f1: CDI devices from CRI Config.CDIDevices: []" Nov 7 00:01:08.108136 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1216537985.mount: Deactivated successfully. 
Nov 7 00:01:08.852306 containerd[1601]: time="2025-11-07T00:01:08.852256526Z" level=info msg="CreateContainer within sandbox \"cca5ce73f71a5b051d443293d193eddb1f5b194c5f865032efeadf4cb144e3af\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"d1d6046ad770ee2f9797a091594bf903a0c25cfe68a4bbdb663e85598ae1a3f1\"" Nov 7 00:01:08.852685 containerd[1601]: time="2025-11-07T00:01:08.852626859Z" level=info msg="StartContainer for \"d1d6046ad770ee2f9797a091594bf903a0c25cfe68a4bbdb663e85598ae1a3f1\"" Nov 7 00:01:08.853961 containerd[1601]: time="2025-11-07T00:01:08.853937455Z" level=info msg="connecting to shim d1d6046ad770ee2f9797a091594bf903a0c25cfe68a4bbdb663e85598ae1a3f1" address="unix:///run/containerd/s/de130534b5d9e53bce69a19dbb69355b19383771c74bba5166de225095bca524" protocol=ttrpc version=3 Nov 7 00:01:08.873285 systemd[1]: Started cri-containerd-d1d6046ad770ee2f9797a091594bf903a0c25cfe68a4bbdb663e85598ae1a3f1.scope - libcontainer container d1d6046ad770ee2f9797a091594bf903a0c25cfe68a4bbdb663e85598ae1a3f1. Nov 7 00:01:08.921056 systemd[1]: cri-containerd-d1d6046ad770ee2f9797a091594bf903a0c25cfe68a4bbdb663e85598ae1a3f1.scope: Deactivated successfully. 
Nov 7 00:01:08.922096 containerd[1601]: time="2025-11-07T00:01:08.922044525Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d1d6046ad770ee2f9797a091594bf903a0c25cfe68a4bbdb663e85598ae1a3f1\" id:\"d1d6046ad770ee2f9797a091594bf903a0c25cfe68a4bbdb663e85598ae1a3f1\" pid:3336 exited_at:{seconds:1762473668 nanos:921765392}" Nov 7 00:01:09.142560 containerd[1601]: time="2025-11-07T00:01:09.142513379Z" level=info msg="received exit event container_id:\"d1d6046ad770ee2f9797a091594bf903a0c25cfe68a4bbdb663e85598ae1a3f1\" id:\"d1d6046ad770ee2f9797a091594bf903a0c25cfe68a4bbdb663e85598ae1a3f1\" pid:3336 exited_at:{seconds:1762473668 nanos:921765392}" Nov 7 00:01:09.144415 containerd[1601]: time="2025-11-07T00:01:09.144312690Z" level=info msg="StartContainer for \"d1d6046ad770ee2f9797a091594bf903a0c25cfe68a4bbdb663e85598ae1a3f1\" returns successfully" Nov 7 00:01:09.165258 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d1d6046ad770ee2f9797a091594bf903a0c25cfe68a4bbdb663e85598ae1a3f1-rootfs.mount: Deactivated successfully. 
Nov 7 00:01:09.186454 kubelet[2789]: E1107 00:01:09.186417 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 7 00:01:10.533879 kubelet[2789]: E1107 00:01:10.533818 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 7 00:01:10.629297 containerd[1601]: time="2025-11-07T00:01:10.629236431Z" level=info msg="CreateContainer within sandbox \"cca5ce73f71a5b051d443293d193eddb1f5b194c5f865032efeadf4cb144e3af\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Nov 7 00:01:10.760096 containerd[1601]: time="2025-11-07T00:01:10.760042522Z" level=info msg="Container 80fb78b175af579cd09d631fd80b678fcd458955b8cb150fca98df818ea89d1e: CDI devices from CRI Config.CDIDevices: []" Nov 7 00:01:10.763582 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount209502179.mount: Deactivated successfully. 
Nov 7 00:01:10.854120 containerd[1601]: time="2025-11-07T00:01:10.853960301Z" level=info msg="CreateContainer within sandbox \"cca5ce73f71a5b051d443293d193eddb1f5b194c5f865032efeadf4cb144e3af\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"80fb78b175af579cd09d631fd80b678fcd458955b8cb150fca98df818ea89d1e\"" Nov 7 00:01:10.854897 containerd[1601]: time="2025-11-07T00:01:10.854839908Z" level=info msg="StartContainer for \"80fb78b175af579cd09d631fd80b678fcd458955b8cb150fca98df818ea89d1e\"" Nov 7 00:01:10.856089 containerd[1601]: time="2025-11-07T00:01:10.856054002Z" level=info msg="connecting to shim 80fb78b175af579cd09d631fd80b678fcd458955b8cb150fca98df818ea89d1e" address="unix:///run/containerd/s/de130534b5d9e53bce69a19dbb69355b19383771c74bba5166de225095bca524" protocol=ttrpc version=3 Nov 7 00:01:10.860169 containerd[1601]: time="2025-11-07T00:01:10.859164790Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 7 00:01:10.873443 containerd[1601]: time="2025-11-07T00:01:10.873367666Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Nov 7 00:01:10.881399 systemd[1]: Started cri-containerd-80fb78b175af579cd09d631fd80b678fcd458955b8cb150fca98df818ea89d1e.scope - libcontainer container 80fb78b175af579cd09d631fd80b678fcd458955b8cb150fca98df818ea89d1e. 
Nov 7 00:01:10.884924 containerd[1601]: time="2025-11-07T00:01:10.884853403Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 7 00:01:10.885743 containerd[1601]: time="2025-11-07T00:01:10.885660745Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 5.587830413s" Nov 7 00:01:10.885743 containerd[1601]: time="2025-11-07T00:01:10.885739593Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Nov 7 00:01:10.896668 containerd[1601]: time="2025-11-07T00:01:10.896616910Z" level=info msg="CreateContainer within sandbox \"1d4f6a9f7068aaefcd276751e91d9c1b3061d295829eb70ed78f34ff85411c42\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Nov 7 00:01:10.914974 systemd[1]: cri-containerd-80fb78b175af579cd09d631fd80b678fcd458955b8cb150fca98df818ea89d1e.scope: Deactivated successfully. 
Nov 7 00:01:10.916050 containerd[1601]: time="2025-11-07T00:01:10.915692865Z" level=info msg="TaskExit event in podsandbox handler container_id:\"80fb78b175af579cd09d631fd80b678fcd458955b8cb150fca98df818ea89d1e\" id:\"80fb78b175af579cd09d631fd80b678fcd458955b8cb150fca98df818ea89d1e\" pid:3387 exited_at:{seconds:1762473670 nanos:915075008}" Nov 7 00:01:10.921283 containerd[1601]: time="2025-11-07T00:01:10.916014387Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf22b05e1_2865_4e95_8406_a34e7e1f0b4b.slice/cri-containerd-80fb78b175af579cd09d631fd80b678fcd458955b8cb150fca98df818ea89d1e.scope/memory.events\": no such file or directory" Nov 7 00:01:10.951109 containerd[1601]: time="2025-11-07T00:01:10.951043808Z" level=info msg="received exit event container_id:\"80fb78b175af579cd09d631fd80b678fcd458955b8cb150fca98df818ea89d1e\" id:\"80fb78b175af579cd09d631fd80b678fcd458955b8cb150fca98df818ea89d1e\" pid:3387 exited_at:{seconds:1762473670 nanos:915075008}" Nov 7 00:01:10.961666 containerd[1601]: time="2025-11-07T00:01:10.961612707Z" level=info msg="StartContainer for \"80fb78b175af579cd09d631fd80b678fcd458955b8cb150fca98df818ea89d1e\" returns successfully" Nov 7 00:01:10.966540 containerd[1601]: time="2025-11-07T00:01:10.966473213Z" level=info msg="Container d255f95226d880029c7190d0ab82fb22affde8802c6233cf98d51447ceecd708: CDI devices from CRI Config.CDIDevices: []" Nov 7 00:01:10.975411 containerd[1601]: time="2025-11-07T00:01:10.974958829Z" level=info msg="CreateContainer within sandbox \"1d4f6a9f7068aaefcd276751e91d9c1b3061d295829eb70ed78f34ff85411c42\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"d255f95226d880029c7190d0ab82fb22affde8802c6233cf98d51447ceecd708\"" Nov 7 00:01:10.976161 containerd[1601]: time="2025-11-07T00:01:10.976124012Z" level=info msg="StartContainer for 
\"d255f95226d880029c7190d0ab82fb22affde8802c6233cf98d51447ceecd708\"" Nov 7 00:01:10.982878 containerd[1601]: time="2025-11-07T00:01:10.982836447Z" level=info msg="connecting to shim d255f95226d880029c7190d0ab82fb22affde8802c6233cf98d51447ceecd708" address="unix:///run/containerd/s/559eae0cc61970a59cf1ae6d5e3f26417f2093a6a70dd24e3652c035e35d63af" protocol=ttrpc version=3 Nov 7 00:01:11.005295 systemd[1]: Started cri-containerd-d255f95226d880029c7190d0ab82fb22affde8802c6233cf98d51447ceecd708.scope - libcontainer container d255f95226d880029c7190d0ab82fb22affde8802c6233cf98d51447ceecd708. Nov 7 00:01:11.036899 containerd[1601]: time="2025-11-07T00:01:11.036835528Z" level=info msg="StartContainer for \"d255f95226d880029c7190d0ab82fb22affde8802c6233cf98d51447ceecd708\" returns successfully" Nov 7 00:01:11.197267 kubelet[2789]: E1107 00:01:11.197213 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 7 00:01:11.198730 kubelet[2789]: E1107 00:01:11.198698 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 7 00:01:11.204287 containerd[1601]: time="2025-11-07T00:01:11.204234797Z" level=info msg="CreateContainer within sandbox \"cca5ce73f71a5b051d443293d193eddb1f5b194c5f865032efeadf4cb144e3af\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Nov 7 00:01:11.214030 containerd[1601]: time="2025-11-07T00:01:11.213971148Z" level=info msg="Container 776400762199924c43bda12f2613906c29e3f5ac407545eec4529a8222738e25: CDI devices from CRI Config.CDIDevices: []" Nov 7 00:01:11.220841 containerd[1601]: time="2025-11-07T00:01:11.220793659Z" level=info msg="CreateContainer within sandbox \"cca5ce73f71a5b051d443293d193eddb1f5b194c5f865032efeadf4cb144e3af\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns 
container id \"776400762199924c43bda12f2613906c29e3f5ac407545eec4529a8222738e25\"" Nov 7 00:01:11.221847 containerd[1601]: time="2025-11-07T00:01:11.221810034Z" level=info msg="StartContainer for \"776400762199924c43bda12f2613906c29e3f5ac407545eec4529a8222738e25\"" Nov 7 00:01:11.223279 containerd[1601]: time="2025-11-07T00:01:11.223210316Z" level=info msg="connecting to shim 776400762199924c43bda12f2613906c29e3f5ac407545eec4529a8222738e25" address="unix:///run/containerd/s/de130534b5d9e53bce69a19dbb69355b19383771c74bba5166de225095bca524" protocol=ttrpc version=3 Nov 7 00:01:11.248325 systemd[1]: Started cri-containerd-776400762199924c43bda12f2613906c29e3f5ac407545eec4529a8222738e25.scope - libcontainer container 776400762199924c43bda12f2613906c29e3f5ac407545eec4529a8222738e25. Nov 7 00:01:11.285339 containerd[1601]: time="2025-11-07T00:01:11.285292013Z" level=info msg="StartContainer for \"776400762199924c43bda12f2613906c29e3f5ac407545eec4529a8222738e25\" returns successfully" Nov 7 00:01:11.384418 containerd[1601]: time="2025-11-07T00:01:11.384366461Z" level=info msg="TaskExit event in podsandbox handler container_id:\"776400762199924c43bda12f2613906c29e3f5ac407545eec4529a8222738e25\" id:\"eb52731c22ac2f5fd0fb541398d16cecf4ee6952153056fe4dcb9546ef3b5e82\" pid:3491 exited_at:{seconds:1762473671 nanos:383855133}" Nov 7 00:01:11.453396 kubelet[2789]: I1107 00:01:11.453286 2789 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Nov 7 00:01:11.510347 kubelet[2789]: I1107 00:01:11.510275 2789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-m7zjd" podStartSLOduration=2.11568252 podStartE2EDuration="21.510254863s" podCreationTimestamp="2025-11-07 00:00:50 +0000 UTC" firstStartedPulling="2025-11-07 00:00:51.491885766 +0000 UTC m=+9.471644026" lastFinishedPulling="2025-11-07 00:01:10.886458099 +0000 UTC m=+28.866216369" observedRunningTime="2025-11-07 00:01:11.221549936 +0000 UTC 
m=+29.201308206" watchObservedRunningTime="2025-11-07 00:01:11.510254863 +0000 UTC m=+29.490013133" Nov 7 00:01:11.547709 systemd[1]: Created slice kubepods-burstable-pod2027cce2_4723_4073_8962_d2b8a64267a4.slice - libcontainer container kubepods-burstable-pod2027cce2_4723_4073_8962_d2b8a64267a4.slice. Nov 7 00:01:11.558250 systemd[1]: Created slice kubepods-burstable-pod6d36a104_6fb9_4a05_995d_9f6d69737d51.slice - libcontainer container kubepods-burstable-pod6d36a104_6fb9_4a05_995d_9f6d69737d51.slice. Nov 7 00:01:11.651962 kubelet[2789]: I1107 00:01:11.651882 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ld9j\" (UniqueName: \"kubernetes.io/projected/6d36a104-6fb9-4a05-995d-9f6d69737d51-kube-api-access-7ld9j\") pod \"coredns-674b8bbfcf-k7788\" (UID: \"6d36a104-6fb9-4a05-995d-9f6d69737d51\") " pod="kube-system/coredns-674b8bbfcf-k7788" Nov 7 00:01:11.651962 kubelet[2789]: I1107 00:01:11.651950 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6plc5\" (UniqueName: \"kubernetes.io/projected/2027cce2-4723-4073-8962-d2b8a64267a4-kube-api-access-6plc5\") pod \"coredns-674b8bbfcf-p85kl\" (UID: \"2027cce2-4723-4073-8962-d2b8a64267a4\") " pod="kube-system/coredns-674b8bbfcf-p85kl" Nov 7 00:01:11.651962 kubelet[2789]: I1107 00:01:11.651968 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6d36a104-6fb9-4a05-995d-9f6d69737d51-config-volume\") pod \"coredns-674b8bbfcf-k7788\" (UID: \"6d36a104-6fb9-4a05-995d-9f6d69737d51\") " pod="kube-system/coredns-674b8bbfcf-k7788" Nov 7 00:01:11.652607 kubelet[2789]: I1107 00:01:11.651984 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2027cce2-4723-4073-8962-d2b8a64267a4-config-volume\") 
pod \"coredns-674b8bbfcf-p85kl\" (UID: \"2027cce2-4723-4073-8962-d2b8a64267a4\") " pod="kube-system/coredns-674b8bbfcf-p85kl" Nov 7 00:01:11.764732 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-80fb78b175af579cd09d631fd80b678fcd458955b8cb150fca98df818ea89d1e-rootfs.mount: Deactivated successfully. Nov 7 00:01:11.851956 kubelet[2789]: E1107 00:01:11.851813 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 7 00:01:11.852618 containerd[1601]: time="2025-11-07T00:01:11.852576527Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-p85kl,Uid:2027cce2-4723-4073-8962-d2b8a64267a4,Namespace:kube-system,Attempt:0,}" Nov 7 00:01:11.863419 kubelet[2789]: E1107 00:01:11.863385 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 7 00:01:11.863788 containerd[1601]: time="2025-11-07T00:01:11.863751403Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-k7788,Uid:6d36a104-6fb9-4a05-995d-9f6d69737d51,Namespace:kube-system,Attempt:0,}" Nov 7 00:01:12.205413 kubelet[2789]: E1107 00:01:12.205234 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 7 00:01:12.208233 kubelet[2789]: E1107 00:01:12.207598 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 7 00:01:13.206964 kubelet[2789]: E1107 00:01:13.206922 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 7 00:01:14.097019 
systemd-networkd[1517]: cilium_host: Link UP Nov 7 00:01:14.097219 systemd-networkd[1517]: cilium_net: Link UP Nov 7 00:01:14.097403 systemd-networkd[1517]: cilium_net: Gained carrier Nov 7 00:01:14.097575 systemd-networkd[1517]: cilium_host: Gained carrier Nov 7 00:01:14.209229 kubelet[2789]: E1107 00:01:14.208995 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 7 00:01:14.218453 systemd-networkd[1517]: cilium_vxlan: Link UP Nov 7 00:01:14.218463 systemd-networkd[1517]: cilium_vxlan: Gained carrier Nov 7 00:01:14.437181 kernel: NET: Registered PF_ALG protocol family Nov 7 00:01:14.565405 systemd-networkd[1517]: cilium_net: Gained IPv6LL Nov 7 00:01:14.661388 systemd-networkd[1517]: cilium_host: Gained IPv6LL Nov 7 00:01:15.111528 systemd-networkd[1517]: lxc_health: Link UP Nov 7 00:01:15.112725 systemd-networkd[1517]: lxc_health: Gained carrier Nov 7 00:01:15.216303 systemd-networkd[1517]: lxc1b4bf2d91a07: Link UP Nov 7 00:01:15.231195 kernel: eth0: renamed from tmpe03f9 Nov 7 00:01:15.233995 systemd-networkd[1517]: lxc1b4bf2d91a07: Gained carrier Nov 7 00:01:15.661846 systemd-networkd[1517]: lxca9c8c71df882: Link UP Nov 7 00:01:15.673216 kernel: eth0: renamed from tmp4b4c8 Nov 7 00:01:15.675751 systemd-networkd[1517]: lxca9c8c71df882: Gained carrier Nov 7 00:01:16.069477 systemd-networkd[1517]: cilium_vxlan: Gained IPv6LL Nov 7 00:01:16.271971 kubelet[2789]: E1107 00:01:16.271927 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 7 00:01:16.325339 systemd-networkd[1517]: lxc_health: Gained IPv6LL Nov 7 00:01:16.558656 kubelet[2789]: I1107 00:01:16.558571 2789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-zkz7l" podStartSLOduration=13.59981718 
podStartE2EDuration="27.558553256s" podCreationTimestamp="2025-11-07 00:00:49 +0000 UTC" firstStartedPulling="2025-11-07 00:00:51.338864847 +0000 UTC m=+9.318623117" lastFinishedPulling="2025-11-07 00:01:05.297600933 +0000 UTC m=+23.277359193" observedRunningTime="2025-11-07 00:01:12.226689912 +0000 UTC m=+30.206448182" watchObservedRunningTime="2025-11-07 00:01:16.558553256 +0000 UTC m=+34.538311526" Nov 7 00:01:16.581426 systemd-networkd[1517]: lxc1b4bf2d91a07: Gained IPv6LL Nov 7 00:01:17.158372 systemd-networkd[1517]: lxca9c8c71df882: Gained IPv6LL Nov 7 00:01:17.214015 kubelet[2789]: E1107 00:01:17.213978 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 7 00:01:18.216188 kubelet[2789]: E1107 00:01:18.215789 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 7 00:01:18.386624 systemd[1]: Started sshd@9-10.0.0.46:22-10.0.0.1:36800.service - OpenSSH per-connection server daemon (10.0.0.1:36800). Nov 7 00:01:18.454111 sshd[3965]: Accepted publickey for core from 10.0.0.1 port 36800 ssh2: RSA SHA256:byrlmJ17egF8Tfblhd3C23XmFA3LkMTDn1Cz5Op8b3A Nov 7 00:01:18.456078 sshd-session[3965]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 7 00:01:18.461298 systemd-logind[1583]: New session 10 of user core. Nov 7 00:01:18.471407 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 7 00:01:18.683428 sshd[3968]: Connection closed by 10.0.0.1 port 36800 Nov 7 00:01:18.683720 sshd-session[3965]: pam_unix(sshd:session): session closed for user core Nov 7 00:01:18.687835 systemd[1]: sshd@9-10.0.0.46:22-10.0.0.1:36800.service: Deactivated successfully. Nov 7 00:01:18.689837 systemd[1]: session-10.scope: Deactivated successfully. 
Nov 7 00:01:18.690676 systemd-logind[1583]: Session 10 logged out. Waiting for processes to exit. Nov 7 00:01:18.691881 systemd-logind[1583]: Removed session 10. Nov 7 00:01:18.740059 containerd[1601]: time="2025-11-07T00:01:18.739889696Z" level=info msg="connecting to shim e03f9e818740bc43c1d533b58015e5bdc4cc8ec857c778e446ec0aad040a2aec" address="unix:///run/containerd/s/c6c55a2d3e0e176b9dd98b60cecf1d56ff3023959556bdd32551a3e0118e6bb1" namespace=k8s.io protocol=ttrpc version=3 Nov 7 00:01:18.752336 containerd[1601]: time="2025-11-07T00:01:18.752281598Z" level=info msg="connecting to shim 4b4c868f2e19def67cad6ec9545f2598d286a91bc794b41b534a6b500f5c0438" address="unix:///run/containerd/s/4474f4c28bcaea647ac46e78c43fd2ce6a9041e535c5ed27492c426dd0c3d828" namespace=k8s.io protocol=ttrpc version=3 Nov 7 00:01:18.774281 systemd[1]: Started cri-containerd-e03f9e818740bc43c1d533b58015e5bdc4cc8ec857c778e446ec0aad040a2aec.scope - libcontainer container e03f9e818740bc43c1d533b58015e5bdc4cc8ec857c778e446ec0aad040a2aec. Nov 7 00:01:18.778255 systemd[1]: Started cri-containerd-4b4c868f2e19def67cad6ec9545f2598d286a91bc794b41b534a6b500f5c0438.scope - libcontainer container 4b4c868f2e19def67cad6ec9545f2598d286a91bc794b41b534a6b500f5c0438. 
Nov 7 00:01:18.790694 systemd-resolved[1300]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 7 00:01:18.794344 systemd-resolved[1300]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 7 00:01:18.823880 containerd[1601]: time="2025-11-07T00:01:18.823828778Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-k7788,Uid:6d36a104-6fb9-4a05-995d-9f6d69737d51,Namespace:kube-system,Attempt:0,} returns sandbox id \"e03f9e818740bc43c1d533b58015e5bdc4cc8ec857c778e446ec0aad040a2aec\"" Nov 7 00:01:18.825309 kubelet[2789]: E1107 00:01:18.825273 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 7 00:01:18.829901 containerd[1601]: time="2025-11-07T00:01:18.829857024Z" level=info msg="CreateContainer within sandbox \"e03f9e818740bc43c1d533b58015e5bdc4cc8ec857c778e446ec0aad040a2aec\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 7 00:01:18.839927 containerd[1601]: time="2025-11-07T00:01:18.839430595Z" level=info msg="Container d7c727a3dc65c74e864f8271fb87047996039399b956407da17f45055591eee0: CDI devices from CRI Config.CDIDevices: []" Nov 7 00:01:18.839927 containerd[1601]: time="2025-11-07T00:01:18.839750125Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-p85kl,Uid:2027cce2-4723-4073-8962-d2b8a64267a4,Namespace:kube-system,Attempt:0,} returns sandbox id \"4b4c868f2e19def67cad6ec9545f2598d286a91bc794b41b534a6b500f5c0438\"" Nov 7 00:01:18.841489 kubelet[2789]: E1107 00:01:18.841457 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 7 00:01:18.846492 containerd[1601]: time="2025-11-07T00:01:18.846018611Z" level=info msg="CreateContainer within 
sandbox \"4b4c868f2e19def67cad6ec9545f2598d286a91bc794b41b534a6b500f5c0438\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 7 00:01:18.847964 containerd[1601]: time="2025-11-07T00:01:18.847928350Z" level=info msg="CreateContainer within sandbox \"e03f9e818740bc43c1d533b58015e5bdc4cc8ec857c778e446ec0aad040a2aec\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d7c727a3dc65c74e864f8271fb87047996039399b956407da17f45055591eee0\"" Nov 7 00:01:18.848555 containerd[1601]: time="2025-11-07T00:01:18.848525209Z" level=info msg="StartContainer for \"d7c727a3dc65c74e864f8271fb87047996039399b956407da17f45055591eee0\"" Nov 7 00:01:18.849658 containerd[1601]: time="2025-11-07T00:01:18.849633446Z" level=info msg="connecting to shim d7c727a3dc65c74e864f8271fb87047996039399b956407da17f45055591eee0" address="unix:///run/containerd/s/c6c55a2d3e0e176b9dd98b60cecf1d56ff3023959556bdd32551a3e0118e6bb1" protocol=ttrpc version=3 Nov 7 00:01:18.855910 containerd[1601]: time="2025-11-07T00:01:18.855854684Z" level=info msg="Container 433fa4afcbdfe63f5f426b2b99a611a56d015cb4d4b0692bf1ff8a646fec532f: CDI devices from CRI Config.CDIDevices: []" Nov 7 00:01:18.863051 containerd[1601]: time="2025-11-07T00:01:18.863000625Z" level=info msg="CreateContainer within sandbox \"4b4c868f2e19def67cad6ec9545f2598d286a91bc794b41b534a6b500f5c0438\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"433fa4afcbdfe63f5f426b2b99a611a56d015cb4d4b0692bf1ff8a646fec532f\"" Nov 7 00:01:18.863956 containerd[1601]: time="2025-11-07T00:01:18.863908557Z" level=info msg="StartContainer for \"433fa4afcbdfe63f5f426b2b99a611a56d015cb4d4b0692bf1ff8a646fec532f\"" Nov 7 00:01:18.865135 containerd[1601]: time="2025-11-07T00:01:18.865090191Z" level=info msg="connecting to shim 433fa4afcbdfe63f5f426b2b99a611a56d015cb4d4b0692bf1ff8a646fec532f" address="unix:///run/containerd/s/4474f4c28bcaea647ac46e78c43fd2ce6a9041e535c5ed27492c426dd0c3d828" protocol=ttrpc version=3 Nov 7 
00:01:18.871342 systemd[1]: Started cri-containerd-d7c727a3dc65c74e864f8271fb87047996039399b956407da17f45055591eee0.scope - libcontainer container d7c727a3dc65c74e864f8271fb87047996039399b956407da17f45055591eee0. Nov 7 00:01:18.893594 systemd[1]: Started cri-containerd-433fa4afcbdfe63f5f426b2b99a611a56d015cb4d4b0692bf1ff8a646fec532f.scope - libcontainer container 433fa4afcbdfe63f5f426b2b99a611a56d015cb4d4b0692bf1ff8a646fec532f. Nov 7 00:01:18.924315 containerd[1601]: time="2025-11-07T00:01:18.924266207Z" level=info msg="StartContainer for \"d7c727a3dc65c74e864f8271fb87047996039399b956407da17f45055591eee0\" returns successfully" Nov 7 00:01:18.933497 containerd[1601]: time="2025-11-07T00:01:18.933421183Z" level=info msg="StartContainer for \"433fa4afcbdfe63f5f426b2b99a611a56d015cb4d4b0692bf1ff8a646fec532f\" returns successfully" Nov 7 00:01:19.221453 kubelet[2789]: E1107 00:01:19.221366 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 7 00:01:19.223784 kubelet[2789]: E1107 00:01:19.223667 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 7 00:01:19.234593 kubelet[2789]: I1107 00:01:19.234522 2789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-p85kl" podStartSLOduration=29.234502374 podStartE2EDuration="29.234502374s" podCreationTimestamp="2025-11-07 00:00:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-07 00:01:19.233584764 +0000 UTC m=+37.213343034" watchObservedRunningTime="2025-11-07 00:01:19.234502374 +0000 UTC m=+37.214260644" Nov 7 00:01:20.236286 kubelet[2789]: E1107 00:01:20.236248 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 7 00:01:20.236766 kubelet[2789]: E1107 00:01:20.236451 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 7 00:01:21.237897 kubelet[2789]: E1107 00:01:21.237861 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 7 00:01:21.238459 kubelet[2789]: E1107 00:01:21.238010 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 7 00:01:23.703688 systemd[1]: Started sshd@10-10.0.0.46:22-10.0.0.1:40032.service - OpenSSH per-connection server daemon (10.0.0.1:40032). Nov 7 00:01:23.769745 sshd[4160]: Accepted publickey for core from 10.0.0.1 port 40032 ssh2: RSA SHA256:byrlmJ17egF8Tfblhd3C23XmFA3LkMTDn1Cz5Op8b3A Nov 7 00:01:23.771420 sshd-session[4160]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 7 00:01:23.776431 systemd-logind[1583]: New session 11 of user core. Nov 7 00:01:23.787310 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 7 00:01:23.918085 sshd[4163]: Connection closed by 10.0.0.1 port 40032 Nov 7 00:01:23.918418 sshd-session[4160]: pam_unix(sshd:session): session closed for user core Nov 7 00:01:23.922670 systemd[1]: sshd@10-10.0.0.46:22-10.0.0.1:40032.service: Deactivated successfully. Nov 7 00:01:23.924725 systemd[1]: session-11.scope: Deactivated successfully. Nov 7 00:01:23.925685 systemd-logind[1583]: Session 11 logged out. Waiting for processes to exit. Nov 7 00:01:23.926883 systemd-logind[1583]: Removed session 11. 
Nov 7 00:01:28.935317 systemd[1]: Started sshd@11-10.0.0.46:22-10.0.0.1:40036.service - OpenSSH per-connection server daemon (10.0.0.1:40036). Nov 7 00:01:28.998356 sshd[4177]: Accepted publickey for core from 10.0.0.1 port 40036 ssh2: RSA SHA256:byrlmJ17egF8Tfblhd3C23XmFA3LkMTDn1Cz5Op8b3A Nov 7 00:01:29.000250 sshd-session[4177]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 7 00:01:29.005523 systemd-logind[1583]: New session 12 of user core. Nov 7 00:01:29.016377 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 7 00:01:29.134058 sshd[4180]: Connection closed by 10.0.0.1 port 40036 Nov 7 00:01:29.134431 sshd-session[4177]: pam_unix(sshd:session): session closed for user core Nov 7 00:01:29.140052 systemd[1]: sshd@11-10.0.0.46:22-10.0.0.1:40036.service: Deactivated successfully. Nov 7 00:01:29.142480 systemd[1]: session-12.scope: Deactivated successfully. Nov 7 00:01:29.143550 systemd-logind[1583]: Session 12 logged out. Waiting for processes to exit. Nov 7 00:01:29.145115 systemd-logind[1583]: Removed session 12. Nov 7 00:01:34.149755 systemd[1]: Started sshd@12-10.0.0.46:22-10.0.0.1:56254.service - OpenSSH per-connection server daemon (10.0.0.1:56254). Nov 7 00:01:34.208338 sshd[4195]: Accepted publickey for core from 10.0.0.1 port 56254 ssh2: RSA SHA256:byrlmJ17egF8Tfblhd3C23XmFA3LkMTDn1Cz5Op8b3A Nov 7 00:01:34.209659 sshd-session[4195]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 7 00:01:34.214008 systemd-logind[1583]: New session 13 of user core. Nov 7 00:01:34.227259 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 7 00:01:34.342442 sshd[4198]: Connection closed by 10.0.0.1 port 56254 Nov 7 00:01:34.342727 sshd-session[4195]: pam_unix(sshd:session): session closed for user core Nov 7 00:01:34.352751 systemd[1]: sshd@12-10.0.0.46:22-10.0.0.1:56254.service: Deactivated successfully. 
Nov 7 00:01:34.354700 systemd[1]: session-13.scope: Deactivated successfully.
Nov 7 00:01:34.355473 systemd-logind[1583]: Session 13 logged out. Waiting for processes to exit.
Nov 7 00:01:34.357922 systemd[1]: Started sshd@13-10.0.0.46:22-10.0.0.1:56260.service - OpenSSH per-connection server daemon (10.0.0.1:56260).
Nov 7 00:01:34.358892 systemd-logind[1583]: Removed session 13.
Nov 7 00:01:34.413020 sshd[4212]: Accepted publickey for core from 10.0.0.1 port 56260 ssh2: RSA SHA256:byrlmJ17egF8Tfblhd3C23XmFA3LkMTDn1Cz5Op8b3A
Nov 7 00:01:34.414350 sshd-session[4212]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 7 00:01:34.418776 systemd-logind[1583]: New session 14 of user core.
Nov 7 00:01:34.428355 systemd[1]: Started session-14.scope - Session 14 of User core.
Nov 7 00:01:34.616207 sshd[4215]: Connection closed by 10.0.0.1 port 56260
Nov 7 00:01:34.614971 sshd-session[4212]: pam_unix(sshd:session): session closed for user core
Nov 7 00:01:34.628642 systemd[1]: sshd@13-10.0.0.46:22-10.0.0.1:56260.service: Deactivated successfully.
Nov 7 00:01:34.631907 systemd[1]: session-14.scope: Deactivated successfully.
Nov 7 00:01:34.633692 systemd-logind[1583]: Session 14 logged out. Waiting for processes to exit.
Nov 7 00:01:34.636604 systemd-logind[1583]: Removed session 14.
Nov 7 00:01:34.638131 systemd[1]: Started sshd@14-10.0.0.46:22-10.0.0.1:56270.service - OpenSSH per-connection server daemon (10.0.0.1:56270).
Nov 7 00:01:34.692356 sshd[4226]: Accepted publickey for core from 10.0.0.1 port 56270 ssh2: RSA SHA256:byrlmJ17egF8Tfblhd3C23XmFA3LkMTDn1Cz5Op8b3A
Nov 7 00:01:34.693574 sshd-session[4226]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 7 00:01:34.698450 systemd-logind[1583]: New session 15 of user core.
Nov 7 00:01:34.709262 systemd[1]: Started session-15.scope - Session 15 of User core.
Nov 7 00:01:34.828562 sshd[4229]: Connection closed by 10.0.0.1 port 56270
Nov 7 00:01:34.828853 sshd-session[4226]: pam_unix(sshd:session): session closed for user core
Nov 7 00:01:34.832160 systemd[1]: sshd@14-10.0.0.46:22-10.0.0.1:56270.service: Deactivated successfully.
Nov 7 00:01:34.834045 systemd[1]: session-15.scope: Deactivated successfully.
Nov 7 00:01:34.836072 systemd-logind[1583]: Session 15 logged out. Waiting for processes to exit.
Nov 7 00:01:34.836980 systemd-logind[1583]: Removed session 15.
Nov 7 00:01:39.842027 systemd[1]: Started sshd@15-10.0.0.46:22-10.0.0.1:56276.service - OpenSSH per-connection server daemon (10.0.0.1:56276).
Nov 7 00:01:39.895043 sshd[4243]: Accepted publickey for core from 10.0.0.1 port 56276 ssh2: RSA SHA256:byrlmJ17egF8Tfblhd3C23XmFA3LkMTDn1Cz5Op8b3A
Nov 7 00:01:39.896692 sshd-session[4243]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 7 00:01:39.901023 systemd-logind[1583]: New session 16 of user core.
Nov 7 00:01:39.912291 systemd[1]: Started session-16.scope - Session 16 of User core.
Nov 7 00:01:40.016997 sshd[4246]: Connection closed by 10.0.0.1 port 56276
Nov 7 00:01:40.017318 sshd-session[4243]: pam_unix(sshd:session): session closed for user core
Nov 7 00:01:40.021455 systemd[1]: sshd@15-10.0.0.46:22-10.0.0.1:56276.service: Deactivated successfully.
Nov 7 00:01:40.023568 systemd[1]: session-16.scope: Deactivated successfully.
Nov 7 00:01:40.024554 systemd-logind[1583]: Session 16 logged out. Waiting for processes to exit.
Nov 7 00:01:40.025679 systemd-logind[1583]: Removed session 16.
Nov 7 00:01:45.030198 systemd[1]: Started sshd@16-10.0.0.46:22-10.0.0.1:42204.service - OpenSSH per-connection server daemon (10.0.0.1:42204).
Nov 7 00:01:45.086679 sshd[4262]: Accepted publickey for core from 10.0.0.1 port 42204 ssh2: RSA SHA256:byrlmJ17egF8Tfblhd3C23XmFA3LkMTDn1Cz5Op8b3A
Nov 7 00:01:45.088237 sshd-session[4262]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 7 00:01:45.092564 systemd-logind[1583]: New session 17 of user core.
Nov 7 00:01:45.100321 systemd[1]: Started session-17.scope - Session 17 of User core.
Nov 7 00:01:45.216264 sshd[4265]: Connection closed by 10.0.0.1 port 42204
Nov 7 00:01:45.216621 sshd-session[4262]: pam_unix(sshd:session): session closed for user core
Nov 7 00:01:45.230113 systemd[1]: sshd@16-10.0.0.46:22-10.0.0.1:42204.service: Deactivated successfully.
Nov 7 00:01:45.232211 systemd[1]: session-17.scope: Deactivated successfully.
Nov 7 00:01:45.233019 systemd-logind[1583]: Session 17 logged out. Waiting for processes to exit.
Nov 7 00:01:45.236113 systemd[1]: Started sshd@17-10.0.0.46:22-10.0.0.1:42218.service - OpenSSH per-connection server daemon (10.0.0.1:42218).
Nov 7 00:01:45.236916 systemd-logind[1583]: Removed session 17.
Nov 7 00:01:45.296273 sshd[4278]: Accepted publickey for core from 10.0.0.1 port 42218 ssh2: RSA SHA256:byrlmJ17egF8Tfblhd3C23XmFA3LkMTDn1Cz5Op8b3A
Nov 7 00:01:45.297974 sshd-session[4278]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 7 00:01:45.302767 systemd-logind[1583]: New session 18 of user core.
Nov 7 00:01:45.313281 systemd[1]: Started session-18.scope - Session 18 of User core.
Nov 7 00:01:45.502210 sshd[4282]: Connection closed by 10.0.0.1 port 42218
Nov 7 00:01:45.502652 sshd-session[4278]: pam_unix(sshd:session): session closed for user core
Nov 7 00:01:45.516870 systemd[1]: sshd@17-10.0.0.46:22-10.0.0.1:42218.service: Deactivated successfully.
Nov 7 00:01:45.518840 systemd[1]: session-18.scope: Deactivated successfully.
Nov 7 00:01:45.519770 systemd-logind[1583]: Session 18 logged out. Waiting for processes to exit.
Nov 7 00:01:45.523969 systemd[1]: Started sshd@18-10.0.0.46:22-10.0.0.1:42224.service - OpenSSH per-connection server daemon (10.0.0.1:42224).
Nov 7 00:01:45.524672 systemd-logind[1583]: Removed session 18.
Nov 7 00:01:45.579852 sshd[4294]: Accepted publickey for core from 10.0.0.1 port 42224 ssh2: RSA SHA256:byrlmJ17egF8Tfblhd3C23XmFA3LkMTDn1Cz5Op8b3A
Nov 7 00:01:45.581112 sshd-session[4294]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 7 00:01:45.585764 systemd-logind[1583]: New session 19 of user core.
Nov 7 00:01:45.600327 systemd[1]: Started session-19.scope - Session 19 of User core.
Nov 7 00:01:46.227341 sshd[4297]: Connection closed by 10.0.0.1 port 42224
Nov 7 00:01:46.227244 sshd-session[4294]: pam_unix(sshd:session): session closed for user core
Nov 7 00:01:46.238253 systemd[1]: sshd@18-10.0.0.46:22-10.0.0.1:42224.service: Deactivated successfully.
Nov 7 00:01:46.240542 systemd[1]: session-19.scope: Deactivated successfully.
Nov 7 00:01:46.241612 systemd-logind[1583]: Session 19 logged out. Waiting for processes to exit.
Nov 7 00:01:46.245988 systemd[1]: Started sshd@19-10.0.0.46:22-10.0.0.1:42240.service - OpenSSH per-connection server daemon (10.0.0.1:42240).
Nov 7 00:01:46.246935 systemd-logind[1583]: Removed session 19.
Nov 7 00:01:46.302829 sshd[4317]: Accepted publickey for core from 10.0.0.1 port 42240 ssh2: RSA SHA256:byrlmJ17egF8Tfblhd3C23XmFA3LkMTDn1Cz5Op8b3A
Nov 7 00:01:46.304468 sshd-session[4317]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 7 00:01:46.310430 systemd-logind[1583]: New session 20 of user core.
Nov 7 00:01:46.321341 systemd[1]: Started session-20.scope - Session 20 of User core.
Nov 7 00:01:46.589704 sshd[4320]: Connection closed by 10.0.0.1 port 42240
Nov 7 00:01:46.590361 sshd-session[4317]: pam_unix(sshd:session): session closed for user core
Nov 7 00:01:46.600546 systemd[1]: sshd@19-10.0.0.46:22-10.0.0.1:42240.service: Deactivated successfully.
Nov 7 00:01:46.602618 systemd[1]: session-20.scope: Deactivated successfully.
Nov 7 00:01:46.604241 systemd-logind[1583]: Session 20 logged out. Waiting for processes to exit.
Nov 7 00:01:46.606569 systemd[1]: Started sshd@20-10.0.0.46:22-10.0.0.1:42250.service - OpenSSH per-connection server daemon (10.0.0.1:42250).
Nov 7 00:01:46.607263 systemd-logind[1583]: Removed session 20.
Nov 7 00:01:46.660085 sshd[4331]: Accepted publickey for core from 10.0.0.1 port 42250 ssh2: RSA SHA256:byrlmJ17egF8Tfblhd3C23XmFA3LkMTDn1Cz5Op8b3A
Nov 7 00:01:46.661480 sshd-session[4331]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 7 00:01:46.666079 systemd-logind[1583]: New session 21 of user core.
Nov 7 00:01:46.681281 systemd[1]: Started session-21.scope - Session 21 of User core.
Nov 7 00:01:46.785535 sshd[4334]: Connection closed by 10.0.0.1 port 42250
Nov 7 00:01:46.785866 sshd-session[4331]: pam_unix(sshd:session): session closed for user core
Nov 7 00:01:46.790070 systemd[1]: sshd@20-10.0.0.46:22-10.0.0.1:42250.service: Deactivated successfully.
Nov 7 00:01:46.792282 systemd[1]: session-21.scope: Deactivated successfully.
Nov 7 00:01:46.793822 systemd-logind[1583]: Session 21 logged out. Waiting for processes to exit.
Nov 7 00:01:46.795293 systemd-logind[1583]: Removed session 21.
Nov 7 00:01:51.802769 systemd[1]: Started sshd@21-10.0.0.46:22-10.0.0.1:42258.service - OpenSSH per-connection server daemon (10.0.0.1:42258).
Nov 7 00:01:51.864446 sshd[4352]: Accepted publickey for core from 10.0.0.1 port 42258 ssh2: RSA SHA256:byrlmJ17egF8Tfblhd3C23XmFA3LkMTDn1Cz5Op8b3A
Nov 7 00:01:51.865827 sshd-session[4352]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 7 00:01:51.870049 systemd-logind[1583]: New session 22 of user core.
Nov 7 00:01:51.886314 systemd[1]: Started session-22.scope - Session 22 of User core.
Nov 7 00:01:51.997475 sshd[4355]: Connection closed by 10.0.0.1 port 42258
Nov 7 00:01:51.997769 sshd-session[4352]: pam_unix(sshd:session): session closed for user core
Nov 7 00:01:52.002861 systemd[1]: sshd@21-10.0.0.46:22-10.0.0.1:42258.service: Deactivated successfully.
Nov 7 00:01:52.005282 systemd[1]: session-22.scope: Deactivated successfully.
Nov 7 00:01:52.006163 systemd-logind[1583]: Session 22 logged out. Waiting for processes to exit.
Nov 7 00:01:52.008003 systemd-logind[1583]: Removed session 22.
Nov 7 00:01:55.117860 kubelet[2789]: E1107 00:01:55.117808 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 7 00:01:57.012714 systemd[1]: Started sshd@22-10.0.0.46:22-10.0.0.1:37690.service - OpenSSH per-connection server daemon (10.0.0.1:37690).
Nov 7 00:01:57.057553 sshd[4368]: Accepted publickey for core from 10.0.0.1 port 37690 ssh2: RSA SHA256:byrlmJ17egF8Tfblhd3C23XmFA3LkMTDn1Cz5Op8b3A
Nov 7 00:01:57.059054 sshd-session[4368]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 7 00:01:57.063265 systemd-logind[1583]: New session 23 of user core.
Nov 7 00:01:57.074362 systemd[1]: Started session-23.scope - Session 23 of User core.
Nov 7 00:01:57.181097 sshd[4371]: Connection closed by 10.0.0.1 port 37690
Nov 7 00:01:57.181437 sshd-session[4368]: pam_unix(sshd:session): session closed for user core
Nov 7 00:01:57.185676 systemd[1]: sshd@22-10.0.0.46:22-10.0.0.1:37690.service: Deactivated successfully.
Nov 7 00:01:57.187725 systemd[1]: session-23.scope: Deactivated successfully.
Nov 7 00:01:57.188554 systemd-logind[1583]: Session 23 logged out. Waiting for processes to exit.
Nov 7 00:01:57.189584 systemd-logind[1583]: Removed session 23.
Nov 7 00:01:58.118526 kubelet[2789]: E1107 00:01:58.118477 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 7 00:02:02.197318 systemd[1]: Started sshd@23-10.0.0.46:22-10.0.0.1:37692.service - OpenSSH per-connection server daemon (10.0.0.1:37692).
Nov 7 00:02:02.242757 sshd[4384]: Accepted publickey for core from 10.0.0.1 port 37692 ssh2: RSA SHA256:byrlmJ17egF8Tfblhd3C23XmFA3LkMTDn1Cz5Op8b3A
Nov 7 00:02:02.244474 sshd-session[4384]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 7 00:02:02.249075 systemd-logind[1583]: New session 24 of user core.
Nov 7 00:02:02.263357 systemd[1]: Started session-24.scope - Session 24 of User core.
Nov 7 00:02:02.381508 sshd[4387]: Connection closed by 10.0.0.1 port 37692
Nov 7 00:02:02.381879 sshd-session[4384]: pam_unix(sshd:session): session closed for user core
Nov 7 00:02:02.392841 systemd[1]: sshd@23-10.0.0.46:22-10.0.0.1:37692.service: Deactivated successfully.
Nov 7 00:02:02.394829 systemd[1]: session-24.scope: Deactivated successfully.
Nov 7 00:02:02.395620 systemd-logind[1583]: Session 24 logged out. Waiting for processes to exit.
Nov 7 00:02:02.398612 systemd[1]: Started sshd@24-10.0.0.46:22-10.0.0.1:37706.service - OpenSSH per-connection server daemon (10.0.0.1:37706).
Nov 7 00:02:02.399361 systemd-logind[1583]: Removed session 24.
Nov 7 00:02:02.454117 sshd[4400]: Accepted publickey for core from 10.0.0.1 port 37706 ssh2: RSA SHA256:byrlmJ17egF8Tfblhd3C23XmFA3LkMTDn1Cz5Op8b3A
Nov 7 00:02:02.455710 sshd-session[4400]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 7 00:02:02.460723 systemd-logind[1583]: New session 25 of user core.
Nov 7 00:02:02.474277 systemd[1]: Started session-25.scope - Session 25 of User core.
Nov 7 00:02:03.118540 kubelet[2789]: E1107 00:02:03.118486 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 7 00:02:03.863657 kubelet[2789]: I1107 00:02:03.863588 2789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-k7788" podStartSLOduration=73.863572631 podStartE2EDuration="1m13.863572631s" podCreationTimestamp="2025-11-07 00:00:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-07 00:01:19.260698867 +0000 UTC m=+37.240457167" watchObservedRunningTime="2025-11-07 00:02:03.863572631 +0000 UTC m=+81.843330902"
Nov 7 00:02:03.864919 containerd[1601]: time="2025-11-07T00:02:03.864875772Z" level=info msg="StopContainer for \"d255f95226d880029c7190d0ab82fb22affde8802c6233cf98d51447ceecd708\" with timeout 30 (s)"
Nov 7 00:02:03.872646 containerd[1601]: time="2025-11-07T00:02:03.872568573Z" level=info msg="Stop container \"d255f95226d880029c7190d0ab82fb22affde8802c6233cf98d51447ceecd708\" with signal terminated"
Nov 7 00:02:03.884549 systemd[1]: cri-containerd-d255f95226d880029c7190d0ab82fb22affde8802c6233cf98d51447ceecd708.scope: Deactivated successfully.
Nov 7 00:02:03.887994 containerd[1601]: time="2025-11-07T00:02:03.887864386Z" level=info msg="received exit event container_id:\"d255f95226d880029c7190d0ab82fb22affde8802c6233cf98d51447ceecd708\" id:\"d255f95226d880029c7190d0ab82fb22affde8802c6233cf98d51447ceecd708\" pid:3424 exited_at:{seconds:1762473723 nanos:886404552}"
Nov 7 00:02:03.888693 containerd[1601]: time="2025-11-07T00:02:03.888672175Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d255f95226d880029c7190d0ab82fb22affde8802c6233cf98d51447ceecd708\" id:\"d255f95226d880029c7190d0ab82fb22affde8802c6233cf98d51447ceecd708\" pid:3424 exited_at:{seconds:1762473723 nanos:886404552}"
Nov 7 00:02:03.904237 containerd[1601]: time="2025-11-07T00:02:03.904183635Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Nov 7 00:02:03.904627 containerd[1601]: time="2025-11-07T00:02:03.904600939Z" level=info msg="TaskExit event in podsandbox handler container_id:\"776400762199924c43bda12f2613906c29e3f5ac407545eec4529a8222738e25\" id:\"8ca7476c17e4557a357c6c1c1515a3c51b600a709545050c5b59e04c4d8d57a8\" pid:4429 exited_at:{seconds:1762473723 nanos:900277171}"
Nov 7 00:02:03.906592 containerd[1601]: time="2025-11-07T00:02:03.906571092Z" level=info msg="StopContainer for \"776400762199924c43bda12f2613906c29e3f5ac407545eec4529a8222738e25\" with timeout 2 (s)"
Nov 7 00:02:03.908899 containerd[1601]: time="2025-11-07T00:02:03.908875354Z" level=info msg="Stop container \"776400762199924c43bda12f2613906c29e3f5ac407545eec4529a8222738e25\" with signal terminated"
Nov 7 00:02:03.916311 systemd-networkd[1517]: lxc_health: Link DOWN
Nov 7 00:02:03.916322 systemd-networkd[1517]: lxc_health: Lost carrier
Nov 7 00:02:03.921392 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d255f95226d880029c7190d0ab82fb22affde8802c6233cf98d51447ceecd708-rootfs.mount: Deactivated successfully.
Nov 7 00:02:03.936673 systemd[1]: cri-containerd-776400762199924c43bda12f2613906c29e3f5ac407545eec4529a8222738e25.scope: Deactivated successfully.
Nov 7 00:02:03.937065 systemd[1]: cri-containerd-776400762199924c43bda12f2613906c29e3f5ac407545eec4529a8222738e25.scope: Consumed 6.562s CPU time, 125.4M memory peak, 220K read from disk, 13.3M written to disk.
Nov 7 00:02:03.938768 containerd[1601]: time="2025-11-07T00:02:03.938729515Z" level=info msg="received exit event container_id:\"776400762199924c43bda12f2613906c29e3f5ac407545eec4529a8222738e25\" id:\"776400762199924c43bda12f2613906c29e3f5ac407545eec4529a8222738e25\" pid:3461 exited_at:{seconds:1762473723 nanos:938494243}"
Nov 7 00:02:03.938921 containerd[1601]: time="2025-11-07T00:02:03.938897200Z" level=info msg="TaskExit event in podsandbox handler container_id:\"776400762199924c43bda12f2613906c29e3f5ac407545eec4529a8222738e25\" id:\"776400762199924c43bda12f2613906c29e3f5ac407545eec4529a8222738e25\" pid:3461 exited_at:{seconds:1762473723 nanos:938494243}"
Nov 7 00:02:03.943136 containerd[1601]: time="2025-11-07T00:02:03.943090092Z" level=info msg="StopContainer for \"d255f95226d880029c7190d0ab82fb22affde8802c6233cf98d51447ceecd708\" returns successfully"
Nov 7 00:02:03.943703 containerd[1601]: time="2025-11-07T00:02:03.943670864Z" level=info msg="StopPodSandbox for \"1d4f6a9f7068aaefcd276751e91d9c1b3061d295829eb70ed78f34ff85411c42\""
Nov 7 00:02:03.950419 containerd[1601]: time="2025-11-07T00:02:03.950386809Z" level=info msg="Container to stop \"d255f95226d880029c7190d0ab82fb22affde8802c6233cf98d51447ceecd708\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Nov 7 00:02:03.957394 systemd[1]: cri-containerd-1d4f6a9f7068aaefcd276751e91d9c1b3061d295829eb70ed78f34ff85411c42.scope: Deactivated successfully.
Nov 7 00:02:03.966281 containerd[1601]: time="2025-11-07T00:02:03.965863272Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1d4f6a9f7068aaefcd276751e91d9c1b3061d295829eb70ed78f34ff85411c42\" id:\"1d4f6a9f7068aaefcd276751e91d9c1b3061d295829eb70ed78f34ff85411c42\" pid:3050 exit_status:137 exited_at:{seconds:1762473723 nanos:965430068}"
Nov 7 00:02:03.966282 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-776400762199924c43bda12f2613906c29e3f5ac407545eec4529a8222738e25-rootfs.mount: Deactivated successfully.
Nov 7 00:02:03.992349 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1d4f6a9f7068aaefcd276751e91d9c1b3061d295829eb70ed78f34ff85411c42-rootfs.mount: Deactivated successfully.
Nov 7 00:02:04.020211 containerd[1601]: time="2025-11-07T00:02:04.020166983Z" level=info msg="shim disconnected" id=1d4f6a9f7068aaefcd276751e91d9c1b3061d295829eb70ed78f34ff85411c42 namespace=k8s.io
Nov 7 00:02:04.020211 containerd[1601]: time="2025-11-07T00:02:04.020198963Z" level=warning msg="cleaning up after shim disconnected" id=1d4f6a9f7068aaefcd276751e91d9c1b3061d295829eb70ed78f34ff85411c42 namespace=k8s.io
Nov 7 00:02:04.020413 containerd[1601]: time="2025-11-07T00:02:04.020207008Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 7 00:02:04.043223 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1d4f6a9f7068aaefcd276751e91d9c1b3061d295829eb70ed78f34ff85411c42-shm.mount: Deactivated successfully.
Nov 7 00:02:04.046735 containerd[1601]: time="2025-11-07T00:02:04.046688005Z" level=info msg="received exit event sandbox_id:\"1d4f6a9f7068aaefcd276751e91d9c1b3061d295829eb70ed78f34ff85411c42\" exit_status:137 exited_at:{seconds:1762473723 nanos:965430068}"
Nov 7 00:02:04.048356 containerd[1601]: time="2025-11-07T00:02:04.048318721Z" level=info msg="TearDown network for sandbox \"1d4f6a9f7068aaefcd276751e91d9c1b3061d295829eb70ed78f34ff85411c42\" successfully"
Nov 7 00:02:04.048356 containerd[1601]: time="2025-11-07T00:02:04.048340131Z" level=info msg="StopPodSandbox for \"1d4f6a9f7068aaefcd276751e91d9c1b3061d295829eb70ed78f34ff85411c42\" returns successfully"
Nov 7 00:02:04.109753 containerd[1601]: time="2025-11-07T00:02:04.109717494Z" level=info msg="StopContainer for \"776400762199924c43bda12f2613906c29e3f5ac407545eec4529a8222738e25\" returns successfully"
Nov 7 00:02:04.110158 containerd[1601]: time="2025-11-07T00:02:04.110074024Z" level=info msg="StopPodSandbox for \"cca5ce73f71a5b051d443293d193eddb1f5b194c5f865032efeadf4cb144e3af\""
Nov 7 00:02:04.110245 containerd[1601]: time="2025-11-07T00:02:04.110224167Z" level=info msg="Container to stop \"edc91404cebe882566b1fca8d20ca44c93e95f58a7e588681fbb9abf2ca58659\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Nov 7 00:02:04.110245 containerd[1601]: time="2025-11-07T00:02:04.110242461Z" level=info msg="Container to stop \"0328447fd234a087cd4a413c0e01c6b4c6e0e4d7bbca6a8d23af56f8393ad8f5\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Nov 7 00:02:04.110245 containerd[1601]: time="2025-11-07T00:02:04.110253211Z" level=info msg="Container to stop \"d1d6046ad770ee2f9797a091594bf903a0c25cfe68a4bbdb663e85598ae1a3f1\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Nov 7 00:02:04.110352 containerd[1601]: time="2025-11-07T00:02:04.110261837Z" level=info msg="Container to stop \"80fb78b175af579cd09d631fd80b678fcd458955b8cb150fca98df818ea89d1e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Nov 7 00:02:04.110352 containerd[1601]: time="2025-11-07T00:02:04.110270043Z" level=info msg="Container to stop \"776400762199924c43bda12f2613906c29e3f5ac407545eec4529a8222738e25\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Nov 7 00:02:04.118035 systemd[1]: cri-containerd-cca5ce73f71a5b051d443293d193eddb1f5b194c5f865032efeadf4cb144e3af.scope: Deactivated successfully.
Nov 7 00:02:04.120861 containerd[1601]: time="2025-11-07T00:02:04.120831866Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cca5ce73f71a5b051d443293d193eddb1f5b194c5f865032efeadf4cb144e3af\" id:\"cca5ce73f71a5b051d443293d193eddb1f5b194c5f865032efeadf4cb144e3af\" pid:2973 exit_status:137 exited_at:{seconds:1762473724 nanos:120542643}"
Nov 7 00:02:04.146404 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cca5ce73f71a5b051d443293d193eddb1f5b194c5f865032efeadf4cb144e3af-rootfs.mount: Deactivated successfully.
Nov 7 00:02:04.149820 containerd[1601]: time="2025-11-07T00:02:04.149781003Z" level=info msg="shim disconnected" id=cca5ce73f71a5b051d443293d193eddb1f5b194c5f865032efeadf4cb144e3af namespace=k8s.io
Nov 7 00:02:04.149820 containerd[1601]: time="2025-11-07T00:02:04.149810669Z" level=warning msg="cleaning up after shim disconnected" id=cca5ce73f71a5b051d443293d193eddb1f5b194c5f865032efeadf4cb144e3af namespace=k8s.io
Nov 7 00:02:04.150009 containerd[1601]: time="2025-11-07T00:02:04.149819776Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 7 00:02:04.163929 containerd[1601]: time="2025-11-07T00:02:04.163871940Z" level=info msg="received exit event sandbox_id:\"cca5ce73f71a5b051d443293d193eddb1f5b194c5f865032efeadf4cb144e3af\" exit_status:137 exited_at:{seconds:1762473724 nanos:120542643}"
Nov 7 00:02:04.164661 containerd[1601]: time="2025-11-07T00:02:04.164614957Z" level=info msg="TearDown network for sandbox \"cca5ce73f71a5b051d443293d193eddb1f5b194c5f865032efeadf4cb144e3af\" successfully"
Nov 7 00:02:04.164661 containerd[1601]: time="2025-11-07T00:02:04.164643300Z" level=info msg="StopPodSandbox for \"cca5ce73f71a5b051d443293d193eddb1f5b194c5f865032efeadf4cb144e3af\" returns successfully"
Nov 7 00:02:04.249434 kubelet[2789]: I1107 00:02:04.249383 2789 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nlbr7\" (UniqueName: \"kubernetes.io/projected/a8f1149e-f58d-4354-bec5-ede0ad122c5a-kube-api-access-nlbr7\") pod \"a8f1149e-f58d-4354-bec5-ede0ad122c5a\" (UID: \"a8f1149e-f58d-4354-bec5-ede0ad122c5a\") "
Nov 7 00:02:04.249434 kubelet[2789]: I1107 00:02:04.249423 2789 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a8f1149e-f58d-4354-bec5-ede0ad122c5a-cilium-config-path\") pod \"a8f1149e-f58d-4354-bec5-ede0ad122c5a\" (UID: \"a8f1149e-f58d-4354-bec5-ede0ad122c5a\") "
Nov 7 00:02:04.252667 kubelet[2789]: I1107 00:02:04.252638 2789 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a8f1149e-f58d-4354-bec5-ede0ad122c5a-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "a8f1149e-f58d-4354-bec5-ede0ad122c5a" (UID: "a8f1149e-f58d-4354-bec5-ede0ad122c5a"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Nov 7 00:02:04.253203 kubelet[2789]: I1107 00:02:04.253163 2789 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8f1149e-f58d-4354-bec5-ede0ad122c5a-kube-api-access-nlbr7" (OuterVolumeSpecName: "kube-api-access-nlbr7") pod "a8f1149e-f58d-4354-bec5-ede0ad122c5a" (UID: "a8f1149e-f58d-4354-bec5-ede0ad122c5a"). InnerVolumeSpecName "kube-api-access-nlbr7". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Nov 7 00:02:04.316610 kubelet[2789]: I1107 00:02:04.316563 2789 scope.go:117] "RemoveContainer" containerID="776400762199924c43bda12f2613906c29e3f5ac407545eec4529a8222738e25"
Nov 7 00:02:04.318605 containerd[1601]: time="2025-11-07T00:02:04.318379483Z" level=info msg="RemoveContainer for \"776400762199924c43bda12f2613906c29e3f5ac407545eec4529a8222738e25\""
Nov 7 00:02:04.324095 systemd[1]: Removed slice kubepods-besteffort-poda8f1149e_f58d_4354_bec5_ede0ad122c5a.slice - libcontainer container kubepods-besteffort-poda8f1149e_f58d_4354_bec5_ede0ad122c5a.slice.
Nov 7 00:02:04.332967 containerd[1601]: time="2025-11-07T00:02:04.332922089Z" level=info msg="RemoveContainer for \"776400762199924c43bda12f2613906c29e3f5ac407545eec4529a8222738e25\" returns successfully"
Nov 7 00:02:04.333256 kubelet[2789]: I1107 00:02:04.333224 2789 scope.go:117] "RemoveContainer" containerID="80fb78b175af579cd09d631fd80b678fcd458955b8cb150fca98df818ea89d1e"
Nov 7 00:02:04.334938 containerd[1601]: time="2025-11-07T00:02:04.334877104Z" level=info msg="RemoveContainer for \"80fb78b175af579cd09d631fd80b678fcd458955b8cb150fca98df818ea89d1e\""
Nov 7 00:02:04.340880 containerd[1601]: time="2025-11-07T00:02:04.340838000Z" level=info msg="RemoveContainer for \"80fb78b175af579cd09d631fd80b678fcd458955b8cb150fca98df818ea89d1e\" returns successfully"
Nov 7 00:02:04.341045 kubelet[2789]: I1107 00:02:04.341008 2789 scope.go:117] "RemoveContainer" containerID="d1d6046ad770ee2f9797a091594bf903a0c25cfe68a4bbdb663e85598ae1a3f1"
Nov 7 00:02:04.343019 containerd[1601]: time="2025-11-07T00:02:04.342988722Z" level=info msg="RemoveContainer for \"d1d6046ad770ee2f9797a091594bf903a0c25cfe68a4bbdb663e85598ae1a3f1\""
Nov 7 00:02:04.347086 containerd[1601]: time="2025-11-07T00:02:04.347053603Z" level=info msg="RemoveContainer for \"d1d6046ad770ee2f9797a091594bf903a0c25cfe68a4bbdb663e85598ae1a3f1\" returns successfully"
Nov 7 00:02:04.347236 kubelet[2789]: I1107 00:02:04.347203 2789 scope.go:117] "RemoveContainer" containerID="0328447fd234a087cd4a413c0e01c6b4c6e0e4d7bbca6a8d23af56f8393ad8f5"
Nov 7 00:02:04.348413 containerd[1601]: time="2025-11-07T00:02:04.348375999Z" level=info msg="RemoveContainer for \"0328447fd234a087cd4a413c0e01c6b4c6e0e4d7bbca6a8d23af56f8393ad8f5\""
Nov 7 00:02:04.349623 kubelet[2789]: I1107 00:02:04.349597 2789 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f22b05e1-2865-4e95-8406-a34e7e1f0b4b-etc-cni-netd\") pod \"f22b05e1-2865-4e95-8406-a34e7e1f0b4b\" (UID: \"f22b05e1-2865-4e95-8406-a34e7e1f0b4b\") "
Nov 7 00:02:04.349668 kubelet[2789]: I1107 00:02:04.349632 2789 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f22b05e1-2865-4e95-8406-a34e7e1f0b4b-hubble-tls\") pod \"f22b05e1-2865-4e95-8406-a34e7e1f0b4b\" (UID: \"f22b05e1-2865-4e95-8406-a34e7e1f0b4b\") "
Nov 7 00:02:04.349668 kubelet[2789]: I1107 00:02:04.349653 2789 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8fgdl\" (UniqueName: \"kubernetes.io/projected/f22b05e1-2865-4e95-8406-a34e7e1f0b4b-kube-api-access-8fgdl\") pod \"f22b05e1-2865-4e95-8406-a34e7e1f0b4b\" (UID: \"f22b05e1-2865-4e95-8406-a34e7e1f0b4b\") "
Nov 7 00:02:04.349739 kubelet[2789]: I1107 00:02:04.349668 2789 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f22b05e1-2865-4e95-8406-a34e7e1f0b4b-lib-modules\") pod \"f22b05e1-2865-4e95-8406-a34e7e1f0b4b\" (UID: \"f22b05e1-2865-4e95-8406-a34e7e1f0b4b\") "
Nov 7 00:02:04.349739 kubelet[2789]: I1107 00:02:04.349681 2789 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f22b05e1-2865-4e95-8406-a34e7e1f0b4b-cilium-cgroup\") pod \"f22b05e1-2865-4e95-8406-a34e7e1f0b4b\" (UID: \"f22b05e1-2865-4e95-8406-a34e7e1f0b4b\") "
Nov 7 00:02:04.349739 kubelet[2789]: I1107 00:02:04.349694 2789 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f22b05e1-2865-4e95-8406-a34e7e1f0b4b-hostproc\") pod \"f22b05e1-2865-4e95-8406-a34e7e1f0b4b\" (UID: \"f22b05e1-2865-4e95-8406-a34e7e1f0b4b\") "
Nov 7 00:02:04.349739 kubelet[2789]: I1107 00:02:04.349708 2789 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f22b05e1-2865-4e95-8406-a34e7e1f0b4b-xtables-lock\") pod \"f22b05e1-2865-4e95-8406-a34e7e1f0b4b\" (UID: \"f22b05e1-2865-4e95-8406-a34e7e1f0b4b\") "
Nov 7 00:02:04.349739 kubelet[2789]: I1107 00:02:04.349721 2789 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f22b05e1-2865-4e95-8406-a34e7e1f0b4b-bpf-maps\") pod \"f22b05e1-2865-4e95-8406-a34e7e1f0b4b\" (UID: \"f22b05e1-2865-4e95-8406-a34e7e1f0b4b\") "
Nov 7 00:02:04.349739 kubelet[2789]: I1107 00:02:04.349737 2789 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f22b05e1-2865-4e95-8406-a34e7e1f0b4b-cni-path\") pod \"f22b05e1-2865-4e95-8406-a34e7e1f0b4b\" (UID: \"f22b05e1-2865-4e95-8406-a34e7e1f0b4b\") "
Nov 7 00:02:04.349950 kubelet[2789]: I1107 00:02:04.349751 2789 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f22b05e1-2865-4e95-8406-a34e7e1f0b4b-cilium-run\") pod \"f22b05e1-2865-4e95-8406-a34e7e1f0b4b\" (UID: \"f22b05e1-2865-4e95-8406-a34e7e1f0b4b\") "
Nov 7 00:02:04.349950 kubelet[2789]: I1107 00:02:04.349764 2789 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f22b05e1-2865-4e95-8406-a34e7e1f0b4b-host-proc-sys-kernel\") pod \"f22b05e1-2865-4e95-8406-a34e7e1f0b4b\" (UID: \"f22b05e1-2865-4e95-8406-a34e7e1f0b4b\") "
Nov 7 00:02:04.349950 kubelet[2789]: I1107 00:02:04.349782 2789 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f22b05e1-2865-4e95-8406-a34e7e1f0b4b-cilium-config-path\") pod \"f22b05e1-2865-4e95-8406-a34e7e1f0b4b\" (UID: \"f22b05e1-2865-4e95-8406-a34e7e1f0b4b\") "
Nov 7 00:02:04.349950 kubelet[2789]: I1107 00:02:04.349796 2789 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f22b05e1-2865-4e95-8406-a34e7e1f0b4b-host-proc-sys-net\") pod \"f22b05e1-2865-4e95-8406-a34e7e1f0b4b\" (UID: \"f22b05e1-2865-4e95-8406-a34e7e1f0b4b\") "
Nov 7 00:02:04.349950 kubelet[2789]: I1107 00:02:04.349811 2789 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f22b05e1-2865-4e95-8406-a34e7e1f0b4b-clustermesh-secrets\") pod \"f22b05e1-2865-4e95-8406-a34e7e1f0b4b\" (UID: \"f22b05e1-2865-4e95-8406-a34e7e1f0b4b\") "
Nov 7 00:02:04.349950 kubelet[2789]: I1107 00:02:04.349841 2789 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nlbr7\" (UniqueName: \"kubernetes.io/projected/a8f1149e-f58d-4354-bec5-ede0ad122c5a-kube-api-access-nlbr7\") on node \"localhost\" DevicePath \"\""
Nov 7 00:02:04.350095 kubelet[2789]: I1107 00:02:04.349851 2789 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a8f1149e-f58d-4354-bec5-ede0ad122c5a-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Nov 7 00:02:04.350095 kubelet[2789]: I1107 00:02:04.349702 2789 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f22b05e1-2865-4e95-8406-a34e7e1f0b4b-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "f22b05e1-2865-4e95-8406-a34e7e1f0b4b" (UID: "f22b05e1-2865-4e95-8406-a34e7e1f0b4b"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 7 00:02:04.350095 kubelet[2789]: I1107 00:02:04.349736 2789 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f22b05e1-2865-4e95-8406-a34e7e1f0b4b-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "f22b05e1-2865-4e95-8406-a34e7e1f0b4b" (UID: "f22b05e1-2865-4e95-8406-a34e7e1f0b4b"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 7 00:02:04.350095 kubelet[2789]: I1107 00:02:04.350039 2789 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f22b05e1-2865-4e95-8406-a34e7e1f0b4b-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "f22b05e1-2865-4e95-8406-a34e7e1f0b4b" (UID: "f22b05e1-2865-4e95-8406-a34e7e1f0b4b"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 7 00:02:04.350095 kubelet[2789]: I1107 00:02:04.350060 2789 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f22b05e1-2865-4e95-8406-a34e7e1f0b4b-hostproc" (OuterVolumeSpecName: "hostproc") pod "f22b05e1-2865-4e95-8406-a34e7e1f0b4b" (UID: "f22b05e1-2865-4e95-8406-a34e7e1f0b4b"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 7 00:02:04.350256 kubelet[2789]: I1107 00:02:04.350076 2789 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f22b05e1-2865-4e95-8406-a34e7e1f0b4b-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "f22b05e1-2865-4e95-8406-a34e7e1f0b4b" (UID: "f22b05e1-2865-4e95-8406-a34e7e1f0b4b"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 7 00:02:04.350256 kubelet[2789]: I1107 00:02:04.350113 2789 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f22b05e1-2865-4e95-8406-a34e7e1f0b4b-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "f22b05e1-2865-4e95-8406-a34e7e1f0b4b" (UID: "f22b05e1-2865-4e95-8406-a34e7e1f0b4b"). InnerVolumeSpecName "bpf-maps".
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 7 00:02:04.351235 kubelet[2789]: I1107 00:02:04.351203 2789 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f22b05e1-2865-4e95-8406-a34e7e1f0b4b-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "f22b05e1-2865-4e95-8406-a34e7e1f0b4b" (UID: "f22b05e1-2865-4e95-8406-a34e7e1f0b4b"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 7 00:02:04.351276 kubelet[2789]: I1107 00:02:04.351238 2789 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f22b05e1-2865-4e95-8406-a34e7e1f0b4b-cni-path" (OuterVolumeSpecName: "cni-path") pod "f22b05e1-2865-4e95-8406-a34e7e1f0b4b" (UID: "f22b05e1-2865-4e95-8406-a34e7e1f0b4b"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 7 00:02:04.351276 kubelet[2789]: I1107 00:02:04.351259 2789 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f22b05e1-2865-4e95-8406-a34e7e1f0b4b-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "f22b05e1-2865-4e95-8406-a34e7e1f0b4b" (UID: "f22b05e1-2865-4e95-8406-a34e7e1f0b4b"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 7 00:02:04.352070 containerd[1601]: time="2025-11-07T00:02:04.352040217Z" level=info msg="RemoveContainer for \"0328447fd234a087cd4a413c0e01c6b4c6e0e4d7bbca6a8d23af56f8393ad8f5\" returns successfully" Nov 7 00:02:04.353166 kubelet[2789]: I1107 00:02:04.353078 2789 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f22b05e1-2865-4e95-8406-a34e7e1f0b4b-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "f22b05e1-2865-4e95-8406-a34e7e1f0b4b" (UID: "f22b05e1-2865-4e95-8406-a34e7e1f0b4b"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 7 00:02:04.353166 kubelet[2789]: I1107 00:02:04.353135 2789 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f22b05e1-2865-4e95-8406-a34e7e1f0b4b-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "f22b05e1-2865-4e95-8406-a34e7e1f0b4b" (UID: "f22b05e1-2865-4e95-8406-a34e7e1f0b4b"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 7 00:02:04.353678 kubelet[2789]: I1107 00:02:04.353228 2789 scope.go:117] "RemoveContainer" containerID="edc91404cebe882566b1fca8d20ca44c93e95f58a7e588681fbb9abf2ca58659" Nov 7 00:02:04.353678 kubelet[2789]: I1107 00:02:04.353367 2789 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f22b05e1-2865-4e95-8406-a34e7e1f0b4b-kube-api-access-8fgdl" (OuterVolumeSpecName: "kube-api-access-8fgdl") pod "f22b05e1-2865-4e95-8406-a34e7e1f0b4b" (UID: "f22b05e1-2865-4e95-8406-a34e7e1f0b4b"). InnerVolumeSpecName "kube-api-access-8fgdl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 7 00:02:04.354363 containerd[1601]: time="2025-11-07T00:02:04.354336353Z" level=info msg="RemoveContainer for \"edc91404cebe882566b1fca8d20ca44c93e95f58a7e588681fbb9abf2ca58659\"" Nov 7 00:02:04.356338 kubelet[2789]: I1107 00:02:04.356311 2789 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f22b05e1-2865-4e95-8406-a34e7e1f0b4b-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "f22b05e1-2865-4e95-8406-a34e7e1f0b4b" (UID: "f22b05e1-2865-4e95-8406-a34e7e1f0b4b"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 7 00:02:04.356709 kubelet[2789]: I1107 00:02:04.356667 2789 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f22b05e1-2865-4e95-8406-a34e7e1f0b4b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f22b05e1-2865-4e95-8406-a34e7e1f0b4b" (UID: "f22b05e1-2865-4e95-8406-a34e7e1f0b4b"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 7 00:02:04.357710 containerd[1601]: time="2025-11-07T00:02:04.357676482Z" level=info msg="RemoveContainer for \"edc91404cebe882566b1fca8d20ca44c93e95f58a7e588681fbb9abf2ca58659\" returns successfully" Nov 7 00:02:04.357811 kubelet[2789]: I1107 00:02:04.357793 2789 scope.go:117] "RemoveContainer" containerID="776400762199924c43bda12f2613906c29e3f5ac407545eec4529a8222738e25" Nov 7 00:02:04.357990 containerd[1601]: time="2025-11-07T00:02:04.357953702Z" level=error msg="ContainerStatus for \"776400762199924c43bda12f2613906c29e3f5ac407545eec4529a8222738e25\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"776400762199924c43bda12f2613906c29e3f5ac407545eec4529a8222738e25\": not found" Nov 7 00:02:04.358133 kubelet[2789]: E1107 00:02:04.358096 2789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"776400762199924c43bda12f2613906c29e3f5ac407545eec4529a8222738e25\": not found" containerID="776400762199924c43bda12f2613906c29e3f5ac407545eec4529a8222738e25" Nov 7 00:02:04.358217 kubelet[2789]: I1107 00:02:04.358163 2789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"776400762199924c43bda12f2613906c29e3f5ac407545eec4529a8222738e25"} err="failed to get container status \"776400762199924c43bda12f2613906c29e3f5ac407545eec4529a8222738e25\": rpc error: code = NotFound desc = an error occurred 
when try to find container \"776400762199924c43bda12f2613906c29e3f5ac407545eec4529a8222738e25\": not found" Nov 7 00:02:04.358242 kubelet[2789]: I1107 00:02:04.358217 2789 scope.go:117] "RemoveContainer" containerID="80fb78b175af579cd09d631fd80b678fcd458955b8cb150fca98df818ea89d1e" Nov 7 00:02:04.358430 containerd[1601]: time="2025-11-07T00:02:04.358392116Z" level=error msg="ContainerStatus for \"80fb78b175af579cd09d631fd80b678fcd458955b8cb150fca98df818ea89d1e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"80fb78b175af579cd09d631fd80b678fcd458955b8cb150fca98df818ea89d1e\": not found" Nov 7 00:02:04.358532 kubelet[2789]: E1107 00:02:04.358507 2789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"80fb78b175af579cd09d631fd80b678fcd458955b8cb150fca98df818ea89d1e\": not found" containerID="80fb78b175af579cd09d631fd80b678fcd458955b8cb150fca98df818ea89d1e" Nov 7 00:02:04.358583 kubelet[2789]: I1107 00:02:04.358536 2789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"80fb78b175af579cd09d631fd80b678fcd458955b8cb150fca98df818ea89d1e"} err="failed to get container status \"80fb78b175af579cd09d631fd80b678fcd458955b8cb150fca98df818ea89d1e\": rpc error: code = NotFound desc = an error occurred when try to find container \"80fb78b175af579cd09d631fd80b678fcd458955b8cb150fca98df818ea89d1e\": not found" Nov 7 00:02:04.358583 kubelet[2789]: I1107 00:02:04.358555 2789 scope.go:117] "RemoveContainer" containerID="d1d6046ad770ee2f9797a091594bf903a0c25cfe68a4bbdb663e85598ae1a3f1" Nov 7 00:02:04.358710 containerd[1601]: time="2025-11-07T00:02:04.358678715Z" level=error msg="ContainerStatus for \"d1d6046ad770ee2f9797a091594bf903a0c25cfe68a4bbdb663e85598ae1a3f1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"d1d6046ad770ee2f9797a091594bf903a0c25cfe68a4bbdb663e85598ae1a3f1\": not found" Nov 7 00:02:04.358795 kubelet[2789]: E1107 00:02:04.358771 2789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d1d6046ad770ee2f9797a091594bf903a0c25cfe68a4bbdb663e85598ae1a3f1\": not found" containerID="d1d6046ad770ee2f9797a091594bf903a0c25cfe68a4bbdb663e85598ae1a3f1" Nov 7 00:02:04.358833 kubelet[2789]: I1107 00:02:04.358793 2789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d1d6046ad770ee2f9797a091594bf903a0c25cfe68a4bbdb663e85598ae1a3f1"} err="failed to get container status \"d1d6046ad770ee2f9797a091594bf903a0c25cfe68a4bbdb663e85598ae1a3f1\": rpc error: code = NotFound desc = an error occurred when try to find container \"d1d6046ad770ee2f9797a091594bf903a0c25cfe68a4bbdb663e85598ae1a3f1\": not found" Nov 7 00:02:04.358833 kubelet[2789]: I1107 00:02:04.358807 2789 scope.go:117] "RemoveContainer" containerID="0328447fd234a087cd4a413c0e01c6b4c6e0e4d7bbca6a8d23af56f8393ad8f5" Nov 7 00:02:04.358936 containerd[1601]: time="2025-11-07T00:02:04.358908306Z" level=error msg="ContainerStatus for \"0328447fd234a087cd4a413c0e01c6b4c6e0e4d7bbca6a8d23af56f8393ad8f5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0328447fd234a087cd4a413c0e01c6b4c6e0e4d7bbca6a8d23af56f8393ad8f5\": not found" Nov 7 00:02:04.359023 kubelet[2789]: E1107 00:02:04.359002 2789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0328447fd234a087cd4a413c0e01c6b4c6e0e4d7bbca6a8d23af56f8393ad8f5\": not found" containerID="0328447fd234a087cd4a413c0e01c6b4c6e0e4d7bbca6a8d23af56f8393ad8f5" Nov 7 00:02:04.359066 kubelet[2789]: I1107 00:02:04.359020 2789 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"0328447fd234a087cd4a413c0e01c6b4c6e0e4d7bbca6a8d23af56f8393ad8f5"} err="failed to get container status \"0328447fd234a087cd4a413c0e01c6b4c6e0e4d7bbca6a8d23af56f8393ad8f5\": rpc error: code = NotFound desc = an error occurred when try to find container \"0328447fd234a087cd4a413c0e01c6b4c6e0e4d7bbca6a8d23af56f8393ad8f5\": not found" Nov 7 00:02:04.359066 kubelet[2789]: I1107 00:02:04.359034 2789 scope.go:117] "RemoveContainer" containerID="edc91404cebe882566b1fca8d20ca44c93e95f58a7e588681fbb9abf2ca58659" Nov 7 00:02:04.359249 containerd[1601]: time="2025-11-07T00:02:04.359170560Z" level=error msg="ContainerStatus for \"edc91404cebe882566b1fca8d20ca44c93e95f58a7e588681fbb9abf2ca58659\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"edc91404cebe882566b1fca8d20ca44c93e95f58a7e588681fbb9abf2ca58659\": not found" Nov 7 00:02:04.359296 kubelet[2789]: E1107 00:02:04.359259 2789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"edc91404cebe882566b1fca8d20ca44c93e95f58a7e588681fbb9abf2ca58659\": not found" containerID="edc91404cebe882566b1fca8d20ca44c93e95f58a7e588681fbb9abf2ca58659" Nov 7 00:02:04.359296 kubelet[2789]: I1107 00:02:04.359274 2789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"edc91404cebe882566b1fca8d20ca44c93e95f58a7e588681fbb9abf2ca58659"} err="failed to get container status \"edc91404cebe882566b1fca8d20ca44c93e95f58a7e588681fbb9abf2ca58659\": rpc error: code = NotFound desc = an error occurred when try to find container \"edc91404cebe882566b1fca8d20ca44c93e95f58a7e588681fbb9abf2ca58659\": not found" Nov 7 00:02:04.359296 kubelet[2789]: I1107 00:02:04.359286 2789 scope.go:117] "RemoveContainer" containerID="d255f95226d880029c7190d0ab82fb22affde8802c6233cf98d51447ceecd708" Nov 7 00:02:04.360367 containerd[1601]: 
time="2025-11-07T00:02:04.360339847Z" level=info msg="RemoveContainer for \"d255f95226d880029c7190d0ab82fb22affde8802c6233cf98d51447ceecd708\"" Nov 7 00:02:04.363470 containerd[1601]: time="2025-11-07T00:02:04.363439845Z" level=info msg="RemoveContainer for \"d255f95226d880029c7190d0ab82fb22affde8802c6233cf98d51447ceecd708\" returns successfully" Nov 7 00:02:04.363625 kubelet[2789]: I1107 00:02:04.363584 2789 scope.go:117] "RemoveContainer" containerID="d255f95226d880029c7190d0ab82fb22affde8802c6233cf98d51447ceecd708" Nov 7 00:02:04.363789 containerd[1601]: time="2025-11-07T00:02:04.363753945Z" level=error msg="ContainerStatus for \"d255f95226d880029c7190d0ab82fb22affde8802c6233cf98d51447ceecd708\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d255f95226d880029c7190d0ab82fb22affde8802c6233cf98d51447ceecd708\": not found" Nov 7 00:02:04.363907 kubelet[2789]: E1107 00:02:04.363880 2789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d255f95226d880029c7190d0ab82fb22affde8802c6233cf98d51447ceecd708\": not found" containerID="d255f95226d880029c7190d0ab82fb22affde8802c6233cf98d51447ceecd708" Nov 7 00:02:04.363943 kubelet[2789]: I1107 00:02:04.363910 2789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d255f95226d880029c7190d0ab82fb22affde8802c6233cf98d51447ceecd708"} err="failed to get container status \"d255f95226d880029c7190d0ab82fb22affde8802c6233cf98d51447ceecd708\": rpc error: code = NotFound desc = an error occurred when try to find container \"d255f95226d880029c7190d0ab82fb22affde8802c6233cf98d51447ceecd708\": not found" Nov 7 00:02:04.450388 kubelet[2789]: I1107 00:02:04.450346 2789 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f22b05e1-2865-4e95-8406-a34e7e1f0b4b-cilium-run\") on node \"localhost\" 
DevicePath \"\"" Nov 7 00:02:04.450388 kubelet[2789]: I1107 00:02:04.450374 2789 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f22b05e1-2865-4e95-8406-a34e7e1f0b4b-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Nov 7 00:02:04.450388 kubelet[2789]: I1107 00:02:04.450388 2789 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f22b05e1-2865-4e95-8406-a34e7e1f0b4b-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Nov 7 00:02:04.450388 kubelet[2789]: I1107 00:02:04.450399 2789 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f22b05e1-2865-4e95-8406-a34e7e1f0b4b-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Nov 7 00:02:04.450627 kubelet[2789]: I1107 00:02:04.450410 2789 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f22b05e1-2865-4e95-8406-a34e7e1f0b4b-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Nov 7 00:02:04.450627 kubelet[2789]: I1107 00:02:04.450421 2789 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f22b05e1-2865-4e95-8406-a34e7e1f0b4b-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Nov 7 00:02:04.450627 kubelet[2789]: I1107 00:02:04.450432 2789 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f22b05e1-2865-4e95-8406-a34e7e1f0b4b-hubble-tls\") on node \"localhost\" DevicePath \"\"" Nov 7 00:02:04.450627 kubelet[2789]: I1107 00:02:04.450442 2789 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8fgdl\" (UniqueName: \"kubernetes.io/projected/f22b05e1-2865-4e95-8406-a34e7e1f0b4b-kube-api-access-8fgdl\") on node \"localhost\" DevicePath \"\"" Nov 7 00:02:04.450627 kubelet[2789]: I1107 
00:02:04.450454 2789 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f22b05e1-2865-4e95-8406-a34e7e1f0b4b-lib-modules\") on node \"localhost\" DevicePath \"\"" Nov 7 00:02:04.450627 kubelet[2789]: I1107 00:02:04.450464 2789 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f22b05e1-2865-4e95-8406-a34e7e1f0b4b-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Nov 7 00:02:04.450627 kubelet[2789]: I1107 00:02:04.450477 2789 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f22b05e1-2865-4e95-8406-a34e7e1f0b4b-hostproc\") on node \"localhost\" DevicePath \"\"" Nov 7 00:02:04.450627 kubelet[2789]: I1107 00:02:04.450487 2789 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f22b05e1-2865-4e95-8406-a34e7e1f0b4b-xtables-lock\") on node \"localhost\" DevicePath \"\"" Nov 7 00:02:04.450819 kubelet[2789]: I1107 00:02:04.450497 2789 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f22b05e1-2865-4e95-8406-a34e7e1f0b4b-bpf-maps\") on node \"localhost\" DevicePath \"\"" Nov 7 00:02:04.450819 kubelet[2789]: I1107 00:02:04.450507 2789 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f22b05e1-2865-4e95-8406-a34e7e1f0b4b-cni-path\") on node \"localhost\" DevicePath \"\"" Nov 7 00:02:04.625652 systemd[1]: Removed slice kubepods-burstable-podf22b05e1_2865_4e95_8406_a34e7e1f0b4b.slice - libcontainer container kubepods-burstable-podf22b05e1_2865_4e95_8406_a34e7e1f0b4b.slice. Nov 7 00:02:04.625928 systemd[1]: kubepods-burstable-podf22b05e1_2865_4e95_8406_a34e7e1f0b4b.slice: Consumed 6.671s CPU time, 125.8M memory peak, 232K read from disk, 13.3M written to disk. 
Nov 7 00:02:04.921970 systemd[1]: var-lib-kubelet-pods-a8f1149e\x2df58d\x2d4354\x2dbec5\x2dede0ad122c5a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dnlbr7.mount: Deactivated successfully. Nov 7 00:02:04.922171 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-cca5ce73f71a5b051d443293d193eddb1f5b194c5f865032efeadf4cb144e3af-shm.mount: Deactivated successfully. Nov 7 00:02:04.922285 systemd[1]: var-lib-kubelet-pods-f22b05e1\x2d2865\x2d4e95\x2d8406\x2da34e7e1f0b4b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8fgdl.mount: Deactivated successfully. Nov 7 00:02:04.922403 systemd[1]: var-lib-kubelet-pods-f22b05e1\x2d2865\x2d4e95\x2d8406\x2da34e7e1f0b4b-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Nov 7 00:02:04.922521 systemd[1]: var-lib-kubelet-pods-f22b05e1\x2d2865\x2d4e95\x2d8406\x2da34e7e1f0b4b-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Nov 7 00:02:05.925181 sshd[4403]: Connection closed by 10.0.0.1 port 37706 Nov 7 00:02:05.925715 sshd-session[4400]: pam_unix(sshd:session): session closed for user core Nov 7 00:02:05.935136 systemd[1]: sshd@24-10.0.0.46:22-10.0.0.1:37706.service: Deactivated successfully. Nov 7 00:02:05.937098 systemd[1]: session-25.scope: Deactivated successfully. Nov 7 00:02:05.937848 systemd-logind[1583]: Session 25 logged out. Waiting for processes to exit. Nov 7 00:02:05.940954 systemd[1]: Started sshd@25-10.0.0.46:22-10.0.0.1:43894.service - OpenSSH per-connection server daemon (10.0.0.1:43894). Nov 7 00:02:05.941772 systemd-logind[1583]: Removed session 25. Nov 7 00:02:06.008932 sshd[4557]: Accepted publickey for core from 10.0.0.1 port 43894 ssh2: RSA SHA256:byrlmJ17egF8Tfblhd3C23XmFA3LkMTDn1Cz5Op8b3A Nov 7 00:02:06.010721 sshd-session[4557]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 7 00:02:06.015698 systemd-logind[1583]: New session 26 of user core. 
Nov 7 00:02:06.029299 systemd[1]: Started session-26.scope - Session 26 of User core. Nov 7 00:02:06.120520 kubelet[2789]: I1107 00:02:06.120467 2789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8f1149e-f58d-4354-bec5-ede0ad122c5a" path="/var/lib/kubelet/pods/a8f1149e-f58d-4354-bec5-ede0ad122c5a/volumes" Nov 7 00:02:06.121040 kubelet[2789]: I1107 00:02:06.121011 2789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f22b05e1-2865-4e95-8406-a34e7e1f0b4b" path="/var/lib/kubelet/pods/f22b05e1-2865-4e95-8406-a34e7e1f0b4b/volumes" Nov 7 00:02:07.185912 kubelet[2789]: E1107 00:02:07.185866 2789 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Nov 7 00:02:07.304843 sshd[4560]: Connection closed by 10.0.0.1 port 43894 Nov 7 00:02:07.305520 sshd-session[4557]: pam_unix(sshd:session): session closed for user core Nov 7 00:02:07.319718 systemd[1]: sshd@25-10.0.0.46:22-10.0.0.1:43894.service: Deactivated successfully. Nov 7 00:02:07.324481 systemd[1]: session-26.scope: Deactivated successfully. Nov 7 00:02:07.325871 systemd-logind[1583]: Session 26 logged out. Waiting for processes to exit. Nov 7 00:02:07.330628 systemd[1]: Started sshd@26-10.0.0.46:22-10.0.0.1:43904.service - OpenSSH per-connection server daemon (10.0.0.1:43904). Nov 7 00:02:07.334220 systemd-logind[1583]: Removed session 26. Nov 7 00:02:07.389764 sshd[4572]: Accepted publickey for core from 10.0.0.1 port 43904 ssh2: RSA SHA256:byrlmJ17egF8Tfblhd3C23XmFA3LkMTDn1Cz5Op8b3A Nov 7 00:02:07.391183 sshd-session[4572]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 7 00:02:07.396172 systemd-logind[1583]: New session 27 of user core. Nov 7 00:02:07.407312 systemd[1]: Started session-27.scope - Session 27 of User core. 
Nov 7 00:02:07.457986 sshd[4575]: Connection closed by 10.0.0.1 port 43904 Nov 7 00:02:07.458620 sshd-session[4572]: pam_unix(sshd:session): session closed for user core Nov 7 00:02:07.467921 systemd[1]: sshd@26-10.0.0.46:22-10.0.0.1:43904.service: Deactivated successfully. Nov 7 00:02:07.470089 systemd[1]: session-27.scope: Deactivated successfully. Nov 7 00:02:07.471033 systemd-logind[1583]: Session 27 logged out. Waiting for processes to exit. Nov 7 00:02:07.473958 systemd[1]: Started sshd@27-10.0.0.46:22-10.0.0.1:43910.service - OpenSSH per-connection server daemon (10.0.0.1:43910). Nov 7 00:02:07.475301 systemd-logind[1583]: Removed session 27. Nov 7 00:02:07.542016 sshd[4582]: Accepted publickey for core from 10.0.0.1 port 43910 ssh2: RSA SHA256:byrlmJ17egF8Tfblhd3C23XmFA3LkMTDn1Cz5Op8b3A Nov 7 00:02:07.544040 sshd-session[4582]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 7 00:02:07.549312 systemd-logind[1583]: New session 28 of user core. Nov 7 00:02:07.561598 systemd[1]: Started session-28.scope - Session 28 of User core. Nov 7 00:02:07.581660 systemd[1]: Created slice kubepods-burstable-pode3100adc_92bd_4e48_8070_033604b34aa7.slice - libcontainer container kubepods-burstable-pode3100adc_92bd_4e48_8070_033604b34aa7.slice. 
Nov 7 00:02:07.667848 kubelet[2789]: I1107 00:02:07.667789 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e3100adc-92bd-4e48-8070-033604b34aa7-host-proc-sys-kernel\") pod \"cilium-jsvvq\" (UID: \"e3100adc-92bd-4e48-8070-033604b34aa7\") " pod="kube-system/cilium-jsvvq" Nov 7 00:02:07.667848 kubelet[2789]: I1107 00:02:07.667847 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e3100adc-92bd-4e48-8070-033604b34aa7-hubble-tls\") pod \"cilium-jsvvq\" (UID: \"e3100adc-92bd-4e48-8070-033604b34aa7\") " pod="kube-system/cilium-jsvvq" Nov 7 00:02:07.668006 kubelet[2789]: I1107 00:02:07.667870 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e3100adc-92bd-4e48-8070-033604b34aa7-hostproc\") pod \"cilium-jsvvq\" (UID: \"e3100adc-92bd-4e48-8070-033604b34aa7\") " pod="kube-system/cilium-jsvvq" Nov 7 00:02:07.668006 kubelet[2789]: I1107 00:02:07.667890 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e3100adc-92bd-4e48-8070-033604b34aa7-lib-modules\") pod \"cilium-jsvvq\" (UID: \"e3100adc-92bd-4e48-8070-033604b34aa7\") " pod="kube-system/cilium-jsvvq" Nov 7 00:02:07.668006 kubelet[2789]: I1107 00:02:07.667909 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e3100adc-92bd-4e48-8070-033604b34aa7-clustermesh-secrets\") pod \"cilium-jsvvq\" (UID: \"e3100adc-92bd-4e48-8070-033604b34aa7\") " pod="kube-system/cilium-jsvvq" Nov 7 00:02:07.668006 kubelet[2789]: I1107 00:02:07.667923 2789 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e3100adc-92bd-4e48-8070-033604b34aa7-xtables-lock\") pod \"cilium-jsvvq\" (UID: \"e3100adc-92bd-4e48-8070-033604b34aa7\") " pod="kube-system/cilium-jsvvq" Nov 7 00:02:07.668006 kubelet[2789]: I1107 00:02:07.667937 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e3100adc-92bd-4e48-8070-033604b34aa7-host-proc-sys-net\") pod \"cilium-jsvvq\" (UID: \"e3100adc-92bd-4e48-8070-033604b34aa7\") " pod="kube-system/cilium-jsvvq" Nov 7 00:02:07.668006 kubelet[2789]: I1107 00:02:07.667955 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zsnc4\" (UniqueName: \"kubernetes.io/projected/e3100adc-92bd-4e48-8070-033604b34aa7-kube-api-access-zsnc4\") pod \"cilium-jsvvq\" (UID: \"e3100adc-92bd-4e48-8070-033604b34aa7\") " pod="kube-system/cilium-jsvvq" Nov 7 00:02:07.668195 kubelet[2789]: I1107 00:02:07.667968 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e3100adc-92bd-4e48-8070-033604b34aa7-cilium-run\") pod \"cilium-jsvvq\" (UID: \"e3100adc-92bd-4e48-8070-033604b34aa7\") " pod="kube-system/cilium-jsvvq" Nov 7 00:02:07.668195 kubelet[2789]: I1107 00:02:07.667983 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e3100adc-92bd-4e48-8070-033604b34aa7-bpf-maps\") pod \"cilium-jsvvq\" (UID: \"e3100adc-92bd-4e48-8070-033604b34aa7\") " pod="kube-system/cilium-jsvvq" Nov 7 00:02:07.668195 kubelet[2789]: I1107 00:02:07.667996 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/e3100adc-92bd-4e48-8070-033604b34aa7-cni-path\") pod \"cilium-jsvvq\" (UID: \"e3100adc-92bd-4e48-8070-033604b34aa7\") " pod="kube-system/cilium-jsvvq" Nov 7 00:02:07.668195 kubelet[2789]: I1107 00:02:07.668009 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e3100adc-92bd-4e48-8070-033604b34aa7-cilium-cgroup\") pod \"cilium-jsvvq\" (UID: \"e3100adc-92bd-4e48-8070-033604b34aa7\") " pod="kube-system/cilium-jsvvq" Nov 7 00:02:07.668195 kubelet[2789]: I1107 00:02:07.668023 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e3100adc-92bd-4e48-8070-033604b34aa7-cilium-config-path\") pod \"cilium-jsvvq\" (UID: \"e3100adc-92bd-4e48-8070-033604b34aa7\") " pod="kube-system/cilium-jsvvq" Nov 7 00:02:07.668195 kubelet[2789]: I1107 00:02:07.668040 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/e3100adc-92bd-4e48-8070-033604b34aa7-cilium-ipsec-secrets\") pod \"cilium-jsvvq\" (UID: \"e3100adc-92bd-4e48-8070-033604b34aa7\") " pod="kube-system/cilium-jsvvq" Nov 7 00:02:07.668337 kubelet[2789]: I1107 00:02:07.668053 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e3100adc-92bd-4e48-8070-033604b34aa7-etc-cni-netd\") pod \"cilium-jsvvq\" (UID: \"e3100adc-92bd-4e48-8070-033604b34aa7\") " pod="kube-system/cilium-jsvvq" Nov 7 00:02:07.891383 kubelet[2789]: E1107 00:02:07.891335 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 7 00:02:07.895018 containerd[1601]: time="2025-11-07T00:02:07.894967825Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jsvvq,Uid:e3100adc-92bd-4e48-8070-033604b34aa7,Namespace:kube-system,Attempt:0,}" Nov 7 00:02:07.913687 containerd[1601]: time="2025-11-07T00:02:07.913640979Z" level=info msg="connecting to shim e458e7b64be04fcda9ae35e569e9efe8c40b2fee93945cce3cb56a4caa0966d3" address="unix:///run/containerd/s/160a500022253742819f7c3c0968fe9f0160f83fd1a4bd3c37e1410a8471a0bb" namespace=k8s.io protocol=ttrpc version=3 Nov 7 00:02:07.942315 systemd[1]: Started cri-containerd-e458e7b64be04fcda9ae35e569e9efe8c40b2fee93945cce3cb56a4caa0966d3.scope - libcontainer container e458e7b64be04fcda9ae35e569e9efe8c40b2fee93945cce3cb56a4caa0966d3. Nov 7 00:02:07.966710 containerd[1601]: time="2025-11-07T00:02:07.966646739Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jsvvq,Uid:e3100adc-92bd-4e48-8070-033604b34aa7,Namespace:kube-system,Attempt:0,} returns sandbox id \"e458e7b64be04fcda9ae35e569e9efe8c40b2fee93945cce3cb56a4caa0966d3\"" Nov 7 00:02:07.967394 kubelet[2789]: E1107 00:02:07.967354 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 7 00:02:07.972714 containerd[1601]: time="2025-11-07T00:02:07.972659310Z" level=info msg="CreateContainer within sandbox \"e458e7b64be04fcda9ae35e569e9efe8c40b2fee93945cce3cb56a4caa0966d3\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Nov 7 00:02:07.981153 containerd[1601]: time="2025-11-07T00:02:07.981089453Z" level=info msg="Container 95261879504d2b84d512d5456fee168d6c49056dc3cb0f408b06924d39c599b1: CDI devices from CRI Config.CDIDevices: []" Nov 7 00:02:07.986610 containerd[1601]: time="2025-11-07T00:02:07.986563761Z" level=info msg="CreateContainer within sandbox \"e458e7b64be04fcda9ae35e569e9efe8c40b2fee93945cce3cb56a4caa0966d3\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id 
\"95261879504d2b84d512d5456fee168d6c49056dc3cb0f408b06924d39c599b1\"" Nov 7 00:02:07.987107 containerd[1601]: time="2025-11-07T00:02:07.987062098Z" level=info msg="StartContainer for \"95261879504d2b84d512d5456fee168d6c49056dc3cb0f408b06924d39c599b1\"" Nov 7 00:02:07.987965 containerd[1601]: time="2025-11-07T00:02:07.987940739Z" level=info msg="connecting to shim 95261879504d2b84d512d5456fee168d6c49056dc3cb0f408b06924d39c599b1" address="unix:///run/containerd/s/160a500022253742819f7c3c0968fe9f0160f83fd1a4bd3c37e1410a8471a0bb" protocol=ttrpc version=3 Nov 7 00:02:08.010282 systemd[1]: Started cri-containerd-95261879504d2b84d512d5456fee168d6c49056dc3cb0f408b06924d39c599b1.scope - libcontainer container 95261879504d2b84d512d5456fee168d6c49056dc3cb0f408b06924d39c599b1. Nov 7 00:02:08.040503 containerd[1601]: time="2025-11-07T00:02:08.040450252Z" level=info msg="StartContainer for \"95261879504d2b84d512d5456fee168d6c49056dc3cb0f408b06924d39c599b1\" returns successfully" Nov 7 00:02:08.051314 systemd[1]: cri-containerd-95261879504d2b84d512d5456fee168d6c49056dc3cb0f408b06924d39c599b1.scope: Deactivated successfully. 
Nov 7 00:02:08.053635 containerd[1601]: time="2025-11-07T00:02:08.053577743Z" level=info msg="received exit event container_id:\"95261879504d2b84d512d5456fee168d6c49056dc3cb0f408b06924d39c599b1\" id:\"95261879504d2b84d512d5456fee168d6c49056dc3cb0f408b06924d39c599b1\" pid:4657 exited_at:{seconds:1762473728 nanos:53292748}" Nov 7 00:02:08.053731 containerd[1601]: time="2025-11-07T00:02:08.053696847Z" level=info msg="TaskExit event in podsandbox handler container_id:\"95261879504d2b84d512d5456fee168d6c49056dc3cb0f408b06924d39c599b1\" id:\"95261879504d2b84d512d5456fee168d6c49056dc3cb0f408b06924d39c599b1\" pid:4657 exited_at:{seconds:1762473728 nanos:53292748}" Nov 7 00:02:08.329700 kubelet[2789]: E1107 00:02:08.329559 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 7 00:02:08.334805 containerd[1601]: time="2025-11-07T00:02:08.334752279Z" level=info msg="CreateContainer within sandbox \"e458e7b64be04fcda9ae35e569e9efe8c40b2fee93945cce3cb56a4caa0966d3\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Nov 7 00:02:08.342104 containerd[1601]: time="2025-11-07T00:02:08.342056826Z" level=info msg="Container bece83bc1504ff9dd63aa635937b83415d758b2a4671fa738a256fae6ae9ae8a: CDI devices from CRI Config.CDIDevices: []" Nov 7 00:02:08.359111 containerd[1601]: time="2025-11-07T00:02:08.359053639Z" level=info msg="CreateContainer within sandbox \"e458e7b64be04fcda9ae35e569e9efe8c40b2fee93945cce3cb56a4caa0966d3\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"bece83bc1504ff9dd63aa635937b83415d758b2a4671fa738a256fae6ae9ae8a\"" Nov 7 00:02:08.359947 containerd[1601]: time="2025-11-07T00:02:08.359901551Z" level=info msg="StartContainer for \"bece83bc1504ff9dd63aa635937b83415d758b2a4671fa738a256fae6ae9ae8a\"" Nov 7 00:02:08.362134 containerd[1601]: time="2025-11-07T00:02:08.362103580Z" 
level=info msg="connecting to shim bece83bc1504ff9dd63aa635937b83415d758b2a4671fa738a256fae6ae9ae8a" address="unix:///run/containerd/s/160a500022253742819f7c3c0968fe9f0160f83fd1a4bd3c37e1410a8471a0bb" protocol=ttrpc version=3 Nov 7 00:02:08.387287 systemd[1]: Started cri-containerd-bece83bc1504ff9dd63aa635937b83415d758b2a4671fa738a256fae6ae9ae8a.scope - libcontainer container bece83bc1504ff9dd63aa635937b83415d758b2a4671fa738a256fae6ae9ae8a. Nov 7 00:02:08.417162 containerd[1601]: time="2025-11-07T00:02:08.416593173Z" level=info msg="StartContainer for \"bece83bc1504ff9dd63aa635937b83415d758b2a4671fa738a256fae6ae9ae8a\" returns successfully" Nov 7 00:02:08.423385 systemd[1]: cri-containerd-bece83bc1504ff9dd63aa635937b83415d758b2a4671fa738a256fae6ae9ae8a.scope: Deactivated successfully. Nov 7 00:02:08.423868 containerd[1601]: time="2025-11-07T00:02:08.423825285Z" level=info msg="received exit event container_id:\"bece83bc1504ff9dd63aa635937b83415d758b2a4671fa738a256fae6ae9ae8a\" id:\"bece83bc1504ff9dd63aa635937b83415d758b2a4671fa738a256fae6ae9ae8a\" pid:4703 exited_at:{seconds:1762473728 nanos:423645226}" Nov 7 00:02:08.424498 containerd[1601]: time="2025-11-07T00:02:08.423943787Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bece83bc1504ff9dd63aa635937b83415d758b2a4671fa738a256fae6ae9ae8a\" id:\"bece83bc1504ff9dd63aa635937b83415d758b2a4671fa738a256fae6ae9ae8a\" pid:4703 exited_at:{seconds:1762473728 nanos:423645226}" Nov 7 00:02:09.333658 kubelet[2789]: E1107 00:02:09.333619 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 7 00:02:09.339683 containerd[1601]: time="2025-11-07T00:02:09.339620528Z" level=info msg="CreateContainer within sandbox \"e458e7b64be04fcda9ae35e569e9efe8c40b2fee93945cce3cb56a4caa0966d3\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Nov 7 00:02:09.380027 containerd[1601]: 
time="2025-11-07T00:02:09.379964865Z" level=info msg="Container 868caf8a4934ba42dd8134009ea438195bfa7f161edc1ef1b09778848027e85f: CDI devices from CRI Config.CDIDevices: []" Nov 7 00:02:09.418929 containerd[1601]: time="2025-11-07T00:02:09.418874654Z" level=info msg="CreateContainer within sandbox \"e458e7b64be04fcda9ae35e569e9efe8c40b2fee93945cce3cb56a4caa0966d3\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"868caf8a4934ba42dd8134009ea438195bfa7f161edc1ef1b09778848027e85f\"" Nov 7 00:02:09.419478 containerd[1601]: time="2025-11-07T00:02:09.419452190Z" level=info msg="StartContainer for \"868caf8a4934ba42dd8134009ea438195bfa7f161edc1ef1b09778848027e85f\"" Nov 7 00:02:09.421374 containerd[1601]: time="2025-11-07T00:02:09.421338625Z" level=info msg="connecting to shim 868caf8a4934ba42dd8134009ea438195bfa7f161edc1ef1b09778848027e85f" address="unix:///run/containerd/s/160a500022253742819f7c3c0968fe9f0160f83fd1a4bd3c37e1410a8471a0bb" protocol=ttrpc version=3 Nov 7 00:02:09.440295 systemd[1]: Started cri-containerd-868caf8a4934ba42dd8134009ea438195bfa7f161edc1ef1b09778848027e85f.scope - libcontainer container 868caf8a4934ba42dd8134009ea438195bfa7f161edc1ef1b09778848027e85f. Nov 7 00:02:09.483488 systemd[1]: cri-containerd-868caf8a4934ba42dd8134009ea438195bfa7f161edc1ef1b09778848027e85f.scope: Deactivated successfully. 
Nov 7 00:02:09.484844 containerd[1601]: time="2025-11-07T00:02:09.484806024Z" level=info msg="received exit event container_id:\"868caf8a4934ba42dd8134009ea438195bfa7f161edc1ef1b09778848027e85f\" id:\"868caf8a4934ba42dd8134009ea438195bfa7f161edc1ef1b09778848027e85f\" pid:4747 exited_at:{seconds:1762473729 nanos:484622970}" Nov 7 00:02:09.485311 containerd[1601]: time="2025-11-07T00:02:09.485282620Z" level=info msg="StartContainer for \"868caf8a4934ba42dd8134009ea438195bfa7f161edc1ef1b09778848027e85f\" returns successfully" Nov 7 00:02:09.495749 containerd[1601]: time="2025-11-07T00:02:09.495678756Z" level=info msg="TaskExit event in podsandbox handler container_id:\"868caf8a4934ba42dd8134009ea438195bfa7f161edc1ef1b09778848027e85f\" id:\"868caf8a4934ba42dd8134009ea438195bfa7f161edc1ef1b09778848027e85f\" pid:4747 exited_at:{seconds:1762473729 nanos:484622970}" Nov 7 00:02:09.510232 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-868caf8a4934ba42dd8134009ea438195bfa7f161edc1ef1b09778848027e85f-rootfs.mount: Deactivated successfully. 
Nov 7 00:02:10.337164 kubelet[2789]: E1107 00:02:10.337086 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 7 00:02:10.449040 containerd[1601]: time="2025-11-07T00:02:10.448990184Z" level=info msg="CreateContainer within sandbox \"e458e7b64be04fcda9ae35e569e9efe8c40b2fee93945cce3cb56a4caa0966d3\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Nov 7 00:02:10.854798 containerd[1601]: time="2025-11-07T00:02:10.854496289Z" level=info msg="Container 7cd547242b5c3e6138afd41694effd4a0862913c885af8c38f6fc664a4594b6e: CDI devices from CRI Config.CDIDevices: []" Nov 7 00:02:11.224380 containerd[1601]: time="2025-11-07T00:02:11.224319433Z" level=info msg="CreateContainer within sandbox \"e458e7b64be04fcda9ae35e569e9efe8c40b2fee93945cce3cb56a4caa0966d3\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"7cd547242b5c3e6138afd41694effd4a0862913c885af8c38f6fc664a4594b6e\"" Nov 7 00:02:11.224821 containerd[1601]: time="2025-11-07T00:02:11.224793323Z" level=info msg="StartContainer for \"7cd547242b5c3e6138afd41694effd4a0862913c885af8c38f6fc664a4594b6e\"" Nov 7 00:02:11.225606 containerd[1601]: time="2025-11-07T00:02:11.225573529Z" level=info msg="connecting to shim 7cd547242b5c3e6138afd41694effd4a0862913c885af8c38f6fc664a4594b6e" address="unix:///run/containerd/s/160a500022253742819f7c3c0968fe9f0160f83fd1a4bd3c37e1410a8471a0bb" protocol=ttrpc version=3 Nov 7 00:02:11.251376 systemd[1]: Started cri-containerd-7cd547242b5c3e6138afd41694effd4a0862913c885af8c38f6fc664a4594b6e.scope - libcontainer container 7cd547242b5c3e6138afd41694effd4a0862913c885af8c38f6fc664a4594b6e. Nov 7 00:02:11.280938 systemd[1]: cri-containerd-7cd547242b5c3e6138afd41694effd4a0862913c885af8c38f6fc664a4594b6e.scope: Deactivated successfully. 
Nov 7 00:02:11.281379 containerd[1601]: time="2025-11-07T00:02:11.281338263Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7cd547242b5c3e6138afd41694effd4a0862913c885af8c38f6fc664a4594b6e\" id:\"7cd547242b5c3e6138afd41694effd4a0862913c885af8c38f6fc664a4594b6e\" pid:4784 exited_at:{seconds:1762473731 nanos:281077743}" Nov 7 00:02:11.284934 containerd[1601]: time="2025-11-07T00:02:11.284878504Z" level=info msg="received exit event container_id:\"7cd547242b5c3e6138afd41694effd4a0862913c885af8c38f6fc664a4594b6e\" id:\"7cd547242b5c3e6138afd41694effd4a0862913c885af8c38f6fc664a4594b6e\" pid:4784 exited_at:{seconds:1762473731 nanos:281077743}" Nov 7 00:02:11.293122 containerd[1601]: time="2025-11-07T00:02:11.293061221Z" level=info msg="StartContainer for \"7cd547242b5c3e6138afd41694effd4a0862913c885af8c38f6fc664a4594b6e\" returns successfully" Nov 7 00:02:11.306335 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7cd547242b5c3e6138afd41694effd4a0862913c885af8c38f6fc664a4594b6e-rootfs.mount: Deactivated successfully. 
Nov 7 00:02:11.342349 kubelet[2789]: E1107 00:02:11.342313 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 7 00:02:12.186470 kubelet[2789]: E1107 00:02:12.186425 2789 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Nov 7 00:02:12.347965 kubelet[2789]: E1107 00:02:12.347926 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 7 00:02:12.351806 containerd[1601]: time="2025-11-07T00:02:12.351739722Z" level=info msg="CreateContainer within sandbox \"e458e7b64be04fcda9ae35e569e9efe8c40b2fee93945cce3cb56a4caa0966d3\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Nov 7 00:02:12.365525 containerd[1601]: time="2025-11-07T00:02:12.365477574Z" level=info msg="Container 04e5d27b75bcfae63c990bad6beb079bf9c6b973b183b39378a8752080ea57f7: CDI devices from CRI Config.CDIDevices: []" Nov 7 00:02:12.373864 containerd[1601]: time="2025-11-07T00:02:12.373811533Z" level=info msg="CreateContainer within sandbox \"e458e7b64be04fcda9ae35e569e9efe8c40b2fee93945cce3cb56a4caa0966d3\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"04e5d27b75bcfae63c990bad6beb079bf9c6b973b183b39378a8752080ea57f7\"" Nov 7 00:02:12.374195 containerd[1601]: time="2025-11-07T00:02:12.374161521Z" level=info msg="StartContainer for \"04e5d27b75bcfae63c990bad6beb079bf9c6b973b183b39378a8752080ea57f7\"" Nov 7 00:02:12.374981 containerd[1601]: time="2025-11-07T00:02:12.374937149Z" level=info msg="connecting to shim 04e5d27b75bcfae63c990bad6beb079bf9c6b973b183b39378a8752080ea57f7" address="unix:///run/containerd/s/160a500022253742819f7c3c0968fe9f0160f83fd1a4bd3c37e1410a8471a0bb" 
protocol=ttrpc version=3 Nov 7 00:02:12.412421 systemd[1]: Started cri-containerd-04e5d27b75bcfae63c990bad6beb079bf9c6b973b183b39378a8752080ea57f7.scope - libcontainer container 04e5d27b75bcfae63c990bad6beb079bf9c6b973b183b39378a8752080ea57f7. Nov 7 00:02:12.447743 containerd[1601]: time="2025-11-07T00:02:12.447499989Z" level=info msg="StartContainer for \"04e5d27b75bcfae63c990bad6beb079bf9c6b973b183b39378a8752080ea57f7\" returns successfully" Nov 7 00:02:12.517242 containerd[1601]: time="2025-11-07T00:02:12.517163062Z" level=info msg="TaskExit event in podsandbox handler container_id:\"04e5d27b75bcfae63c990bad6beb079bf9c6b973b183b39378a8752080ea57f7\" id:\"884765eb7883a0f62cc6c009bac222bd6a358556670423fc12cfda9843e9d186\" pid:4852 exited_at:{seconds:1762473732 nanos:516833863}" Nov 7 00:02:12.870178 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx)) Nov 7 00:02:13.118707 kubelet[2789]: E1107 00:02:13.118458 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 7 00:02:13.354485 kubelet[2789]: E1107 00:02:13.354442 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 7 00:02:13.373443 kubelet[2789]: I1107 00:02:13.373226 2789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-jsvvq" podStartSLOduration=6.373204094 podStartE2EDuration="6.373204094s" podCreationTimestamp="2025-11-07 00:02:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-07 00:02:13.372750852 +0000 UTC m=+91.352509152" watchObservedRunningTime="2025-11-07 00:02:13.373204094 +0000 UTC m=+91.352962384" Nov 7 00:02:14.262733 containerd[1601]: time="2025-11-07T00:02:14.262679410Z" 
level=info msg="TaskExit event in podsandbox handler container_id:\"04e5d27b75bcfae63c990bad6beb079bf9c6b973b183b39378a8752080ea57f7\" id:\"f67849043f781cc28b58301fdc005b23cd3d00a7a8c322bdd997ecc73aaeb4fd\" pid:4959 exit_status:1 exited_at:{seconds:1762473734 nanos:262354510}" Nov 7 00:02:14.356287 kubelet[2789]: E1107 00:02:14.356250 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 7 00:02:15.124180 kubelet[2789]: I1107 00:02:15.123627 2789 setters.go:618] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-07T00:02:15Z","lastTransitionTime":"2025-11-07T00:02:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Nov 7 00:02:16.254717 systemd-networkd[1517]: lxc_health: Link UP Nov 7 00:02:16.255062 systemd-networkd[1517]: lxc_health: Gained carrier Nov 7 00:02:16.395969 containerd[1601]: time="2025-11-07T00:02:16.395927444Z" level=info msg="TaskExit event in podsandbox handler container_id:\"04e5d27b75bcfae63c990bad6beb079bf9c6b973b183b39378a8752080ea57f7\" id:\"ed610420649853766cb781e34503193df2b2be2571ee7354dc8ee18806a0bda8\" pid:5406 exited_at:{seconds:1762473736 nanos:395448275}" Nov 7 00:02:17.445372 systemd-networkd[1517]: lxc_health: Gained IPv6LL Nov 7 00:02:17.892943 kubelet[2789]: E1107 00:02:17.892897 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 7 00:02:18.364561 kubelet[2789]: E1107 00:02:18.364435 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 7 
00:02:18.496494 containerd[1601]: time="2025-11-07T00:02:18.496423908Z" level=info msg="TaskExit event in podsandbox handler container_id:\"04e5d27b75bcfae63c990bad6beb079bf9c6b973b183b39378a8752080ea57f7\" id:\"1493b7f3942971d7596d91a7d5754ec2954bf6f2c1a56e85e363cabf970b42f7\" pid:5445 exited_at:{seconds:1762473738 nanos:496011363}" Nov 7 00:02:19.366511 kubelet[2789]: E1107 00:02:19.366121 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 7 00:02:20.587089 containerd[1601]: time="2025-11-07T00:02:20.587028186Z" level=info msg="TaskExit event in podsandbox handler container_id:\"04e5d27b75bcfae63c990bad6beb079bf9c6b973b183b39378a8752080ea57f7\" id:\"aa756f554b5014eace1b1d7db73c038a04ea96b21e278c6cca20779e130118fc\" pid:5480 exited_at:{seconds:1762473740 nanos:586662419}" Nov 7 00:02:22.684862 containerd[1601]: time="2025-11-07T00:02:22.684817606Z" level=info msg="TaskExit event in podsandbox handler container_id:\"04e5d27b75bcfae63c990bad6beb079bf9c6b973b183b39378a8752080ea57f7\" id:\"6c6c1fdeb5e414759138d3558adf7aaf32506f07e34d71b6d49229384dd95ac4\" pid:5506 exited_at:{seconds:1762473742 nanos:684457791}" Nov 7 00:02:22.690390 sshd[4587]: Connection closed by 10.0.0.1 port 43910 Nov 7 00:02:22.690882 sshd-session[4582]: pam_unix(sshd:session): session closed for user core Nov 7 00:02:22.695340 systemd[1]: sshd@27-10.0.0.46:22-10.0.0.1:43910.service: Deactivated successfully. Nov 7 00:02:22.697990 systemd[1]: session-28.scope: Deactivated successfully. Nov 7 00:02:22.698960 systemd-logind[1583]: Session 28 logged out. Waiting for processes to exit. Nov 7 00:02:22.700083 systemd-logind[1583]: Removed session 28.