Nov 1 00:20:29.035226 kernel: Linux version 6.6.113-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Oct 31 22:41:55 -00 2025 Nov 1 00:20:29.035252 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ade41980c48607de3d2d18dc444731ec5388853e3a75ed2d5a13ce616b36f478 Nov 1 00:20:29.035266 kernel: BIOS-provided physical RAM map: Nov 1 00:20:29.035274 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Nov 1 00:20:29.035316 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Nov 1 00:20:29.035324 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Nov 1 00:20:29.035333 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Nov 1 00:20:29.035342 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Nov 1 00:20:29.035349 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable Nov 1 00:20:29.035357 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS Nov 1 00:20:29.035369 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable Nov 1 00:20:29.035377 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved Nov 1 00:20:29.035388 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20 Nov 1 00:20:29.035396 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved Nov 1 00:20:29.035409 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data Nov 1 00:20:29.035417 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Nov 1 00:20:29.035429 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable Nov 1 00:20:29.035438 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved Nov 1 00:20:29.035446 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Nov 1 00:20:29.035454 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Nov 1 00:20:29.035463 kernel: NX (Execute Disable) protection: active Nov 1 00:20:29.035471 kernel: APIC: Static calls initialized Nov 1 00:20:29.035479 kernel: efi: EFI v2.7 by EDK II Nov 1 00:20:29.035488 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b675198 Nov 1 00:20:29.035496 kernel: SMBIOS 2.8 present. 
Nov 1 00:20:29.035505 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015 Nov 1 00:20:29.035513 kernel: Hypervisor detected: KVM Nov 1 00:20:29.035524 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Nov 1 00:20:29.035535 kernel: kvm-clock: using sched offset of 5576193721 cycles Nov 1 00:20:29.035544 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Nov 1 00:20:29.035553 kernel: tsc: Detected 2794.750 MHz processor Nov 1 00:20:29.035562 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Nov 1 00:20:29.035571 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Nov 1 00:20:29.035580 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000 Nov 1 00:20:29.035589 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Nov 1 00:20:29.035597 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Nov 1 00:20:29.035609 kernel: Using GB pages for direct mapping Nov 1 00:20:29.035617 kernel: Secure boot disabled Nov 1 00:20:29.035626 kernel: ACPI: Early table checksum verification disabled Nov 1 00:20:29.035635 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Nov 1 00:20:29.035662 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Nov 1 00:20:29.035672 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Nov 1 00:20:29.035681 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 1 00:20:29.035694 kernel: ACPI: FACS 0x000000009CBDD000 000040 Nov 1 00:20:29.035703 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 1 00:20:29.035715 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 1 00:20:29.035724 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 1 00:20:29.035734 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 1 00:20:29.035743 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) Nov 1 00:20:29.035752 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] Nov 1 00:20:29.035764 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9] Nov 1 00:20:29.035773 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Nov 1 00:20:29.035782 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] Nov 1 00:20:29.035791 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] Nov 1 00:20:29.035800 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] Nov 1 00:20:29.035809 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] Nov 1 00:20:29.035818 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] Nov 1 00:20:29.035827 kernel: No NUMA configuration found Nov 1 00:20:29.035839 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff] Nov 1 00:20:29.035851 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff] Nov 1 00:20:29.035860 kernel: Zone ranges: Nov 1 00:20:29.035869 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Nov 1 00:20:29.035878 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff] Nov 1 00:20:29.035887 kernel: Normal empty Nov 1 00:20:29.035897 kernel: Movable zone start for each node Nov 1 00:20:29.035905 kernel: Early memory node ranges Nov 1 00:20:29.035917 kernel: node 0: [mem 
0x0000000000001000-0x000000000009ffff] Nov 1 00:20:29.035926 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Nov 1 00:20:29.035935 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Nov 1 00:20:29.035947 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff] Nov 1 00:20:29.035956 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff] Nov 1 00:20:29.035965 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff] Nov 1 00:20:29.035977 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff] Nov 1 00:20:29.035986 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Nov 1 00:20:29.035995 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Nov 1 00:20:29.036005 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Nov 1 00:20:29.036014 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Nov 1 00:20:29.036023 kernel: On node 0, zone DMA: 240 pages in unavailable ranges Nov 1 00:20:29.036035 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Nov 1 00:20:29.036044 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges Nov 1 00:20:29.036053 kernel: ACPI: PM-Timer IO Port: 0x608 Nov 1 00:20:29.036062 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Nov 1 00:20:29.036071 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Nov 1 00:20:29.036081 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Nov 1 00:20:29.036093 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Nov 1 00:20:29.036102 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Nov 1 00:20:29.036111 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Nov 1 00:20:29.036123 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Nov 1 00:20:29.036132 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Nov 1 00:20:29.036141 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Nov 1 00:20:29.036151 kernel: TSC deadline timer available Nov 1 00:20:29.036160 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Nov 1 00:20:29.036169 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Nov 1 00:20:29.036178 kernel: kvm-guest: KVM setup pv remote TLB flush Nov 1 00:20:29.036187 kernel: kvm-guest: setup PV sched yield Nov 1 00:20:29.036196 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Nov 1 00:20:29.036208 kernel: Booting paravirtualized kernel on KVM Nov 1 00:20:29.036217 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Nov 1 00:20:29.036227 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Nov 1 00:20:29.036236 kernel: percpu: Embedded 58 pages/cpu s196712 r8192 d32664 u524288 Nov 1 00:20:29.036245 kernel: pcpu-alloc: s196712 r8192 d32664 u524288 alloc=1*2097152 Nov 1 00:20:29.036254 kernel: pcpu-alloc: [0] 0 1 2 3 Nov 1 00:20:29.036263 kernel: kvm-guest: PV spinlocks enabled Nov 1 00:20:29.036272 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Nov 1 00:20:29.036290 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ade41980c48607de3d2d18dc444731ec5388853e3a75ed2d5a13ce616b36f478 Nov 1 00:20:29.036305 kernel: random: crng init done Nov 1 
00:20:29.036315 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Nov 1 00:20:29.036324 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Nov 1 00:20:29.036333 kernel: Fallback order for Node 0: 0 Nov 1 00:20:29.036342 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759 Nov 1 00:20:29.036351 kernel: Policy zone: DMA32 Nov 1 00:20:29.036360 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Nov 1 00:20:29.036370 kernel: Memory: 2400600K/2567000K available (12288K kernel code, 2288K rwdata, 22748K rodata, 42884K init, 2316K bss, 166140K reserved, 0K cma-reserved) Nov 1 00:20:29.036382 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Nov 1 00:20:29.036391 kernel: ftrace: allocating 37980 entries in 149 pages Nov 1 00:20:29.036400 kernel: ftrace: allocated 149 pages with 4 groups Nov 1 00:20:29.036410 kernel: Dynamic Preempt: voluntary Nov 1 00:20:29.036419 kernel: rcu: Preemptible hierarchical RCU implementation. Nov 1 00:20:29.036439 kernel: rcu: RCU event tracing is enabled. Nov 1 00:20:29.036452 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Nov 1 00:20:29.036462 kernel: Trampoline variant of Tasks RCU enabled. Nov 1 00:20:29.036472 kernel: Rude variant of Tasks RCU enabled. Nov 1 00:20:29.036481 kernel: Tracing variant of Tasks RCU enabled. Nov 1 00:20:29.036491 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Nov 1 00:20:29.036500 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Nov 1 00:20:29.036513 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Nov 1 00:20:29.036522 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Nov 1 00:20:29.036532 kernel: Console: colour dummy device 80x25 Nov 1 00:20:29.036541 kernel: printk: console [ttyS0] enabled Nov 1 00:20:29.036554 kernel: ACPI: Core revision 20230628 Nov 1 00:20:29.036567 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Nov 1 00:20:29.036576 kernel: APIC: Switch to symmetric I/O mode setup Nov 1 00:20:29.036586 kernel: x2apic enabled Nov 1 00:20:29.036595 kernel: APIC: Switched APIC routing to: physical x2apic Nov 1 00:20:29.036605 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Nov 1 00:20:29.036615 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Nov 1 00:20:29.036624 kernel: kvm-guest: setup PV IPIs Nov 1 00:20:29.036637 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Nov 1 00:20:29.036731 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Nov 1 00:20:29.036744 kernel: Calibrating delay loop (skipped) preset value.. 
5589.50 BogoMIPS (lpj=2794750) Nov 1 00:20:29.036754 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Nov 1 00:20:29.036763 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Nov 1 00:20:29.036773 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Nov 1 00:20:29.036783 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Nov 1 00:20:29.036792 kernel: Spectre V2 : Mitigation: Retpolines Nov 1 00:20:29.036802 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Nov 1 00:20:29.036812 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Nov 1 00:20:29.036821 kernel: active return thunk: retbleed_return_thunk Nov 1 00:20:29.036833 kernel: RETBleed: Mitigation: untrained return thunk Nov 1 00:20:29.036843 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Nov 1 00:20:29.036853 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Nov 1 00:20:29.036863 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Nov 1 00:20:29.036876 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Nov 1 00:20:29.036886 kernel: active return thunk: srso_return_thunk Nov 1 00:20:29.036896 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Nov 1 00:20:29.036905 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Nov 1 00:20:29.036918 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Nov 1 00:20:29.036927 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Nov 1 00:20:29.036937 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Nov 1 00:20:29.036947 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Nov 1 00:20:29.036957 kernel: Freeing SMP alternatives memory: 32K Nov 1 00:20:29.036966 kernel: pid_max: default: 32768 minimum: 301 Nov 1 00:20:29.036976 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Nov 1 00:20:29.036985 kernel: landlock: Up and running. Nov 1 00:20:29.036995 kernel: SELinux: Initializing. Nov 1 00:20:29.037007 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Nov 1 00:20:29.037017 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Nov 1 00:20:29.037029 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Nov 1 00:20:29.037039 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Nov 1 00:20:29.037048 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Nov 1 00:20:29.037058 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Nov 1 00:20:29.037068 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Nov 1 00:20:29.037080 kernel: ... version: 0 Nov 1 00:20:29.037089 kernel: ... bit width: 48 Nov 1 00:20:29.037102 kernel: ... generic registers: 6 Nov 1 00:20:29.037111 kernel: ... value mask: 0000ffffffffffff Nov 1 00:20:29.037121 kernel: ... max period: 00007fffffffffff Nov 1 00:20:29.037130 kernel: ... fixed-purpose events: 0 Nov 1 00:20:29.037140 kernel: ... 
event mask: 000000000000003f Nov 1 00:20:29.037149 kernel: signal: max sigframe size: 1776 Nov 1 00:20:29.037159 kernel: rcu: Hierarchical SRCU implementation. Nov 1 00:20:29.037169 kernel: rcu: Max phase no-delay instances is 400. Nov 1 00:20:29.037178 kernel: smp: Bringing up secondary CPUs ... Nov 1 00:20:29.037192 kernel: smpboot: x86: Booting SMP configuration: Nov 1 00:20:29.037202 kernel: .... node #0, CPUs: #1 #2 #3 Nov 1 00:20:29.037211 kernel: smp: Brought up 1 node, 4 CPUs Nov 1 00:20:29.037221 kernel: smpboot: Max logical packages: 1 Nov 1 00:20:29.037230 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS) Nov 1 00:20:29.037240 kernel: devtmpfs: initialized Nov 1 00:20:29.037249 kernel: x86/mm: Memory block size: 128MB Nov 1 00:20:29.037259 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Nov 1 00:20:29.037269 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Nov 1 00:20:29.037290 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes) Nov 1 00:20:29.037300 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Nov 1 00:20:29.037309 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Nov 1 00:20:29.037319 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Nov 1 00:20:29.037329 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Nov 1 00:20:29.037339 kernel: pinctrl core: initialized pinctrl subsystem Nov 1 00:20:29.037348 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Nov 1 00:20:29.037358 kernel: audit: initializing netlink subsys (disabled) Nov 1 00:20:29.037368 kernel: audit: type=2000 audit(1761956427.999:1): state=initialized audit_enabled=0 res=1 Nov 1 00:20:29.037381 kernel: thermal_sys: Registered thermal governor 'step_wise' Nov 1 00:20:29.037390 kernel: thermal_sys: Registered thermal governor 'user_space' Nov 1 00:20:29.037400 kernel: cpuidle: using governor menu Nov 1 00:20:29.037409 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Nov 1 00:20:29.037419 kernel: dca service started, version 1.12.1 Nov 1 00:20:29.037429 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Nov 1 00:20:29.037438 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Nov 1 00:20:29.037448 kernel: PCI: Using configuration type 1 for base access Nov 1 00:20:29.037458 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Nov 1 00:20:29.037471 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Nov 1 00:20:29.037480 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Nov 1 00:20:29.037490 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Nov 1 00:20:29.037500 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Nov 1 00:20:29.037509 kernel: ACPI: Added _OSI(Module Device) Nov 1 00:20:29.037519 kernel: ACPI: Added _OSI(Processor Device) Nov 1 00:20:29.037529 kernel: ACPI: Added _OSI(Processor Aggregator Device) Nov 1 00:20:29.037538 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Nov 1 00:20:29.037548 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Nov 1 00:20:29.037561 kernel: ACPI: Interpreter enabled Nov 1 00:20:29.037571 kernel: ACPI: PM: (supports S0 S3 S5) Nov 1 00:20:29.037580 kernel: ACPI: Using IOAPIC for interrupt routing Nov 1 00:20:29.037590 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Nov 1 00:20:29.037599 kernel: PCI: Using E820 reservations for host bridge windows Nov 1 00:20:29.037609 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Nov 1 00:20:29.037619 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Nov 1 00:20:29.037900 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Nov 1 00:20:29.038067 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Nov 1 00:20:29.038270 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Nov 1 00:20:29.038292 kernel: PCI host bridge to bus 0000:00 Nov 1 00:20:29.038474 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Nov 1 00:20:29.038625 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Nov 1 00:20:29.038806 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Nov 1 00:20:29.038945 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Nov 1 00:20:29.039087 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Nov 1 00:20:29.039218 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window] Nov 1 00:20:29.039361 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Nov 1 00:20:29.039548 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Nov 1 00:20:29.039738 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Nov 1 00:20:29.039886 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref] Nov 1 00:20:29.040035 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff] Nov 1 00:20:29.040178 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Nov 1 00:20:29.040346 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb Nov 1 00:20:29.040492 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Nov 1 00:20:29.040862 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Nov 1 00:20:29.041012 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f] Nov 1 00:20:29.041161 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff] Nov 1 00:20:29.041322 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref] Nov 1 00:20:29.041491 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Nov 1 00:20:29.041635 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f] Nov 1 00:20:29.041799 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff] Nov 1 00:20:29.041943 kernel: pci 
0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref] Nov 1 00:20:29.042114 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Nov 1 00:20:29.042261 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff] Nov 1 00:20:29.042424 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff] Nov 1 00:20:29.042569 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref] Nov 1 00:20:29.042734 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref] Nov 1 00:20:29.042904 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Nov 1 00:20:29.043058 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Nov 1 00:20:29.043222 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Nov 1 00:20:29.043384 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df] Nov 1 00:20:29.043535 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff] Nov 1 00:20:29.043784 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Nov 1 00:20:29.043931 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf] Nov 1 00:20:29.043947 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Nov 1 00:20:29.043957 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Nov 1 00:20:29.043967 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Nov 1 00:20:29.043977 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Nov 1 00:20:29.043991 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Nov 1 00:20:29.044001 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Nov 1 00:20:29.044011 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Nov 1 00:20:29.044021 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Nov 1 00:20:29.044031 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Nov 1 00:20:29.044041 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Nov 1 00:20:29.044051 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Nov 1 00:20:29.044061 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Nov 1 00:20:29.044070 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Nov 1 00:20:29.044084 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Nov 1 00:20:29.044093 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Nov 1 00:20:29.044103 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Nov 1 00:20:29.044113 kernel: iommu: Default domain type: Translated Nov 1 00:20:29.044123 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Nov 1 00:20:29.044133 kernel: efivars: Registered efivars operations Nov 1 00:20:29.044143 kernel: PCI: Using ACPI for IRQ routing Nov 1 00:20:29.044153 kernel: PCI: pci_cache_line_size set to 64 bytes Nov 1 00:20:29.044162 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Nov 1 00:20:29.044175 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff] Nov 1 00:20:29.044185 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff] Nov 1 00:20:29.044203 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff] Nov 1 00:20:29.044380 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Nov 1 00:20:29.044526 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Nov 1 00:20:29.044719 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Nov 1 00:20:29.044735 kernel: vgaarb: loaded Nov 1 00:20:29.044746 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Nov 1 00:20:29.044756 kernel: hpet0: 3 comparators, 
64-bit 100.000000 MHz counter Nov 1 00:20:29.044772 kernel: clocksource: Switched to clocksource kvm-clock Nov 1 00:20:29.044782 kernel: VFS: Disk quotas dquot_6.6.0 Nov 1 00:20:29.044793 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Nov 1 00:20:29.044802 kernel: pnp: PnP ACPI init Nov 1 00:20:29.044977 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Nov 1 00:20:29.044992 kernel: pnp: PnP ACPI: found 6 devices Nov 1 00:20:29.045002 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Nov 1 00:20:29.045012 kernel: NET: Registered PF_INET protocol family Nov 1 00:20:29.045022 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Nov 1 00:20:29.045037 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Nov 1 00:20:29.045049 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Nov 1 00:20:29.045059 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Nov 1 00:20:29.045069 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Nov 1 00:20:29.045079 kernel: TCP: Hash tables configured (established 32768 bind 32768) Nov 1 00:20:29.045089 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Nov 1 00:20:29.045099 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Nov 1 00:20:29.045109 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Nov 1 00:20:29.045122 kernel: NET: Registered PF_XDP protocol family Nov 1 00:20:29.045266 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window Nov 1 00:20:29.045419 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref] Nov 1 00:20:29.045555 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Nov 1 00:20:29.045701 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Nov 1 00:20:29.045838 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Nov 1 00:20:29.045970 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Nov 1 00:20:29.046099 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Nov 1 00:20:29.046235 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window] Nov 1 00:20:29.046248 kernel: PCI: CLS 0 bytes, default 64 Nov 1 00:20:29.046258 kernel: Initialise system trusted keyrings Nov 1 00:20:29.046268 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Nov 1 00:20:29.046278 kernel: Key type asymmetric registered Nov 1 00:20:29.046297 kernel: Asymmetric key parser 'x509' registered Nov 1 00:20:29.046307 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Nov 1 00:20:29.046317 kernel: io scheduler mq-deadline registered Nov 1 00:20:29.046327 kernel: io scheduler kyber registered Nov 1 00:20:29.046341 kernel: io scheduler bfq registered Nov 1 00:20:29.046351 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Nov 1 00:20:29.046362 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Nov 1 00:20:29.046372 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Nov 1 00:20:29.046382 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Nov 1 00:20:29.046392 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Nov 1 00:20:29.046402 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Nov 1 00:20:29.046412 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 
1,12 Nov 1 00:20:29.046422 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Nov 1 00:20:29.046435 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Nov 1 00:20:29.046617 kernel: rtc_cmos 00:04: RTC can wake from S4 Nov 1 00:20:29.046631 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Nov 1 00:20:29.046800 kernel: rtc_cmos 00:04: registered as rtc0 Nov 1 00:20:29.046941 kernel: rtc_cmos 00:04: setting system clock to 2025-11-01T00:20:28 UTC (1761956428) Nov 1 00:20:29.047076 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Nov 1 00:20:29.047088 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Nov 1 00:20:29.047098 kernel: efifb: probing for efifb Nov 1 00:20:29.047113 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k Nov 1 00:20:29.047123 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1 Nov 1 00:20:29.047133 kernel: efifb: scrolling: redraw Nov 1 00:20:29.047142 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0 Nov 1 00:20:29.047152 kernel: Console: switching to colour frame buffer device 100x37 Nov 1 00:20:29.047162 kernel: fb0: EFI VGA frame buffer device Nov 1 00:20:29.047193 kernel: pstore: Using crash dump compression: deflate Nov 1 00:20:29.047206 kernel: pstore: Registered efi_pstore as persistent store backend Nov 1 00:20:29.047217 kernel: NET: Registered PF_INET6 protocol family Nov 1 00:20:29.047230 kernel: Segment Routing with IPv6 Nov 1 00:20:29.047240 kernel: In-situ OAM (IOAM) with IPv6 Nov 1 00:20:29.047250 kernel: NET: Registered PF_PACKET protocol family Nov 1 00:20:29.047260 kernel: Key type dns_resolver registered Nov 1 00:20:29.047270 kernel: IPI shorthand broadcast: enabled Nov 1 00:20:29.047291 kernel: sched_clock: Marking stable (960003271, 221536450)->(1248910718, -67370997) Nov 1 00:20:29.047301 kernel: registered taskstats version 1 Nov 1 00:20:29.047311 kernel: Loading compiled-in X.509 certificates Nov 1 00:20:29.047322 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.113-flatcar: cc4975b6f5d9e3149f7a95c8552b8f9120c3a1f4' Nov 1 00:20:29.047338 kernel: Key type .fscrypt registered Nov 1 00:20:29.047348 kernel: Key type fscrypt-provisioning registered Nov 1 00:20:29.047358 kernel: ima: No TPM chip found, activating TPM-bypass! Nov 1 00:20:29.047368 kernel: ima: Allocated hash algorithm: sha1 Nov 1 00:20:29.047378 kernel: ima: No architecture policies found Nov 1 00:20:29.047388 kernel: clk: Disabling unused clocks Nov 1 00:20:29.047398 kernel: Freeing unused kernel image (initmem) memory: 42884K Nov 1 00:20:29.047408 kernel: Write protecting the kernel read-only data: 36864k Nov 1 00:20:29.047418 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K Nov 1 00:20:29.047431 kernel: Run /init as init process Nov 1 00:20:29.047444 kernel: with arguments: Nov 1 00:20:29.047454 kernel: /init Nov 1 00:20:29.047463 kernel: with environment: Nov 1 00:20:29.047473 kernel: HOME=/ Nov 1 00:20:29.047483 kernel: TERM=linux Nov 1 00:20:29.047497 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Nov 1 00:20:29.047510 systemd[1]: Detected virtualization kvm. Nov 1 00:20:29.047524 systemd[1]: Detected architecture x86-64. 
Nov 1 00:20:29.047534 systemd[1]: Running in initrd. Nov 1 00:20:29.047548 systemd[1]: No hostname configured, using default hostname. Nov 1 00:20:29.047558 systemd[1]: Hostname set to . Nov 1 00:20:29.047572 systemd[1]: Initializing machine ID from VM UUID. Nov 1 00:20:29.047583 systemd[1]: Queued start job for default target initrd.target. Nov 1 00:20:29.047594 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 1 00:20:29.047605 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 1 00:20:29.047616 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Nov 1 00:20:29.047627 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 1 00:20:29.047727 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Nov 1 00:20:29.047739 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Nov 1 00:20:29.047756 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Nov 1 00:20:29.047767 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Nov 1 00:20:29.047778 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 1 00:20:29.047789 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 1 00:20:29.047800 systemd[1]: Reached target paths.target - Path Units. Nov 1 00:20:29.047811 systemd[1]: Reached target slices.target - Slice Units. Nov 1 00:20:29.047821 systemd[1]: Reached target swap.target - Swaps. Nov 1 00:20:29.047832 systemd[1]: Reached target timers.target - Timer Units. Nov 1 00:20:29.047846 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Nov 1 00:20:29.047857 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 1 00:20:29.047867 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 1 00:20:29.047878 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Nov 1 00:20:29.047889 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 1 00:20:29.047900 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 1 00:20:29.047911 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 1 00:20:29.047922 systemd[1]: Reached target sockets.target - Socket Units. Nov 1 00:20:29.047936 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Nov 1 00:20:29.047947 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 1 00:20:29.047958 systemd[1]: Finished network-cleanup.service - Network Cleanup. Nov 1 00:20:29.047968 systemd[1]: Starting systemd-fsck-usr.service... Nov 1 00:20:29.047979 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 1 00:20:29.047995 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 1 00:20:29.048007 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 1 00:20:29.048018 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Nov 1 00:20:29.048031 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 1 00:20:29.048048 systemd[1]: Finished systemd-fsck-usr.service. 
Nov 1 00:20:29.048059 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 1 00:20:29.048101 systemd-journald[193]: Collecting audit messages is disabled. Nov 1 00:20:29.048132 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 1 00:20:29.048143 systemd-journald[193]: Journal started Nov 1 00:20:29.048167 systemd-journald[193]: Runtime Journal (/run/log/journal/8dc62e7212ba4cf1867b18bca4398d9e) is 6.0M, max 48.3M, 42.2M free. Nov 1 00:20:29.060165 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 1 00:20:29.054248 systemd-modules-load[194]: Inserted module 'overlay' Nov 1 00:20:29.065217 systemd[1]: Started systemd-journald.service - Journal Service. Nov 1 00:20:29.067662 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 1 00:20:29.097685 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Nov 1 00:20:29.098857 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 1 00:20:29.128130 kernel: Bridge firewalling registered Nov 1 00:20:29.133250 systemd-modules-load[194]: Inserted module 'br_netfilter' Nov 1 00:20:29.135976 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 1 00:20:29.140784 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 1 00:20:29.145341 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 1 00:20:29.150796 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 1 00:20:29.159842 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Nov 1 00:20:29.162562 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 1 00:20:29.165857 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 1 00:20:29.174310 dracut-cmdline[221]: dracut-dracut-053 Nov 1 00:20:29.177992 dracut-cmdline[221]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ade41980c48607de3d2d18dc444731ec5388853e3a75ed2d5a13ce616b36f478 Nov 1 00:20:29.181290 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 1 00:20:29.200853 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 1 00:20:29.236529 systemd-resolved[250]: Positive Trust Anchors: Nov 1 00:20:29.236544 systemd-resolved[250]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 1 00:20:29.236576 systemd-resolved[250]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 1 00:20:29.239295 systemd-resolved[250]: Defaulting to hostname 'linux'. Nov 1 00:20:29.240709 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 1 00:20:29.243067 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 1 00:20:29.280676 kernel: SCSI subsystem initialized Nov 1 00:20:29.290670 kernel: Loading iSCSI transport class v2.0-870. Nov 1 00:20:29.302683 kernel: iscsi: registered transport (tcp) Nov 1 00:20:29.326763 kernel: iscsi: registered transport (qla4xxx) Nov 1 00:20:29.326852 kernel: QLogic iSCSI HBA Driver Nov 1 00:20:29.417912 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Nov 1 00:20:29.443986 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Nov 1 00:20:29.531702 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Nov 1 00:20:29.531790 kernel: device-mapper: uevent: version 1.0.3 Nov 1 00:20:29.531808 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Nov 1 00:20:29.664383 kernel: raid6: avx2x4 gen() 16384 MB/s Nov 1 00:20:29.684298 kernel: raid6: avx2x2 gen() 15225 MB/s Nov 1 00:20:29.705321 kernel: raid6: avx2x1 gen() 12684 MB/s Nov 1 00:20:29.705403 kernel: raid6: using algorithm avx2x4 gen() 16384 MB/s Nov 1 00:20:29.723211 kernel: raid6: .... xor() 5241 MB/s, rmw enabled Nov 1 00:20:29.723296 kernel: raid6: using avx2x2 recovery algorithm Nov 1 00:20:29.764688 kernel: xor: automatically using best checksumming function avx Nov 1 00:20:30.060686 kernel: Btrfs loaded, zoned=no, fsverity=no Nov 1 00:20:30.081692 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Nov 1 00:20:30.097890 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 1 00:20:30.114882 systemd-udevd[415]: Using default interface naming scheme 'v255'. Nov 1 00:20:30.121751 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 1 00:20:30.141363 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Nov 1 00:20:30.170100 dracut-pre-trigger[425]: rd.md=0: removing MD RAID activation Nov 1 00:20:30.245559 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Nov 1 00:20:30.265856 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 1 00:20:30.445055 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 1 00:20:30.494540 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Nov 1 00:20:30.552071 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Nov 1 00:20:30.560389 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Nov 1 00:20:30.567322 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 1 00:20:30.571750 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 1 00:20:30.631699 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Nov 1 00:20:30.675355 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Nov 1 00:20:30.700534 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Nov 1 00:20:30.702755 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Nov 1 00:20:30.710394 kernel: cryptd: max_cpu_qlen set to 1000 Nov 1 00:20:30.710089 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 1 00:20:30.726503 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Nov 1 00:20:30.726531 kernel: GPT:9289727 != 19775487 Nov 1 00:20:30.726544 kernel: GPT:Alternate GPT header not at the end of the disk. Nov 1 00:20:30.726557 kernel: GPT:9289727 != 19775487 Nov 1 00:20:30.726569 kernel: GPT: Use GNU Parted to correct GPT errors. Nov 1 00:20:30.726582 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 1 00:20:30.710359 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 1 00:20:30.729340 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 1 00:20:30.731513 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 1 00:20:30.734137 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 1 00:20:30.738272 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Nov 1 00:20:30.754010 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 1 00:20:30.774134 kernel: libata version 3.00 loaded. Nov 1 00:20:30.802690 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 1 00:20:30.834179 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 1 00:20:30.855281 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (458) Nov 1 00:20:30.855362 kernel: ahci 0000:00:1f.2: version 3.0 Nov 1 00:20:30.875676 kernel: BTRFS: device fsid 5d5360dd-ce7d-46d0-bc66-772f2084023b devid 1 transid 34 /dev/vda3 scanned by (udev-worker) (457) Nov 1 00:20:30.875756 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Nov 1 00:20:30.885877 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 1 00:20:30.937120 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Nov 1 00:20:30.937407 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Nov 1 00:20:30.942476 kernel: scsi host0: ahci Nov 1 00:20:30.942775 kernel: AVX2 version of gcm_enc/dec engaged. 
Nov 1 00:20:30.942810 kernel: scsi host1: ahci Nov 1 00:20:30.943027 kernel: scsi host2: ahci Nov 1 00:20:30.943218 kernel: scsi host3: ahci Nov 1 00:20:30.943478 kernel: scsi host4: ahci Nov 1 00:20:30.947539 kernel: scsi host5: ahci Nov 1 00:20:30.947757 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 Nov 1 00:20:30.949663 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 Nov 1 00:20:30.949687 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 Nov 1 00:20:30.950727 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 Nov 1 00:20:30.956479 kernel: AES CTR mode by8 optimization enabled Nov 1 00:20:30.952156 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Nov 1 00:20:30.983531 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 Nov 1 00:20:30.983577 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 Nov 1 00:20:30.985329 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Nov 1 00:20:31.018817 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Nov 1 00:20:31.039136 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Nov 1 00:20:31.080825 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Nov 1 00:20:31.125952 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Nov 1 00:20:31.166828 disk-uuid[564]: Primary Header is updated. Nov 1 00:20:31.166828 disk-uuid[564]: Secondary Entries is updated. Nov 1 00:20:31.166828 disk-uuid[564]: Secondary Header is updated. Nov 1 00:20:31.188266 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 1 00:20:31.206716 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 1 00:20:31.299289 kernel: ata2: SATA link down (SStatus 0 SControl 300) Nov 1 00:20:31.299360 kernel: ata4: SATA link down (SStatus 0 SControl 300) Nov 1 00:20:31.299377 kernel: ata5: SATA link down (SStatus 0 SControl 300) Nov 1 00:20:31.309091 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Nov 1 00:20:31.317667 kernel: ata6: SATA link down (SStatus 0 SControl 300) Nov 1 00:20:31.317729 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Nov 1 00:20:31.317745 kernel: ata3.00: applying bridge limits Nov 1 00:20:31.339431 kernel: ata3.00: configured for UDMA/100 Nov 1 00:20:31.374833 kernel: ata1: SATA link down (SStatus 0 SControl 300) Nov 1 00:20:31.386792 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Nov 1 00:20:31.518245 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Nov 1 00:20:31.518718 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Nov 1 00:20:31.545069 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Nov 1 00:20:32.212004 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 1 00:20:32.212079 disk-uuid[565]: The operation has completed successfully. Nov 1 00:20:32.365175 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 1 00:20:32.366259 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Nov 1 00:20:32.435010 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... 
Nov 1 00:20:32.449012 sh[592]: Success Nov 1 00:20:32.527926 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Nov 1 00:20:32.610064 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Nov 1 00:20:32.664944 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Nov 1 00:20:32.690199 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Nov 1 00:20:32.726296 kernel: BTRFS info (device dm-0): first mount of filesystem 5d5360dd-ce7d-46d0-bc66-772f2084023b Nov 1 00:20:32.726368 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Nov 1 00:20:32.726384 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Nov 1 00:20:32.726399 kernel: BTRFS info (device dm-0): disabling log replay at mount time Nov 1 00:20:32.730688 kernel: BTRFS info (device dm-0): using free space tree Nov 1 00:20:32.755319 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Nov 1 00:20:32.761039 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Nov 1 00:20:32.775902 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Nov 1 00:20:32.781933 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Nov 1 00:20:32.839125 kernel: BTRFS info (device vda6): first mount of filesystem 92f9034d-7d56-482a-b71a-15e476525571 Nov 1 00:20:32.839229 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 1 00:20:32.839245 kernel: BTRFS info (device vda6): using free space tree Nov 1 00:20:32.853979 kernel: BTRFS info (device vda6): auto enabling async discard Nov 1 00:20:32.876104 systemd[1]: mnt-oem.mount: Deactivated successfully. Nov 1 00:20:32.882049 kernel: BTRFS info (device vda6): last unmount of filesystem 92f9034d-7d56-482a-b71a-15e476525571 Nov 1 00:20:32.914025 systemd[1]: Finished ignition-setup.service - Ignition (setup). Nov 1 00:20:32.930118 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Nov 1 00:20:33.032287 ignition[694]: Ignition 2.19.0 Nov 1 00:20:33.032718 ignition[694]: Stage: fetch-offline Nov 1 00:20:33.032779 ignition[694]: no configs at "/usr/lib/ignition/base.d" Nov 1 00:20:33.032804 ignition[694]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 1 00:20:33.032979 ignition[694]: parsed url from cmdline: "" Nov 1 00:20:33.032988 ignition[694]: no config URL provided Nov 1 00:20:33.033001 ignition[694]: reading system config file "/usr/lib/ignition/user.ign" Nov 1 00:20:33.033023 ignition[694]: no config at "/usr/lib/ignition/user.ign" Nov 1 00:20:33.033081 ignition[694]: op(1): [started] loading QEMU firmware config module Nov 1 00:20:33.033118 ignition[694]: op(1): executing: "modprobe" "qemu_fw_cfg" Nov 1 00:20:33.047883 ignition[694]: op(1): [finished] loading QEMU firmware config module Nov 1 00:20:33.066036 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 1 00:20:33.082036 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 1 00:20:33.132109 systemd-networkd[781]: lo: Link UP Nov 1 00:20:33.132121 systemd-networkd[781]: lo: Gained carrier Nov 1 00:20:33.138816 systemd-networkd[781]: Enumeration completed Nov 1 00:20:33.141034 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 1 00:20:33.141250 systemd[1]: Reached target network.target - Network. 
Nov 1 00:20:33.147661 systemd-networkd[781]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 1 00:20:33.147666 systemd-networkd[781]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 1 00:20:33.154366 ignition[694]: parsing config with SHA512: 1b05ef72ad06720ae54a6176cb6e4b3be36c9d4b2be28f5c633dad66e2f488aa6b026fa8c98103b92b515067426fc66900e58da5ea1c66b315e7d88e16300c08 Nov 1 00:20:33.148785 systemd-networkd[781]: eth0: Link UP Nov 1 00:20:33.160830 ignition[694]: fetch-offline: fetch-offline passed Nov 1 00:20:33.148790 systemd-networkd[781]: eth0: Gained carrier Nov 1 00:20:33.160907 ignition[694]: Ignition finished successfully Nov 1 00:20:33.148800 systemd-networkd[781]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 1 00:20:33.160293 unknown[694]: fetched base config from "system" Nov 1 00:20:33.160308 unknown[694]: fetched user config from "qemu" Nov 1 00:20:33.178926 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Nov 1 00:20:33.179349 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Nov 1 00:20:33.191780 systemd-networkd[781]: eth0: DHCPv4 address 10.0.0.76/16, gateway 10.0.0.1 acquired from 10.0.0.1 Nov 1 00:20:33.192029 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Nov 1 00:20:33.215816 ignition[784]: Ignition 2.19.0 Nov 1 00:20:33.215828 ignition[784]: Stage: kargs Nov 1 00:20:33.216011 ignition[784]: no configs at "/usr/lib/ignition/base.d" Nov 1 00:20:33.216024 ignition[784]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 1 00:20:33.217050 ignition[784]: kargs: kargs passed Nov 1 00:20:33.217105 ignition[784]: Ignition finished successfully Nov 1 00:20:33.223297 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Nov 1 00:20:33.244042 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Nov 1 00:20:33.262226 ignition[794]: Ignition 2.19.0 Nov 1 00:20:33.262240 ignition[794]: Stage: disks Nov 1 00:20:33.262469 ignition[794]: no configs at "/usr/lib/ignition/base.d" Nov 1 00:20:33.262486 ignition[794]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 1 00:20:33.268030 systemd[1]: Finished ignition-disks.service - Ignition (disks). Nov 1 00:20:33.263492 ignition[794]: disks: disks passed Nov 1 00:20:33.269069 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Nov 1 00:20:33.263545 ignition[794]: Ignition finished successfully Nov 1 00:20:33.277257 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 1 00:20:33.281480 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 1 00:20:33.284551 systemd[1]: Reached target sysinit.target - System Initialization. Nov 1 00:20:33.289257 systemd[1]: Reached target basic.target - Basic System. Nov 1 00:20:33.313099 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Nov 1 00:20:33.342052 systemd-fsck[806]: ROOT: clean, 14/553520 files, 52654/553472 blocks Nov 1 00:20:33.357108 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Nov 1 00:20:33.371829 systemd[1]: Mounting sysroot.mount - /sysroot... 
Nov 1 00:20:33.521263 kernel: EXT4-fs (vda9): mounted filesystem cb9d31b8-5e00-461c-b45e-c304d1f8091c r/w with ordered data mode. Quota mode: none. Nov 1 00:20:33.521966 systemd[1]: Mounted sysroot.mount - /sysroot. Nov 1 00:20:33.524734 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Nov 1 00:20:33.540325 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 1 00:20:33.547518 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Nov 1 00:20:33.551301 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Nov 1 00:20:33.551360 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 1 00:20:33.551390 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Nov 1 00:20:33.585501 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (814) Nov 1 00:20:33.585534 kernel: BTRFS info (device vda6): first mount of filesystem 92f9034d-7d56-482a-b71a-15e476525571 Nov 1 00:20:33.585548 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 1 00:20:33.585561 kernel: BTRFS info (device vda6): using free space tree Nov 1 00:20:33.568121 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Nov 1 00:20:33.588835 kernel: BTRFS info (device vda6): auto enabling async discard Nov 1 00:20:33.594453 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Nov 1 00:20:33.609115 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 1 00:20:33.657186 initrd-setup-root[838]: cut: /sysroot/etc/passwd: No such file or directory Nov 1 00:20:33.663397 initrd-setup-root[845]: cut: /sysroot/etc/group: No such file or directory Nov 1 00:20:33.670129 initrd-setup-root[852]: cut: /sysroot/etc/shadow: No such file or directory Nov 1 00:20:33.678521 initrd-setup-root[859]: cut: /sysroot/etc/gshadow: No such file or directory Nov 1 00:20:33.869847 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Nov 1 00:20:33.892862 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Nov 1 00:20:33.906553 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Nov 1 00:20:33.923493 systemd[1]: sysroot-oem.mount: Deactivated successfully. Nov 1 00:20:33.928402 kernel: BTRFS info (device vda6): last unmount of filesystem 92f9034d-7d56-482a-b71a-15e476525571 Nov 1 00:20:33.974810 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Nov 1 00:20:33.988378 ignition[927]: INFO : Ignition 2.19.0 Nov 1 00:20:33.988378 ignition[927]: INFO : Stage: mount Nov 1 00:20:33.991463 ignition[927]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 1 00:20:33.991463 ignition[927]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 1 00:20:33.997070 ignition[927]: INFO : mount: mount passed Nov 1 00:20:33.998898 ignition[927]: INFO : Ignition finished successfully Nov 1 00:20:34.003230 systemd[1]: Finished ignition-mount.service - Ignition (mount). Nov 1 00:20:34.021115 systemd[1]: Starting ignition-files.service - Ignition (files)... Nov 1 00:20:34.043848 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Nov 1 00:20:34.073706 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (940) Nov 1 00:20:34.077861 kernel: BTRFS info (device vda6): first mount of filesystem 92f9034d-7d56-482a-b71a-15e476525571 Nov 1 00:20:34.077897 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 1 00:20:34.077910 kernel: BTRFS info (device vda6): using free space tree Nov 1 00:20:34.089687 kernel: BTRFS info (device vda6): auto enabling async discard Nov 1 00:20:34.093177 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 1 00:20:34.132790 ignition[957]: INFO : Ignition 2.19.0 Nov 1 00:20:34.132790 ignition[957]: INFO : Stage: files Nov 1 00:20:34.136878 ignition[957]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 1 00:20:34.136878 ignition[957]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 1 00:20:34.136878 ignition[957]: DEBUG : files: compiled without relabeling support, skipping Nov 1 00:20:34.136878 ignition[957]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 1 00:20:34.136878 ignition[957]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 1 00:20:34.148349 ignition[957]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 1 00:20:34.148349 ignition[957]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 1 00:20:34.148349 ignition[957]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 1 00:20:34.148349 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Nov 1 00:20:34.148349 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Nov 1 00:20:34.148349 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Nov 1 00:20:34.148349 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Nov 1 00:20:34.142593 unknown[957]: wrote ssh authorized keys file for user: core Nov 1 00:20:34.202934 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Nov 1 00:20:34.394297 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Nov 1 00:20:34.394297 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Nov 1 00:20:34.415821 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Nov 1 00:20:34.637496 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Nov 1 00:20:34.790067 systemd-networkd[781]: eth0: Gained IPv6LL Nov 1 00:20:34.874982 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Nov 1 00:20:34.878591 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh" Nov 1 00:20:34.878591 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh" Nov 1 00:20:34.878591 ignition[957]: INFO : 
files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 1 00:20:34.878591 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 1 00:20:34.878591 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 1 00:20:34.878591 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 1 00:20:34.878591 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 1 00:20:34.878591 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 1 00:20:34.878591 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 1 00:20:34.878591 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 1 00:20:34.878591 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 1 00:20:34.878591 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 1 00:20:34.878591 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 1 00:20:34.878591 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Nov 1 00:20:35.122827 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK Nov 1 00:20:35.994150 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 1 00:20:35.994150 ignition[957]: INFO : files: op(d): [started] processing unit "containerd.service" Nov 1 00:20:36.002633 ignition[957]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Nov 1 00:20:36.002633 ignition[957]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Nov 1 00:20:36.002633 ignition[957]: INFO : files: op(d): [finished] processing unit "containerd.service" Nov 1 00:20:36.002633 ignition[957]: INFO : files: op(f): [started] processing unit "prepare-helm.service" Nov 1 00:20:36.002633 ignition[957]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 1 00:20:36.002633 ignition[957]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 1 00:20:36.002633 ignition[957]: INFO : files: op(f): [finished] processing unit "prepare-helm.service" Nov 1 00:20:36.002633 ignition[957]: INFO : files: op(11): [started] processing unit "coreos-metadata.service" Nov 1 
00:20:36.002633 ignition[957]: INFO : files: op(11): op(12): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Nov 1 00:20:36.002633 ignition[957]: INFO : files: op(11): op(12): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Nov 1 00:20:36.002633 ignition[957]: INFO : files: op(11): [finished] processing unit "coreos-metadata.service" Nov 1 00:20:36.002633 ignition[957]: INFO : files: op(13): [started] setting preset to disabled for "coreos-metadata.service" Nov 1 00:20:36.049967 ignition[957]: INFO : files: op(13): op(14): [started] removing enablement symlink(s) for "coreos-metadata.service" Nov 1 00:20:36.054465 ignition[957]: INFO : files: op(13): op(14): [finished] removing enablement symlink(s) for "coreos-metadata.service" Nov 1 00:20:36.057827 ignition[957]: INFO : files: op(13): [finished] setting preset to disabled for "coreos-metadata.service" Nov 1 00:20:36.057827 ignition[957]: INFO : files: op(15): [started] setting preset to enabled for "prepare-helm.service" Nov 1 00:20:36.057827 ignition[957]: INFO : files: op(15): [finished] setting preset to enabled for "prepare-helm.service" Nov 1 00:20:36.057827 ignition[957]: INFO : files: createResultFile: createFiles: op(16): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 1 00:20:36.057827 ignition[957]: INFO : files: createResultFile: createFiles: op(16): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 1 00:20:36.057827 ignition[957]: INFO : files: files passed Nov 1 00:20:36.057827 ignition[957]: INFO : Ignition finished successfully Nov 1 00:20:36.060102 systemd[1]: Finished ignition-files.service - Ignition (files). Nov 1 00:20:36.078242 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 1 00:20:36.082931 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Nov 1 00:20:36.086217 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 1 00:20:36.086396 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Nov 1 00:20:36.102774 initrd-setup-root-after-ignition[985]: grep: /sysroot/oem/oem-release: No such file or directory Nov 1 00:20:36.108103 initrd-setup-root-after-ignition[987]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 1 00:20:36.108103 initrd-setup-root-after-ignition[987]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 1 00:20:36.114332 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 1 00:20:36.120681 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 1 00:20:36.129657 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Nov 1 00:20:36.152070 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Nov 1 00:20:36.186782 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 1 00:20:36.187037 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Nov 1 00:20:36.193397 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Nov 1 00:20:36.196457 systemd[1]: Reached target initrd.target - Initrd Default Target. Nov 1 00:20:36.200315 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. 
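[editor's note] The preset operations in the files stage above ("setting preset to disabled", "removing enablement symlink(s)") come down to creating or removing symlinks under <target>.wants/ directories inside the sysroot. A purely illustrative sketch of that mechanism, assuming the conventional /sysroot/etc/systemd/system layout (assumption; point root at "/" on a live system):

```python
# Illustrative sketch of what "enablement symlink(s)" means in the Ignition
# output above: a unit is enabled when a symlink to it exists under some
# <target>.wants/ directory. The /sysroot prefix is an assumption matching
# the initrd context of this log.
from pathlib import Path

def enabled_units(root: str = "/sysroot") -> dict[str, list[str]]:
    links: dict[str, list[str]] = {}
    for wants_dir in Path(root, "etc/systemd/system").glob("*.wants"):
        for link in wants_dir.iterdir():
            if link.is_symlink():
                links.setdefault(link.name, []).append(str(wants_dir))
    return links

if __name__ == "__main__":
    for unit, dirs in sorted(enabled_units().items()):
        print(f"{unit}: {', '.join(dirs)}")
```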
Nov 1 00:20:36.222377 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Nov 1 00:20:36.244563 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 1 00:20:36.257994 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Nov 1 00:20:36.291817 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Nov 1 00:20:36.297305 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 1 00:20:36.303411 systemd[1]: Stopped target timers.target - Timer Units. Nov 1 00:20:36.309399 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 1 00:20:36.311592 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 1 00:20:36.320236 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 1 00:20:36.322864 systemd[1]: Stopped target basic.target - Basic System. Nov 1 00:20:36.333176 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Nov 1 00:20:36.338033 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Nov 1 00:20:36.343851 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 1 00:20:36.348715 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Nov 1 00:20:36.354465 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Nov 1 00:20:36.355206 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 1 00:20:36.379287 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 1 00:20:36.385361 systemd[1]: Stopped target swap.target - Swaps. Nov 1 00:20:36.387224 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 1 00:20:36.387435 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 1 00:20:36.394916 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 1 00:20:36.398222 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 1 00:20:36.400575 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 1 00:20:36.400853 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 1 00:20:36.405783 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 1 00:20:36.405989 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 1 00:20:36.413378 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 1 00:20:36.413593 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 1 00:20:36.416231 systemd[1]: Stopped target paths.target - Path Units. Nov 1 00:20:36.419904 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 1 00:20:36.424745 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 1 00:20:36.425796 systemd[1]: Stopped target slices.target - Slice Units. Nov 1 00:20:36.431751 systemd[1]: Stopped target sockets.target - Socket Units. Nov 1 00:20:36.433419 systemd[1]: iscsid.socket: Deactivated successfully. Nov 1 00:20:36.433554 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 1 00:20:36.438509 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 1 00:20:36.438630 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 1 00:20:36.447692 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. 
Nov 1 00:20:36.447874 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 1 00:20:36.514121 systemd[1]: ignition-files.service: Deactivated successfully. Nov 1 00:20:36.514321 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 1 00:20:36.535979 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Nov 1 00:20:36.536159 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 1 00:20:36.536339 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 1 00:20:36.550893 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 1 00:20:36.553279 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 1 00:20:36.553543 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 1 00:20:36.571723 ignition[1011]: INFO : Ignition 2.19.0 Nov 1 00:20:36.571723 ignition[1011]: INFO : Stage: umount Nov 1 00:20:36.571723 ignition[1011]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 1 00:20:36.571723 ignition[1011]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 1 00:20:36.571723 ignition[1011]: INFO : umount: umount passed Nov 1 00:20:36.571723 ignition[1011]: INFO : Ignition finished successfully Nov 1 00:20:36.559732 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 1 00:20:36.559922 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Nov 1 00:20:36.565834 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 1 00:20:36.565998 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 1 00:20:36.572270 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 1 00:20:36.572420 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Nov 1 00:20:36.575242 systemd[1]: Stopped target network.target - Network. Nov 1 00:20:36.576322 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 1 00:20:36.576405 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 1 00:20:36.579288 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 1 00:20:36.579349 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 1 00:20:36.580350 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 1 00:20:36.580474 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 1 00:20:36.581367 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 1 00:20:36.581444 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Nov 1 00:20:36.582230 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 1 00:20:36.582978 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 1 00:20:36.625796 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 1 00:20:36.635794 systemd-networkd[781]: eth0: DHCPv6 lease lost Nov 1 00:20:36.636623 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 1 00:20:36.640867 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 1 00:20:36.641102 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 1 00:20:36.647810 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 1 00:20:36.647869 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 1 00:20:36.655795 systemd[1]: Stopping network-cleanup.service - Network Cleanup... 
Nov 1 00:20:36.657561 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 1 00:20:36.659380 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 1 00:20:36.666221 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 1 00:20:36.666335 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 1 00:20:36.669936 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 1 00:20:36.671635 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 1 00:20:36.675699 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 1 00:20:36.677459 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 1 00:20:36.686038 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 1 00:20:36.694112 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 1 00:20:36.695225 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 1 00:20:36.695402 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 1 00:20:36.707209 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 1 00:20:36.707464 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 1 00:20:36.713807 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 1 00:20:36.714008 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 1 00:20:36.717426 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 1 00:20:36.717554 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 1 00:20:36.721093 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 1 00:20:36.721141 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 1 00:20:36.725268 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 1 00:20:36.725363 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 1 00:20:36.731448 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 1 00:20:36.731575 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 1 00:20:36.735891 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 1 00:20:36.735991 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 1 00:20:36.740532 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 1 00:20:36.740636 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 1 00:20:36.754875 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 1 00:20:36.758317 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 1 00:20:36.758401 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 1 00:20:36.762391 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 1 00:20:36.762474 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 1 00:20:36.766874 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 1 00:20:36.767029 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 1 00:20:36.772765 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 1 00:20:36.786869 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 1 00:20:36.795132 systemd[1]: Switching root. 
Nov 1 00:20:36.831789 systemd-journald[193]: Journal stopped Nov 1 00:20:39.190205 systemd-journald[193]: Received SIGTERM from PID 1 (systemd). Nov 1 00:20:39.190300 kernel: SELinux: policy capability network_peer_controls=1 Nov 1 00:20:39.190321 kernel: SELinux: policy capability open_perms=1 Nov 1 00:20:39.190335 kernel: SELinux: policy capability extended_socket_class=1 Nov 1 00:20:39.190356 kernel: SELinux: policy capability always_check_network=0 Nov 1 00:20:39.190371 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 1 00:20:39.190386 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 1 00:20:39.190399 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 1 00:20:39.190413 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 1 00:20:39.190431 kernel: audit: type=1403 audit(1761956437.637:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 1 00:20:39.190452 systemd[1]: Successfully loaded SELinux policy in 51.304ms. Nov 1 00:20:39.190483 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 15.353ms. Nov 1 00:20:39.190500 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Nov 1 00:20:39.190515 systemd[1]: Detected virtualization kvm. Nov 1 00:20:39.190530 systemd[1]: Detected architecture x86-64. Nov 1 00:20:39.190544 systemd[1]: Detected first boot. Nov 1 00:20:39.190559 systemd[1]: Initializing machine ID from VM UUID. Nov 1 00:20:39.190574 zram_generator::config[1078]: No configuration found. Nov 1 00:20:39.190599 systemd[1]: Populated /etc with preset unit settings. Nov 1 00:20:39.190614 systemd[1]: Queued start job for default target multi-user.target. Nov 1 00:20:39.190629 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Nov 1 00:20:39.192689 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Nov 1 00:20:39.192712 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Nov 1 00:20:39.192727 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Nov 1 00:20:39.192742 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Nov 1 00:20:39.192758 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Nov 1 00:20:39.192784 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Nov 1 00:20:39.192799 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Nov 1 00:20:39.192814 systemd[1]: Created slice user.slice - User and Session Slice. Nov 1 00:20:39.192830 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 1 00:20:39.192846 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 1 00:20:39.192861 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Nov 1 00:20:39.192876 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Nov 1 00:20:39.192892 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. 
Nov 1 00:20:39.192907 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 1 00:20:39.192930 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Nov 1 00:20:39.192945 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 1 00:20:39.192960 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Nov 1 00:20:39.192975 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 1 00:20:39.192990 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 1 00:20:39.193014 systemd[1]: Reached target slices.target - Slice Units. Nov 1 00:20:39.193029 systemd[1]: Reached target swap.target - Swaps. Nov 1 00:20:39.193044 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Nov 1 00:20:39.193064 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Nov 1 00:20:39.193080 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 1 00:20:39.193098 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Nov 1 00:20:39.193113 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 1 00:20:39.193128 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 1 00:20:39.193143 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 1 00:20:39.193158 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Nov 1 00:20:39.193174 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Nov 1 00:20:39.193190 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Nov 1 00:20:39.193208 systemd[1]: Mounting media.mount - External Media Directory... Nov 1 00:20:39.193223 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:20:39.193238 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Nov 1 00:20:39.193253 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Nov 1 00:20:39.193267 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Nov 1 00:20:39.193283 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Nov 1 00:20:39.193304 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 1 00:20:39.193320 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 1 00:20:39.193335 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Nov 1 00:20:39.193356 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 1 00:20:39.193371 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 1 00:20:39.193387 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 1 00:20:39.193403 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Nov 1 00:20:39.193419 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 1 00:20:39.193434 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 1 00:20:39.193449 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. 
Nov 1 00:20:39.193465 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Nov 1 00:20:39.193486 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 1 00:20:39.193501 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 1 00:20:39.193516 kernel: fuse: init (API version 7.39) Nov 1 00:20:39.193531 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 1 00:20:39.193545 kernel: loop: module loaded Nov 1 00:20:39.193560 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Nov 1 00:20:39.193575 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 1 00:20:39.193591 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:20:39.193606 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Nov 1 00:20:39.193624 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Nov 1 00:20:39.193663 kernel: ACPI: bus type drm_connector registered Nov 1 00:20:39.193678 systemd[1]: Mounted media.mount - External Media Directory. Nov 1 00:20:39.193693 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Nov 1 00:20:39.193708 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Nov 1 00:20:39.193723 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Nov 1 00:20:39.193738 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 1 00:20:39.193753 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 1 00:20:39.193774 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Nov 1 00:20:39.193790 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 00:20:39.193805 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 1 00:20:39.193820 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 1 00:20:39.193835 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 1 00:20:39.193854 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 00:20:39.193868 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 1 00:20:39.193884 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 1 00:20:39.193898 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Nov 1 00:20:39.193914 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 00:20:39.193929 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 1 00:20:39.193946 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Nov 1 00:20:39.193962 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Nov 1 00:20:39.193978 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 1 00:20:39.194006 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 1 00:20:39.194022 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Nov 1 00:20:39.194038 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Nov 1 00:20:39.194053 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
Nov 1 00:20:39.194089 systemd-journald[1152]: Collecting audit messages is disabled. Nov 1 00:20:39.194116 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 1 00:20:39.194135 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 1 00:20:39.194150 systemd-journald[1152]: Journal started Nov 1 00:20:39.203565 systemd-journald[1152]: Runtime Journal (/run/log/journal/8dc62e7212ba4cf1867b18bca4398d9e) is 6.0M, max 48.3M, 42.2M free. Nov 1 00:20:39.203621 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Nov 1 00:20:39.203663 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 1 00:20:39.201295 systemd-tmpfiles[1172]: ACLs are not supported, ignoring. Nov 1 00:20:39.201312 systemd-tmpfiles[1172]: ACLs are not supported, ignoring. Nov 1 00:20:39.241835 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 1 00:20:39.251424 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Nov 1 00:20:39.256730 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 1 00:20:39.260706 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Nov 1 00:20:39.264681 systemd[1]: Started systemd-journald.service - Journal Service. Nov 1 00:20:39.269102 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 1 00:20:39.272031 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 1 00:20:39.278365 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 1 00:20:39.345829 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Nov 1 00:20:39.350117 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Nov 1 00:20:39.357501 systemd-journald[1152]: Time spent on flushing to /var/log/journal/8dc62e7212ba4cf1867b18bca4398d9e is 15.854ms for 988 entries. Nov 1 00:20:39.357501 systemd-journald[1152]: System Journal (/var/log/journal/8dc62e7212ba4cf1867b18bca4398d9e) is 8.0M, max 195.6M, 187.6M free. Nov 1 00:20:39.400022 systemd-journald[1152]: Received client request to flush runtime journal. Nov 1 00:20:39.363231 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Nov 1 00:20:39.365945 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Nov 1 00:20:39.370353 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Nov 1 00:20:39.380923 systemd[1]: Starting systemd-sysusers.service - Create System Users... Nov 1 00:20:39.384925 udevadm[1222]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Nov 1 00:20:39.403306 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Nov 1 00:20:39.415984 systemd[1]: Finished systemd-sysusers.service - Create System Users. Nov 1 00:20:39.426931 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 1 00:20:39.449491 systemd-tmpfiles[1235]: ACLs are not supported, ignoring. Nov 1 00:20:39.449516 systemd-tmpfiles[1235]: ACLs are not supported, ignoring. 
Nov 1 00:20:39.458134 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 1 00:20:40.335752 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Nov 1 00:20:40.350163 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 1 00:20:40.382867 systemd-udevd[1241]: Using default interface naming scheme 'v255'. Nov 1 00:20:40.410165 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 1 00:20:40.423317 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 1 00:20:40.440913 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Nov 1 00:20:40.456427 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Nov 1 00:20:40.466668 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1249) Nov 1 00:20:40.569180 systemd[1]: Started systemd-userdbd.service - User Database Manager. Nov 1 00:20:40.683742 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Nov 1 00:20:40.695426 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Nov 1 00:20:40.704700 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Nov 1 00:20:40.708429 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Nov 1 00:20:40.709334 kernel: ACPI: button: Power Button [PWRF] Nov 1 00:20:40.709361 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Nov 1 00:20:40.709599 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Nov 1 00:20:40.710573 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Nov 1 00:20:40.735874 systemd-networkd[1245]: lo: Link UP Nov 1 00:20:40.735888 systemd-networkd[1245]: lo: Gained carrier Nov 1 00:20:40.744944 systemd-networkd[1245]: Enumeration completed Nov 1 00:20:40.745624 systemd-networkd[1245]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 1 00:20:40.745629 systemd-networkd[1245]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 1 00:20:40.745765 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 1 00:20:40.747343 systemd-networkd[1245]: eth0: Link UP Nov 1 00:20:40.747465 systemd-networkd[1245]: eth0: Gained carrier Nov 1 00:20:40.747542 systemd-networkd[1245]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 1 00:20:40.756793 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Nov 1 00:20:40.757664 kernel: mousedev: PS/2 mouse device common for all mice Nov 1 00:20:40.760730 systemd-networkd[1245]: eth0: DHCPv4 address 10.0.0.76/16, gateway 10.0.0.1 acquired from 10.0.0.1 Nov 1 00:20:40.777988 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 1 00:20:40.910028 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 1 00:20:40.910449 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 1 00:20:40.915243 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Nov 1 00:20:40.922255 kernel: kvm_amd: TSC scaling supported Nov 1 00:20:40.922315 kernel: kvm_amd: Nested Virtualization enabled Nov 1 00:20:40.922366 kernel: kvm_amd: Nested Paging enabled Nov 1 00:20:40.923205 kernel: kvm_amd: LBR virtualization supported Nov 1 00:20:40.924292 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Nov 1 00:20:40.924335 kernel: kvm_amd: Virtual GIF supported Nov 1 00:20:40.955667 kernel: EDAC MC: Ver: 3.0.0 Nov 1 00:20:40.989432 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 1 00:20:41.029475 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Nov 1 00:20:41.044021 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Nov 1 00:20:41.053893 lvm[1291]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 1 00:20:41.120246 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Nov 1 00:20:41.167130 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 1 00:20:41.179937 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Nov 1 00:20:41.237877 lvm[1295]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 1 00:20:41.261362 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Nov 1 00:20:41.300138 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 1 00:20:41.302345 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 1 00:20:41.302385 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 1 00:20:41.304444 systemd[1]: Reached target machines.target - Containers. Nov 1 00:20:41.308190 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Nov 1 00:20:41.319971 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Nov 1 00:20:41.370694 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Nov 1 00:20:41.372769 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 1 00:20:41.374442 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Nov 1 00:20:41.378176 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Nov 1 00:20:41.382845 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Nov 1 00:20:41.439877 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Nov 1 00:20:41.446058 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. 
Nov 1 00:20:41.452680 kernel: loop0: detected capacity change from 0 to 140768 Nov 1 00:20:41.590675 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 1 00:20:41.616685 kernel: loop1: detected capacity change from 0 to 224512 Nov 1 00:20:41.659695 kernel: loop2: detected capacity change from 0 to 142488 Nov 1 00:20:41.726813 kernel: loop3: detected capacity change from 0 to 140768 Nov 1 00:20:41.790860 kernel: loop4: detected capacity change from 0 to 224512 Nov 1 00:20:41.814693 kernel: loop5: detected capacity change from 0 to 142488 Nov 1 00:20:41.820567 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 1 00:20:41.821591 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Nov 1 00:20:41.831488 (sd-merge)[1313]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Nov 1 00:20:41.832474 (sd-merge)[1313]: Merged extensions into '/usr'. Nov 1 00:20:41.837454 systemd[1]: Reloading requested from client PID 1303 ('systemd-sysext') (unit systemd-sysext.service)... Nov 1 00:20:41.837475 systemd[1]: Reloading... Nov 1 00:20:41.910700 zram_generator::config[1344]: No configuration found. Nov 1 00:20:42.133676 ldconfig[1300]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 1 00:20:42.141685 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 1 00:20:42.222283 systemd[1]: Reloading finished in 384 ms. Nov 1 00:20:42.246131 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Nov 1 00:20:42.250129 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Nov 1 00:20:42.278847 systemd-networkd[1245]: eth0: Gained IPv6LL Nov 1 00:20:42.285283 systemd[1]: Starting ensure-sysext.service... Nov 1 00:20:42.290239 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 1 00:20:42.326393 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 1 00:20:42.336273 systemd[1]: Reloading requested from client PID 1387 ('systemctl') (unit ensure-sysext.service)... Nov 1 00:20:42.336304 systemd[1]: Reloading... Nov 1 00:20:42.355376 systemd-tmpfiles[1389]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 1 00:20:42.355801 systemd-tmpfiles[1389]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Nov 1 00:20:42.356941 systemd-tmpfiles[1389]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 1 00:20:42.357298 systemd-tmpfiles[1389]: ACLs are not supported, ignoring. Nov 1 00:20:42.357382 systemd-tmpfiles[1389]: ACLs are not supported, ignoring. Nov 1 00:20:42.361382 systemd-tmpfiles[1389]: Detected autofs mount point /boot during canonicalization of boot. Nov 1 00:20:42.361398 systemd-tmpfiles[1389]: Skipping /boot Nov 1 00:20:42.378592 systemd-tmpfiles[1389]: Detected autofs mount point /boot during canonicalization of boot. Nov 1 00:20:42.378611 systemd-tmpfiles[1389]: Skipping /boot Nov 1 00:20:42.423682 zram_generator::config[1421]: No configuration found. 
Nov 1 00:20:42.572335 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 1 00:20:42.659129 systemd[1]: Reloading finished in 322 ms. Nov 1 00:20:42.680086 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 1 00:20:42.763924 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Nov 1 00:20:42.822217 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Nov 1 00:20:42.838845 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Nov 1 00:20:42.845866 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 1 00:20:42.851323 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Nov 1 00:20:42.861364 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:20:42.861564 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 1 00:20:42.864482 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 1 00:20:42.869595 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 1 00:20:42.914106 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 1 00:20:42.920130 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 1 00:20:42.920510 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:20:42.922890 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Nov 1 00:20:42.929828 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 00:20:42.930364 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 1 00:20:42.935635 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 00:20:42.936106 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 1 00:20:42.940161 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 00:20:42.940622 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 1 00:20:42.963249 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Nov 1 00:20:42.966927 augenrules[1497]: No rules Nov 1 00:20:42.968636 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Nov 1 00:20:42.972052 systemd[1]: Finished ensure-sysext.service. Nov 1 00:20:42.975008 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Nov 1 00:20:42.986707 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:20:42.987294 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 1 00:20:42.998885 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 1 00:20:43.010982 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 1 00:20:43.014593 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Nov 1 00:20:43.018878 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 1 00:20:43.021057 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 1 00:20:43.026393 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Nov 1 00:20:43.054842 systemd[1]: Starting systemd-update-done.service - Update is Completed... Nov 1 00:20:43.057170 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 1 00:20:43.057220 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:20:43.058665 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 00:20:43.059127 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 1 00:20:43.062636 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 1 00:20:43.062989 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 1 00:20:43.065718 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 00:20:43.065981 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 1 00:20:43.068715 systemd-resolved[1474]: Positive Trust Anchors: Nov 1 00:20:43.068735 systemd-resolved[1474]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 1 00:20:43.068775 systemd-resolved[1474]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 1 00:20:43.068855 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 00:20:43.069188 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 1 00:20:43.077931 systemd-resolved[1474]: Defaulting to hostname 'linux'. Nov 1 00:20:43.078395 systemd[1]: Finished systemd-update-done.service - Update is Completed. Nov 1 00:20:43.130736 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 1 00:20:43.134481 systemd[1]: Reached target network.target - Network. Nov 1 00:20:43.136711 systemd[1]: Reached target network-online.target - Network is Online. Nov 1 00:20:43.138836 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 1 00:20:43.141091 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 1 00:20:43.141187 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 1 00:20:43.218995 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Nov 1 00:20:43.245431 systemd[1]: Reached target sysinit.target - System Initialization. Nov 1 00:20:43.776904 systemd-resolved[1474]: Clock change detected. Flushing caches. 
Nov 1 00:20:43.776981 systemd-timesyncd[1515]: Contacted time server 10.0.0.1:123 (10.0.0.1). Nov 1 00:20:43.777052 systemd-timesyncd[1515]: Initial clock synchronization to Sat 2025-11-01 00:20:43.776816 UTC. Nov 1 00:20:43.778641 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Nov 1 00:20:43.818889 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Nov 1 00:20:43.821479 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Nov 1 00:20:43.823915 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 1 00:20:43.823956 systemd[1]: Reached target paths.target - Path Units. Nov 1 00:20:43.825704 systemd[1]: Reached target time-set.target - System Time Set. Nov 1 00:20:43.827913 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 1 00:20:43.830073 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Nov 1 00:20:43.832468 systemd[1]: Reached target timers.target - Timer Units. Nov 1 00:20:43.834947 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 1 00:20:43.839303 systemd[1]: Starting docker.socket - Docker Socket for the API... Nov 1 00:20:43.842624 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Nov 1 00:20:43.859442 systemd[1]: Listening on docker.socket - Docker Socket for the API. Nov 1 00:20:43.861617 systemd[1]: Reached target sockets.target - Socket Units. Nov 1 00:20:43.863554 systemd[1]: Reached target basic.target - Basic System. Nov 1 00:20:43.865621 systemd[1]: System is tainted: cgroupsv1 Nov 1 00:20:43.865686 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Nov 1 00:20:43.865722 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Nov 1 00:20:43.867769 systemd[1]: Starting containerd.service - containerd container runtime... Nov 1 00:20:43.871483 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Nov 1 00:20:43.875634 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Nov 1 00:20:43.882385 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Nov 1 00:20:43.888189 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Nov 1 00:20:43.890295 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Nov 1 00:20:43.894301 jq[1533]: false Nov 1 00:20:43.894228 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 00:20:43.900184 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Nov 1 00:20:43.905742 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... 
Nov 1 00:20:43.910730 extend-filesystems[1535]: Found loop3 Nov 1 00:20:43.988120 extend-filesystems[1535]: Found loop4 Nov 1 00:20:43.988120 extend-filesystems[1535]: Found loop5 Nov 1 00:20:43.988120 extend-filesystems[1535]: Found sr0 Nov 1 00:20:43.988120 extend-filesystems[1535]: Found vda Nov 1 00:20:43.988120 extend-filesystems[1535]: Found vda1 Nov 1 00:20:43.988120 extend-filesystems[1535]: Found vda2 Nov 1 00:20:43.988120 extend-filesystems[1535]: Found vda3 Nov 1 00:20:43.988120 extend-filesystems[1535]: Found usr Nov 1 00:20:43.988120 extend-filesystems[1535]: Found vda4 Nov 1 00:20:43.988120 extend-filesystems[1535]: Found vda6 Nov 1 00:20:43.988120 extend-filesystems[1535]: Found vda7 Nov 1 00:20:43.988120 extend-filesystems[1535]: Found vda9 Nov 1 00:20:43.988120 extend-filesystems[1535]: Checking size of /dev/vda9 Nov 1 00:20:44.091210 extend-filesystems[1535]: Resized partition /dev/vda9 Nov 1 00:20:43.990717 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Nov 1 00:20:43.994830 dbus-daemon[1531]: [system] SELinux support is enabled Nov 1 00:20:44.104854 extend-filesystems[1556]: resize2fs 1.47.1 (20-May-2024) Nov 1 00:20:43.995637 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Nov 1 00:20:44.076536 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Nov 1 00:20:44.082549 systemd[1]: Starting systemd-logind.service - User Login Management... Nov 1 00:20:44.083374 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 1 00:20:44.087520 systemd[1]: Starting update-engine.service - Update Engine... Nov 1 00:20:44.092964 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Nov 1 00:20:44.102872 systemd[1]: Started dbus.service - D-Bus System Message Bus. Nov 1 00:20:44.115848 jq[1559]: true Nov 1 00:20:44.113780 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 1 00:20:44.114142 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Nov 1 00:20:44.119823 systemd[1]: motdgen.service: Deactivated successfully. Nov 1 00:20:44.120140 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Nov 1 00:20:44.147312 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Nov 1 00:20:44.150126 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 1 00:20:44.150561 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Nov 1 00:20:44.169848 update_engine[1558]: I20251101 00:20:44.169764 1558 main.cc:92] Flatcar Update Engine starting Nov 1 00:20:44.171488 update_engine[1558]: I20251101 00:20:44.171460 1558 update_check_scheduler.cc:74] Next update check in 8m33s Nov 1 00:20:44.184305 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1576) Nov 1 00:20:44.200718 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 1 00:20:44.201665 jq[1581]: true Nov 1 00:20:44.202913 (ntainerd)[1583]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Nov 1 00:20:44.248527 systemd[1]: coreos-metadata.service: Deactivated successfully. Nov 1 00:20:44.249135 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. 
Nov 1 00:20:44.264604 systemd[1]: Started update-engine.service - Update Engine. Nov 1 00:20:44.267125 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 1 00:20:44.267230 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 1 00:20:44.267255 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Nov 1 00:20:44.270141 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 1 00:20:44.270165 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 1 00:20:44.273564 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 1 00:20:44.284445 systemd[1]: Started locksmithd.service - Cluster reboot manager. Nov 1 00:20:44.489041 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 1 00:20:44.689565 tar[1572]: linux-amd64/LICENSE Nov 1 00:20:44.693554 tar[1572]: linux-amd64/helm Nov 1 00:20:44.696641 systemd-logind[1557]: Watching system buttons on /dev/input/event1 (Power Button) Nov 1 00:20:44.696683 systemd-logind[1557]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Nov 1 00:20:44.697065 systemd-logind[1557]: New seat seat0. Nov 1 00:20:44.699243 systemd[1]: Started systemd-logind.service - User Login Management. Nov 1 00:20:44.843866 sshd_keygen[1573]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 1 00:20:44.866843 locksmithd[1617]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 1 00:20:44.871319 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Nov 1 00:20:44.881643 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 1 00:20:44.899752 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 1 00:20:44.915248 systemd[1]: Started sshd@0-10.0.0.76:22-10.0.0.1:33070.service - OpenSSH per-connection server daemon (10.0.0.1:33070). Nov 1 00:20:44.923873 systemd[1]: issuegen.service: Deactivated successfully. Nov 1 00:20:44.924314 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 1 00:20:44.943629 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 1 00:20:45.010546 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 1 00:20:45.035965 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 1 00:20:45.040148 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Nov 1 00:20:45.043159 systemd[1]: Reached target getty.target - Login Prompts. Nov 1 00:20:45.275603 extend-filesystems[1556]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Nov 1 00:20:45.275603 extend-filesystems[1556]: old_desc_blocks = 1, new_desc_blocks = 1 Nov 1 00:20:45.275603 extend-filesystems[1556]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Nov 1 00:20:45.283750 extend-filesystems[1535]: Resized filesystem in /dev/vda9 Nov 1 00:20:45.280730 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 1 00:20:45.281216 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
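[Editor's note] The resize2fs entries above report the root filesystem on /dev/vda9 growing from 553472 to 1864699 blocks, and the kernel message gives the 4k block size. A quick sanity check of that arithmetic in Python; the block counts and block size are taken straight from the log, nothing else is assumed:

    # Sanity-check the ext4 online resize of /dev/vda9 reported above.
    BLOCK_SIZE = 4096            # "(4k) blocks" per the kernel/resize2fs messages
    OLD_BLOCKS = 553_472         # size before the resize
    NEW_BLOCKS = 1_864_699       # "resized filesystem to 1864699"

    def gib(blocks: int) -> float:
        """Convert a block count to GiB."""
        return blocks * BLOCK_SIZE / 2**30

    print(f"before: {gib(OLD_BLOCKS):.2f} GiB")   # ~2.11 GiB
    print(f"after:  {gib(NEW_BLOCKS):.2f} GiB")   # ~7.11 GiB

So the first-boot extend-filesystems step grew / from roughly 2.1 GiB to roughly 7.1 GiB before the unit deactivated.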
Nov 1 00:20:45.371781 sshd[1638]: Connection closed by authenticating user core 10.0.0.1 port 33070 [preauth] Nov 1 00:20:45.376328 systemd[1]: sshd@0-10.0.0.76:22-10.0.0.1:33070.service: Deactivated successfully. Nov 1 00:20:45.515382 bash[1618]: Updated "/home/core/.ssh/authorized_keys" Nov 1 00:20:45.517893 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 1 00:20:45.522668 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Nov 1 00:20:45.602787 containerd[1583]: time="2025-11-01T00:20:45.602630528Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Nov 1 00:20:45.635742 containerd[1583]: time="2025-11-01T00:20:45.635682891Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Nov 1 00:20:45.638616 containerd[1583]: time="2025-11-01T00:20:45.638532534Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.113-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Nov 1 00:20:45.638616 containerd[1583]: time="2025-11-01T00:20:45.638600873Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Nov 1 00:20:45.638678 containerd[1583]: time="2025-11-01T00:20:45.638624848Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Nov 1 00:20:45.638934 containerd[1583]: time="2025-11-01T00:20:45.638905414Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Nov 1 00:20:45.638979 containerd[1583]: time="2025-11-01T00:20:45.638931913Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Nov 1 00:20:45.639057 containerd[1583]: time="2025-11-01T00:20:45.639027322Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Nov 1 00:20:45.639057 containerd[1583]: time="2025-11-01T00:20:45.639050846Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Nov 1 00:20:45.639487 containerd[1583]: time="2025-11-01T00:20:45.639436950Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 1 00:20:45.639487 containerd[1583]: time="2025-11-01T00:20:45.639473549Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Nov 1 00:20:45.639542 containerd[1583]: time="2025-11-01T00:20:45.639489839Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Nov 1 00:20:45.639542 containerd[1583]: time="2025-11-01T00:20:45.639503355Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Nov 1 00:20:45.639839 containerd[1583]: time="2025-11-01T00:20:45.639806363Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." 
type=io.containerd.snapshotter.v1 Nov 1 00:20:45.640139 containerd[1583]: time="2025-11-01T00:20:45.640111585Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Nov 1 00:20:45.640366 containerd[1583]: time="2025-11-01T00:20:45.640336186Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 1 00:20:45.640366 containerd[1583]: time="2025-11-01T00:20:45.640358498Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Nov 1 00:20:45.640532 containerd[1583]: time="2025-11-01T00:20:45.640500084Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Nov 1 00:20:45.640600 containerd[1583]: time="2025-11-01T00:20:45.640581647Z" level=info msg="metadata content store policy set" policy=shared Nov 1 00:20:45.853600 containerd[1583]: time="2025-11-01T00:20:45.853389231Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Nov 1 00:20:45.853600 containerd[1583]: time="2025-11-01T00:20:45.853558558Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Nov 1 00:20:45.853600 containerd[1583]: time="2025-11-01T00:20:45.853584928Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Nov 1 00:20:45.853936 containerd[1583]: time="2025-11-01T00:20:45.853674746Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Nov 1 00:20:45.853936 containerd[1583]: time="2025-11-01T00:20:45.853706726Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Nov 1 00:20:45.854042 containerd[1583]: time="2025-11-01T00:20:45.854028139Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Nov 1 00:20:45.854695 containerd[1583]: time="2025-11-01T00:20:45.854620469Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Nov 1 00:20:45.854845 containerd[1583]: time="2025-11-01T00:20:45.854826195Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Nov 1 00:20:45.854878 containerd[1583]: time="2025-11-01T00:20:45.854848126Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Nov 1 00:20:45.854929 containerd[1583]: time="2025-11-01T00:20:45.854874085Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Nov 1 00:20:45.854929 containerd[1583]: time="2025-11-01T00:20:45.854894423Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Nov 1 00:20:45.854929 containerd[1583]: time="2025-11-01T00:20:45.854910493Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Nov 1 00:20:45.854929 containerd[1583]: time="2025-11-01T00:20:45.854925902Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." 
type=io.containerd.service.v1 Nov 1 00:20:45.855014 containerd[1583]: time="2025-11-01T00:20:45.854942814Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Nov 1 00:20:45.855014 containerd[1583]: time="2025-11-01T00:20:45.854959926Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Nov 1 00:20:45.855014 containerd[1583]: time="2025-11-01T00:20:45.854975986Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Nov 1 00:20:45.855014 containerd[1583]: time="2025-11-01T00:20:45.854990804Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Nov 1 00:20:45.855014 containerd[1583]: time="2025-11-01T00:20:45.855011162Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Nov 1 00:20:45.855128 containerd[1583]: time="2025-11-01T00:20:45.855037080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Nov 1 00:20:45.855128 containerd[1583]: time="2025-11-01T00:20:45.855059132Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Nov 1 00:20:45.855128 containerd[1583]: time="2025-11-01T00:20:45.855075783Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Nov 1 00:20:45.855128 containerd[1583]: time="2025-11-01T00:20:45.855089940Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Nov 1 00:20:45.855128 containerd[1583]: time="2025-11-01T00:20:45.855106851Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Nov 1 00:20:45.855128 containerd[1583]: time="2025-11-01T00:20:45.855122641Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Nov 1 00:20:45.855301 containerd[1583]: time="2025-11-01T00:20:45.855136878Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Nov 1 00:20:45.855301 containerd[1583]: time="2025-11-01T00:20:45.855154210Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Nov 1 00:20:45.855301 containerd[1583]: time="2025-11-01T00:20:45.855169749Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Nov 1 00:20:45.855301 containerd[1583]: time="2025-11-01T00:20:45.855188975Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Nov 1 00:20:45.855301 containerd[1583]: time="2025-11-01T00:20:45.855205356Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Nov 1 00:20:45.855301 containerd[1583]: time="2025-11-01T00:20:45.855221687Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Nov 1 00:20:45.855301 containerd[1583]: time="2025-11-01T00:20:45.855239250Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Nov 1 00:20:45.855301 containerd[1583]: time="2025-11-01T00:20:45.855260059Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." 
type=io.containerd.transfer.v1 Nov 1 00:20:45.855666 containerd[1583]: time="2025-11-01T00:20:45.855306746Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Nov 1 00:20:45.855666 containerd[1583]: time="2025-11-01T00:20:45.855324109Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Nov 1 00:20:45.855666 containerd[1583]: time="2025-11-01T00:20:45.855338666Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Nov 1 00:20:45.855666 containerd[1583]: time="2025-11-01T00:20:45.855418185Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Nov 1 00:20:45.855666 containerd[1583]: time="2025-11-01T00:20:45.855441479Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Nov 1 00:20:45.855666 containerd[1583]: time="2025-11-01T00:20:45.855464963Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Nov 1 00:20:45.855666 containerd[1583]: time="2025-11-01T00:20:45.855478678Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Nov 1 00:20:45.855666 containerd[1583]: time="2025-11-01T00:20:45.855489799Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Nov 1 00:20:45.855666 containerd[1583]: time="2025-11-01T00:20:45.855506410Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Nov 1 00:20:45.855666 containerd[1583]: time="2025-11-01T00:20:45.855519415Z" level=info msg="NRI interface is disabled by configuration." Nov 1 00:20:45.855666 containerd[1583]: time="2025-11-01T00:20:45.855532349Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Nov 1 00:20:45.856009 containerd[1583]: time="2025-11-01T00:20:45.855891763Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Nov 1 00:20:45.856009 containerd[1583]: time="2025-11-01T00:20:45.855962125Z" level=info msg="Connect containerd service" Nov 1 00:20:45.856009 containerd[1583]: time="2025-11-01T00:20:45.856001940Z" level=info msg="using legacy CRI server" Nov 1 00:20:45.856009 containerd[1583]: time="2025-11-01T00:20:45.856010015Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 1 00:20:45.856817 containerd[1583]: time="2025-11-01T00:20:45.856137404Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Nov 1 00:20:45.857418 containerd[1583]: time="2025-11-01T00:20:45.857384752Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 1 00:20:45.858872 
containerd[1583]: time="2025-11-01T00:20:45.857897193Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 1 00:20:45.858872 containerd[1583]: time="2025-11-01T00:20:45.857965962Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 1 00:20:45.858872 containerd[1583]: time="2025-11-01T00:20:45.858046914Z" level=info msg="Start subscribing containerd event" Nov 1 00:20:45.858872 containerd[1583]: time="2025-11-01T00:20:45.858091748Z" level=info msg="Start recovering state" Nov 1 00:20:45.858872 containerd[1583]: time="2025-11-01T00:20:45.858200001Z" level=info msg="Start event monitor" Nov 1 00:20:45.858872 containerd[1583]: time="2025-11-01T00:20:45.858227563Z" level=info msg="Start snapshots syncer" Nov 1 00:20:45.858872 containerd[1583]: time="2025-11-01T00:20:45.858241068Z" level=info msg="Start cni network conf syncer for default" Nov 1 00:20:45.858872 containerd[1583]: time="2025-11-01T00:20:45.858284650Z" level=info msg="Start streaming server" Nov 1 00:20:45.858872 containerd[1583]: time="2025-11-01T00:20:45.858377814Z" level=info msg="containerd successfully booted in 0.257453s" Nov 1 00:20:45.858731 systemd[1]: Started containerd.service - containerd container runtime. Nov 1 00:20:45.898388 tar[1572]: linux-amd64/README.md Nov 1 00:20:45.921344 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 1 00:20:47.374314 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 00:20:47.377727 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 1 00:20:47.380400 systemd[1]: Startup finished in 9.994s (kernel) + 9.262s (userspace) = 19.257s. Nov 1 00:20:47.381538 (kubelet)[1681]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 1 00:20:48.083908 kubelet[1681]: E1101 00:20:48.083766 1681 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 00:20:48.088007 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 00:20:48.088357 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 1 00:20:55.385524 systemd[1]: Started sshd@1-10.0.0.76:22-10.0.0.1:49380.service - OpenSSH per-connection server daemon (10.0.0.1:49380). Nov 1 00:20:55.430700 sshd[1694]: Accepted publickey for core from 10.0.0.1 port 49380 ssh2: RSA SHA256:jhdmrhZXRbwOcBbBZQf03C3Vb9eSOfvUjDz7MOKA8jE Nov 1 00:20:55.433636 sshd[1694]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:20:55.444821 systemd-logind[1557]: New session 1 of user core. Nov 1 00:20:55.446177 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 1 00:20:55.456560 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 1 00:20:55.472495 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 1 00:20:55.475431 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 1 00:20:55.485238 (systemd)[1699]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:20:55.606569 systemd[1699]: Queued start job for default target default.target. 
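[Editor's note] Back in the containerd startup above, the daemon came up cleanly but logged "failed to load cni during init ... no network config found in /etc/cni/net.d" just before it started serving; the CRI config dump shows it expects network configs in /etc/cni/net.d and plugin binaries in /opt/cni/bin. That is normal this early in boot. A minimal sketch of the same readiness check, handy when a node stays stuck in that state; the directory paths come from the log, while the accepted file extensions are the usual CNI ones and are an assumption here, not something the log states:

    from pathlib import Path

    # Paths reported in the CRI plugin config above.
    CNI_CONF_DIR = Path("/etc/cni/net.d")
    CNI_BIN_DIR = Path("/opt/cni/bin")

    def cni_ready() -> bool:
        """Rough equivalent of the check behind 'no network config found'."""
        # Assumption: standard CNI config extensions (.conf, .conflist, .json).
        confs = [p for p in CNI_CONF_DIR.glob("*")
                 if p.suffix in {".conf", ".conflist", ".json"}]
        plugins = list(CNI_BIN_DIR.glob("*")) if CNI_BIN_DIR.is_dir() else []
        print(f"configs in {CNI_CONF_DIR}: {len(confs)}, "
              f"plugins in {CNI_BIN_DIR}: {len(plugins)}")
        return bool(confs) and bool(plugins)

    if __name__ == "__main__":
        cni_ready()

Until a network add-on installs a config there, the CRI plugin will keep reporting that CNI is not initialized, which is expected on a node that has not joined a cluster yet.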
Nov 1 00:20:55.606962 systemd[1699]: Created slice app.slice - User Application Slice. Nov 1 00:20:55.606985 systemd[1699]: Reached target paths.target - Paths. Nov 1 00:20:55.606998 systemd[1699]: Reached target timers.target - Timers. Nov 1 00:20:55.622384 systemd[1699]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 1 00:20:55.630357 systemd[1699]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 1 00:20:55.630430 systemd[1699]: Reached target sockets.target - Sockets. Nov 1 00:20:55.630453 systemd[1699]: Reached target basic.target - Basic System. Nov 1 00:20:55.630495 systemd[1699]: Reached target default.target - Main User Target. Nov 1 00:20:55.630539 systemd[1699]: Startup finished in 135ms. Nov 1 00:20:55.631509 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 1 00:20:55.633689 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 1 00:20:55.697897 systemd[1]: Started sshd@2-10.0.0.76:22-10.0.0.1:49382.service - OpenSSH per-connection server daemon (10.0.0.1:49382). Nov 1 00:20:55.740340 sshd[1712]: Accepted publickey for core from 10.0.0.1 port 49382 ssh2: RSA SHA256:jhdmrhZXRbwOcBbBZQf03C3Vb9eSOfvUjDz7MOKA8jE Nov 1 00:20:55.742711 sshd[1712]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:20:55.749725 systemd-logind[1557]: New session 2 of user core. Nov 1 00:20:55.759615 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 1 00:20:55.822477 sshd[1712]: pam_unix(sshd:session): session closed for user core Nov 1 00:20:55.838630 systemd[1]: Started sshd@3-10.0.0.76:22-10.0.0.1:49384.service - OpenSSH per-connection server daemon (10.0.0.1:49384). Nov 1 00:20:55.839377 systemd[1]: sshd@2-10.0.0.76:22-10.0.0.1:49382.service: Deactivated successfully. Nov 1 00:20:55.842564 systemd[1]: session-2.scope: Deactivated successfully. Nov 1 00:20:55.843384 systemd-logind[1557]: Session 2 logged out. Waiting for processes to exit. Nov 1 00:20:55.844933 systemd-logind[1557]: Removed session 2. Nov 1 00:20:55.869919 sshd[1717]: Accepted publickey for core from 10.0.0.1 port 49384 ssh2: RSA SHA256:jhdmrhZXRbwOcBbBZQf03C3Vb9eSOfvUjDz7MOKA8jE Nov 1 00:20:55.871951 sshd[1717]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:20:55.877403 systemd-logind[1557]: New session 3 of user core. Nov 1 00:20:55.886589 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 1 00:20:55.940160 sshd[1717]: pam_unix(sshd:session): session closed for user core Nov 1 00:20:55.952611 systemd[1]: Started sshd@4-10.0.0.76:22-10.0.0.1:49386.service - OpenSSH per-connection server daemon (10.0.0.1:49386). Nov 1 00:20:55.953319 systemd[1]: sshd@3-10.0.0.76:22-10.0.0.1:49384.service: Deactivated successfully. Nov 1 00:20:55.957031 systemd-logind[1557]: Session 3 logged out. Waiting for processes to exit. Nov 1 00:20:55.957766 systemd[1]: session-3.scope: Deactivated successfully. Nov 1 00:20:55.958912 systemd-logind[1557]: Removed session 3. Nov 1 00:20:55.985482 sshd[1725]: Accepted publickey for core from 10.0.0.1 port 49386 ssh2: RSA SHA256:jhdmrhZXRbwOcBbBZQf03C3Vb9eSOfvUjDz7MOKA8jE Nov 1 00:20:55.987586 sshd[1725]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:20:55.992441 systemd-logind[1557]: New session 4 of user core. Nov 1 00:20:56.006692 systemd[1]: Started session-4.scope - Session 4 of User core. 
Nov 1 00:20:56.066980 sshd[1725]: pam_unix(sshd:session): session closed for user core Nov 1 00:20:56.080558 systemd[1]: Started sshd@5-10.0.0.76:22-10.0.0.1:49398.service - OpenSSH per-connection server daemon (10.0.0.1:49398). Nov 1 00:20:56.081208 systemd[1]: sshd@4-10.0.0.76:22-10.0.0.1:49386.service: Deactivated successfully. Nov 1 00:20:56.083495 systemd[1]: session-4.scope: Deactivated successfully. Nov 1 00:20:56.085077 systemd-logind[1557]: Session 4 logged out. Waiting for processes to exit. Nov 1 00:20:56.086386 systemd-logind[1557]: Removed session 4. Nov 1 00:20:56.111181 sshd[1733]: Accepted publickey for core from 10.0.0.1 port 49398 ssh2: RSA SHA256:jhdmrhZXRbwOcBbBZQf03C3Vb9eSOfvUjDz7MOKA8jE Nov 1 00:20:56.113518 sshd[1733]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:20:56.118474 systemd-logind[1557]: New session 5 of user core. Nov 1 00:20:56.129598 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 1 00:20:56.193421 sudo[1740]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 1 00:20:56.193970 sudo[1740]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 1 00:20:56.219995 sudo[1740]: pam_unix(sudo:session): session closed for user root Nov 1 00:20:56.222786 sshd[1733]: pam_unix(sshd:session): session closed for user core Nov 1 00:20:56.235840 systemd[1]: Started sshd@6-10.0.0.76:22-10.0.0.1:49412.service - OpenSSH per-connection server daemon (10.0.0.1:49412). Nov 1 00:20:56.236946 systemd[1]: sshd@5-10.0.0.76:22-10.0.0.1:49398.service: Deactivated successfully. Nov 1 00:20:56.239897 systemd[1]: session-5.scope: Deactivated successfully. Nov 1 00:20:56.242932 systemd-logind[1557]: Session 5 logged out. Waiting for processes to exit. Nov 1 00:20:56.245179 systemd-logind[1557]: Removed session 5. Nov 1 00:20:56.268320 sshd[1742]: Accepted publickey for core from 10.0.0.1 port 49412 ssh2: RSA SHA256:jhdmrhZXRbwOcBbBZQf03C3Vb9eSOfvUjDz7MOKA8jE Nov 1 00:20:56.270337 sshd[1742]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:20:56.275835 systemd-logind[1557]: New session 6 of user core. Nov 1 00:20:56.293201 systemd[1]: Started session-6.scope - Session 6 of User core. Nov 1 00:20:56.353850 sudo[1750]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 1 00:20:56.354226 sudo[1750]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 1 00:20:56.358833 sudo[1750]: pam_unix(sudo:session): session closed for user root Nov 1 00:20:56.368115 sudo[1749]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Nov 1 00:20:56.368695 sudo[1749]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 1 00:20:56.389053 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Nov 1 00:20:56.391338 auditctl[1753]: No rules Nov 1 00:20:56.393077 systemd[1]: audit-rules.service: Deactivated successfully. Nov 1 00:20:56.393569 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Nov 1 00:20:56.396446 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Nov 1 00:20:56.438865 augenrules[1772]: No rules Nov 1 00:20:56.441903 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. 
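[Editor's note] The rapid open/close of sessions 1 through 6 above, each introduced by an "Accepted publickey for core from 10.0.0.1 ..." line, is the typical pattern of provisioning tooling reconnecting once per command. A small, illustrative parser for that message format, in case you want to count or audit the connections from a captured log; the regex only assumes the exact wording visible above, and the sample lines are shortened copies of entries from this boot:

    import re

    # Matches lines like:
    # "Accepted publickey for core from 10.0.0.1 port 49384 ssh2: RSA SHA256:jhdm..."
    ACCEPT_RE = re.compile(
        r"Accepted publickey for (?P<user>\S+) from (?P<addr>\S+) "
        r"port (?P<port>\d+) ssh2: (?P<key>.+)"
    )

    def count_logins(lines):
        """Group accepted-publickey events by (user, source address)."""
        counts = {}
        for line in lines:
            m = ACCEPT_RE.search(line)
            if m:
                key = (m["user"], m["addr"])
                counts[key] = counts.get(key, 0) + 1
        return counts

    sample = [
        "sshd[1712]: Accepted publickey for core from 10.0.0.1 port 49382 ssh2: RSA SHA256:jhdm...",
        "sshd[1717]: Accepted publickey for core from 10.0.0.1 port 49384 ssh2: RSA SHA256:jhdm...",
    ]
    print(count_logins(sample))   # {('core', '10.0.0.1'): 2}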
Nov 1 00:20:56.443793 sudo[1749]: pam_unix(sudo:session): session closed for user root Nov 1 00:20:56.446683 sshd[1742]: pam_unix(sshd:session): session closed for user core Nov 1 00:20:56.458529 systemd[1]: Started sshd@7-10.0.0.76:22-10.0.0.1:49420.service - OpenSSH per-connection server daemon (10.0.0.1:49420). Nov 1 00:20:56.459056 systemd[1]: sshd@6-10.0.0.76:22-10.0.0.1:49412.service: Deactivated successfully. Nov 1 00:20:56.461953 systemd-logind[1557]: Session 6 logged out. Waiting for processes to exit. Nov 1 00:20:56.463342 systemd[1]: session-6.scope: Deactivated successfully. Nov 1 00:20:56.464511 systemd-logind[1557]: Removed session 6. Nov 1 00:20:56.498202 sshd[1778]: Accepted publickey for core from 10.0.0.1 port 49420 ssh2: RSA SHA256:jhdmrhZXRbwOcBbBZQf03C3Vb9eSOfvUjDz7MOKA8jE Nov 1 00:20:56.501544 sshd[1778]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:20:56.508415 systemd-logind[1557]: New session 7 of user core. Nov 1 00:20:56.518594 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 1 00:20:56.577125 sudo[1785]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 1 00:20:56.577613 sudo[1785]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 1 00:20:58.338984 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 1 00:20:58.813532 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 00:20:59.096719 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 1 00:20:59.096892 (dockerd)[1809]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 1 00:20:59.116346 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 00:20:59.123922 (kubelet)[1817]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 1 00:20:59.192629 kubelet[1817]: E1101 00:20:59.192542 1817 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 00:20:59.201619 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 00:20:59.201991 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 1 00:20:59.423226 dockerd[1809]: time="2025-11-01T00:20:59.423030535Z" level=info msg="Starting up" Nov 1 00:21:00.383574 dockerd[1809]: time="2025-11-01T00:21:00.383495176Z" level=info msg="Loading containers: start." Nov 1 00:21:00.619317 kernel: Initializing XFRM netlink socket Nov 1 00:21:00.956952 systemd-networkd[1245]: docker0: Link UP Nov 1 00:21:00.990855 dockerd[1809]: time="2025-11-01T00:21:00.990767982Z" level=info msg="Loading containers: done." 
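[Editor's note] dockerd is starting up here (and a few entries below it reports "API listen on /run/docker.sock"). If you need to confirm the daemon answers on that socket without using the docker CLI, a bare-bones probe of its ping endpoint is enough. This is only a sketch: it assumes the standard Docker Engine API /_ping path and the socket location the daemon logs, and it needs permission to open that socket:

    import socket

    DOCKER_SOCK = "/run/docker.sock"   # socket path reported by dockerd

    def ping_docker(path: str = DOCKER_SOCK) -> str:
        """Send GET /_ping over the Unix socket and return the raw HTTP response."""
        with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
            s.connect(path)
            s.sendall(b"GET /_ping HTTP/1.0\r\nHost: docker\r\n\r\n")
            chunks = []
            while data := s.recv(4096):
                chunks.append(data)
        return b"".join(chunks).decode(errors="replace")

    if __name__ == "__main__":
        # Expect an HTTP 200 with body "OK" once the daemon has finished initialization.
        print(ping_docker())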
Nov 1 00:21:01.202919 dockerd[1809]: time="2025-11-01T00:21:01.202826361Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 1 00:21:01.203158 dockerd[1809]: time="2025-11-01T00:21:01.202987113Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Nov 1 00:21:01.203158 dockerd[1809]: time="2025-11-01T00:21:01.203142714Z" level=info msg="Daemon has completed initialization" Nov 1 00:21:01.257679 dockerd[1809]: time="2025-11-01T00:21:01.257490577Z" level=info msg="API listen on /run/docker.sock" Nov 1 00:21:01.258633 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 1 00:21:02.655236 containerd[1583]: time="2025-11-01T00:21:02.655158704Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\"" Nov 1 00:21:03.393207 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4201262626.mount: Deactivated successfully. Nov 1 00:21:04.876792 containerd[1583]: time="2025-11-01T00:21:04.876707828Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:21:04.877623 containerd[1583]: time="2025-11-01T00:21:04.877496256Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.9: active requests=0, bytes read=28837916" Nov 1 00:21:04.879156 containerd[1583]: time="2025-11-01T00:21:04.879098330Z" level=info msg="ImageCreate event name:\"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:21:04.883119 containerd[1583]: time="2025-11-01T00:21:04.883070248Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:21:04.884967 containerd[1583]: time="2025-11-01T00:21:04.884910829Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.9\" with image id \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\", size \"28834515\" in 2.229680149s" Nov 1 00:21:04.885022 containerd[1583]: time="2025-11-01T00:21:04.884970100Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\"" Nov 1 00:21:04.885768 containerd[1583]: time="2025-11-01T00:21:04.885728281Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\"" Nov 1 00:21:06.849966 containerd[1583]: time="2025-11-01T00:21:06.849875206Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:21:06.850694 containerd[1583]: time="2025-11-01T00:21:06.850614282Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.9: active requests=0, bytes read=24787027" Nov 1 00:21:06.852081 containerd[1583]: time="2025-11-01T00:21:06.852022703Z" level=info msg="ImageCreate event name:\"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:21:06.855716 containerd[1583]: time="2025-11-01T00:21:06.855627542Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:21:06.856879 containerd[1583]: time="2025-11-01T00:21:06.856799570Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.9\" with image id \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\", size \"26421706\" in 1.971021174s" Nov 1 00:21:06.856879 containerd[1583]: time="2025-11-01T00:21:06.856858911Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\"" Nov 1 00:21:06.857475 containerd[1583]: time="2025-11-01T00:21:06.857425353Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\"" Nov 1 00:21:08.564878 containerd[1583]: time="2025-11-01T00:21:08.564780403Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:21:08.566074 containerd[1583]: time="2025-11-01T00:21:08.566018545Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.9: active requests=0, bytes read=19176289" Nov 1 00:21:08.567454 containerd[1583]: time="2025-11-01T00:21:08.567409994Z" level=info msg="ImageCreate event name:\"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:21:08.570965 containerd[1583]: time="2025-11-01T00:21:08.570921648Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:21:08.572401 containerd[1583]: time="2025-11-01T00:21:08.572363221Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.9\" with image id \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\", size \"20810986\" in 1.714903674s" Nov 1 00:21:08.572457 containerd[1583]: time="2025-11-01T00:21:08.572403627Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\"" Nov 1 00:21:08.572931 containerd[1583]: time="2025-11-01T00:21:08.572903614Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\"" Nov 1 00:21:09.402950 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Nov 1 00:21:09.412447 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 00:21:09.719958 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Nov 1 00:21:09.726103 (kubelet)[2053]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 1 00:21:10.028379 kubelet[2053]: E1101 00:21:10.028194 2053 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 00:21:10.032988 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 00:21:10.033375 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 1 00:21:10.539929 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3316427929.mount: Deactivated successfully. Nov 1 00:21:11.465859 containerd[1583]: time="2025-11-01T00:21:11.465702010Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:21:11.466869 containerd[1583]: time="2025-11-01T00:21:11.466685314Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.9: active requests=0, bytes read=30924206" Nov 1 00:21:11.468163 containerd[1583]: time="2025-11-01T00:21:11.468106609Z" level=info msg="ImageCreate event name:\"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:21:11.471491 containerd[1583]: time="2025-11-01T00:21:11.471410023Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:21:11.472912 containerd[1583]: time="2025-11-01T00:21:11.472839062Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.9\" with image id \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\", repo tag \"registry.k8s.io/kube-proxy:v1.32.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\", size \"30923225\" in 2.899895042s" Nov 1 00:21:11.472912 containerd[1583]: time="2025-11-01T00:21:11.472896590Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\"" Nov 1 00:21:11.473623 containerd[1583]: time="2025-11-01T00:21:11.473591313Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Nov 1 00:21:11.987371 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount576307465.mount: Deactivated successfully. 
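[Editor's note] Every kubelet start so far has failed the same way: /var/lib/kubelet/config.yaml does not exist, so the unit exits and systemd schedules another restart. That file is normally written by cluster bootstrap tooling such as kubeadm, so the failures are expected until the node is actually set up. The sketch below only illustrates what creating a minimal KubeletConfiguration at that path could look like; the path and the cgroupfs driver match what the log reports, while the rest of the content is a hypothetical placeholder, not something this log (or any real cluster) would settle for:

    from pathlib import Path
    from textwrap import dedent

    CONFIG_PATH = Path("/var/lib/kubelet/config.yaml")   # path from the error above

    # Illustrative only: a real node gets this file from kubeadm or similar tooling,
    # and the correct values depend on the cluster, not on anything in this log.
    MINIMAL_KUBELET_CONFIG = dedent("""\
        apiVersion: kubelet.config.k8s.io/v1beta1
        kind: KubeletConfiguration
        cgroupDriver: cgroupfs
        """)

    def write_config(path: Path = CONFIG_PATH) -> None:
        # Needs root on a real node; /var/lib/kubelet may not exist yet.
        path.parent.mkdir(parents=True, exist_ok=True)
        path.write_text(MINIMAL_KUBELET_CONFIG)
        print(f"wrote {path} ({path.stat().st_size} bytes)")

    if __name__ == "__main__":
        write_config()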
Nov 1 00:21:14.367258 containerd[1583]: time="2025-11-01T00:21:14.367178518Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:21:14.368306 containerd[1583]: time="2025-11-01T00:21:14.368242864Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Nov 1 00:21:14.369764 containerd[1583]: time="2025-11-01T00:21:14.369722077Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:21:14.373133 containerd[1583]: time="2025-11-01T00:21:14.373103858Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:21:14.374646 containerd[1583]: time="2025-11-01T00:21:14.374610002Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.90097666s" Nov 1 00:21:14.374728 containerd[1583]: time="2025-11-01T00:21:14.374646080Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Nov 1 00:21:14.375290 containerd[1583]: time="2025-11-01T00:21:14.375112304Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Nov 1 00:21:15.020300 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4013182924.mount: Deactivated successfully. 
Nov 1 00:21:15.028142 containerd[1583]: time="2025-11-01T00:21:15.028093834Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:21:15.028973 containerd[1583]: time="2025-11-01T00:21:15.028903192Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Nov 1 00:21:15.030194 containerd[1583]: time="2025-11-01T00:21:15.030144018Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:21:15.032234 containerd[1583]: time="2025-11-01T00:21:15.032198220Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:21:15.032950 containerd[1583]: time="2025-11-01T00:21:15.032919803Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 657.779957ms" Nov 1 00:21:15.033148 containerd[1583]: time="2025-11-01T00:21:15.032950180Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Nov 1 00:21:15.033477 containerd[1583]: time="2025-11-01T00:21:15.033456239Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Nov 1 00:21:15.515152 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1191115204.mount: Deactivated successfully. Nov 1 00:21:19.231155 containerd[1583]: time="2025-11-01T00:21:19.231071594Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:21:19.232736 containerd[1583]: time="2025-11-01T00:21:19.232670665Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682056" Nov 1 00:21:19.234276 containerd[1583]: time="2025-11-01T00:21:19.234228258Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:21:19.239096 containerd[1583]: time="2025-11-01T00:21:19.239029049Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:21:19.241115 containerd[1583]: time="2025-11-01T00:21:19.241026939Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 4.207539692s" Nov 1 00:21:19.241115 containerd[1583]: time="2025-11-01T00:21:19.241108646Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Nov 1 00:21:20.152946 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. 
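[Editor's note] Each "Pulled image ... in ..." entry above reports both an image size and a wall-clock duration, so effective pull throughput can be read straight off the log. A small calculation using the figures containerd printed; every number below is copied from the entries above, and the result is end-to-end pull time (registry latency, unpacking, etc.), not raw network bandwidth:

    # (image, bytes, seconds) as reported by the "Pulled image ... in ..." entries above.
    PULLS = [
        ("kube-apiserver:v1.32.9",          28_834_515, 2.229680149),
        ("kube-controller-manager:v1.32.9", 26_421_706, 1.971021174),
        ("kube-scheduler:v1.32.9",          20_810_986, 1.714903674),
        ("kube-proxy:v1.32.9",              30_923_225, 2.899895042),
        ("coredns:v1.11.3",                 18_562_039, 2.900976660),
        ("pause:3.10",                          320_368, 0.657779957),
        ("etcd:3.5.16-0",                   57_680_541, 4.207539692),
    ]

    for name, size, secs in PULLS:
        mib_s = size / 2**20 / secs
        print(f"{name:35s} {size/2**20:7.1f} MiB in {secs:6.2f} s  (~{mib_s:5.1f} MiB/s)")

    total_bytes = sum(size for _, size, _ in PULLS)
    total_secs = sum(secs for _, _, secs in PULLS)
    print(f"total: {total_bytes/2**20:.1f} MiB over {total_secs:.1f} s of pull time")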
Nov 1 00:21:20.167667 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 00:21:20.348665 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 00:21:20.354864 (kubelet)[2215]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 1 00:21:20.410222 kubelet[2215]: E1101 00:21:20.410022 2215 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 00:21:20.415607 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 00:21:20.415966 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 1 00:21:23.146026 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 00:21:23.165748 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 00:21:23.266415 systemd[1]: Reloading requested from client PID 2232 ('systemctl') (unit session-7.scope)... Nov 1 00:21:23.270198 systemd[1]: Reloading... Nov 1 00:21:23.468301 zram_generator::config[2278]: No configuration found. Nov 1 00:21:25.008360 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 1 00:21:25.112459 systemd[1]: Reloading finished in 1840 ms. Nov 1 00:21:25.245395 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 1 00:21:25.245544 systemd[1]: kubelet.service: Failed with result 'signal'. Nov 1 00:21:25.246105 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 00:21:25.266634 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 00:21:25.529721 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 00:21:25.534030 (kubelet)[2331]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 1 00:21:25.755805 kubelet[2331]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 1 00:21:25.755805 kubelet[2331]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 1 00:21:25.755805 kubelet[2331]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 1 00:21:25.756412 kubelet[2331]: I1101 00:21:25.755820 2331 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 1 00:21:25.989636 kubelet[2331]: I1101 00:21:25.989571 2331 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Nov 1 00:21:25.989636 kubelet[2331]: I1101 00:21:25.989617 2331 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 1 00:21:25.989964 kubelet[2331]: I1101 00:21:25.989940 2331 server.go:954] "Client rotation is on, will bootstrap in background" Nov 1 00:21:26.030174 kubelet[2331]: E1101 00:21:26.030105 2331 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.76:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.76:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:21:26.032114 kubelet[2331]: I1101 00:21:26.032066 2331 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 1 00:21:26.043493 kubelet[2331]: E1101 00:21:26.043433 2331 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 1 00:21:26.043493 kubelet[2331]: I1101 00:21:26.043477 2331 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Nov 1 00:21:26.050706 kubelet[2331]: I1101 00:21:26.050381 2331 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 1 00:21:26.052926 kubelet[2331]: I1101 00:21:26.052833 2331 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 1 00:21:26.053207 kubelet[2331]: I1101 00:21:26.052911 2331 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Nov 1 00:21:26.053375 kubelet[2331]: I1101 00:21:26.053216 2331 topology_manager.go:138] "Creating topology manager with none policy" Nov 1 00:21:26.053375 kubelet[2331]: I1101 00:21:26.053231 2331 container_manager_linux.go:304] "Creating device plugin manager" Nov 1 00:21:26.053535 kubelet[2331]: I1101 00:21:26.053502 2331 state_mem.go:36] "Initialized new in-memory state store" Nov 1 00:21:26.062311 kubelet[2331]: I1101 00:21:26.062147 2331 kubelet.go:446] "Attempting to sync node with API server" Nov 1 00:21:26.062311 kubelet[2331]: I1101 00:21:26.062219 2331 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 1 00:21:26.062311 kubelet[2331]: I1101 00:21:26.062295 2331 kubelet.go:352] "Adding apiserver pod source" Nov 1 00:21:26.062311 kubelet[2331]: I1101 00:21:26.062324 2331 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 1 00:21:26.077928 kubelet[2331]: W1101 00:21:26.077854 2331 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.76:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.76:6443: connect: connection refused Nov 1 00:21:26.078247 kubelet[2331]: E1101 00:21:26.078200 2331 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.76:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.76:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:21:26.078518 kubelet[2331]: I1101 00:21:26.078492 2331 
kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 1 00:21:26.079347 kubelet[2331]: I1101 00:21:26.079315 2331 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 1 00:21:26.080700 kubelet[2331]: W1101 00:21:26.080157 2331 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Nov 1 00:21:26.081380 kubelet[2331]: W1101 00:21:26.081318 2331 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.76:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.76:6443: connect: connection refused Nov 1 00:21:26.081464 kubelet[2331]: E1101 00:21:26.081390 2331 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.76:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.76:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:21:26.137530 kubelet[2331]: I1101 00:21:26.137458 2331 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 1 00:21:26.137714 kubelet[2331]: I1101 00:21:26.137561 2331 server.go:1287] "Started kubelet" Nov 1 00:21:26.139808 kubelet[2331]: I1101 00:21:26.139192 2331 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 1 00:21:26.140253 kubelet[2331]: I1101 00:21:26.140198 2331 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 1 00:21:26.143457 kubelet[2331]: I1101 00:21:26.142253 2331 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Nov 1 00:21:26.143457 kubelet[2331]: I1101 00:21:26.142820 2331 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 1 00:21:26.144846 kubelet[2331]: I1101 00:21:26.144815 2331 server.go:479] "Adding debug handlers to kubelet server" Nov 1 00:21:26.146846 kubelet[2331]: I1101 00:21:26.146360 2331 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 1 00:21:26.148450 kubelet[2331]: E1101 00:21:26.148422 2331 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 1 00:21:26.148779 kubelet[2331]: I1101 00:21:26.148762 2331 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 1 00:21:26.149074 kubelet[2331]: I1101 00:21:26.149059 2331 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 1 00:21:26.149210 kubelet[2331]: I1101 00:21:26.149199 2331 reconciler.go:26] "Reconciler: start to sync state" Nov 1 00:21:26.149959 kubelet[2331]: W1101 00:21:26.149922 2331 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.76:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.76:6443: connect: connection refused Nov 1 00:21:26.150056 kubelet[2331]: E1101 00:21:26.150038 2331 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.76:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.76:6443: connect: connection refused" 
logger="UnhandledError" Nov 1 00:21:26.150213 kubelet[2331]: I1101 00:21:26.150139 2331 factory.go:221] Registration of the systemd container factory successfully Nov 1 00:21:26.150394 kubelet[2331]: I1101 00:21:26.150288 2331 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 1 00:21:26.153211 kubelet[2331]: E1101 00:21:26.150767 2331 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.76:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.76:6443: connect: connection refused" interval="200ms" Nov 1 00:21:26.153211 kubelet[2331]: E1101 00:21:26.151312 2331 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 1 00:21:26.153211 kubelet[2331]: I1101 00:21:26.153104 2331 factory.go:221] Registration of the containerd container factory successfully Nov 1 00:21:26.153459 kubelet[2331]: E1101 00:21:26.149889 2331 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.76:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.76:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1873ba16c1ec205a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-11-01 00:21:26.137520218 +0000 UTC m=+0.594874415,LastTimestamp:2025-11-01 00:21:26.137520218 +0000 UTC m=+0.594874415,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Nov 1 00:21:26.175643 kubelet[2331]: I1101 00:21:26.175548 2331 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 1 00:21:26.178653 kubelet[2331]: I1101 00:21:26.178597 2331 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 1 00:21:26.178772 kubelet[2331]: I1101 00:21:26.178661 2331 status_manager.go:227] "Starting to sync pod status with apiserver" Nov 1 00:21:26.178772 kubelet[2331]: I1101 00:21:26.178696 2331 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Nov 1 00:21:26.178772 kubelet[2331]: I1101 00:21:26.178712 2331 kubelet.go:2382] "Starting kubelet main sync loop" Nov 1 00:21:26.178917 kubelet[2331]: E1101 00:21:26.178788 2331 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 1 00:21:26.183002 kubelet[2331]: W1101 00:21:26.182938 2331 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.76:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.76:6443: connect: connection refused Nov 1 00:21:26.183209 kubelet[2331]: E1101 00:21:26.183146 2331 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.76:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.76:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:21:26.185640 kubelet[2331]: I1101 00:21:26.185179 2331 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 1 00:21:26.185640 kubelet[2331]: I1101 00:21:26.185202 2331 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 1 00:21:26.185640 kubelet[2331]: I1101 00:21:26.185241 2331 state_mem.go:36] "Initialized new in-memory state store" Nov 1 00:21:26.249739 kubelet[2331]: E1101 00:21:26.249550 2331 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 1 00:21:26.280020 kubelet[2331]: E1101 00:21:26.279937 2331 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 1 00:21:26.350306 kubelet[2331]: E1101 00:21:26.350230 2331 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 1 00:21:26.351980 kubelet[2331]: E1101 00:21:26.351940 2331 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.76:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.76:6443: connect: connection refused" interval="400ms" Nov 1 00:21:26.451439 kubelet[2331]: E1101 00:21:26.451354 2331 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 1 00:21:26.480797 kubelet[2331]: E1101 00:21:26.480630 2331 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 1 00:21:26.501696 kubelet[2331]: I1101 00:21:26.501045 2331 policy_none.go:49] "None policy: Start" Nov 1 00:21:26.501696 kubelet[2331]: I1101 00:21:26.501106 2331 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 1 00:21:26.501696 kubelet[2331]: I1101 00:21:26.501134 2331 state_mem.go:35] "Initializing new in-memory state store" Nov 1 00:21:26.537231 kubelet[2331]: I1101 00:21:26.537170 2331 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 1 00:21:26.538865 kubelet[2331]: I1101 00:21:26.537527 2331 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 1 00:21:26.541585 kubelet[2331]: I1101 00:21:26.537558 2331 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 1 00:21:26.541585 kubelet[2331]: I1101 00:21:26.541038 2331 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 1 
00:21:26.544020 kubelet[2331]: E1101 00:21:26.543993 2331 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 1 00:21:26.544170 kubelet[2331]: E1101 00:21:26.544153 2331 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Nov 1 00:21:26.648161 kubelet[2331]: I1101 00:21:26.647783 2331 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 1 00:21:26.648680 kubelet[2331]: E1101 00:21:26.648627 2331 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.76:6443/api/v1/nodes\": dial tcp 10.0.0.76:6443: connect: connection refused" node="localhost" Nov 1 00:21:26.753997 kubelet[2331]: E1101 00:21:26.752767 2331 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.76:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.76:6443: connect: connection refused" interval="800ms" Nov 1 00:21:26.850590 kubelet[2331]: I1101 00:21:26.850537 2331 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 1 00:21:26.851099 kubelet[2331]: E1101 00:21:26.851064 2331 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.76:6443/api/v1/nodes\": dial tcp 10.0.0.76:6443: connect: connection refused" node="localhost" Nov 1 00:21:26.889405 kubelet[2331]: E1101 00:21:26.889353 2331 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 1 00:21:26.890501 kubelet[2331]: E1101 00:21:26.890472 2331 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 1 00:21:26.892774 kubelet[2331]: E1101 00:21:26.892739 2331 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 1 00:21:26.951515 kubelet[2331]: I1101 00:21:26.951304 2331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 00:21:26.951806 kubelet[2331]: I1101 00:21:26.951523 2331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 00:21:26.951806 kubelet[2331]: I1101 00:21:26.951655 2331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fde7ffffbd3a0a21523ab920350d203d-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"fde7ffffbd3a0a21523ab920350d203d\") " pod="kube-system/kube-apiserver-localhost" Nov 1 00:21:26.951806 kubelet[2331]: I1101 00:21:26.951686 2331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/fde7ffffbd3a0a21523ab920350d203d-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"fde7ffffbd3a0a21523ab920350d203d\") " pod="kube-system/kube-apiserver-localhost" Nov 1 00:21:26.951806 kubelet[2331]: I1101 00:21:26.951735 2331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 00:21:26.951806 kubelet[2331]: I1101 00:21:26.951757 2331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a1d51be1ff02022474f2598f6e43038f-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a1d51be1ff02022474f2598f6e43038f\") " pod="kube-system/kube-scheduler-localhost" Nov 1 00:21:26.952050 kubelet[2331]: I1101 00:21:26.951777 2331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fde7ffffbd3a0a21523ab920350d203d-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"fde7ffffbd3a0a21523ab920350d203d\") " pod="kube-system/kube-apiserver-localhost" Nov 1 00:21:26.952050 kubelet[2331]: I1101 00:21:26.951808 2331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 00:21:26.952050 kubelet[2331]: I1101 00:21:26.951832 2331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 00:21:27.191160 kubelet[2331]: E1101 00:21:27.190879 2331 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:21:27.191160 kubelet[2331]: E1101 00:21:27.191052 2331 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:21:27.192221 containerd[1583]: time="2025-11-01T00:21:27.191989538Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a1d51be1ff02022474f2598f6e43038f,Namespace:kube-system,Attempt:0,}" Nov 1 00:21:27.192221 containerd[1583]: time="2025-11-01T00:21:27.192026999Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:fde7ffffbd3a0a21523ab920350d203d,Namespace:kube-system,Attempt:0,}" Nov 1 00:21:27.193727 kubelet[2331]: E1101 00:21:27.193703 2331 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:21:27.194082 containerd[1583]: time="2025-11-01T00:21:27.194028424Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4654b122dbb389158fe3c0766e603624,Namespace:kube-system,Attempt:0,}" Nov 1 00:21:27.233215 kubelet[2331]: W1101 00:21:27.233107 2331 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.76:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.76:6443: connect: connection refused Nov 1 00:21:27.233215 kubelet[2331]: E1101 00:21:27.233191 2331 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.76:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.76:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:21:27.256198 kubelet[2331]: I1101 00:21:27.256159 2331 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 1 00:21:27.256915 kubelet[2331]: E1101 00:21:27.256866 2331 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.76:6443/api/v1/nodes\": dial tcp 10.0.0.76:6443: connect: connection refused" node="localhost" Nov 1 00:21:27.555356 kubelet[2331]: E1101 00:21:27.555140 2331 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.76:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.76:6443: connect: connection refused" interval="1.6s" Nov 1 00:21:27.565888 kubelet[2331]: W1101 00:21:27.565756 2331 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.76:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.76:6443: connect: connection refused Nov 1 00:21:27.565994 kubelet[2331]: E1101 00:21:27.565896 2331 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.76:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.76:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:21:27.567535 kubelet[2331]: W1101 00:21:27.567476 2331 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.76:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.76:6443: connect: connection refused Nov 1 00:21:27.567601 kubelet[2331]: E1101 00:21:27.567536 2331 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.76:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.76:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:21:27.678406 kubelet[2331]: W1101 00:21:27.678260 2331 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.76:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.76:6443: connect: connection refused Nov 1 00:21:27.678406 kubelet[2331]: E1101 00:21:27.678405 2331 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
\"https://10.0.0.76:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.76:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:21:28.059649 kubelet[2331]: I1101 00:21:28.059599 2331 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 1 00:21:28.060189 kubelet[2331]: E1101 00:21:28.060051 2331 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.76:6443/api/v1/nodes\": dial tcp 10.0.0.76:6443: connect: connection refused" node="localhost" Nov 1 00:21:28.091633 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3390401258.mount: Deactivated successfully. Nov 1 00:21:28.100141 containerd[1583]: time="2025-11-01T00:21:28.100078207Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 1 00:21:28.101737 containerd[1583]: time="2025-11-01T00:21:28.101672921Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 1 00:21:28.102893 containerd[1583]: time="2025-11-01T00:21:28.102826442Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 1 00:21:28.104155 containerd[1583]: time="2025-11-01T00:21:28.104103114Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 1 00:21:28.105431 containerd[1583]: time="2025-11-01T00:21:28.105389737Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Nov 1 00:21:28.106324 containerd[1583]: time="2025-11-01T00:21:28.106283065Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 1 00:21:28.107296 containerd[1583]: time="2025-11-01T00:21:28.107239934Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 1 00:21:28.111866 containerd[1583]: time="2025-11-01T00:21:28.111813909Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 1 00:21:28.112863 containerd[1583]: time="2025-11-01T00:21:28.112819560Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 918.731383ms" Nov 1 00:21:28.115844 containerd[1583]: time="2025-11-01T00:21:28.115809702Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 923.727598ms" Nov 1 00:21:28.117590 containerd[1583]: 
time="2025-11-01T00:21:28.116493233Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 924.418865ms" Nov 1 00:21:28.224626 kubelet[2331]: E1101 00:21:28.200170 2331 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.76:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.76:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:21:28.489299 containerd[1583]: time="2025-11-01T00:21:28.488965906Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:21:28.489299 containerd[1583]: time="2025-11-01T00:21:28.489087005Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:21:28.489299 containerd[1583]: time="2025-11-01T00:21:28.489105500Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:21:28.491416 containerd[1583]: time="2025-11-01T00:21:28.491114538Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:21:28.491416 containerd[1583]: time="2025-11-01T00:21:28.491174271Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:21:28.491416 containerd[1583]: time="2025-11-01T00:21:28.491186744Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:21:28.491416 containerd[1583]: time="2025-11-01T00:21:28.491348449Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:21:28.494343 containerd[1583]: time="2025-11-01T00:21:28.490077647Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:21:28.501164 containerd[1583]: time="2025-11-01T00:21:28.500372997Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:21:28.501164 containerd[1583]: time="2025-11-01T00:21:28.500511318Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:21:28.501164 containerd[1583]: time="2025-11-01T00:21:28.500567695Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:21:28.501164 containerd[1583]: time="2025-11-01T00:21:28.500815133Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:21:28.596444 containerd[1583]: time="2025-11-01T00:21:28.596366951Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4654b122dbb389158fe3c0766e603624,Namespace:kube-system,Attempt:0,} returns sandbox id \"f140c0feed042d87813b528afabe790708a4199fcf681584c4e7867eea799a1d\"" Nov 1 00:21:28.602300 kubelet[2331]: E1101 00:21:28.600879 2331 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:21:28.626633 containerd[1583]: time="2025-11-01T00:21:28.626583345Z" level=info msg="CreateContainer within sandbox \"f140c0feed042d87813b528afabe790708a4199fcf681584c4e7867eea799a1d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 1 00:21:28.640074 containerd[1583]: time="2025-11-01T00:21:28.640025933Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a1d51be1ff02022474f2598f6e43038f,Namespace:kube-system,Attempt:0,} returns sandbox id \"15c9262ebf431240cac4f4dd9f416dc715f43882844386ea3e86d56e5cc0320f\"" Nov 1 00:21:28.642925 containerd[1583]: time="2025-11-01T00:21:28.642866953Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:fde7ffffbd3a0a21523ab920350d203d,Namespace:kube-system,Attempt:0,} returns sandbox id \"375e74d0469745d604e682fa3c219087808d753c5f6a8ece468ffe4adbb9df23\"" Nov 1 00:21:28.643185 kubelet[2331]: E1101 00:21:28.643149 2331 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:21:28.644318 kubelet[2331]: E1101 00:21:28.643876 2331 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:21:28.645591 containerd[1583]: time="2025-11-01T00:21:28.645507073Z" level=info msg="CreateContainer within sandbox \"15c9262ebf431240cac4f4dd9f416dc715f43882844386ea3e86d56e5cc0320f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 1 00:21:28.646699 containerd[1583]: time="2025-11-01T00:21:28.646654883Z" level=info msg="CreateContainer within sandbox \"375e74d0469745d604e682fa3c219087808d753c5f6a8ece468ffe4adbb9df23\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 1 00:21:28.678797 containerd[1583]: time="2025-11-01T00:21:28.678634890Z" level=info msg="CreateContainer within sandbox \"f140c0feed042d87813b528afabe790708a4199fcf681584c4e7867eea799a1d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"ad753988c45307ac9708e43bcfb83ede9c38c6f35c433b6573c7c96036e9f61d\"" Nov 1 00:21:28.680311 containerd[1583]: time="2025-11-01T00:21:28.680110179Z" level=info msg="StartContainer for \"ad753988c45307ac9708e43bcfb83ede9c38c6f35c433b6573c7c96036e9f61d\"" Nov 1 00:21:28.702397 containerd[1583]: time="2025-11-01T00:21:28.702335979Z" level=info msg="CreateContainer within sandbox \"15c9262ebf431240cac4f4dd9f416dc715f43882844386ea3e86d56e5cc0320f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"bc19e4c8197877fa53eda8ebbf290c7fa72cce68c9ee1299632930821218284f\"" Nov 1 00:21:28.703458 containerd[1583]: time="2025-11-01T00:21:28.703404499Z" level=info msg="StartContainer for 
\"bc19e4c8197877fa53eda8ebbf290c7fa72cce68c9ee1299632930821218284f\"" Nov 1 00:21:28.704734 containerd[1583]: time="2025-11-01T00:21:28.704594268Z" level=info msg="CreateContainer within sandbox \"375e74d0469745d604e682fa3c219087808d753c5f6a8ece468ffe4adbb9df23\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"5c9137fd8bce9c57001c6abe4947d71869cf3ae53ae758360e1c3da218a058a7\"" Nov 1 00:21:28.705323 containerd[1583]: time="2025-11-01T00:21:28.705288960Z" level=info msg="StartContainer for \"5c9137fd8bce9c57001c6abe4947d71869cf3ae53ae758360e1c3da218a058a7\"" Nov 1 00:21:28.972966 containerd[1583]: time="2025-11-01T00:21:28.972378572Z" level=info msg="StartContainer for \"5c9137fd8bce9c57001c6abe4947d71869cf3ae53ae758360e1c3da218a058a7\" returns successfully" Nov 1 00:21:28.972966 containerd[1583]: time="2025-11-01T00:21:28.972403690Z" level=info msg="StartContainer for \"ad753988c45307ac9708e43bcfb83ede9c38c6f35c433b6573c7c96036e9f61d\" returns successfully" Nov 1 00:21:29.035370 containerd[1583]: time="2025-11-01T00:21:29.035165206Z" level=info msg="StartContainer for \"bc19e4c8197877fa53eda8ebbf290c7fa72cce68c9ee1299632930821218284f\" returns successfully" Nov 1 00:21:29.152591 update_engine[1558]: I20251101 00:21:29.152374 1558 update_attempter.cc:509] Updating boot flags... Nov 1 00:21:29.199102 kubelet[2331]: E1101 00:21:29.199048 2331 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 1 00:21:29.202571 kubelet[2331]: E1101 00:21:29.202492 2331 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:21:29.205862 kubelet[2331]: E1101 00:21:29.205830 2331 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 1 00:21:29.206140 kubelet[2331]: E1101 00:21:29.206028 2331 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:21:29.552827 kubelet[2331]: E1101 00:21:29.552468 2331 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 1 00:21:29.552827 kubelet[2331]: E1101 00:21:29.552729 2331 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:21:29.606311 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2613) Nov 1 00:21:29.667573 kubelet[2331]: I1101 00:21:29.667529 2331 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 1 00:21:29.733310 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2617) Nov 1 00:21:30.577516 kubelet[2331]: E1101 00:21:30.575859 2331 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 1 00:21:30.577516 kubelet[2331]: E1101 00:21:30.576038 2331 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:21:30.578205 kubelet[2331]: 
E1101 00:21:30.577588 2331 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 1 00:21:30.578205 kubelet[2331]: E1101 00:21:30.577717 2331 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:21:30.801456 kubelet[2331]: I1101 00:21:30.801396 2331 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Nov 1 00:21:30.801456 kubelet[2331]: E1101 00:21:30.801441 2331 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Nov 1 00:21:30.851293 kubelet[2331]: I1101 00:21:30.851065 2331 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 1 00:21:31.048615 kubelet[2331]: E1101 00:21:31.048551 2331 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-node-lease\" not found" interval="3.2s" Nov 1 00:21:31.050738 kubelet[2331]: E1101 00:21:31.050096 2331 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Nov 1 00:21:31.050738 kubelet[2331]: I1101 00:21:31.050135 2331 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 1 00:21:31.052384 kubelet[2331]: E1101 00:21:31.052347 2331 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Nov 1 00:21:31.052384 kubelet[2331]: I1101 00:21:31.052379 2331 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 1 00:21:31.054896 kubelet[2331]: E1101 00:21:31.054865 2331 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Nov 1 00:21:31.078359 kubelet[2331]: I1101 00:21:31.078219 2331 apiserver.go:52] "Watching apiserver" Nov 1 00:21:31.150135 kubelet[2331]: I1101 00:21:31.149845 2331 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 1 00:21:33.264207 kubelet[2331]: I1101 00:21:33.264157 2331 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 1 00:21:33.486821 kubelet[2331]: E1101 00:21:33.486565 2331 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:21:33.577548 kubelet[2331]: E1101 00:21:33.577386 2331 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:21:33.676237 systemd[1]: Reloading requested from client PID 2622 ('systemctl') (unit session-7.scope)... Nov 1 00:21:33.676255 systemd[1]: Reloading... Nov 1 00:21:33.770304 zram_generator::config[2664]: No configuration found. 
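Across this stretch the lease-creation retries back off geometrically: the controller logs interval="200ms", then "400ms", "800ms", "1.6s", and finally "3.2s" once only the kube-node-lease namespace is missing. A small sketch of that doubling, assuming a plain exponential policy: the base value and the five intervals are read directly off the log, while the cap used here is an assumption for illustration.

```python
# Reproduce the retry intervals visible in the log: each failed attempt
# doubles the previous interval. The 7s cap is an assumption, not logged.
def lease_retry_intervals(base=0.2, attempts=5, cap=7.0):
    interval = base
    for _ in range(attempts):
        yield interval
        interval = min(interval * 2, cap)

print([f"{i:g}s" for i in lease_retry_intervals()])
# ['0.2s', '0.4s', '0.8s', '1.6s', '3.2s']  -- matches interval="200ms" ... "3.2s" above
```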
Nov 1 00:21:33.906209 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 1 00:21:34.012339 systemd[1]: Reloading finished in 335 ms. Nov 1 00:21:34.065351 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 00:21:34.094961 systemd[1]: kubelet.service: Deactivated successfully. Nov 1 00:21:34.095458 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 00:21:34.112876 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 00:21:34.321619 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 00:21:34.330377 (kubelet)[2716]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 1 00:21:34.408414 kubelet[2716]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 1 00:21:34.408414 kubelet[2716]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 1 00:21:34.408414 kubelet[2716]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 1 00:21:34.409135 kubelet[2716]: I1101 00:21:34.408535 2716 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 1 00:21:34.418880 kubelet[2716]: I1101 00:21:34.418812 2716 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Nov 1 00:21:34.418880 kubelet[2716]: I1101 00:21:34.418846 2716 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 1 00:21:34.419206 kubelet[2716]: I1101 00:21:34.419174 2716 server.go:954] "Client rotation is on, will bootstrap in background" Nov 1 00:21:34.420760 kubelet[2716]: I1101 00:21:34.420717 2716 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Nov 1 00:21:34.425389 kubelet[2716]: I1101 00:21:34.425331 2716 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 1 00:21:34.434621 kubelet[2716]: E1101 00:21:34.434560 2716 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 1 00:21:34.434621 kubelet[2716]: I1101 00:21:34.434613 2716 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Nov 1 00:21:34.441651 kubelet[2716]: I1101 00:21:34.441569 2716 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 1 00:21:34.442449 kubelet[2716]: I1101 00:21:34.442403 2716 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 1 00:21:34.442689 kubelet[2716]: I1101 00:21:34.442446 2716 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Nov 1 00:21:34.442799 kubelet[2716]: I1101 00:21:34.442693 2716 topology_manager.go:138] "Creating topology manager with none policy" Nov 1 00:21:34.442799 kubelet[2716]: I1101 00:21:34.442705 2716 container_manager_linux.go:304] "Creating device plugin manager" Nov 1 00:21:34.442799 kubelet[2716]: I1101 00:21:34.442772 2716 state_mem.go:36] "Initialized new in-memory state store" Nov 1 00:21:34.443004 kubelet[2716]: I1101 00:21:34.442974 2716 kubelet.go:446] "Attempting to sync node with API server" Nov 1 00:21:34.443004 kubelet[2716]: I1101 00:21:34.443008 2716 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 1 00:21:34.443088 kubelet[2716]: I1101 00:21:34.443036 2716 kubelet.go:352] "Adding apiserver pod source" Nov 1 00:21:34.443088 kubelet[2716]: I1101 00:21:34.443050 2716 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 1 00:21:34.448591 kubelet[2716]: I1101 00:21:34.445102 2716 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 1 00:21:34.448591 kubelet[2716]: I1101 00:21:34.447920 2716 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 1 00:21:34.448711 kubelet[2716]: I1101 00:21:34.448637 2716 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 1 00:21:34.448711 kubelet[2716]: I1101 00:21:34.448668 2716 server.go:1287] "Started kubelet" Nov 1 00:21:34.456147 kubelet[2716]: I1101 00:21:34.453605 2716 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 1 00:21:34.456147 kubelet[2716]: I1101 00:21:34.454028 2716 
server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 1 00:21:34.456147 kubelet[2716]: I1101 00:21:34.454202 2716 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Nov 1 00:21:34.456147 kubelet[2716]: I1101 00:21:34.454746 2716 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 1 00:21:34.456147 kubelet[2716]: I1101 00:21:34.455325 2716 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 1 00:21:34.456147 kubelet[2716]: I1101 00:21:34.455381 2716 server.go:479] "Adding debug handlers to kubelet server" Nov 1 00:21:34.466447 kubelet[2716]: I1101 00:21:34.464702 2716 factory.go:221] Registration of the systemd container factory successfully Nov 1 00:21:34.466447 kubelet[2716]: I1101 00:21:34.464862 2716 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 1 00:21:34.466447 kubelet[2716]: E1101 00:21:34.465345 2716 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 1 00:21:34.466447 kubelet[2716]: I1101 00:21:34.465452 2716 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 1 00:21:34.466447 kubelet[2716]: I1101 00:21:34.465614 2716 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 1 00:21:34.466447 kubelet[2716]: I1101 00:21:34.465790 2716 reconciler.go:26] "Reconciler: start to sync state" Nov 1 00:21:34.467117 kubelet[2716]: I1101 00:21:34.467087 2716 factory.go:221] Registration of the containerd container factory successfully Nov 1 00:21:34.483892 kubelet[2716]: I1101 00:21:34.483431 2716 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 1 00:21:34.485561 kubelet[2716]: I1101 00:21:34.485534 2716 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 1 00:21:34.485614 kubelet[2716]: I1101 00:21:34.485591 2716 status_manager.go:227] "Starting to sync pod status with apiserver" Nov 1 00:21:34.485682 kubelet[2716]: I1101 00:21:34.485619 2716 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
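The second kubelet instance (PID 2716) comes up with the same container-manager nodeConfig as before, including the default hard-eviction thresholds. Expressed as the evictionHard map a KubeletConfiguration would carry, those thresholds look roughly like the sketch below; the values are the ones printed in the log, and the Python dict form is only for illustration since the log shows them as Go-struct JSON.

```python
# The HardEvictionThresholds from the nodeConfig above, rewritten as the
# evictionHard signal -> threshold map. Values are taken from the log;
# everything else here is illustrative.
eviction_hard = {
    "memory.available":   "100Mi",  # Quantity 100Mi, Percentage 0
    "nodefs.available":   "10%",    # Percentage 0.1
    "nodefs.inodesFree":  "5%",     # Percentage 0.05
    "imagefs.available":  "15%",    # Percentage 0.15
    "imagefs.inodesFree": "5%",     # Percentage 0.05
}

if __name__ == "__main__":
    for signal, threshold in eviction_hard.items():
        print(f"{signal:<20} < {threshold}")
```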
Nov 1 00:21:34.485682 kubelet[2716]: I1101 00:21:34.485630 2716 kubelet.go:2382] "Starting kubelet main sync loop" Nov 1 00:21:34.485890 kubelet[2716]: E1101 00:21:34.485710 2716 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 1 00:21:34.541866 kubelet[2716]: I1101 00:21:34.541830 2716 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 1 00:21:34.541866 kubelet[2716]: I1101 00:21:34.541847 2716 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 1 00:21:34.541866 kubelet[2716]: I1101 00:21:34.541867 2716 state_mem.go:36] "Initialized new in-memory state store" Nov 1 00:21:34.542096 kubelet[2716]: I1101 00:21:34.542070 2716 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 1 00:21:34.542135 kubelet[2716]: I1101 00:21:34.542087 2716 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 1 00:21:34.542135 kubelet[2716]: I1101 00:21:34.542115 2716 policy_none.go:49] "None policy: Start" Nov 1 00:21:34.542135 kubelet[2716]: I1101 00:21:34.542128 2716 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 1 00:21:34.542217 kubelet[2716]: I1101 00:21:34.542143 2716 state_mem.go:35] "Initializing new in-memory state store" Nov 1 00:21:34.542501 kubelet[2716]: I1101 00:21:34.542254 2716 state_mem.go:75] "Updated machine memory state" Nov 1 00:21:34.543857 kubelet[2716]: I1101 00:21:34.543806 2716 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 1 00:21:34.544016 kubelet[2716]: I1101 00:21:34.543988 2716 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 1 00:21:34.544069 kubelet[2716]: I1101 00:21:34.544003 2716 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 1 00:21:34.545919 kubelet[2716]: I1101 00:21:34.544647 2716 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 1 00:21:34.545138 sudo[2749]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Nov 1 00:21:34.545659 sudo[2749]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Nov 1 00:21:34.549129 kubelet[2716]: E1101 00:21:34.549081 2716 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 1 00:21:34.587103 kubelet[2716]: I1101 00:21:34.586980 2716 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 1 00:21:34.587751 kubelet[2716]: I1101 00:21:34.587141 2716 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 1 00:21:34.588773 kubelet[2716]: I1101 00:21:34.587258 2716 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 1 00:21:34.598521 kubelet[2716]: E1101 00:21:34.598292 2716 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Nov 1 00:21:34.654912 kubelet[2716]: I1101 00:21:34.654870 2716 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 1 00:21:34.663358 kubelet[2716]: I1101 00:21:34.663304 2716 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Nov 1 00:21:34.663554 kubelet[2716]: I1101 00:21:34.663416 2716 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Nov 1 00:21:34.670236 kubelet[2716]: I1101 00:21:34.669420 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fde7ffffbd3a0a21523ab920350d203d-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"fde7ffffbd3a0a21523ab920350d203d\") " pod="kube-system/kube-apiserver-localhost" Nov 1 00:21:34.670236 kubelet[2716]: I1101 00:21:34.669582 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fde7ffffbd3a0a21523ab920350d203d-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"fde7ffffbd3a0a21523ab920350d203d\") " pod="kube-system/kube-apiserver-localhost" Nov 1 00:21:34.670236 kubelet[2716]: I1101 00:21:34.669802 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 00:21:34.670236 kubelet[2716]: I1101 00:21:34.669838 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 00:21:34.670236 kubelet[2716]: I1101 00:21:34.670241 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 00:21:34.670570 kubelet[2716]: I1101 00:21:34.670330 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a1d51be1ff02022474f2598f6e43038f-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a1d51be1ff02022474f2598f6e43038f\") " 
pod="kube-system/kube-scheduler-localhost" Nov 1 00:21:34.670570 kubelet[2716]: I1101 00:21:34.670349 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fde7ffffbd3a0a21523ab920350d203d-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"fde7ffffbd3a0a21523ab920350d203d\") " pod="kube-system/kube-apiserver-localhost" Nov 1 00:21:34.670570 kubelet[2716]: I1101 00:21:34.670437 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 00:21:34.672589 kubelet[2716]: I1101 00:21:34.670540 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 00:21:34.894937 kubelet[2716]: E1101 00:21:34.894795 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:21:34.898641 kubelet[2716]: E1101 00:21:34.898531 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:21:34.898856 kubelet[2716]: E1101 00:21:34.898836 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:21:35.108891 sudo[2749]: pam_unix(sudo:session): session closed for user root Nov 1 00:21:35.443878 kubelet[2716]: I1101 00:21:35.443817 2716 apiserver.go:52] "Watching apiserver" Nov 1 00:21:35.466392 kubelet[2716]: I1101 00:21:35.466337 2716 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 1 00:21:35.507652 kubelet[2716]: I1101 00:21:35.507365 2716 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 1 00:21:35.507652 kubelet[2716]: I1101 00:21:35.507472 2716 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 1 00:21:35.508436 kubelet[2716]: I1101 00:21:35.508409 2716 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 1 00:21:36.388633 kubelet[2716]: E1101 00:21:36.388530 2716 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Nov 1 00:21:36.388813 kubelet[2716]: E1101 00:21:36.388772 2716 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Nov 1 00:21:36.389473 kubelet[2716]: E1101 00:21:36.388980 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:21:36.389810 kubelet[2716]: E1101 00:21:36.389684 2716 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:21:36.390613 kubelet[2716]: E1101 00:21:36.390374 2716 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Nov 1 00:21:36.390613 kubelet[2716]: E1101 00:21:36.390543 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:21:36.509110 kubelet[2716]: E1101 00:21:36.509032 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:21:36.509721 kubelet[2716]: E1101 00:21:36.509255 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:21:36.509721 kubelet[2716]: E1101 00:21:36.509462 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:21:36.714178 kubelet[2716]: I1101 00:21:36.714010 2716 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.713976455 podStartE2EDuration="2.713976455s" podCreationTimestamp="2025-11-01 00:21:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:21:35.862815175 +0000 UTC m=+1.525343272" watchObservedRunningTime="2025-11-01 00:21:36.713976455 +0000 UTC m=+2.376504542" Nov 1 00:21:37.113104 kubelet[2716]: I1101 00:21:37.112929 2716 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.112867218 podStartE2EDuration="3.112867218s" podCreationTimestamp="2025-11-01 00:21:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:21:36.714127098 +0000 UTC m=+2.376655185" watchObservedRunningTime="2025-11-01 00:21:37.112867218 +0000 UTC m=+2.775395305" Nov 1 00:21:37.190120 kubelet[2716]: I1101 00:21:37.189954 2716 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=4.189928141 podStartE2EDuration="4.189928141s" podCreationTimestamp="2025-11-01 00:21:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:21:37.113166342 +0000 UTC m=+2.775694429" watchObservedRunningTime="2025-11-01 00:21:37.189928141 +0000 UTC m=+2.852456228" Nov 1 00:21:38.692592 kubelet[2716]: I1101 00:21:38.692529 2716 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 1 00:21:38.693197 containerd[1583]: time="2025-11-01T00:21:38.693079228Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Nov 1 00:21:38.693517 kubelet[2716]: I1101 00:21:38.693332 2716 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 1 00:21:39.100357 kubelet[2716]: I1101 00:21:39.100149 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5e838fae-ac9e-45a7-8ef8-574f43dd7cd7-cilium-cgroup\") pod \"cilium-r9mqv\" (UID: \"5e838fae-ac9e-45a7-8ef8-574f43dd7cd7\") " pod="kube-system/cilium-r9mqv" Nov 1 00:21:39.100357 kubelet[2716]: I1101 00:21:39.100206 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tmpn\" (UniqueName: \"kubernetes.io/projected/05408183-19f3-42ba-bace-f8d5b6d68a05-kube-api-access-5tmpn\") pod \"kube-proxy-rl2xg\" (UID: \"05408183-19f3-42ba-bace-f8d5b6d68a05\") " pod="kube-system/kube-proxy-rl2xg" Nov 1 00:21:39.100357 kubelet[2716]: I1101 00:21:39.100229 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5e838fae-ac9e-45a7-8ef8-574f43dd7cd7-bpf-maps\") pod \"cilium-r9mqv\" (UID: \"5e838fae-ac9e-45a7-8ef8-574f43dd7cd7\") " pod="kube-system/cilium-r9mqv" Nov 1 00:21:39.100357 kubelet[2716]: I1101 00:21:39.100251 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5e838fae-ac9e-45a7-8ef8-574f43dd7cd7-hostproc\") pod \"cilium-r9mqv\" (UID: \"5e838fae-ac9e-45a7-8ef8-574f43dd7cd7\") " pod="kube-system/cilium-r9mqv" Nov 1 00:21:39.100357 kubelet[2716]: I1101 00:21:39.100305 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5e838fae-ac9e-45a7-8ef8-574f43dd7cd7-cilium-config-path\") pod \"cilium-r9mqv\" (UID: \"5e838fae-ac9e-45a7-8ef8-574f43dd7cd7\") " pod="kube-system/cilium-r9mqv" Nov 1 00:21:39.100357 kubelet[2716]: I1101 00:21:39.100334 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/05408183-19f3-42ba-bace-f8d5b6d68a05-xtables-lock\") pod \"kube-proxy-rl2xg\" (UID: \"05408183-19f3-42ba-bace-f8d5b6d68a05\") " pod="kube-system/kube-proxy-rl2xg" Nov 1 00:21:39.100679 kubelet[2716]: I1101 00:21:39.100355 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5e838fae-ac9e-45a7-8ef8-574f43dd7cd7-etc-cni-netd\") pod \"cilium-r9mqv\" (UID: \"5e838fae-ac9e-45a7-8ef8-574f43dd7cd7\") " pod="kube-system/cilium-r9mqv" Nov 1 00:21:39.100679 kubelet[2716]: I1101 00:21:39.100381 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5e838fae-ac9e-45a7-8ef8-574f43dd7cd7-lib-modules\") pod \"cilium-r9mqv\" (UID: \"5e838fae-ac9e-45a7-8ef8-574f43dd7cd7\") " pod="kube-system/cilium-r9mqv" Nov 1 00:21:39.100679 kubelet[2716]: I1101 00:21:39.100397 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5e838fae-ac9e-45a7-8ef8-574f43dd7cd7-hubble-tls\") pod \"cilium-r9mqv\" (UID: \"5e838fae-ac9e-45a7-8ef8-574f43dd7cd7\") " pod="kube-system/cilium-r9mqv" Nov 1 00:21:39.100679 kubelet[2716]: I1101 
00:21:39.100412 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/05408183-19f3-42ba-bace-f8d5b6d68a05-lib-modules\") pod \"kube-proxy-rl2xg\" (UID: \"05408183-19f3-42ba-bace-f8d5b6d68a05\") " pod="kube-system/kube-proxy-rl2xg" Nov 1 00:21:39.100679 kubelet[2716]: I1101 00:21:39.100427 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5e838fae-ac9e-45a7-8ef8-574f43dd7cd7-xtables-lock\") pod \"cilium-r9mqv\" (UID: \"5e838fae-ac9e-45a7-8ef8-574f43dd7cd7\") " pod="kube-system/cilium-r9mqv" Nov 1 00:21:39.100679 kubelet[2716]: I1101 00:21:39.100441 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5e838fae-ac9e-45a7-8ef8-574f43dd7cd7-clustermesh-secrets\") pod \"cilium-r9mqv\" (UID: \"5e838fae-ac9e-45a7-8ef8-574f43dd7cd7\") " pod="kube-system/cilium-r9mqv" Nov 1 00:21:39.100873 kubelet[2716]: I1101 00:21:39.100482 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5e838fae-ac9e-45a7-8ef8-574f43dd7cd7-host-proc-sys-net\") pod \"cilium-r9mqv\" (UID: \"5e838fae-ac9e-45a7-8ef8-574f43dd7cd7\") " pod="kube-system/cilium-r9mqv" Nov 1 00:21:39.100873 kubelet[2716]: I1101 00:21:39.100511 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/05408183-19f3-42ba-bace-f8d5b6d68a05-kube-proxy\") pod \"kube-proxy-rl2xg\" (UID: \"05408183-19f3-42ba-bace-f8d5b6d68a05\") " pod="kube-system/kube-proxy-rl2xg" Nov 1 00:21:39.100873 kubelet[2716]: I1101 00:21:39.100527 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5e838fae-ac9e-45a7-8ef8-574f43dd7cd7-cilium-run\") pod \"cilium-r9mqv\" (UID: \"5e838fae-ac9e-45a7-8ef8-574f43dd7cd7\") " pod="kube-system/cilium-r9mqv" Nov 1 00:21:39.100873 kubelet[2716]: I1101 00:21:39.100545 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5e838fae-ac9e-45a7-8ef8-574f43dd7cd7-host-proc-sys-kernel\") pod \"cilium-r9mqv\" (UID: \"5e838fae-ac9e-45a7-8ef8-574f43dd7cd7\") " pod="kube-system/cilium-r9mqv" Nov 1 00:21:39.100873 kubelet[2716]: I1101 00:21:39.100560 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7g8t\" (UniqueName: \"kubernetes.io/projected/5e838fae-ac9e-45a7-8ef8-574f43dd7cd7-kube-api-access-k7g8t\") pod \"cilium-r9mqv\" (UID: \"5e838fae-ac9e-45a7-8ef8-574f43dd7cd7\") " pod="kube-system/cilium-r9mqv" Nov 1 00:21:39.100873 kubelet[2716]: I1101 00:21:39.100573 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5e838fae-ac9e-45a7-8ef8-574f43dd7cd7-cni-path\") pod \"cilium-r9mqv\" (UID: \"5e838fae-ac9e-45a7-8ef8-574f43dd7cd7\") " pod="kube-system/cilium-r9mqv" Nov 1 00:21:39.616754 kubelet[2716]: E1101 00:21:39.616720 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:21:39.617356 containerd[1583]: time="2025-11-01T00:21:39.617311635Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-r9mqv,Uid:5e838fae-ac9e-45a7-8ef8-574f43dd7cd7,Namespace:kube-system,Attempt:0,}" Nov 1 00:21:39.617510 kubelet[2716]: E1101 00:21:39.617492 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:21:39.617912 containerd[1583]: time="2025-11-01T00:21:39.617881047Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rl2xg,Uid:05408183-19f3-42ba-bace-f8d5b6d68a05,Namespace:kube-system,Attempt:0,}" Nov 1 00:21:39.833182 sudo[1785]: pam_unix(sudo:session): session closed for user root Nov 1 00:21:39.843611 sshd[1778]: pam_unix(sshd:session): session closed for user core Nov 1 00:21:39.848672 systemd[1]: sshd@7-10.0.0.76:22-10.0.0.1:49420.service: Deactivated successfully. Nov 1 00:21:39.851572 systemd-logind[1557]: Session 7 logged out. Waiting for processes to exit. Nov 1 00:21:39.851673 systemd[1]: session-7.scope: Deactivated successfully. Nov 1 00:21:39.856167 systemd-logind[1557]: Removed session 7. Nov 1 00:21:40.185090 containerd[1583]: time="2025-11-01T00:21:40.184774230Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:21:40.185090 containerd[1583]: time="2025-11-01T00:21:40.184869330Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:21:40.185090 containerd[1583]: time="2025-11-01T00:21:40.184885220Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:21:40.185090 containerd[1583]: time="2025-11-01T00:21:40.185004955Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:21:40.232779 containerd[1583]: time="2025-11-01T00:21:40.232742546Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-r9mqv,Uid:5e838fae-ac9e-45a7-8ef8-574f43dd7cd7,Namespace:kube-system,Attempt:0,} returns sandbox id \"8aa4932c6ae829dda34f17ea323911360b64ea13ee13adf835897a529da46706\"" Nov 1 00:21:40.233333 kubelet[2716]: E1101 00:21:40.233311 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:21:40.234792 containerd[1583]: time="2025-11-01T00:21:40.234754813Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Nov 1 00:21:40.246588 containerd[1583]: time="2025-11-01T00:21:40.246482074Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:21:40.246588 containerd[1583]: time="2025-11-01T00:21:40.246571102Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:21:40.246588 containerd[1583]: time="2025-11-01T00:21:40.246589556Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:21:40.246819 containerd[1583]: time="2025-11-01T00:21:40.246699303Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:21:40.292058 containerd[1583]: time="2025-11-01T00:21:40.291771458Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rl2xg,Uid:05408183-19f3-42ba-bace-f8d5b6d68a05,Namespace:kube-system,Attempt:0,} returns sandbox id \"4adcbd5a1b27e4884a5724424b3d85455b056a00ef789aaf2005013d22628a6d\"" Nov 1 00:21:40.292710 kubelet[2716]: E1101 00:21:40.292678 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:21:40.295149 containerd[1583]: time="2025-11-01T00:21:40.295096545Z" level=info msg="CreateContainer within sandbox \"4adcbd5a1b27e4884a5724424b3d85455b056a00ef789aaf2005013d22628a6d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 1 00:21:40.410775 kubelet[2716]: I1101 00:21:40.410723 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vpvw\" (UniqueName: \"kubernetes.io/projected/8161d1e0-74df-43b4-bec3-a4658844240a-kube-api-access-6vpvw\") pod \"cilium-operator-6c4d7847fc-7224p\" (UID: \"8161d1e0-74df-43b4-bec3-a4658844240a\") " pod="kube-system/cilium-operator-6c4d7847fc-7224p" Nov 1 00:21:40.410775 kubelet[2716]: I1101 00:21:40.410778 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8161d1e0-74df-43b4-bec3-a4658844240a-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-7224p\" (UID: \"8161d1e0-74df-43b4-bec3-a4658844240a\") " pod="kube-system/cilium-operator-6c4d7847fc-7224p" Nov 1 00:21:40.651893 kubelet[2716]: E1101 00:21:40.651708 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:21:40.652476 containerd[1583]: time="2025-11-01T00:21:40.652432768Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-7224p,Uid:8161d1e0-74df-43b4-bec3-a4658844240a,Namespace:kube-system,Attempt:0,}" Nov 1 00:21:41.024417 containerd[1583]: time="2025-11-01T00:21:41.024251966Z" level=info msg="CreateContainer within sandbox \"4adcbd5a1b27e4884a5724424b3d85455b056a00ef789aaf2005013d22628a6d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c74f97b1fcc1db263cd206eaeceb31a3986a27d553971a71c5d7ec9544f96628\"" Nov 1 00:21:41.025761 containerd[1583]: time="2025-11-01T00:21:41.025280471Z" level=info msg="StartContainer for \"c74f97b1fcc1db263cd206eaeceb31a3986a27d553971a71c5d7ec9544f96628\"" Nov 1 00:21:41.344069 containerd[1583]: time="2025-11-01T00:21:41.343794371Z" level=info msg="StartContainer for \"c74f97b1fcc1db263cd206eaeceb31a3986a27d553971a71c5d7ec9544f96628\" returns successfully" Nov 1 00:21:41.525812 kubelet[2716]: E1101 00:21:41.525772 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:21:42.028551 containerd[1583]: time="2025-11-01T00:21:42.028190756Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:21:42.028551 containerd[1583]: time="2025-11-01T00:21:42.028258944Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:21:42.028551 containerd[1583]: time="2025-11-01T00:21:42.028331971Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:21:42.028551 containerd[1583]: time="2025-11-01T00:21:42.028444283Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:21:42.089599 containerd[1583]: time="2025-11-01T00:21:42.089551735Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-7224p,Uid:8161d1e0-74df-43b4-bec3-a4658844240a,Namespace:kube-system,Attempt:0,} returns sandbox id \"29f9415cb952c115ba4bf1c1b80a2a467c2aa91b252ea14d0bbe30fb480fa234\"" Nov 1 00:21:42.090523 kubelet[2716]: E1101 00:21:42.090489 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:21:42.531067 kubelet[2716]: E1101 00:21:42.531012 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:21:44.142411 kubelet[2716]: E1101 00:21:44.142370 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:21:44.222568 kubelet[2716]: I1101 00:21:44.222475 2716 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-rl2xg" podStartSLOduration=6.222448986 podStartE2EDuration="6.222448986s" podCreationTimestamp="2025-11-01 00:21:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:21:41.785330294 +0000 UTC m=+7.447858401" watchObservedRunningTime="2025-11-01 00:21:44.222448986 +0000 UTC m=+9.884977073" Nov 1 00:21:44.537566 kubelet[2716]: E1101 00:21:44.537498 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:21:44.884773 kubelet[2716]: E1101 00:21:44.884608 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:21:45.437699 kubelet[2716]: E1101 00:21:45.437390 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:21:45.540247 kubelet[2716]: E1101 00:21:45.540123 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:21:46.827354 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1818086003.mount: Deactivated successfully. 
Nov 1 00:21:51.279753 containerd[1583]: time="2025-11-01T00:21:51.279677130Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:21:51.281256 containerd[1583]: time="2025-11-01T00:21:51.281197165Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Nov 1 00:21:51.284242 containerd[1583]: time="2025-11-01T00:21:51.284201650Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:21:51.286369 containerd[1583]: time="2025-11-01T00:21:51.286326070Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 11.05153039s" Nov 1 00:21:51.286429 containerd[1583]: time="2025-11-01T00:21:51.286382075Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Nov 1 00:21:51.287459 containerd[1583]: time="2025-11-01T00:21:51.287419735Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Nov 1 00:21:51.293039 containerd[1583]: time="2025-11-01T00:21:51.293003714Z" level=info msg="CreateContainer within sandbox \"8aa4932c6ae829dda34f17ea323911360b64ea13ee13adf835897a529da46706\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Nov 1 00:21:51.323973 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1529275896.mount: Deactivated successfully. Nov 1 00:21:51.364790 containerd[1583]: time="2025-11-01T00:21:51.364716912Z" level=info msg="CreateContainer within sandbox \"8aa4932c6ae829dda34f17ea323911360b64ea13ee13adf835897a529da46706\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"7a57dc6b222fe8cd8e997b887c7fcb2993f65a374b4680e5166b84bec79de761\"" Nov 1 00:21:51.366140 containerd[1583]: time="2025-11-01T00:21:51.365839501Z" level=info msg="StartContainer for \"7a57dc6b222fe8cd8e997b887c7fcb2993f65a374b4680e5166b84bec79de761\"" Nov 1 00:21:51.644312 containerd[1583]: time="2025-11-01T00:21:51.644101728Z" level=info msg="StartContainer for \"7a57dc6b222fe8cd8e997b887c7fcb2993f65a374b4680e5166b84bec79de761\" returns successfully" Nov 1 00:21:51.800450 kubelet[2716]: E1101 00:21:51.800417 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:21:51.992638 systemd-resolved[1474]: Under memory pressure, flushing caches. Nov 1 00:21:52.013702 systemd-journald[1152]: Under memory pressure, flushing caches. Nov 1 00:21:51.992685 systemd-resolved[1474]: Flushed all caches. Nov 1 00:21:52.320379 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7a57dc6b222fe8cd8e997b887c7fcb2993f65a374b4680e5166b84bec79de761-rootfs.mount: Deactivated successfully. 
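For a sense of scale, the cilium:v1.12.5 pull logged above read 166,730,503 bytes in 11.05153039 s, which works out to roughly 15 MB/s from quay.io (1.667e8 B / 11.05 s is about 1.51e7 B/s); the slightly smaller figure 166,719,855 in the "Pulled image" message is containerd's recorded size for the same digest, which is why the two numbers differ by a few kilobytes.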
Nov 1 00:21:52.672502 containerd[1583]: time="2025-11-01T00:21:52.670816772Z" level=info msg="shim disconnected" id=7a57dc6b222fe8cd8e997b887c7fcb2993f65a374b4680e5166b84bec79de761 namespace=k8s.io Nov 1 00:21:52.672502 containerd[1583]: time="2025-11-01T00:21:52.672483673Z" level=warning msg="cleaning up after shim disconnected" id=7a57dc6b222fe8cd8e997b887c7fcb2993f65a374b4680e5166b84bec79de761 namespace=k8s.io Nov 1 00:21:52.672502 containerd[1583]: time="2025-11-01T00:21:52.672501086Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 1 00:21:52.802780 kubelet[2716]: E1101 00:21:52.802749 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:21:52.804866 containerd[1583]: time="2025-11-01T00:21:52.804747583Z" level=info msg="CreateContainer within sandbox \"8aa4932c6ae829dda34f17ea323911360b64ea13ee13adf835897a529da46706\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Nov 1 00:21:53.020931 containerd[1583]: time="2025-11-01T00:21:53.020792150Z" level=info msg="CreateContainer within sandbox \"8aa4932c6ae829dda34f17ea323911360b64ea13ee13adf835897a529da46706\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"3532380d262894951163483eafe6e1af5cbca70cda6e134aaed8d00751a6e8cc\"" Nov 1 00:21:53.021496 containerd[1583]: time="2025-11-01T00:21:53.021448972Z" level=info msg="StartContainer for \"3532380d262894951163483eafe6e1af5cbca70cda6e134aaed8d00751a6e8cc\"" Nov 1 00:21:53.090253 containerd[1583]: time="2025-11-01T00:21:53.090194658Z" level=info msg="StartContainer for \"3532380d262894951163483eafe6e1af5cbca70cda6e134aaed8d00751a6e8cc\" returns successfully" Nov 1 00:21:53.101429 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 1 00:21:53.102697 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 1 00:21:53.102926 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Nov 1 00:21:53.113904 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 1 00:21:53.147676 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 1 00:21:53.316672 containerd[1583]: time="2025-11-01T00:21:53.316016026Z" level=info msg="shim disconnected" id=3532380d262894951163483eafe6e1af5cbca70cda6e134aaed8d00751a6e8cc namespace=k8s.io Nov 1 00:21:53.316672 containerd[1583]: time="2025-11-01T00:21:53.316102648Z" level=warning msg="cleaning up after shim disconnected" id=3532380d262894951163483eafe6e1af5cbca70cda6e134aaed8d00751a6e8cc namespace=k8s.io Nov 1 00:21:53.316672 containerd[1583]: time="2025-11-01T00:21:53.316112797Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 1 00:21:53.320819 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3532380d262894951163483eafe6e1af5cbca70cda6e134aaed8d00751a6e8cc-rootfs.mount: Deactivated successfully. 
Nov 1 00:21:53.806688 kubelet[2716]: E1101 00:21:53.806648 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:21:53.808709 containerd[1583]: time="2025-11-01T00:21:53.808656043Z" level=info msg="CreateContainer within sandbox \"8aa4932c6ae829dda34f17ea323911360b64ea13ee13adf835897a529da46706\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Nov 1 00:21:54.554463 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1725092948.mount: Deactivated successfully. Nov 1 00:21:54.582747 containerd[1583]: time="2025-11-01T00:21:54.582664425Z" level=info msg="CreateContainer within sandbox \"8aa4932c6ae829dda34f17ea323911360b64ea13ee13adf835897a529da46706\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"39c3e9d35fc218dcbfd8dbdaed35fa8538fff0d39d5f35f8e5d2aa9350f39a05\"" Nov 1 00:21:54.583631 containerd[1583]: time="2025-11-01T00:21:54.583591857Z" level=info msg="StartContainer for \"39c3e9d35fc218dcbfd8dbdaed35fa8538fff0d39d5f35f8e5d2aa9350f39a05\"" Nov 1 00:21:54.780336 containerd[1583]: time="2025-11-01T00:21:54.780226875Z" level=info msg="StartContainer for \"39c3e9d35fc218dcbfd8dbdaed35fa8538fff0d39d5f35f8e5d2aa9350f39a05\" returns successfully" Nov 1 00:21:54.810501 kubelet[2716]: E1101 00:21:54.810366 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:21:54.822606 containerd[1583]: time="2025-11-01T00:21:54.822504234Z" level=info msg="shim disconnected" id=39c3e9d35fc218dcbfd8dbdaed35fa8538fff0d39d5f35f8e5d2aa9350f39a05 namespace=k8s.io Nov 1 00:21:54.822606 containerd[1583]: time="2025-11-01T00:21:54.822603180Z" level=warning msg="cleaning up after shim disconnected" id=39c3e9d35fc218dcbfd8dbdaed35fa8538fff0d39d5f35f8e5d2aa9350f39a05 namespace=k8s.io Nov 1 00:21:54.822606 containerd[1583]: time="2025-11-01T00:21:54.822617697Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 1 00:21:55.378535 containerd[1583]: time="2025-11-01T00:21:55.378081579Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:21:55.381427 containerd[1583]: time="2025-11-01T00:21:55.381382406Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Nov 1 00:21:55.385370 containerd[1583]: time="2025-11-01T00:21:55.385333034Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:21:55.389160 containerd[1583]: time="2025-11-01T00:21:55.388510039Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 4.101045701s" Nov 1 00:21:55.389160 containerd[1583]: time="2025-11-01T00:21:55.388549383Z" level=info msg="PullImage 
\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Nov 1 00:21:55.392081 containerd[1583]: time="2025-11-01T00:21:55.391923729Z" level=info msg="CreateContainer within sandbox \"29f9415cb952c115ba4bf1c1b80a2a467c2aa91b252ea14d0bbe30fb480fa234\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Nov 1 00:21:55.420722 containerd[1583]: time="2025-11-01T00:21:55.420525253Z" level=info msg="CreateContainer within sandbox \"29f9415cb952c115ba4bf1c1b80a2a467c2aa91b252ea14d0bbe30fb480fa234\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"2e5c187855d0196255d77f9e2b06a1e96e042617a70bdace36b741b439182572\"" Nov 1 00:21:55.427413 containerd[1583]: time="2025-11-01T00:21:55.423586741Z" level=info msg="StartContainer for \"2e5c187855d0196255d77f9e2b06a1e96e042617a70bdace36b741b439182572\"" Nov 1 00:21:55.508624 containerd[1583]: time="2025-11-01T00:21:55.508568358Z" level=info msg="StartContainer for \"2e5c187855d0196255d77f9e2b06a1e96e042617a70bdace36b741b439182572\" returns successfully" Nov 1 00:21:55.550971 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-39c3e9d35fc218dcbfd8dbdaed35fa8538fff0d39d5f35f8e5d2aa9350f39a05-rootfs.mount: Deactivated successfully. Nov 1 00:21:55.817753 kubelet[2716]: E1101 00:21:55.817493 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:21:55.819348 kubelet[2716]: E1101 00:21:55.819244 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:21:55.821850 containerd[1583]: time="2025-11-01T00:21:55.821792525Z" level=info msg="CreateContainer within sandbox \"8aa4932c6ae829dda34f17ea323911360b64ea13ee13adf835897a529da46706\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Nov 1 00:21:55.851669 containerd[1583]: time="2025-11-01T00:21:55.851540481Z" level=info msg="CreateContainer within sandbox \"8aa4932c6ae829dda34f17ea323911360b64ea13ee13adf835897a529da46706\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ea44e1b2b4e5e2f601842d88ed14672e5dae79064e0f9b35c9e3797a0f657e9e\"" Nov 1 00:21:55.853977 containerd[1583]: time="2025-11-01T00:21:55.852993960Z" level=info msg="StartContainer for \"ea44e1b2b4e5e2f601842d88ed14672e5dae79064e0f9b35c9e3797a0f657e9e\"" Nov 1 00:21:55.855465 kubelet[2716]: I1101 00:21:55.855340 2716 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-7224p" podStartSLOduration=3.5563907930000003 podStartE2EDuration="16.855297626s" podCreationTimestamp="2025-11-01 00:21:39 +0000 UTC" firstStartedPulling="2025-11-01 00:21:42.091219792 +0000 UTC m=+7.753747879" lastFinishedPulling="2025-11-01 00:21:55.390126625 +0000 UTC m=+21.052654712" observedRunningTime="2025-11-01 00:21:55.854954171 +0000 UTC m=+21.517482258" watchObservedRunningTime="2025-11-01 00:21:55.855297626 +0000 UTC m=+21.517825713" Nov 1 00:21:55.994598 containerd[1583]: time="2025-11-01T00:21:55.994530516Z" level=info msg="StartContainer for \"ea44e1b2b4e5e2f601842d88ed14672e5dae79064e0f9b35c9e3797a0f657e9e\" returns successfully" Nov 1 00:21:56.014379 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-ea44e1b2b4e5e2f601842d88ed14672e5dae79064e0f9b35c9e3797a0f657e9e-rootfs.mount: Deactivated successfully. Nov 1 00:21:56.532260 containerd[1583]: time="2025-11-01T00:21:56.532175449Z" level=info msg="shim disconnected" id=ea44e1b2b4e5e2f601842d88ed14672e5dae79064e0f9b35c9e3797a0f657e9e namespace=k8s.io Nov 1 00:21:56.532260 containerd[1583]: time="2025-11-01T00:21:56.532252774Z" level=warning msg="cleaning up after shim disconnected" id=ea44e1b2b4e5e2f601842d88ed14672e5dae79064e0f9b35c9e3797a0f657e9e namespace=k8s.io Nov 1 00:21:56.532260 containerd[1583]: time="2025-11-01T00:21:56.532292218Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 1 00:21:56.822032 kubelet[2716]: E1101 00:21:56.821649 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:21:56.822032 kubelet[2716]: E1101 00:21:56.821796 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:21:56.823943 containerd[1583]: time="2025-11-01T00:21:56.823904977Z" level=info msg="CreateContainer within sandbox \"8aa4932c6ae829dda34f17ea323911360b64ea13ee13adf835897a529da46706\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Nov 1 00:21:56.848319 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2958043268.mount: Deactivated successfully. Nov 1 00:21:56.849139 containerd[1583]: time="2025-11-01T00:21:56.848806710Z" level=info msg="CreateContainer within sandbox \"8aa4932c6ae829dda34f17ea323911360b64ea13ee13adf835897a529da46706\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"5caba774871a2f75f8829a2c7972ca3caa552559b19af08e179bb3e4a1794ea1\"" Nov 1 00:21:56.849870 containerd[1583]: time="2025-11-01T00:21:56.849781942Z" level=info msg="StartContainer for \"5caba774871a2f75f8829a2c7972ca3caa552559b19af08e179bb3e4a1794ea1\"" Nov 1 00:21:56.933148 containerd[1583]: time="2025-11-01T00:21:56.933096523Z" level=info msg="StartContainer for \"5caba774871a2f75f8829a2c7972ca3caa552559b19af08e179bb3e4a1794ea1\" returns successfully" Nov 1 00:21:57.073069 kubelet[2716]: I1101 00:21:57.072868 2716 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Nov 1 00:21:57.621049 kubelet[2716]: I1101 00:21:57.620995 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a4db634d-ed5d-4a0e-b97e-6643e94cce37-config-volume\") pod \"coredns-668d6bf9bc-s2sn8\" (UID: \"a4db634d-ed5d-4a0e-b97e-6643e94cce37\") " pod="kube-system/coredns-668d6bf9bc-s2sn8" Nov 1 00:21:57.621049 kubelet[2716]: I1101 00:21:57.621050 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrmbc\" (UniqueName: \"kubernetes.io/projected/a4db634d-ed5d-4a0e-b97e-6643e94cce37-kube-api-access-zrmbc\") pod \"coredns-668d6bf9bc-s2sn8\" (UID: \"a4db634d-ed5d-4a0e-b97e-6643e94cce37\") " pod="kube-system/coredns-668d6bf9bc-s2sn8" Nov 1 00:21:57.721970 kubelet[2716]: I1101 00:21:57.721887 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d6dfcf49-16e8-4faa-8c41-b6f1f993511f-config-volume\") pod \"coredns-668d6bf9bc-sjhj9\" (UID: 
\"d6dfcf49-16e8-4faa-8c41-b6f1f993511f\") " pod="kube-system/coredns-668d6bf9bc-sjhj9" Nov 1 00:21:57.722145 kubelet[2716]: I1101 00:21:57.721989 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-887cq\" (UniqueName: \"kubernetes.io/projected/d6dfcf49-16e8-4faa-8c41-b6f1f993511f-kube-api-access-887cq\") pod \"coredns-668d6bf9bc-sjhj9\" (UID: \"d6dfcf49-16e8-4faa-8c41-b6f1f993511f\") " pod="kube-system/coredns-668d6bf9bc-sjhj9" Nov 1 00:21:57.827721 kubelet[2716]: E1101 00:21:57.827652 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:21:57.847481 kubelet[2716]: I1101 00:21:57.846614 2716 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-r9mqv" podStartSLOduration=8.793376802000001 podStartE2EDuration="19.846592981s" podCreationTimestamp="2025-11-01 00:21:38 +0000 UTC" firstStartedPulling="2025-11-01 00:21:40.234037473 +0000 UTC m=+5.896565560" lastFinishedPulling="2025-11-01 00:21:51.287253622 +0000 UTC m=+16.949781739" observedRunningTime="2025-11-01 00:21:57.84657143 +0000 UTC m=+23.509099507" watchObservedRunningTime="2025-11-01 00:21:57.846592981 +0000 UTC m=+23.509121068" Nov 1 00:21:57.869056 kubelet[2716]: E1101 00:21:57.869014 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:21:57.870137 containerd[1583]: time="2025-11-01T00:21:57.870094812Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-s2sn8,Uid:a4db634d-ed5d-4a0e-b97e-6643e94cce37,Namespace:kube-system,Attempt:0,}" Nov 1 00:21:57.935996 kubelet[2716]: E1101 00:21:57.935515 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:21:57.937079 containerd[1583]: time="2025-11-01T00:21:57.936605484Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-sjhj9,Uid:d6dfcf49-16e8-4faa-8c41-b6f1f993511f,Namespace:kube-system,Attempt:0,}" Nov 1 00:21:58.828341 kubelet[2716]: E1101 00:21:58.828303 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:21:59.399563 systemd-networkd[1245]: cilium_host: Link UP Nov 1 00:21:59.399761 systemd-networkd[1245]: cilium_net: Link UP Nov 1 00:21:59.399987 systemd-networkd[1245]: cilium_net: Gained carrier Nov 1 00:21:59.400196 systemd-networkd[1245]: cilium_host: Gained carrier Nov 1 00:21:59.531508 systemd-networkd[1245]: cilium_vxlan: Link UP Nov 1 00:21:59.531518 systemd-networkd[1245]: cilium_vxlan: Gained carrier Nov 1 00:21:59.769300 kernel: NET: Registered PF_ALG protocol family Nov 1 00:21:59.829751 kubelet[2716]: E1101 00:21:59.829721 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:22:00.056433 systemd-networkd[1245]: cilium_net: Gained IPv6LL Nov 1 00:22:00.120475 systemd-networkd[1245]: cilium_host: Gained IPv6LL Nov 1 00:22:00.525587 systemd-networkd[1245]: lxc_health: Link UP Nov 1 00:22:00.539027 systemd-networkd[1245]: lxc_health: Gained carrier Nov 1 
00:22:00.696504 systemd-networkd[1245]: cilium_vxlan: Gained IPv6LL Nov 1 00:22:00.969792 systemd-networkd[1245]: lxc8630f053643f: Link UP Nov 1 00:22:00.992305 kernel: eth0: renamed from tmp3bb8b Nov 1 00:22:00.996280 systemd-networkd[1245]: lxc8630f053643f: Gained carrier Nov 1 00:22:01.009072 systemd-networkd[1245]: lxcefc876c4c4be: Link UP Nov 1 00:22:01.016295 kernel: eth0: renamed from tmp4e5be Nov 1 00:22:01.021504 systemd-networkd[1245]: lxcefc876c4c4be: Gained carrier Nov 1 00:22:01.618850 kubelet[2716]: E1101 00:22:01.618787 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:22:01.861396 kubelet[2716]: E1101 00:22:01.861345 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:22:02.296707 systemd-networkd[1245]: lxc8630f053643f: Gained IPv6LL Nov 1 00:22:02.426036 systemd-networkd[1245]: lxcefc876c4c4be: Gained IPv6LL Nov 1 00:22:02.552545 systemd-networkd[1245]: lxc_health: Gained IPv6LL Nov 1 00:22:02.858459 kubelet[2716]: E1101 00:22:02.858323 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:22:05.001646 containerd[1583]: time="2025-11-01T00:22:05.001158417Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:22:05.001646 containerd[1583]: time="2025-11-01T00:22:05.001263545Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:22:05.001646 containerd[1583]: time="2025-11-01T00:22:05.001343304Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:22:05.001646 containerd[1583]: time="2025-11-01T00:22:05.001495820Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:22:05.004309 containerd[1583]: time="2025-11-01T00:22:05.003221198Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:22:05.004309 containerd[1583]: time="2025-11-01T00:22:05.003383342Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:22:05.004309 containerd[1583]: time="2025-11-01T00:22:05.003441511Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:22:05.004309 containerd[1583]: time="2025-11-01T00:22:05.003685549Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:22:05.039072 systemd-resolved[1474]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 1 00:22:05.040875 systemd-resolved[1474]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 1 00:22:05.079729 containerd[1583]: time="2025-11-01T00:22:05.079661680Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-sjhj9,Uid:d6dfcf49-16e8-4faa-8c41-b6f1f993511f,Namespace:kube-system,Attempt:0,} returns sandbox id \"4e5be0f229f6f385c9c2797c75bf9bb8707851692c54cab78086858da7dcf359\"" Nov 1 00:22:05.082173 kubelet[2716]: E1101 00:22:05.082129 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:22:05.084713 containerd[1583]: time="2025-11-01T00:22:05.084619342Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-s2sn8,Uid:a4db634d-ed5d-4a0e-b97e-6643e94cce37,Namespace:kube-system,Attempt:0,} returns sandbox id \"3bb8be1ba77ad4224a3cdaa46fd658428617ef39a034a549077859e81d836e1d\"" Nov 1 00:22:05.086117 kubelet[2716]: E1101 00:22:05.086090 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:22:05.086908 containerd[1583]: time="2025-11-01T00:22:05.086810554Z" level=info msg="CreateContainer within sandbox \"4e5be0f229f6f385c9c2797c75bf9bb8707851692c54cab78086858da7dcf359\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 1 00:22:05.088481 containerd[1583]: time="2025-11-01T00:22:05.088333521Z" level=info msg="CreateContainer within sandbox \"3bb8be1ba77ad4224a3cdaa46fd658428617ef39a034a549077859e81d836e1d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 1 00:22:05.121955 containerd[1583]: time="2025-11-01T00:22:05.121879728Z" level=info msg="CreateContainer within sandbox \"4e5be0f229f6f385c9c2797c75bf9bb8707851692c54cab78086858da7dcf359\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"93ad25e6224dd93d13631fb05d3b092f2ca8d6a88b75ad06108aea61c69c8e60\"" Nov 1 00:22:05.122632 containerd[1583]: time="2025-11-01T00:22:05.122597304Z" level=info msg="StartContainer for \"93ad25e6224dd93d13631fb05d3b092f2ca8d6a88b75ad06108aea61c69c8e60\"" Nov 1 00:22:05.127187 containerd[1583]: time="2025-11-01T00:22:05.127140048Z" level=info msg="CreateContainer within sandbox \"3bb8be1ba77ad4224a3cdaa46fd658428617ef39a034a549077859e81d836e1d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"84ae02fb5d33a9b1b3b6c3195b14a7670e1fa90012e877ae3f5948f84f293c7c\"" Nov 1 00:22:05.127921 containerd[1583]: time="2025-11-01T00:22:05.127888081Z" level=info msg="StartContainer for \"84ae02fb5d33a9b1b3b6c3195b14a7670e1fa90012e877ae3f5948f84f293c7c\"" Nov 1 00:22:05.192806 containerd[1583]: time="2025-11-01T00:22:05.192758634Z" level=info msg="StartContainer for \"93ad25e6224dd93d13631fb05d3b092f2ca8d6a88b75ad06108aea61c69c8e60\" returns successfully" Nov 1 00:22:05.197587 containerd[1583]: time="2025-11-01T00:22:05.197539525Z" level=info msg="StartContainer for \"84ae02fb5d33a9b1b3b6c3195b14a7670e1fa90012e877ae3f5948f84f293c7c\" returns successfully" Nov 1 00:22:05.887250 kubelet[2716]: E1101 00:22:05.887055 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:22:05.889168 kubelet[2716]: E1101 00:22:05.889046 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:22:05.901102 kubelet[2716]: I1101 00:22:05.901043 2716 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-sjhj9" podStartSLOduration=25.901026211 podStartE2EDuration="25.901026211s" podCreationTimestamp="2025-11-01 00:21:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:22:05.900063475 +0000 UTC m=+31.562591562" watchObservedRunningTime="2025-11-01 00:22:05.901026211 +0000 UTC m=+31.563554298" Nov 1 00:22:06.331570 systemd[1]: Started sshd@8-10.0.0.76:22-10.0.0.1:58858.service - OpenSSH per-connection server daemon (10.0.0.1:58858). Nov 1 00:22:06.380844 sshd[4099]: Accepted publickey for core from 10.0.0.1 port 58858 ssh2: RSA SHA256:jhdmrhZXRbwOcBbBZQf03C3Vb9eSOfvUjDz7MOKA8jE Nov 1 00:22:06.383164 sshd[4099]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:22:06.387924 systemd-logind[1557]: New session 8 of user core. Nov 1 00:22:06.398173 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 1 00:22:06.752634 sshd[4099]: pam_unix(sshd:session): session closed for user core Nov 1 00:22:06.758721 systemd[1]: sshd@8-10.0.0.76:22-10.0.0.1:58858.service: Deactivated successfully. Nov 1 00:22:06.761625 systemd[1]: session-8.scope: Deactivated successfully. Nov 1 00:22:06.762597 systemd-logind[1557]: Session 8 logged out. Waiting for processes to exit. Nov 1 00:22:06.763622 systemd-logind[1557]: Removed session 8. Nov 1 00:22:06.891106 kubelet[2716]: E1101 00:22:06.891067 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:22:06.891741 kubelet[2716]: E1101 00:22:06.891182 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:22:07.892716 kubelet[2716]: E1101 00:22:07.892658 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:22:07.892716 kubelet[2716]: E1101 00:22:07.892710 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:22:11.764705 systemd[1]: Started sshd@9-10.0.0.76:22-10.0.0.1:58870.service - OpenSSH per-connection server daemon (10.0.0.1:58870). Nov 1 00:22:11.805225 sshd[4118]: Accepted publickey for core from 10.0.0.1 port 58870 ssh2: RSA SHA256:jhdmrhZXRbwOcBbBZQf03C3Vb9eSOfvUjDz7MOKA8jE Nov 1 00:22:11.807631 sshd[4118]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:22:11.812632 systemd-logind[1557]: New session 9 of user core. Nov 1 00:22:11.823688 systemd[1]: Started session-9.scope - Session 9 of User core. 
Nov 1 00:22:11.948210 sshd[4118]: pam_unix(sshd:session): session closed for user core Nov 1 00:22:11.952104 systemd[1]: sshd@9-10.0.0.76:22-10.0.0.1:58870.service: Deactivated successfully. Nov 1 00:22:11.955769 systemd-logind[1557]: Session 9 logged out. Waiting for processes to exit. Nov 1 00:22:11.956740 systemd[1]: session-9.scope: Deactivated successfully. Nov 1 00:22:11.957760 systemd-logind[1557]: Removed session 9. Nov 1 00:22:16.965675 systemd[1]: Started sshd@10-10.0.0.76:22-10.0.0.1:55260.service - OpenSSH per-connection server daemon (10.0.0.1:55260). Nov 1 00:22:17.071310 sshd[4134]: Accepted publickey for core from 10.0.0.1 port 55260 ssh2: RSA SHA256:jhdmrhZXRbwOcBbBZQf03C3Vb9eSOfvUjDz7MOKA8jE Nov 1 00:22:17.073476 sshd[4134]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:22:17.078829 systemd-logind[1557]: New session 10 of user core. Nov 1 00:22:17.094670 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 1 00:22:17.242090 sshd[4134]: pam_unix(sshd:session): session closed for user core Nov 1 00:22:17.246606 systemd[1]: sshd@10-10.0.0.76:22-10.0.0.1:55260.service: Deactivated successfully. Nov 1 00:22:17.249234 systemd-logind[1557]: Session 10 logged out. Waiting for processes to exit. Nov 1 00:22:17.249347 systemd[1]: session-10.scope: Deactivated successfully. Nov 1 00:22:17.250949 systemd-logind[1557]: Removed session 10. Nov 1 00:22:22.257648 systemd[1]: Started sshd@11-10.0.0.76:22-10.0.0.1:55270.service - OpenSSH per-connection server daemon (10.0.0.1:55270). Nov 1 00:22:22.289460 sshd[4151]: Accepted publickey for core from 10.0.0.1 port 55270 ssh2: RSA SHA256:jhdmrhZXRbwOcBbBZQf03C3Vb9eSOfvUjDz7MOKA8jE Nov 1 00:22:22.291704 sshd[4151]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:22:22.297107 systemd-logind[1557]: New session 11 of user core. Nov 1 00:22:22.303771 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 1 00:22:22.425906 sshd[4151]: pam_unix(sshd:session): session closed for user core Nov 1 00:22:22.431504 systemd[1]: sshd@11-10.0.0.76:22-10.0.0.1:55270.service: Deactivated successfully. Nov 1 00:22:22.434517 systemd-logind[1557]: Session 11 logged out. Waiting for processes to exit. Nov 1 00:22:22.434532 systemd[1]: session-11.scope: Deactivated successfully. Nov 1 00:22:22.436048 systemd-logind[1557]: Removed session 11. Nov 1 00:22:27.439716 systemd[1]: Started sshd@12-10.0.0.76:22-10.0.0.1:35968.service - OpenSSH per-connection server daemon (10.0.0.1:35968). Nov 1 00:22:27.472488 sshd[4168]: Accepted publickey for core from 10.0.0.1 port 35968 ssh2: RSA SHA256:jhdmrhZXRbwOcBbBZQf03C3Vb9eSOfvUjDz7MOKA8jE Nov 1 00:22:27.474642 sshd[4168]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:22:27.480042 systemd-logind[1557]: New session 12 of user core. Nov 1 00:22:27.486618 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 1 00:22:27.613332 sshd[4168]: pam_unix(sshd:session): session closed for user core Nov 1 00:22:27.622547 systemd[1]: Started sshd@13-10.0.0.76:22-10.0.0.1:35982.service - OpenSSH per-connection server daemon (10.0.0.1:35982). Nov 1 00:22:27.623139 systemd[1]: sshd@12-10.0.0.76:22-10.0.0.1:35968.service: Deactivated successfully. Nov 1 00:22:27.625686 systemd[1]: session-12.scope: Deactivated successfully. Nov 1 00:22:27.627217 systemd-logind[1557]: Session 12 logged out. Waiting for processes to exit. Nov 1 00:22:27.628281 systemd-logind[1557]: Removed session 12. 
Nov 1 00:22:27.659032 sshd[4182]: Accepted publickey for core from 10.0.0.1 port 35982 ssh2: RSA SHA256:jhdmrhZXRbwOcBbBZQf03C3Vb9eSOfvUjDz7MOKA8jE Nov 1 00:22:27.661095 sshd[4182]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:22:27.666436 systemd-logind[1557]: New session 13 of user core. Nov 1 00:22:27.675933 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 1 00:22:28.033548 sshd[4182]: pam_unix(sshd:session): session closed for user core Nov 1 00:22:28.040537 systemd[1]: Started sshd@14-10.0.0.76:22-10.0.0.1:35990.service - OpenSSH per-connection server daemon (10.0.0.1:35990). Nov 1 00:22:28.041033 systemd[1]: sshd@13-10.0.0.76:22-10.0.0.1:35982.service: Deactivated successfully. Nov 1 00:22:28.044773 systemd-logind[1557]: Session 13 logged out. Waiting for processes to exit. Nov 1 00:22:28.046059 systemd[1]: session-13.scope: Deactivated successfully. Nov 1 00:22:28.047044 systemd-logind[1557]: Removed session 13. Nov 1 00:22:28.077878 sshd[4195]: Accepted publickey for core from 10.0.0.1 port 35990 ssh2: RSA SHA256:jhdmrhZXRbwOcBbBZQf03C3Vb9eSOfvUjDz7MOKA8jE Nov 1 00:22:28.079828 sshd[4195]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:22:28.084872 systemd-logind[1557]: New session 14 of user core. Nov 1 00:22:28.093915 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 1 00:22:28.285898 sshd[4195]: pam_unix(sshd:session): session closed for user core Nov 1 00:22:28.290646 systemd[1]: sshd@14-10.0.0.76:22-10.0.0.1:35990.service: Deactivated successfully. Nov 1 00:22:28.293213 systemd-logind[1557]: Session 14 logged out. Waiting for processes to exit. Nov 1 00:22:28.293416 systemd[1]: session-14.scope: Deactivated successfully. Nov 1 00:22:28.294846 systemd-logind[1557]: Removed session 14. Nov 1 00:22:33.304706 systemd[1]: Started sshd@15-10.0.0.76:22-10.0.0.1:59680.service - OpenSSH per-connection server daemon (10.0.0.1:59680). Nov 1 00:22:33.334628 sshd[4213]: Accepted publickey for core from 10.0.0.1 port 59680 ssh2: RSA SHA256:jhdmrhZXRbwOcBbBZQf03C3Vb9eSOfvUjDz7MOKA8jE Nov 1 00:22:33.336402 sshd[4213]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:22:33.340636 systemd-logind[1557]: New session 15 of user core. Nov 1 00:22:33.347545 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 1 00:22:33.471037 sshd[4213]: pam_unix(sshd:session): session closed for user core Nov 1 00:22:33.476559 systemd[1]: sshd@15-10.0.0.76:22-10.0.0.1:59680.service: Deactivated successfully. Nov 1 00:22:33.479996 systemd-logind[1557]: Session 15 logged out. Waiting for processes to exit. Nov 1 00:22:33.480036 systemd[1]: session-15.scope: Deactivated successfully. Nov 1 00:22:33.481404 systemd-logind[1557]: Removed session 15. Nov 1 00:22:38.480505 systemd[1]: Started sshd@16-10.0.0.76:22-10.0.0.1:59694.service - OpenSSH per-connection server daemon (10.0.0.1:59694). Nov 1 00:22:38.508901 sshd[4231]: Accepted publickey for core from 10.0.0.1 port 59694 ssh2: RSA SHA256:jhdmrhZXRbwOcBbBZQf03C3Vb9eSOfvUjDz7MOKA8jE Nov 1 00:22:38.510964 sshd[4231]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:22:38.515673 systemd-logind[1557]: New session 16 of user core. Nov 1 00:22:38.533604 systemd[1]: Started session-16.scope - Session 16 of User core. 
Nov 1 00:22:38.657869 sshd[4231]: pam_unix(sshd:session): session closed for user core Nov 1 00:22:38.664534 systemd[1]: Started sshd@17-10.0.0.76:22-10.0.0.1:59710.service - OpenSSH per-connection server daemon (10.0.0.1:59710). Nov 1 00:22:38.665094 systemd[1]: sshd@16-10.0.0.76:22-10.0.0.1:59694.service: Deactivated successfully. Nov 1 00:22:38.668417 systemd[1]: session-16.scope: Deactivated successfully. Nov 1 00:22:38.670769 systemd-logind[1557]: Session 16 logged out. Waiting for processes to exit. Nov 1 00:22:38.671860 systemd-logind[1557]: Removed session 16. Nov 1 00:22:38.700981 sshd[4244]: Accepted publickey for core from 10.0.0.1 port 59710 ssh2: RSA SHA256:jhdmrhZXRbwOcBbBZQf03C3Vb9eSOfvUjDz7MOKA8jE Nov 1 00:22:38.702741 sshd[4244]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:22:38.708186 systemd-logind[1557]: New session 17 of user core. Nov 1 00:22:38.713558 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 1 00:22:39.008102 sshd[4244]: pam_unix(sshd:session): session closed for user core Nov 1 00:22:39.019609 systemd[1]: Started sshd@18-10.0.0.76:22-10.0.0.1:59724.service - OpenSSH per-connection server daemon (10.0.0.1:59724). Nov 1 00:22:39.020199 systemd[1]: sshd@17-10.0.0.76:22-10.0.0.1:59710.service: Deactivated successfully. Nov 1 00:22:39.024836 systemd[1]: session-17.scope: Deactivated successfully. Nov 1 00:22:39.025654 systemd-logind[1557]: Session 17 logged out. Waiting for processes to exit. Nov 1 00:22:39.026978 systemd-logind[1557]: Removed session 17. Nov 1 00:22:39.053223 sshd[4258]: Accepted publickey for core from 10.0.0.1 port 59724 ssh2: RSA SHA256:jhdmrhZXRbwOcBbBZQf03C3Vb9eSOfvUjDz7MOKA8jE Nov 1 00:22:39.055517 sshd[4258]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:22:39.060354 systemd-logind[1557]: New session 18 of user core. Nov 1 00:22:39.067648 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 1 00:22:39.616398 sshd[4258]: pam_unix(sshd:session): session closed for user core Nov 1 00:22:39.620963 systemd[1]: sshd@18-10.0.0.76:22-10.0.0.1:59724.service: Deactivated successfully. Nov 1 00:22:39.626942 systemd-logind[1557]: Session 18 logged out. Waiting for processes to exit. Nov 1 00:22:39.628180 systemd[1]: session-18.scope: Deactivated successfully. Nov 1 00:22:39.637590 systemd[1]: Started sshd@19-10.0.0.76:22-10.0.0.1:59726.service - OpenSSH per-connection server daemon (10.0.0.1:59726). Nov 1 00:22:39.638416 systemd-logind[1557]: Removed session 18. Nov 1 00:22:39.687702 sshd[4280]: Accepted publickey for core from 10.0.0.1 port 59726 ssh2: RSA SHA256:jhdmrhZXRbwOcBbBZQf03C3Vb9eSOfvUjDz7MOKA8jE Nov 1 00:22:39.689988 sshd[4280]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:22:39.694937 systemd-logind[1557]: New session 19 of user core. Nov 1 00:22:39.703666 systemd[1]: Started session-19.scope - Session 19 of User core. Nov 1 00:22:40.087548 sshd[4280]: pam_unix(sshd:session): session closed for user core Nov 1 00:22:40.096719 systemd[1]: Started sshd@20-10.0.0.76:22-10.0.0.1:59730.service - OpenSSH per-connection server daemon (10.0.0.1:59730). Nov 1 00:22:40.097323 systemd[1]: sshd@19-10.0.0.76:22-10.0.0.1:59726.service: Deactivated successfully. Nov 1 00:22:40.101982 systemd-logind[1557]: Session 19 logged out. Waiting for processes to exit. Nov 1 00:22:40.106061 systemd[1]: session-19.scope: Deactivated successfully. Nov 1 00:22:40.110483 systemd-logind[1557]: Removed session 19. 
Nov 1 00:22:40.131228 sshd[4292]: Accepted publickey for core from 10.0.0.1 port 59730 ssh2: RSA SHA256:jhdmrhZXRbwOcBbBZQf03C3Vb9eSOfvUjDz7MOKA8jE Nov 1 00:22:40.133070 sshd[4292]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:22:40.137825 systemd-logind[1557]: New session 20 of user core. Nov 1 00:22:40.144577 systemd[1]: Started session-20.scope - Session 20 of User core. Nov 1 00:22:40.267322 sshd[4292]: pam_unix(sshd:session): session closed for user core Nov 1 00:22:40.272102 systemd[1]: sshd@20-10.0.0.76:22-10.0.0.1:59730.service: Deactivated successfully. Nov 1 00:22:40.275430 systemd[1]: session-20.scope: Deactivated successfully. Nov 1 00:22:40.276211 systemd-logind[1557]: Session 20 logged out. Waiting for processes to exit. Nov 1 00:22:40.277167 systemd-logind[1557]: Removed session 20. Nov 1 00:22:45.277657 systemd[1]: Started sshd@21-10.0.0.76:22-10.0.0.1:52316.service - OpenSSH per-connection server daemon (10.0.0.1:52316). Nov 1 00:22:45.309360 sshd[4313]: Accepted publickey for core from 10.0.0.1 port 52316 ssh2: RSA SHA256:jhdmrhZXRbwOcBbBZQf03C3Vb9eSOfvUjDz7MOKA8jE Nov 1 00:22:45.311642 sshd[4313]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:22:45.316530 systemd-logind[1557]: New session 21 of user core. Nov 1 00:22:45.327968 systemd[1]: Started session-21.scope - Session 21 of User core. Nov 1 00:22:45.445413 sshd[4313]: pam_unix(sshd:session): session closed for user core Nov 1 00:22:45.450420 systemd[1]: sshd@21-10.0.0.76:22-10.0.0.1:52316.service: Deactivated successfully. Nov 1 00:22:45.453094 systemd-logind[1557]: Session 21 logged out. Waiting for processes to exit. Nov 1 00:22:45.453192 systemd[1]: session-21.scope: Deactivated successfully. Nov 1 00:22:45.454299 systemd-logind[1557]: Removed session 21. Nov 1 00:22:45.487340 kubelet[2716]: E1101 00:22:45.487231 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:22:48.487017 kubelet[2716]: E1101 00:22:48.486928 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:22:50.464680 systemd[1]: Started sshd@22-10.0.0.76:22-10.0.0.1:52326.service - OpenSSH per-connection server daemon (10.0.0.1:52326). Nov 1 00:22:50.512507 sshd[4330]: Accepted publickey for core from 10.0.0.1 port 52326 ssh2: RSA SHA256:jhdmrhZXRbwOcBbBZQf03C3Vb9eSOfvUjDz7MOKA8jE Nov 1 00:22:50.523516 sshd[4330]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:22:50.540310 systemd-logind[1557]: New session 22 of user core. Nov 1 00:22:50.554870 systemd[1]: Started session-22.scope - Session 22 of User core. Nov 1 00:22:50.718561 sshd[4330]: pam_unix(sshd:session): session closed for user core Nov 1 00:22:50.724442 systemd[1]: sshd@22-10.0.0.76:22-10.0.0.1:52326.service: Deactivated successfully. Nov 1 00:22:50.727428 systemd-logind[1557]: Session 22 logged out. Waiting for processes to exit. Nov 1 00:22:50.727681 systemd[1]: session-22.scope: Deactivated successfully. Nov 1 00:22:50.729402 systemd-logind[1557]: Removed session 22. 
Nov 1 00:22:52.487887 kubelet[2716]: E1101 00:22:52.487462 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:22:55.733363 systemd[1]: Started sshd@23-10.0.0.76:22-10.0.0.1:51946.service - OpenSSH per-connection server daemon (10.0.0.1:51946). Nov 1 00:22:55.771060 sshd[4346]: Accepted publickey for core from 10.0.0.1 port 51946 ssh2: RSA SHA256:jhdmrhZXRbwOcBbBZQf03C3Vb9eSOfvUjDz7MOKA8jE Nov 1 00:22:55.773458 sshd[4346]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:22:55.785339 systemd-logind[1557]: New session 23 of user core. Nov 1 00:22:55.799779 systemd[1]: Started session-23.scope - Session 23 of User core. Nov 1 00:22:55.928258 sshd[4346]: pam_unix(sshd:session): session closed for user core Nov 1 00:22:55.934131 systemd[1]: sshd@23-10.0.0.76:22-10.0.0.1:51946.service: Deactivated successfully. Nov 1 00:22:55.937236 systemd[1]: session-23.scope: Deactivated successfully. Nov 1 00:22:55.938757 systemd-logind[1557]: Session 23 logged out. Waiting for processes to exit. Nov 1 00:22:55.939970 systemd-logind[1557]: Removed session 23. Nov 1 00:23:00.942679 systemd[1]: Started sshd@24-10.0.0.76:22-10.0.0.1:51950.service - OpenSSH per-connection server daemon (10.0.0.1:51950). Nov 1 00:23:00.988344 sshd[4362]: Accepted publickey for core from 10.0.0.1 port 51950 ssh2: RSA SHA256:jhdmrhZXRbwOcBbBZQf03C3Vb9eSOfvUjDz7MOKA8jE Nov 1 00:23:00.991148 sshd[4362]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:23:01.006815 systemd-logind[1557]: New session 24 of user core. Nov 1 00:23:01.020337 systemd[1]: Started session-24.scope - Session 24 of User core. Nov 1 00:23:01.156905 sshd[4362]: pam_unix(sshd:session): session closed for user core Nov 1 00:23:01.162937 systemd[1]: sshd@24-10.0.0.76:22-10.0.0.1:51950.service: Deactivated successfully. Nov 1 00:23:01.167965 systemd-logind[1557]: Session 24 logged out. Waiting for processes to exit. Nov 1 00:23:01.168168 systemd[1]: session-24.scope: Deactivated successfully. Nov 1 00:23:01.169552 systemd-logind[1557]: Removed session 24. Nov 1 00:23:03.487162 kubelet[2716]: E1101 00:23:03.486827 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:23:06.167523 systemd[1]: Started sshd@25-10.0.0.76:22-10.0.0.1:45490.service - OpenSSH per-connection server daemon (10.0.0.1:45490). Nov 1 00:23:06.201817 sshd[4377]: Accepted publickey for core from 10.0.0.1 port 45490 ssh2: RSA SHA256:jhdmrhZXRbwOcBbBZQf03C3Vb9eSOfvUjDz7MOKA8jE Nov 1 00:23:06.204409 sshd[4377]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:23:06.210316 systemd-logind[1557]: New session 25 of user core. Nov 1 00:23:06.221917 systemd[1]: Started session-25.scope - Session 25 of User core. Nov 1 00:23:06.337107 sshd[4377]: pam_unix(sshd:session): session closed for user core Nov 1 00:23:06.350815 systemd[1]: Started sshd@26-10.0.0.76:22-10.0.0.1:45500.service - OpenSSH per-connection server daemon (10.0.0.1:45500). Nov 1 00:23:06.351525 systemd[1]: sshd@25-10.0.0.76:22-10.0.0.1:45490.service: Deactivated successfully. Nov 1 00:23:06.357677 systemd-logind[1557]: Session 25 logged out. Waiting for processes to exit. Nov 1 00:23:06.357827 systemd[1]: session-25.scope: Deactivated successfully. 
Nov 1 00:23:06.359982 systemd-logind[1557]: Removed session 25. Nov 1 00:23:06.386715 sshd[4389]: Accepted publickey for core from 10.0.0.1 port 45500 ssh2: RSA SHA256:jhdmrhZXRbwOcBbBZQf03C3Vb9eSOfvUjDz7MOKA8jE Nov 1 00:23:06.389557 sshd[4389]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:23:06.395089 systemd-logind[1557]: New session 26 of user core. Nov 1 00:23:06.400590 systemd[1]: Started session-26.scope - Session 26 of User core. Nov 1 00:23:06.487681 kubelet[2716]: E1101 00:23:06.487498 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:23:08.173044 kubelet[2716]: I1101 00:23:08.172925 2716 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-s2sn8" podStartSLOduration=89.172904837 podStartE2EDuration="1m29.172904837s" podCreationTimestamp="2025-11-01 00:21:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:22:05.933921115 +0000 UTC m=+31.596449212" watchObservedRunningTime="2025-11-01 00:23:08.172904837 +0000 UTC m=+93.835432924" Nov 1 00:23:08.208771 containerd[1583]: time="2025-11-01T00:23:08.208689011Z" level=info msg="StopContainer for \"2e5c187855d0196255d77f9e2b06a1e96e042617a70bdace36b741b439182572\" with timeout 30 (s)" Nov 1 00:23:08.209458 containerd[1583]: time="2025-11-01T00:23:08.209411551Z" level=info msg="Stop container \"2e5c187855d0196255d77f9e2b06a1e96e042617a70bdace36b741b439182572\" with signal terminated" Nov 1 00:23:08.261886 systemd[1]: run-containerd-runc-k8s.io-5caba774871a2f75f8829a2c7972ca3caa552559b19af08e179bb3e4a1794ea1-runc.YTPzhb.mount: Deactivated successfully. Nov 1 00:23:08.288608 containerd[1583]: time="2025-11-01T00:23:08.288515485Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 1 00:23:08.310044 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2e5c187855d0196255d77f9e2b06a1e96e042617a70bdace36b741b439182572-rootfs.mount: Deactivated successfully. Nov 1 00:23:08.329729 containerd[1583]: time="2025-11-01T00:23:08.329625002Z" level=info msg="StopContainer for \"5caba774871a2f75f8829a2c7972ca3caa552559b19af08e179bb3e4a1794ea1\" with timeout 2 (s)" Nov 1 00:23:08.330040 containerd[1583]: time="2025-11-01T00:23:08.329888391Z" level=info msg="Stop container \"5caba774871a2f75f8829a2c7972ca3caa552559b19af08e179bb3e4a1794ea1\" with signal terminated" Nov 1 00:23:08.341574 systemd-networkd[1245]: lxc_health: Link DOWN Nov 1 00:23:08.341585 systemd-networkd[1245]: lxc_health: Lost carrier Nov 1 00:23:08.731543 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5caba774871a2f75f8829a2c7972ca3caa552559b19af08e179bb3e4a1794ea1-rootfs.mount: Deactivated successfully. 
Nov 1 00:23:08.818366 containerd[1583]: time="2025-11-01T00:23:08.818260254Z" level=info msg="shim disconnected" id=2e5c187855d0196255d77f9e2b06a1e96e042617a70bdace36b741b439182572 namespace=k8s.io Nov 1 00:23:08.818366 containerd[1583]: time="2025-11-01T00:23:08.818356547Z" level=warning msg="cleaning up after shim disconnected" id=2e5c187855d0196255d77f9e2b06a1e96e042617a70bdace36b741b439182572 namespace=k8s.io Nov 1 00:23:08.818366 containerd[1583]: time="2025-11-01T00:23:08.818367197Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 1 00:23:08.893560 containerd[1583]: time="2025-11-01T00:23:08.893394226Z" level=info msg="shim disconnected" id=5caba774871a2f75f8829a2c7972ca3caa552559b19af08e179bb3e4a1794ea1 namespace=k8s.io Nov 1 00:23:08.893560 containerd[1583]: time="2025-11-01T00:23:08.893468297Z" level=warning msg="cleaning up after shim disconnected" id=5caba774871a2f75f8829a2c7972ca3caa552559b19af08e179bb3e4a1794ea1 namespace=k8s.io Nov 1 00:23:08.893560 containerd[1583]: time="2025-11-01T00:23:08.893481452Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 1 00:23:08.903249 containerd[1583]: time="2025-11-01T00:23:08.899728463Z" level=info msg="StopContainer for \"2e5c187855d0196255d77f9e2b06a1e96e042617a70bdace36b741b439182572\" returns successfully" Nov 1 00:23:08.906488 containerd[1583]: time="2025-11-01T00:23:08.905096716Z" level=info msg="StopPodSandbox for \"29f9415cb952c115ba4bf1c1b80a2a467c2aa91b252ea14d0bbe30fb480fa234\"" Nov 1 00:23:08.906488 containerd[1583]: time="2025-11-01T00:23:08.905182178Z" level=info msg="Container to stop \"2e5c187855d0196255d77f9e2b06a1e96e042617a70bdace36b741b439182572\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 1 00:23:08.912184 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-29f9415cb952c115ba4bf1c1b80a2a467c2aa91b252ea14d0bbe30fb480fa234-shm.mount: Deactivated successfully. 
Nov 1 00:23:08.938310 containerd[1583]: time="2025-11-01T00:23:08.938207128Z" level=info msg="StopContainer for \"5caba774871a2f75f8829a2c7972ca3caa552559b19af08e179bb3e4a1794ea1\" returns successfully" Nov 1 00:23:08.939565 containerd[1583]: time="2025-11-01T00:23:08.939512414Z" level=info msg="StopPodSandbox for \"8aa4932c6ae829dda34f17ea323911360b64ea13ee13adf835897a529da46706\"" Nov 1 00:23:08.939565 containerd[1583]: time="2025-11-01T00:23:08.939554684Z" level=info msg="Container to stop \"ea44e1b2b4e5e2f601842d88ed14672e5dae79064e0f9b35c9e3797a0f657e9e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 1 00:23:08.939565 containerd[1583]: time="2025-11-01T00:23:08.939573681Z" level=info msg="Container to stop \"7a57dc6b222fe8cd8e997b887c7fcb2993f65a374b4680e5166b84bec79de761\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 1 00:23:08.939565 containerd[1583]: time="2025-11-01T00:23:08.939584231Z" level=info msg="Container to stop \"3532380d262894951163483eafe6e1af5cbca70cda6e134aaed8d00751a6e8cc\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 1 00:23:08.939870 containerd[1583]: time="2025-11-01T00:23:08.939596023Z" level=info msg="Container to stop \"39c3e9d35fc218dcbfd8dbdaed35fa8538fff0d39d5f35f8e5d2aa9350f39a05\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 1 00:23:08.939870 containerd[1583]: time="2025-11-01T00:23:08.939607665Z" level=info msg="Container to stop \"5caba774871a2f75f8829a2c7972ca3caa552559b19af08e179bb3e4a1794ea1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 1 00:23:09.091284 containerd[1583]: time="2025-11-01T00:23:09.091062031Z" level=info msg="shim disconnected" id=8aa4932c6ae829dda34f17ea323911360b64ea13ee13adf835897a529da46706 namespace=k8s.io Nov 1 00:23:09.091284 containerd[1583]: time="2025-11-01T00:23:09.091139668Z" level=warning msg="cleaning up after shim disconnected" id=8aa4932c6ae829dda34f17ea323911360b64ea13ee13adf835897a529da46706 namespace=k8s.io Nov 1 00:23:09.091284 containerd[1583]: time="2025-11-01T00:23:09.091154567Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 1 00:23:09.091613 containerd[1583]: time="2025-11-01T00:23:09.091145669Z" level=info msg="shim disconnected" id=29f9415cb952c115ba4bf1c1b80a2a467c2aa91b252ea14d0bbe30fb480fa234 namespace=k8s.io Nov 1 00:23:09.091613 containerd[1583]: time="2025-11-01T00:23:09.091375154Z" level=warning msg="cleaning up after shim disconnected" id=29f9415cb952c115ba4bf1c1b80a2a467c2aa91b252ea14d0bbe30fb480fa234 namespace=k8s.io Nov 1 00:23:09.091613 containerd[1583]: time="2025-11-01T00:23:09.091391837Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 1 00:23:09.110956 containerd[1583]: time="2025-11-01T00:23:09.110325797Z" level=info msg="TearDown network for sandbox \"29f9415cb952c115ba4bf1c1b80a2a467c2aa91b252ea14d0bbe30fb480fa234\" successfully" Nov 1 00:23:09.110956 containerd[1583]: time="2025-11-01T00:23:09.110451896Z" level=info msg="StopPodSandbox for \"29f9415cb952c115ba4bf1c1b80a2a467c2aa91b252ea14d0bbe30fb480fa234\" returns successfully" Nov 1 00:23:09.113055 containerd[1583]: time="2025-11-01T00:23:09.112975903Z" level=info msg="TearDown network for sandbox \"8aa4932c6ae829dda34f17ea323911360b64ea13ee13adf835897a529da46706\" successfully" Nov 1 00:23:09.113055 containerd[1583]: time="2025-11-01T00:23:09.113006321Z" level=info msg="StopPodSandbox for 
\"8aa4932c6ae829dda34f17ea323911360b64ea13ee13adf835897a529da46706\" returns successfully" Nov 1 00:23:09.211972 kubelet[2716]: I1101 00:23:09.211870 2716 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5e838fae-ac9e-45a7-8ef8-574f43dd7cd7-lib-modules\") pod \"5e838fae-ac9e-45a7-8ef8-574f43dd7cd7\" (UID: \"5e838fae-ac9e-45a7-8ef8-574f43dd7cd7\") " Nov 1 00:23:09.211972 kubelet[2716]: I1101 00:23:09.211963 2716 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5e838fae-ac9e-45a7-8ef8-574f43dd7cd7-host-proc-sys-net\") pod \"5e838fae-ac9e-45a7-8ef8-574f43dd7cd7\" (UID: \"5e838fae-ac9e-45a7-8ef8-574f43dd7cd7\") " Nov 1 00:23:09.211972 kubelet[2716]: I1101 00:23:09.211989 2716 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5e838fae-ac9e-45a7-8ef8-574f43dd7cd7-bpf-maps\") pod \"5e838fae-ac9e-45a7-8ef8-574f43dd7cd7\" (UID: \"5e838fae-ac9e-45a7-8ef8-574f43dd7cd7\") " Nov 1 00:23:09.212658 kubelet[2716]: I1101 00:23:09.212022 2716 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5e838fae-ac9e-45a7-8ef8-574f43dd7cd7-hubble-tls\") pod \"5e838fae-ac9e-45a7-8ef8-574f43dd7cd7\" (UID: \"5e838fae-ac9e-45a7-8ef8-574f43dd7cd7\") " Nov 1 00:23:09.212658 kubelet[2716]: I1101 00:23:09.212049 2716 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5e838fae-ac9e-45a7-8ef8-574f43dd7cd7-cilium-run\") pod \"5e838fae-ac9e-45a7-8ef8-574f43dd7cd7\" (UID: \"5e838fae-ac9e-45a7-8ef8-574f43dd7cd7\") " Nov 1 00:23:09.212658 kubelet[2716]: I1101 00:23:09.212071 2716 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5e838fae-ac9e-45a7-8ef8-574f43dd7cd7-etc-cni-netd\") pod \"5e838fae-ac9e-45a7-8ef8-574f43dd7cd7\" (UID: \"5e838fae-ac9e-45a7-8ef8-574f43dd7cd7\") " Nov 1 00:23:09.212658 kubelet[2716]: I1101 00:23:09.212092 2716 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5e838fae-ac9e-45a7-8ef8-574f43dd7cd7-xtables-lock\") pod \"5e838fae-ac9e-45a7-8ef8-574f43dd7cd7\" (UID: \"5e838fae-ac9e-45a7-8ef8-574f43dd7cd7\") " Nov 1 00:23:09.212658 kubelet[2716]: I1101 00:23:09.212093 2716 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5e838fae-ac9e-45a7-8ef8-574f43dd7cd7-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "5e838fae-ac9e-45a7-8ef8-574f43dd7cd7" (UID: "5e838fae-ac9e-45a7-8ef8-574f43dd7cd7"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:23:09.212658 kubelet[2716]: I1101 00:23:09.212145 2716 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5e838fae-ac9e-45a7-8ef8-574f43dd7cd7-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "5e838fae-ac9e-45a7-8ef8-574f43dd7cd7" (UID: "5e838fae-ac9e-45a7-8ef8-574f43dd7cd7"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:23:09.212999 kubelet[2716]: I1101 00:23:09.212117 2716 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8161d1e0-74df-43b4-bec3-a4658844240a-cilium-config-path\") pod \"8161d1e0-74df-43b4-bec3-a4658844240a\" (UID: \"8161d1e0-74df-43b4-bec3-a4658844240a\") " Nov 1 00:23:09.212999 kubelet[2716]: I1101 00:23:09.212180 2716 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5e838fae-ac9e-45a7-8ef8-574f43dd7cd7-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "5e838fae-ac9e-45a7-8ef8-574f43dd7cd7" (UID: "5e838fae-ac9e-45a7-8ef8-574f43dd7cd7"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:23:09.212999 kubelet[2716]: I1101 00:23:09.212233 2716 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5e838fae-ac9e-45a7-8ef8-574f43dd7cd7-host-proc-sys-kernel\") pod \"5e838fae-ac9e-45a7-8ef8-574f43dd7cd7\" (UID: \"5e838fae-ac9e-45a7-8ef8-574f43dd7cd7\") " Nov 1 00:23:09.212999 kubelet[2716]: I1101 00:23:09.212290 2716 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6vpvw\" (UniqueName: \"kubernetes.io/projected/8161d1e0-74df-43b4-bec3-a4658844240a-kube-api-access-6vpvw\") pod \"8161d1e0-74df-43b4-bec3-a4658844240a\" (UID: \"8161d1e0-74df-43b4-bec3-a4658844240a\") " Nov 1 00:23:09.212999 kubelet[2716]: I1101 00:23:09.212323 2716 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k7g8t\" (UniqueName: \"kubernetes.io/projected/5e838fae-ac9e-45a7-8ef8-574f43dd7cd7-kube-api-access-k7g8t\") pod \"5e838fae-ac9e-45a7-8ef8-574f43dd7cd7\" (UID: \"5e838fae-ac9e-45a7-8ef8-574f43dd7cd7\") " Nov 1 00:23:09.212999 kubelet[2716]: I1101 00:23:09.212350 2716 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5e838fae-ac9e-45a7-8ef8-574f43dd7cd7-cilium-cgroup\") pod \"5e838fae-ac9e-45a7-8ef8-574f43dd7cd7\" (UID: \"5e838fae-ac9e-45a7-8ef8-574f43dd7cd7\") " Nov 1 00:23:09.213191 kubelet[2716]: I1101 00:23:09.212380 2716 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5e838fae-ac9e-45a7-8ef8-574f43dd7cd7-hostproc\") pod \"5e838fae-ac9e-45a7-8ef8-574f43dd7cd7\" (UID: \"5e838fae-ac9e-45a7-8ef8-574f43dd7cd7\") " Nov 1 00:23:09.213191 kubelet[2716]: I1101 00:23:09.212403 2716 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5e838fae-ac9e-45a7-8ef8-574f43dd7cd7-cilium-config-path\") pod \"5e838fae-ac9e-45a7-8ef8-574f43dd7cd7\" (UID: \"5e838fae-ac9e-45a7-8ef8-574f43dd7cd7\") " Nov 1 00:23:09.213191 kubelet[2716]: I1101 00:23:09.212423 2716 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5e838fae-ac9e-45a7-8ef8-574f43dd7cd7-cni-path\") pod \"5e838fae-ac9e-45a7-8ef8-574f43dd7cd7\" (UID: \"5e838fae-ac9e-45a7-8ef8-574f43dd7cd7\") " Nov 1 00:23:09.213191 kubelet[2716]: I1101 00:23:09.212445 2716 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/5e838fae-ac9e-45a7-8ef8-574f43dd7cd7-clustermesh-secrets\") pod \"5e838fae-ac9e-45a7-8ef8-574f43dd7cd7\" (UID: \"5e838fae-ac9e-45a7-8ef8-574f43dd7cd7\") " Nov 1 00:23:09.213191 kubelet[2716]: I1101 00:23:09.212586 2716 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5e838fae-ac9e-45a7-8ef8-574f43dd7cd7-lib-modules\") on node \"localhost\" DevicePath \"\"" Nov 1 00:23:09.213191 kubelet[2716]: I1101 00:23:09.212599 2716 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5e838fae-ac9e-45a7-8ef8-574f43dd7cd7-cilium-run\") on node \"localhost\" DevicePath \"\"" Nov 1 00:23:09.213191 kubelet[2716]: I1101 00:23:09.212609 2716 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5e838fae-ac9e-45a7-8ef8-574f43dd7cd7-bpf-maps\") on node \"localhost\" DevicePath \"\"" Nov 1 00:23:09.217594 kubelet[2716]: I1101 00:23:09.215341 2716 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5e838fae-ac9e-45a7-8ef8-574f43dd7cd7-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "5e838fae-ac9e-45a7-8ef8-574f43dd7cd7" (UID: "5e838fae-ac9e-45a7-8ef8-574f43dd7cd7"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:23:09.217594 kubelet[2716]: I1101 00:23:09.215383 2716 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5e838fae-ac9e-45a7-8ef8-574f43dd7cd7-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "5e838fae-ac9e-45a7-8ef8-574f43dd7cd7" (UID: "5e838fae-ac9e-45a7-8ef8-574f43dd7cd7"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:23:09.217594 kubelet[2716]: I1101 00:23:09.215427 2716 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5e838fae-ac9e-45a7-8ef8-574f43dd7cd7-cni-path" (OuterVolumeSpecName: "cni-path") pod "5e838fae-ac9e-45a7-8ef8-574f43dd7cd7" (UID: "5e838fae-ac9e-45a7-8ef8-574f43dd7cd7"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:23:09.217594 kubelet[2716]: I1101 00:23:09.215450 2716 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5e838fae-ac9e-45a7-8ef8-574f43dd7cd7-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "5e838fae-ac9e-45a7-8ef8-574f43dd7cd7" (UID: "5e838fae-ac9e-45a7-8ef8-574f43dd7cd7"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:23:09.217594 kubelet[2716]: I1101 00:23:09.215470 2716 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5e838fae-ac9e-45a7-8ef8-574f43dd7cd7-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "5e838fae-ac9e-45a7-8ef8-574f43dd7cd7" (UID: "5e838fae-ac9e-45a7-8ef8-574f43dd7cd7"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:23:09.217919 kubelet[2716]: I1101 00:23:09.212093 2716 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5e838fae-ac9e-45a7-8ef8-574f43dd7cd7-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "5e838fae-ac9e-45a7-8ef8-574f43dd7cd7" (UID: "5e838fae-ac9e-45a7-8ef8-574f43dd7cd7"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:23:09.217919 kubelet[2716]: I1101 00:23:09.215497 2716 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5e838fae-ac9e-45a7-8ef8-574f43dd7cd7-hostproc" (OuterVolumeSpecName: "hostproc") pod "5e838fae-ac9e-45a7-8ef8-574f43dd7cd7" (UID: "5e838fae-ac9e-45a7-8ef8-574f43dd7cd7"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:23:09.221162 kubelet[2716]: I1101 00:23:09.221092 2716 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8161d1e0-74df-43b4-bec3-a4658844240a-kube-api-access-6vpvw" (OuterVolumeSpecName: "kube-api-access-6vpvw") pod "8161d1e0-74df-43b4-bec3-a4658844240a" (UID: "8161d1e0-74df-43b4-bec3-a4658844240a"). InnerVolumeSpecName "kube-api-access-6vpvw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 1 00:23:09.222013 kubelet[2716]: I1101 00:23:09.221903 2716 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e838fae-ac9e-45a7-8ef8-574f43dd7cd7-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "5e838fae-ac9e-45a7-8ef8-574f43dd7cd7" (UID: "5e838fae-ac9e-45a7-8ef8-574f43dd7cd7"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 1 00:23:09.222013 kubelet[2716]: I1101 00:23:09.221990 2716 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e838fae-ac9e-45a7-8ef8-574f43dd7cd7-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "5e838fae-ac9e-45a7-8ef8-574f43dd7cd7" (UID: "5e838fae-ac9e-45a7-8ef8-574f43dd7cd7"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 1 00:23:09.223572 kubelet[2716]: I1101 00:23:09.223500 2716 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e838fae-ac9e-45a7-8ef8-574f43dd7cd7-kube-api-access-k7g8t" (OuterVolumeSpecName: "kube-api-access-k7g8t") pod "5e838fae-ac9e-45a7-8ef8-574f43dd7cd7" (UID: "5e838fae-ac9e-45a7-8ef8-574f43dd7cd7"). InnerVolumeSpecName "kube-api-access-k7g8t". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 1 00:23:09.223836 kubelet[2716]: I1101 00:23:09.223795 2716 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8161d1e0-74df-43b4-bec3-a4658844240a-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "8161d1e0-74df-43b4-bec3-a4658844240a" (UID: "8161d1e0-74df-43b4-bec3-a4658844240a"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 1 00:23:09.224525 kubelet[2716]: I1101 00:23:09.224488 2716 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e838fae-ac9e-45a7-8ef8-574f43dd7cd7-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "5e838fae-ac9e-45a7-8ef8-574f43dd7cd7" (UID: "5e838fae-ac9e-45a7-8ef8-574f43dd7cd7"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 1 00:23:09.255977 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-29f9415cb952c115ba4bf1c1b80a2a467c2aa91b252ea14d0bbe30fb480fa234-rootfs.mount: Deactivated successfully. 
Nov 1 00:23:09.256221 systemd[1]: var-lib-kubelet-pods-8161d1e0\x2d74df\x2d43b4\x2dbec3\x2da4658844240a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d6vpvw.mount: Deactivated successfully. Nov 1 00:23:09.256418 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8aa4932c6ae829dda34f17ea323911360b64ea13ee13adf835897a529da46706-rootfs.mount: Deactivated successfully. Nov 1 00:23:09.256605 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8aa4932c6ae829dda34f17ea323911360b64ea13ee13adf835897a529da46706-shm.mount: Deactivated successfully. Nov 1 00:23:09.256776 systemd[1]: var-lib-kubelet-pods-5e838fae\x2dac9e\x2d45a7\x2d8ef8\x2d574f43dd7cd7-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dk7g8t.mount: Deactivated successfully. Nov 1 00:23:09.257062 systemd[1]: var-lib-kubelet-pods-5e838fae\x2dac9e\x2d45a7\x2d8ef8\x2d574f43dd7cd7-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Nov 1 00:23:09.257247 systemd[1]: var-lib-kubelet-pods-5e838fae\x2dac9e\x2d45a7\x2d8ef8\x2d574f43dd7cd7-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Nov 1 00:23:09.312912 kubelet[2716]: I1101 00:23:09.312830 2716 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5e838fae-ac9e-45a7-8ef8-574f43dd7cd7-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Nov 1 00:23:09.313439 kubelet[2716]: I1101 00:23:09.313221 2716 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6vpvw\" (UniqueName: \"kubernetes.io/projected/8161d1e0-74df-43b4-bec3-a4658844240a-kube-api-access-6vpvw\") on node \"localhost\" DevicePath \"\"" Nov 1 00:23:09.313439 kubelet[2716]: I1101 00:23:09.313250 2716 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-k7g8t\" (UniqueName: \"kubernetes.io/projected/5e838fae-ac9e-45a7-8ef8-574f43dd7cd7-kube-api-access-k7g8t\") on node \"localhost\" DevicePath \"\"" Nov 1 00:23:09.313439 kubelet[2716]: I1101 00:23:09.313312 2716 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5e838fae-ac9e-45a7-8ef8-574f43dd7cd7-hostproc\") on node \"localhost\" DevicePath \"\"" Nov 1 00:23:09.313439 kubelet[2716]: I1101 00:23:09.313327 2716 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5e838fae-ac9e-45a7-8ef8-574f43dd7cd7-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Nov 1 00:23:09.313439 kubelet[2716]: I1101 00:23:09.313340 2716 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5e838fae-ac9e-45a7-8ef8-574f43dd7cd7-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Nov 1 00:23:09.313439 kubelet[2716]: I1101 00:23:09.313350 2716 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5e838fae-ac9e-45a7-8ef8-574f43dd7cd7-cni-path\") on node \"localhost\" DevicePath \"\"" Nov 1 00:23:09.313439 kubelet[2716]: I1101 00:23:09.313360 2716 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5e838fae-ac9e-45a7-8ef8-574f43dd7cd7-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Nov 1 00:23:09.313439 kubelet[2716]: I1101 00:23:09.313370 2716 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/5e838fae-ac9e-45a7-8ef8-574f43dd7cd7-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Nov 1 00:23:09.313744 kubelet[2716]: I1101 00:23:09.313380 2716 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5e838fae-ac9e-45a7-8ef8-574f43dd7cd7-hubble-tls\") on node \"localhost\" DevicePath \"\"" Nov 1 00:23:09.313744 kubelet[2716]: I1101 00:23:09.313391 2716 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5e838fae-ac9e-45a7-8ef8-574f43dd7cd7-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Nov 1 00:23:09.313744 kubelet[2716]: I1101 00:23:09.313401 2716 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5e838fae-ac9e-45a7-8ef8-574f43dd7cd7-xtables-lock\") on node \"localhost\" DevicePath \"\"" Nov 1 00:23:09.313744 kubelet[2716]: I1101 00:23:09.313413 2716 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8161d1e0-74df-43b4-bec3-a4658844240a-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Nov 1 00:23:09.580133 kubelet[2716]: E1101 00:23:09.580049 2716 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Nov 1 00:23:09.987166 sshd[4389]: pam_unix(sshd:session): session closed for user core Nov 1 00:23:10.003900 systemd[1]: Started sshd@27-10.0.0.76:22-10.0.0.1:45504.service - OpenSSH per-connection server daemon (10.0.0.1:45504). Nov 1 00:23:10.006311 systemd[1]: sshd@26-10.0.0.76:22-10.0.0.1:45500.service: Deactivated successfully. Nov 1 00:23:10.012711 systemd[1]: session-26.scope: Deactivated successfully. Nov 1 00:23:10.015967 systemd-logind[1557]: Session 26 logged out. Waiting for processes to exit. Nov 1 00:23:10.017884 systemd-logind[1557]: Removed session 26. Nov 1 00:23:10.038296 kubelet[2716]: I1101 00:23:10.036282 2716 scope.go:117] "RemoveContainer" containerID="2e5c187855d0196255d77f9e2b06a1e96e042617a70bdace36b741b439182572" Nov 1 00:23:10.040594 containerd[1583]: time="2025-11-01T00:23:10.040483908Z" level=info msg="RemoveContainer for \"2e5c187855d0196255d77f9e2b06a1e96e042617a70bdace36b741b439182572\"" Nov 1 00:23:10.075841 sshd[4560]: Accepted publickey for core from 10.0.0.1 port 45504 ssh2: RSA SHA256:jhdmrhZXRbwOcBbBZQf03C3Vb9eSOfvUjDz7MOKA8jE Nov 1 00:23:10.078613 sshd[4560]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:23:10.094660 systemd-logind[1557]: New session 27 of user core. Nov 1 00:23:10.104003 systemd[1]: Started session-27.scope - Session 27 of User core. 
Nov 1 00:23:10.109607 containerd[1583]: time="2025-11-01T00:23:10.109556030Z" level=info msg="RemoveContainer for \"2e5c187855d0196255d77f9e2b06a1e96e042617a70bdace36b741b439182572\" returns successfully" Nov 1 00:23:10.110129 kubelet[2716]: I1101 00:23:10.110103 2716 scope.go:117] "RemoveContainer" containerID="5caba774871a2f75f8829a2c7972ca3caa552559b19af08e179bb3e4a1794ea1" Nov 1 00:23:10.111238 containerd[1583]: time="2025-11-01T00:23:10.111199737Z" level=info msg="RemoveContainer for \"5caba774871a2f75f8829a2c7972ca3caa552559b19af08e179bb3e4a1794ea1\"" Nov 1 00:23:10.116044 containerd[1583]: time="2025-11-01T00:23:10.116013585Z" level=info msg="RemoveContainer for \"5caba774871a2f75f8829a2c7972ca3caa552559b19af08e179bb3e4a1794ea1\" returns successfully" Nov 1 00:23:10.116201 kubelet[2716]: I1101 00:23:10.116173 2716 scope.go:117] "RemoveContainer" containerID="ea44e1b2b4e5e2f601842d88ed14672e5dae79064e0f9b35c9e3797a0f657e9e" Nov 1 00:23:10.117387 containerd[1583]: time="2025-11-01T00:23:10.117138829Z" level=info msg="RemoveContainer for \"ea44e1b2b4e5e2f601842d88ed14672e5dae79064e0f9b35c9e3797a0f657e9e\"" Nov 1 00:23:10.123364 containerd[1583]: time="2025-11-01T00:23:10.123331400Z" level=info msg="RemoveContainer for \"ea44e1b2b4e5e2f601842d88ed14672e5dae79064e0f9b35c9e3797a0f657e9e\" returns successfully" Nov 1 00:23:10.123489 kubelet[2716]: I1101 00:23:10.123466 2716 scope.go:117] "RemoveContainer" containerID="39c3e9d35fc218dcbfd8dbdaed35fa8538fff0d39d5f35f8e5d2aa9350f39a05" Nov 1 00:23:10.124455 containerd[1583]: time="2025-11-01T00:23:10.124427119Z" level=info msg="RemoveContainer for \"39c3e9d35fc218dcbfd8dbdaed35fa8538fff0d39d5f35f8e5d2aa9350f39a05\"" Nov 1 00:23:10.130836 containerd[1583]: time="2025-11-01T00:23:10.130794693Z" level=info msg="RemoveContainer for \"39c3e9d35fc218dcbfd8dbdaed35fa8538fff0d39d5f35f8e5d2aa9350f39a05\" returns successfully" Nov 1 00:23:10.131018 kubelet[2716]: I1101 00:23:10.130991 2716 scope.go:117] "RemoveContainer" containerID="3532380d262894951163483eafe6e1af5cbca70cda6e134aaed8d00751a6e8cc" Nov 1 00:23:10.132006 containerd[1583]: time="2025-11-01T00:23:10.131976043Z" level=info msg="RemoveContainer for \"3532380d262894951163483eafe6e1af5cbca70cda6e134aaed8d00751a6e8cc\"" Nov 1 00:23:10.141766 containerd[1583]: time="2025-11-01T00:23:10.141439347Z" level=info msg="RemoveContainer for \"3532380d262894951163483eafe6e1af5cbca70cda6e134aaed8d00751a6e8cc\" returns successfully" Nov 1 00:23:10.141989 kubelet[2716]: I1101 00:23:10.141700 2716 scope.go:117] "RemoveContainer" containerID="7a57dc6b222fe8cd8e997b887c7fcb2993f65a374b4680e5166b84bec79de761" Nov 1 00:23:10.143331 containerd[1583]: time="2025-11-01T00:23:10.143261051Z" level=info msg="RemoveContainer for \"7a57dc6b222fe8cd8e997b887c7fcb2993f65a374b4680e5166b84bec79de761\"" Nov 1 00:23:10.160866 containerd[1583]: time="2025-11-01T00:23:10.160716709Z" level=info msg="RemoveContainer for \"7a57dc6b222fe8cd8e997b887c7fcb2993f65a374b4680e5166b84bec79de761\" returns successfully" Nov 1 00:23:10.488919 kubelet[2716]: I1101 00:23:10.488836 2716 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e838fae-ac9e-45a7-8ef8-574f43dd7cd7" path="/var/lib/kubelet/pods/5e838fae-ac9e-45a7-8ef8-574f43dd7cd7/volumes" Nov 1 00:23:10.489861 kubelet[2716]: I1101 00:23:10.489838 2716 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8161d1e0-74df-43b4-bec3-a4658844240a" path="/var/lib/kubelet/pods/8161d1e0-74df-43b4-bec3-a4658844240a/volumes" Nov 1 00:23:10.681165 sshd[4560]: 
pam_unix(sshd:session): session closed for user core Nov 1 00:23:10.697791 kubelet[2716]: I1101 00:23:10.697604 2716 memory_manager.go:355] "RemoveStaleState removing state" podUID="5e838fae-ac9e-45a7-8ef8-574f43dd7cd7" containerName="cilium-agent" Nov 1 00:23:10.697791 kubelet[2716]: I1101 00:23:10.697639 2716 memory_manager.go:355] "RemoveStaleState removing state" podUID="8161d1e0-74df-43b4-bec3-a4658844240a" containerName="cilium-operator" Nov 1 00:23:10.698107 systemd[1]: Started sshd@28-10.0.0.76:22-10.0.0.1:45512.service - OpenSSH per-connection server daemon (10.0.0.1:45512). Nov 1 00:23:10.701740 systemd[1]: sshd@27-10.0.0.76:22-10.0.0.1:45504.service: Deactivated successfully. Nov 1 00:23:10.710010 kubelet[2716]: I1101 00:23:10.708150 2716 status_manager.go:890] "Failed to get status for pod" podUID="d4d4c2b2-baa1-4811-8878-235c6e139a26" pod="kube-system/cilium-2vt62" err="pods \"cilium-2vt62\" is forbidden: User \"system:node:localhost\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" Nov 1 00:23:10.714053 systemd[1]: session-27.scope: Deactivated successfully. Nov 1 00:23:10.717724 systemd-logind[1557]: Session 27 logged out. Waiting for processes to exit. Nov 1 00:23:10.720241 systemd-logind[1557]: Removed session 27. Nov 1 00:23:10.758228 sshd[4574]: Accepted publickey for core from 10.0.0.1 port 45512 ssh2: RSA SHA256:jhdmrhZXRbwOcBbBZQf03C3Vb9eSOfvUjDz7MOKA8jE Nov 1 00:23:10.760844 sshd[4574]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:23:10.774699 systemd-logind[1557]: New session 28 of user core. Nov 1 00:23:10.779448 systemd[1]: Started session-28.scope - Session 28 of User core. Nov 1 00:23:10.825618 kubelet[2716]: I1101 00:23:10.825452 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d4d4c2b2-baa1-4811-8878-235c6e139a26-etc-cni-netd\") pod \"cilium-2vt62\" (UID: \"d4d4c2b2-baa1-4811-8878-235c6e139a26\") " pod="kube-system/cilium-2vt62" Nov 1 00:23:10.825811 kubelet[2716]: I1101 00:23:10.825677 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d4d4c2b2-baa1-4811-8878-235c6e139a26-cilium-config-path\") pod \"cilium-2vt62\" (UID: \"d4d4c2b2-baa1-4811-8878-235c6e139a26\") " pod="kube-system/cilium-2vt62" Nov 1 00:23:10.825811 kubelet[2716]: I1101 00:23:10.825708 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5vzd\" (UniqueName: \"kubernetes.io/projected/d4d4c2b2-baa1-4811-8878-235c6e139a26-kube-api-access-d5vzd\") pod \"cilium-2vt62\" (UID: \"d4d4c2b2-baa1-4811-8878-235c6e139a26\") " pod="kube-system/cilium-2vt62" Nov 1 00:23:10.825811 kubelet[2716]: I1101 00:23:10.825727 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/d4d4c2b2-baa1-4811-8878-235c6e139a26-cilium-ipsec-secrets\") pod \"cilium-2vt62\" (UID: \"d4d4c2b2-baa1-4811-8878-235c6e139a26\") " pod="kube-system/cilium-2vt62" Nov 1 00:23:10.825811 kubelet[2716]: I1101 00:23:10.825745 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/d4d4c2b2-baa1-4811-8878-235c6e139a26-host-proc-sys-net\") pod \"cilium-2vt62\" (UID: \"d4d4c2b2-baa1-4811-8878-235c6e139a26\") " pod="kube-system/cilium-2vt62" Nov 1 00:23:10.825811 kubelet[2716]: I1101 00:23:10.825763 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d4d4c2b2-baa1-4811-8878-235c6e139a26-lib-modules\") pod \"cilium-2vt62\" (UID: \"d4d4c2b2-baa1-4811-8878-235c6e139a26\") " pod="kube-system/cilium-2vt62" Nov 1 00:23:10.825947 kubelet[2716]: I1101 00:23:10.825779 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d4d4c2b2-baa1-4811-8878-235c6e139a26-hubble-tls\") pod \"cilium-2vt62\" (UID: \"d4d4c2b2-baa1-4811-8878-235c6e139a26\") " pod="kube-system/cilium-2vt62" Nov 1 00:23:10.825947 kubelet[2716]: I1101 00:23:10.825795 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d4d4c2b2-baa1-4811-8878-235c6e139a26-cilium-run\") pod \"cilium-2vt62\" (UID: \"d4d4c2b2-baa1-4811-8878-235c6e139a26\") " pod="kube-system/cilium-2vt62" Nov 1 00:23:10.825947 kubelet[2716]: I1101 00:23:10.825813 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d4d4c2b2-baa1-4811-8878-235c6e139a26-cni-path\") pod \"cilium-2vt62\" (UID: \"d4d4c2b2-baa1-4811-8878-235c6e139a26\") " pod="kube-system/cilium-2vt62" Nov 1 00:23:10.825947 kubelet[2716]: I1101 00:23:10.825830 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d4d4c2b2-baa1-4811-8878-235c6e139a26-xtables-lock\") pod \"cilium-2vt62\" (UID: \"d4d4c2b2-baa1-4811-8878-235c6e139a26\") " pod="kube-system/cilium-2vt62" Nov 1 00:23:10.825947 kubelet[2716]: I1101 00:23:10.825850 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d4d4c2b2-baa1-4811-8878-235c6e139a26-bpf-maps\") pod \"cilium-2vt62\" (UID: \"d4d4c2b2-baa1-4811-8878-235c6e139a26\") " pod="kube-system/cilium-2vt62" Nov 1 00:23:10.825947 kubelet[2716]: I1101 00:23:10.825869 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d4d4c2b2-baa1-4811-8878-235c6e139a26-cilium-cgroup\") pod \"cilium-2vt62\" (UID: \"d4d4c2b2-baa1-4811-8878-235c6e139a26\") " pod="kube-system/cilium-2vt62" Nov 1 00:23:10.826112 kubelet[2716]: I1101 00:23:10.825885 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d4d4c2b2-baa1-4811-8878-235c6e139a26-clustermesh-secrets\") pod \"cilium-2vt62\" (UID: \"d4d4c2b2-baa1-4811-8878-235c6e139a26\") " pod="kube-system/cilium-2vt62" Nov 1 00:23:10.826112 kubelet[2716]: I1101 00:23:10.825900 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d4d4c2b2-baa1-4811-8878-235c6e139a26-host-proc-sys-kernel\") pod \"cilium-2vt62\" (UID: \"d4d4c2b2-baa1-4811-8878-235c6e139a26\") " pod="kube-system/cilium-2vt62" Nov 1 00:23:10.826112 
kubelet[2716]: I1101 00:23:10.825918 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d4d4c2b2-baa1-4811-8878-235c6e139a26-hostproc\") pod \"cilium-2vt62\" (UID: \"d4d4c2b2-baa1-4811-8878-235c6e139a26\") " pod="kube-system/cilium-2vt62" Nov 1 00:23:10.855386 sshd[4574]: pam_unix(sshd:session): session closed for user core Nov 1 00:23:10.864861 systemd[1]: Started sshd@29-10.0.0.76:22-10.0.0.1:45518.service - OpenSSH per-connection server daemon (10.0.0.1:45518). Nov 1 00:23:10.865563 systemd[1]: sshd@28-10.0.0.76:22-10.0.0.1:45512.service: Deactivated successfully. Nov 1 00:23:10.875783 systemd-logind[1557]: Session 28 logged out. Waiting for processes to exit. Nov 1 00:23:10.877342 systemd[1]: session-28.scope: Deactivated successfully. Nov 1 00:23:10.878713 systemd-logind[1557]: Removed session 28. Nov 1 00:23:10.913313 sshd[4585]: Accepted publickey for core from 10.0.0.1 port 45518 ssh2: RSA SHA256:jhdmrhZXRbwOcBbBZQf03C3Vb9eSOfvUjDz7MOKA8jE Nov 1 00:23:10.915910 sshd[4585]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:23:10.930038 systemd-logind[1557]: New session 29 of user core. Nov 1 00:23:10.943884 systemd[1]: Started session-29.scope - Session 29 of User core. Nov 1 00:23:11.015310 kubelet[2716]: E1101 00:23:11.013591 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:23:11.015425 containerd[1583]: time="2025-11-01T00:23:11.014670517Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2vt62,Uid:d4d4c2b2-baa1-4811-8878-235c6e139a26,Namespace:kube-system,Attempt:0,}" Nov 1 00:23:11.234937 containerd[1583]: time="2025-11-01T00:23:11.234782987Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:23:11.234937 containerd[1583]: time="2025-11-01T00:23:11.234875282Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:23:11.234937 containerd[1583]: time="2025-11-01T00:23:11.234914766Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:23:11.235609 containerd[1583]: time="2025-11-01T00:23:11.235076814Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:23:11.280616 containerd[1583]: time="2025-11-01T00:23:11.280494885Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2vt62,Uid:d4d4c2b2-baa1-4811-8878-235c6e139a26,Namespace:kube-system,Attempt:0,} returns sandbox id \"0c4efdf3aec4673d9739eecf5eac9df3e16ea07e91b61934a60584ee97d40a8a\"" Nov 1 00:23:11.281502 kubelet[2716]: E1101 00:23:11.281469 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:23:11.284382 containerd[1583]: time="2025-11-01T00:23:11.284341086Z" level=info msg="CreateContainer within sandbox \"0c4efdf3aec4673d9739eecf5eac9df3e16ea07e91b61934a60584ee97d40a8a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Nov 1 00:23:11.819761 containerd[1583]: time="2025-11-01T00:23:11.819666693Z" level=info msg="CreateContainer within sandbox \"0c4efdf3aec4673d9739eecf5eac9df3e16ea07e91b61934a60584ee97d40a8a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"8c937d2bfe532f46dd6fb2e1260b257ef435aada92596e19e96f190258942cd1\"" Nov 1 00:23:11.820482 containerd[1583]: time="2025-11-01T00:23:11.820423748Z" level=info msg="StartContainer for \"8c937d2bfe532f46dd6fb2e1260b257ef435aada92596e19e96f190258942cd1\"" Nov 1 00:23:11.923774 containerd[1583]: time="2025-11-01T00:23:11.923699685Z" level=info msg="StartContainer for \"8c937d2bfe532f46dd6fb2e1260b257ef435aada92596e19e96f190258942cd1\" returns successfully" Nov 1 00:23:11.947206 systemd[1]: run-containerd-runc-k8s.io-0c4efdf3aec4673d9739eecf5eac9df3e16ea07e91b61934a60584ee97d40a8a-runc.0nw1i4.mount: Deactivated successfully. Nov 1 00:23:12.048458 containerd[1583]: time="2025-11-01T00:23:12.048374738Z" level=info msg="shim disconnected" id=8c937d2bfe532f46dd6fb2e1260b257ef435aada92596e19e96f190258942cd1 namespace=k8s.io Nov 1 00:23:12.048458 containerd[1583]: time="2025-11-01T00:23:12.048447557Z" level=warning msg="cleaning up after shim disconnected" id=8c937d2bfe532f46dd6fb2e1260b257ef435aada92596e19e96f190258942cd1 namespace=k8s.io Nov 1 00:23:12.048458 containerd[1583]: time="2025-11-01T00:23:12.048457145Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 1 00:23:12.053602 kubelet[2716]: E1101 00:23:12.053493 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:23:13.057541 kubelet[2716]: E1101 00:23:13.057022 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:23:13.058963 containerd[1583]: time="2025-11-01T00:23:13.058924314Z" level=info msg="CreateContainer within sandbox \"0c4efdf3aec4673d9739eecf5eac9df3e16ea07e91b61934a60584ee97d40a8a\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Nov 1 00:23:13.373703 containerd[1583]: time="2025-11-01T00:23:13.373485436Z" level=info msg="CreateContainer within sandbox \"0c4efdf3aec4673d9739eecf5eac9df3e16ea07e91b61934a60584ee97d40a8a\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"2742129f88bf99f2ee92e5cef8391eba7cb24294603290201f5fc8c938e4a89e\"" Nov 1 00:23:13.374408 containerd[1583]: time="2025-11-01T00:23:13.374356957Z" level=info msg="StartContainer for 
\"2742129f88bf99f2ee92e5cef8391eba7cb24294603290201f5fc8c938e4a89e\"" Nov 1 00:23:13.495828 containerd[1583]: time="2025-11-01T00:23:13.495735275Z" level=info msg="StartContainer for \"2742129f88bf99f2ee92e5cef8391eba7cb24294603290201f5fc8c938e4a89e\" returns successfully" Nov 1 00:23:13.521563 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2742129f88bf99f2ee92e5cef8391eba7cb24294603290201f5fc8c938e4a89e-rootfs.mount: Deactivated successfully. Nov 1 00:23:13.598811 containerd[1583]: time="2025-11-01T00:23:13.598668260Z" level=info msg="shim disconnected" id=2742129f88bf99f2ee92e5cef8391eba7cb24294603290201f5fc8c938e4a89e namespace=k8s.io Nov 1 00:23:13.598811 containerd[1583]: time="2025-11-01T00:23:13.598789308Z" level=warning msg="cleaning up after shim disconnected" id=2742129f88bf99f2ee92e5cef8391eba7cb24294603290201f5fc8c938e4a89e namespace=k8s.io Nov 1 00:23:13.598811 containerd[1583]: time="2025-11-01T00:23:13.598805890Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 1 00:23:14.061807 kubelet[2716]: E1101 00:23:14.061641 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:23:14.064212 containerd[1583]: time="2025-11-01T00:23:14.064079512Z" level=info msg="CreateContainer within sandbox \"0c4efdf3aec4673d9739eecf5eac9df3e16ea07e91b61934a60584ee97d40a8a\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Nov 1 00:23:14.105570 containerd[1583]: time="2025-11-01T00:23:14.105479628Z" level=info msg="CreateContainer within sandbox \"0c4efdf3aec4673d9739eecf5eac9df3e16ea07e91b61934a60584ee97d40a8a\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ff2c492be68e44555f5f207659c7fec64a617139a8d6246d7d1c4653aea93a2e\"" Nov 1 00:23:14.106379 containerd[1583]: time="2025-11-01T00:23:14.106332935Z" level=info msg="StartContainer for \"ff2c492be68e44555f5f207659c7fec64a617139a8d6246d7d1c4653aea93a2e\"" Nov 1 00:23:14.206168 containerd[1583]: time="2025-11-01T00:23:14.206096746Z" level=info msg="StartContainer for \"ff2c492be68e44555f5f207659c7fec64a617139a8d6246d7d1c4653aea93a2e\" returns successfully" Nov 1 00:23:14.237560 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ff2c492be68e44555f5f207659c7fec64a617139a8d6246d7d1c4653aea93a2e-rootfs.mount: Deactivated successfully. 
Nov 1 00:23:14.250228 containerd[1583]: time="2025-11-01T00:23:14.250157162Z" level=info msg="shim disconnected" id=ff2c492be68e44555f5f207659c7fec64a617139a8d6246d7d1c4653aea93a2e namespace=k8s.io Nov 1 00:23:14.250228 containerd[1583]: time="2025-11-01T00:23:14.250221173Z" level=warning msg="cleaning up after shim disconnected" id=ff2c492be68e44555f5f207659c7fec64a617139a8d6246d7d1c4653aea93a2e namespace=k8s.io Nov 1 00:23:14.250228 containerd[1583]: time="2025-11-01T00:23:14.250231884Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 1 00:23:14.581975 kubelet[2716]: E1101 00:23:14.581922 2716 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Nov 1 00:23:15.072035 kubelet[2716]: E1101 00:23:15.071795 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:23:15.074118 containerd[1583]: time="2025-11-01T00:23:15.074060190Z" level=info msg="CreateContainer within sandbox \"0c4efdf3aec4673d9739eecf5eac9df3e16ea07e91b61934a60584ee97d40a8a\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Nov 1 00:23:15.144232 containerd[1583]: time="2025-11-01T00:23:15.144073200Z" level=info msg="CreateContainer within sandbox \"0c4efdf3aec4673d9739eecf5eac9df3e16ea07e91b61934a60584ee97d40a8a\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"de8c64f14c4970482a804368d9101d30a9ef1fd1f2be84a576dbb4df4e6d1d1c\"" Nov 1 00:23:15.147141 containerd[1583]: time="2025-11-01T00:23:15.145908116Z" level=info msg="StartContainer for \"de8c64f14c4970482a804368d9101d30a9ef1fd1f2be84a576dbb4df4e6d1d1c\"" Nov 1 00:23:15.311762 containerd[1583]: time="2025-11-01T00:23:15.311675886Z" level=info msg="StartContainer for \"de8c64f14c4970482a804368d9101d30a9ef1fd1f2be84a576dbb4df4e6d1d1c\" returns successfully" Nov 1 00:23:15.354061 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-de8c64f14c4970482a804368d9101d30a9ef1fd1f2be84a576dbb4df4e6d1d1c-rootfs.mount: Deactivated successfully. Nov 1 00:23:15.384932 containerd[1583]: time="2025-11-01T00:23:15.384845886Z" level=info msg="shim disconnected" id=de8c64f14c4970482a804368d9101d30a9ef1fd1f2be84a576dbb4df4e6d1d1c namespace=k8s.io Nov 1 00:23:15.384932 containerd[1583]: time="2025-11-01T00:23:15.384924425Z" level=warning msg="cleaning up after shim disconnected" id=de8c64f14c4970482a804368d9101d30a9ef1fd1f2be84a576dbb4df4e6d1d1c namespace=k8s.io Nov 1 00:23:15.384932 containerd[1583]: time="2025-11-01T00:23:15.384936759Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 1 00:23:16.091063 kubelet[2716]: E1101 00:23:16.090998 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:23:16.098228 containerd[1583]: time="2025-11-01T00:23:16.098156788Z" level=info msg="CreateContainer within sandbox \"0c4efdf3aec4673d9739eecf5eac9df3e16ea07e91b61934a60584ee97d40a8a\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Nov 1 00:23:16.278125 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2838230783.mount: Deactivated successfully. 
Nov 1 00:23:16.290776 containerd[1583]: time="2025-11-01T00:23:16.290673018Z" level=info msg="CreateContainer within sandbox \"0c4efdf3aec4673d9739eecf5eac9df3e16ea07e91b61934a60584ee97d40a8a\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"be59906b03271f4032da454bc24a436dbca6d63801e5b70153608d9c5db18a4f\"" Nov 1 00:23:16.295606 kubelet[2716]: I1101 00:23:16.295119 2716 setters.go:602] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-01T00:23:16Z","lastTransitionTime":"2025-11-01T00:23:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Nov 1 00:23:16.295826 containerd[1583]: time="2025-11-01T00:23:16.295650395Z" level=info msg="StartContainer for \"be59906b03271f4032da454bc24a436dbca6d63801e5b70153608d9c5db18a4f\"" Nov 1 00:23:16.495647 containerd[1583]: time="2025-11-01T00:23:16.495503670Z" level=info msg="StartContainer for \"be59906b03271f4032da454bc24a436dbca6d63801e5b70153608d9c5db18a4f\" returns successfully" Nov 1 00:23:17.000297 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Nov 1 00:23:17.095828 kubelet[2716]: E1101 00:23:17.095791 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:23:17.140742 kubelet[2716]: I1101 00:23:17.140658 2716 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-2vt62" podStartSLOduration=7.140638561 podStartE2EDuration="7.140638561s" podCreationTimestamp="2025-11-01 00:23:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:23:17.138799629 +0000 UTC m=+102.801327736" watchObservedRunningTime="2025-11-01 00:23:17.140638561 +0000 UTC m=+102.803166648" Nov 1 00:23:17.268398 systemd[1]: run-containerd-runc-k8s.io-be59906b03271f4032da454bc24a436dbca6d63801e5b70153608d9c5db18a4f-runc.dHVUEt.mount: Deactivated successfully. 
Nov 1 00:23:17.486171 kubelet[2716]: E1101 00:23:17.486086 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-s2sn8" podUID="a4db634d-ed5d-4a0e-b97e-6643e94cce37" Nov 1 00:23:18.486830 kubelet[2716]: E1101 00:23:18.486766 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-sjhj9" podUID="d6dfcf49-16e8-4faa-8c41-b6f1f993511f" Nov 1 00:23:19.016977 kubelet[2716]: E1101 00:23:19.016328 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:23:19.488297 kubelet[2716]: E1101 00:23:19.487399 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-s2sn8" podUID="a4db634d-ed5d-4a0e-b97e-6643e94cce37" Nov 1 00:23:20.486855 kubelet[2716]: E1101 00:23:20.486797 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:23:20.495207 systemd-networkd[1245]: lxc_health: Link UP Nov 1 00:23:20.506675 systemd-networkd[1245]: lxc_health: Gained carrier Nov 1 00:23:21.016674 kubelet[2716]: E1101 00:23:21.016178 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:23:21.104606 kubelet[2716]: E1101 00:23:21.103994 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:23:21.487054 kubelet[2716]: E1101 00:23:21.487016 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:23:22.040535 systemd-networkd[1245]: lxc_health: Gained IPv6LL Nov 1 00:23:22.106326 kubelet[2716]: E1101 00:23:22.106257 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:23:26.329132 sshd[4585]: pam_unix(sshd:session): session closed for user core Nov 1 00:23:26.334591 systemd[1]: sshd@29-10.0.0.76:22-10.0.0.1:45518.service: Deactivated successfully. Nov 1 00:23:26.338047 systemd[1]: session-29.scope: Deactivated successfully. Nov 1 00:23:26.340201 systemd-logind[1557]: Session 29 logged out. Waiting for processes to exit. Nov 1 00:23:26.341680 systemd-logind[1557]: Removed session 29.