Nov 6 00:25:07.011414 kernel: Linux version 6.12.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Wed Nov 5 22:12:28 -00 2025
Nov 6 00:25:07.011440 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=59ca0b9e28689480cec05e5a7a50ffb2fd81e743a9e2986eb3bceb3b87f6702e
Nov 6 00:25:07.011451 kernel: BIOS-provided physical RAM map:
Nov 6 00:25:07.011458 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Nov 6 00:25:07.011464 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Nov 6 00:25:07.011471 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Nov 6 00:25:07.011479 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Nov 6 00:25:07.011486 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Nov 6 00:25:07.011492 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Nov 6 00:25:07.011499 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Nov 6 00:25:07.011507 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 6 00:25:07.011514 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Nov 6 00:25:07.011520 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Nov 6 00:25:07.011527 kernel: NX (Execute Disable) protection: active
Nov 6 00:25:07.011535 kernel: APIC: Static calls initialized
Nov 6 00:25:07.011542 kernel: SMBIOS 2.8 present.
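The "Command line:" entry above packs the Flatcar dm-verity and root settings into one long string; splitting it on whitespace makes the individual parameters easier to audit. A minimal POSIX-shell sketch (the string is copied from the log entry above; on a live system the same string can be read from /proc/cmdline):

```shell
# Split the logged kernel command line into one parameter per line.
# On a running machine you would obtain it with: cat /proc/cmdline
cmdline='BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=59ca0b9e28689480cec05e5a7a50ffb2fd81e743a9e2986eb3bceb3b87f6702e'
for param in $cmdline; do   # unquoted on purpose: split on whitespace
  printf '%s\n' "$param"
done
```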
Nov 6 00:25:07.011552 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Nov 6 00:25:07.011559 kernel: DMI: Memory slots populated: 1/1
Nov 6 00:25:07.011566 kernel: Hypervisor detected: KVM
Nov 6 00:25:07.011573 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Nov 6 00:25:07.011580 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 6 00:25:07.011587 kernel: kvm-clock: using sched offset of 6269532641 cycles
Nov 6 00:25:07.011594 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 6 00:25:07.011602 kernel: tsc: Detected 2794.748 MHz processor
Nov 6 00:25:07.011610 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 6 00:25:07.011617 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 6 00:25:07.011626 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Nov 6 00:25:07.011634 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Nov 6 00:25:07.011641 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Nov 6 00:25:07.011648 kernel: Using GB pages for direct mapping
Nov 6 00:25:07.011656 kernel: ACPI: Early table checksum verification disabled
Nov 6 00:25:07.011663 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Nov 6 00:25:07.011670 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 6 00:25:07.011678 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Nov 6 00:25:07.011685 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 6 00:25:07.011694 kernel: ACPI: FACS 0x000000009CFE0000 000040
Nov 6 00:25:07.011702 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 6 00:25:07.011709 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 6 00:25:07.011716 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 6 00:25:07.011724 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 6 00:25:07.011734 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Nov 6 00:25:07.011744 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Nov 6 00:25:07.011751 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Nov 6 00:25:07.011759 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Nov 6 00:25:07.011767 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Nov 6 00:25:07.011774 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Nov 6 00:25:07.011782 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Nov 6 00:25:07.011789 kernel: No NUMA configuration found
Nov 6 00:25:07.011797 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Nov 6 00:25:07.011816 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff]
Nov 6 00:25:07.011825 kernel: Zone ranges:
Nov 6 00:25:07.011832 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Nov 6 00:25:07.011839 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Nov 6 00:25:07.011847 kernel: Normal empty
Nov 6 00:25:07.011854 kernel: Device empty
Nov 6 00:25:07.011862 kernel: Movable zone start for each node
Nov 6 00:25:07.011869 kernel: Early memory node ranges
Nov 6 00:25:07.011877 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Nov 6 00:25:07.011898 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Nov 6 00:25:07.011908 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Nov 6 00:25:07.011915 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 6 00:25:07.011923 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Nov 6 00:25:07.011930 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Nov 6 00:25:07.011938 kernel: ACPI: PM-Timer IO Port: 0x608
Nov 6 00:25:07.011945 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 6 00:25:07.011964 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 6 00:25:07.011980 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 6 00:25:07.011988 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 6 00:25:07.011999 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 6 00:25:07.012006 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 6 00:25:07.012014 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 6 00:25:07.012022 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 6 00:25:07.012029 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Nov 6 00:25:07.012037 kernel: TSC deadline timer available
Nov 6 00:25:07.012044 kernel: CPU topo: Max. logical packages: 1
Nov 6 00:25:07.012052 kernel: CPU topo: Max. logical dies: 1
Nov 6 00:25:07.012059 kernel: CPU topo: Max. dies per package: 1
Nov 6 00:25:07.012069 kernel: CPU topo: Max. threads per core: 1
Nov 6 00:25:07.012076 kernel: CPU topo: Num. cores per package: 4
Nov 6 00:25:07.012084 kernel: CPU topo: Num. threads per package: 4
Nov 6 00:25:07.012091 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Nov 6 00:25:07.012099 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Nov 6 00:25:07.012106 kernel: kvm-guest: KVM setup pv remote TLB flush
Nov 6 00:25:07.012114 kernel: kvm-guest: setup PV sched yield
Nov 6 00:25:07.012121 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Nov 6 00:25:07.012129 kernel: Booting paravirtualized kernel on KVM
Nov 6 00:25:07.012136 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 6 00:25:07.012146 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Nov 6 00:25:07.012154 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Nov 6 00:25:07.012162 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Nov 6 00:25:07.012169 kernel: pcpu-alloc: [0] 0 1 2 3
Nov 6 00:25:07.012176 kernel: kvm-guest: PV spinlocks enabled
Nov 6 00:25:07.012184 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Nov 6 00:25:07.012193 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=59ca0b9e28689480cec05e5a7a50ffb2fd81e743a9e2986eb3bceb3b87f6702e
Nov 6 00:25:07.012201 kernel: random: crng init done
Nov 6 00:25:07.012211 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 6 00:25:07.012219 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 6 00:25:07.012226 kernel: Fallback order for Node 0: 0
Nov 6 00:25:07.012234 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938
Nov 6 00:25:07.012241 kernel: Policy zone: DMA32
Nov 6 00:25:07.012249 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 6 00:25:07.012256 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Nov 6 00:25:07.012264 kernel: ftrace: allocating 40021 entries in 157 pages
Nov 6 00:25:07.012271 kernel: ftrace: allocated 157 pages with 5 groups
Nov 6 00:25:07.012281 kernel: Dynamic Preempt: voluntary
Nov 6 00:25:07.012288 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 6 00:25:07.012297 kernel: rcu: RCU event tracing is enabled.
Nov 6 00:25:07.012304 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Nov 6 00:25:07.012312 kernel: Trampoline variant of Tasks RCU enabled.
Nov 6 00:25:07.012320 kernel: Rude variant of Tasks RCU enabled.
Nov 6 00:25:07.012328 kernel: Tracing variant of Tasks RCU enabled.
Nov 6 00:25:07.012335 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 6 00:25:07.012343 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Nov 6 00:25:07.012352 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 6 00:25:07.012360 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 6 00:25:07.012368 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 6 00:25:07.012375 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Nov 6 00:25:07.012383 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 6 00:25:07.012398 kernel: Console: colour VGA+ 80x25
Nov 6 00:25:07.012408 kernel: printk: legacy console [ttyS0] enabled
Nov 6 00:25:07.012416 kernel: ACPI: Core revision 20240827
Nov 6 00:25:07.012424 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Nov 6 00:25:07.012431 kernel: APIC: Switch to symmetric I/O mode setup
Nov 6 00:25:07.012439 kernel: x2apic enabled
Nov 6 00:25:07.012447 kernel: APIC: Switched APIC routing to: physical x2apic
Nov 6 00:25:07.012457 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Nov 6 00:25:07.012465 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Nov 6 00:25:07.012473 kernel: kvm-guest: setup PV IPIs
Nov 6 00:25:07.012481 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Nov 6 00:25:07.012491 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Nov 6 00:25:07.012499 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Nov 6 00:25:07.012507 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Nov 6 00:25:07.012515 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Nov 6 00:25:07.012523 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Nov 6 00:25:07.012531 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 6 00:25:07.012538 kernel: Spectre V2 : Mitigation: Retpolines
Nov 6 00:25:07.012546 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Nov 6 00:25:07.012554 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Nov 6 00:25:07.012564 kernel: active return thunk: retbleed_return_thunk
Nov 6 00:25:07.012572 kernel: RETBleed: Mitigation: untrained return thunk
Nov 6 00:25:07.012580 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 6 00:25:07.012588 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 6 00:25:07.012596 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Nov 6 00:25:07.012605 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Nov 6 00:25:07.012613 kernel: active return thunk: srso_return_thunk
Nov 6 00:25:07.012620 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Nov 6 00:25:07.012628 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 6 00:25:07.012638 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 6 00:25:07.012646 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 6 00:25:07.012654 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Nov 6 00:25:07.012662 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
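The Spectre/RETBleed/SRSO mitigation lines above can be cross-checked on a running system through sysfs. A hedged sketch assuming the standard /sys/devices/system/cpu/vulnerabilities interface (present on modern Linux kernels); the loop simply prints nothing where the interface is absent:

```shell
# Print one "<vulnerability>: <status>" line per sysfs entry,
# mirroring the mitigation messages logged at boot.
for f in /sys/devices/system/cpu/vulnerabilities/*; do
  if [ -r "$f" ]; then
    printf '%s: %s\n' "${f##*/}" "$(cat "$f")"
  fi
done
```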
Nov 6 00:25:07.012670 kernel: Freeing SMP alternatives memory: 32K
Nov 6 00:25:07.012678 kernel: pid_max: default: 32768 minimum: 301
Nov 6 00:25:07.012685 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Nov 6 00:25:07.012693 kernel: landlock: Up and running.
Nov 6 00:25:07.012701 kernel: SELinux: Initializing.
Nov 6 00:25:07.012711 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 6 00:25:07.012719 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 6 00:25:07.012727 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Nov 6 00:25:07.012735 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Nov 6 00:25:07.012743 kernel: ... version: 0
Nov 6 00:25:07.012751 kernel: ... bit width: 48
Nov 6 00:25:07.012758 kernel: ... generic registers: 6
Nov 6 00:25:07.012766 kernel: ... value mask: 0000ffffffffffff
Nov 6 00:25:07.012774 kernel: ... max period: 00007fffffffffff
Nov 6 00:25:07.012784 kernel: ... fixed-purpose events: 0
Nov 6 00:25:07.012792 kernel: ... event mask: 000000000000003f
Nov 6 00:25:07.012800 kernel: signal: max sigframe size: 1776
Nov 6 00:25:07.012815 kernel: rcu: Hierarchical SRCU implementation.
Nov 6 00:25:07.012823 kernel: rcu: Max phase no-delay instances is 400.
Nov 6 00:25:07.012831 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Nov 6 00:25:07.012839 kernel: smp: Bringing up secondary CPUs ...
Nov 6 00:25:07.012847 kernel: smpboot: x86: Booting SMP configuration:
Nov 6 00:25:07.012855 kernel: .... node #0, CPUs: #1 #2 #3
Nov 6 00:25:07.012864 kernel: smp: Brought up 1 node, 4 CPUs
Nov 6 00:25:07.012872 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Nov 6 00:25:07.012892 kernel: Memory: 2422768K/2571752K available (14336K kernel code, 2436K rwdata, 26048K rodata, 45548K init, 1180K bss, 143048K reserved, 0K cma-reserved)
Nov 6 00:25:07.012900 kernel: devtmpfs: initialized
Nov 6 00:25:07.012908 kernel: x86/mm: Memory block size: 128MB
Nov 6 00:25:07.012916 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 6 00:25:07.012924 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Nov 6 00:25:07.012932 kernel: pinctrl core: initialized pinctrl subsystem
Nov 6 00:25:07.012940 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 6 00:25:07.012950 kernel: audit: initializing netlink subsys (disabled)
Nov 6 00:25:07.012958 kernel: audit: type=2000 audit(1762388702.423:1): state=initialized audit_enabled=0 res=1
Nov 6 00:25:07.012966 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 6 00:25:07.012973 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 6 00:25:07.012981 kernel: cpuidle: using governor menu
Nov 6 00:25:07.012989 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 6 00:25:07.012997 kernel: dca service started, version 1.12.1
Nov 6 00:25:07.013005 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Nov 6 00:25:07.013013 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Nov 6 00:25:07.013022 kernel: PCI: Using configuration type 1 for base access
Nov 6 00:25:07.013030 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 6 00:25:07.013038 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 6 00:25:07.013046 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 6 00:25:07.013054 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 6 00:25:07.013062 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 6 00:25:07.013070 kernel: ACPI: Added _OSI(Module Device)
Nov 6 00:25:07.013077 kernel: ACPI: Added _OSI(Processor Device)
Nov 6 00:25:07.013085 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 6 00:25:07.013095 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 6 00:25:07.013103 kernel: ACPI: Interpreter enabled
Nov 6 00:25:07.013111 kernel: ACPI: PM: (supports S0 S3 S5)
Nov 6 00:25:07.013118 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 6 00:25:07.013126 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 6 00:25:07.013134 kernel: PCI: Using E820 reservations for host bridge windows
Nov 6 00:25:07.013142 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Nov 6 00:25:07.013150 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 6 00:25:07.013359 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Nov 6 00:25:07.013488 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Nov 6 00:25:07.013648 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Nov 6 00:25:07.013660 kernel: PCI host bridge to bus 0000:00
Nov 6 00:25:07.013813 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Nov 6 00:25:07.013970 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Nov 6 00:25:07.014079 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 6 00:25:07.014194 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Nov 6 00:25:07.014300 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Nov 6 00:25:07.014406 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Nov 6 00:25:07.014512 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 6 00:25:07.014676 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Nov 6 00:25:07.014844 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Nov 6 00:25:07.015033 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Nov 6 00:25:07.015152 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Nov 6 00:25:07.015268 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Nov 6 00:25:07.015389 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 6 00:25:07.015524 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Nov 6 00:25:07.015643 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df]
Nov 6 00:25:07.015761 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Nov 6 00:25:07.015926 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Nov 6 00:25:07.016088 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Nov 6 00:25:07.016208 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f]
Nov 6 00:25:07.016326 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Nov 6 00:25:07.016442 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Nov 6 00:25:07.016576 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Nov 6 00:25:07.016694 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff]
Nov 6 00:25:07.016827 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff]
Nov 6 00:25:07.016961 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref]
Nov 6 00:25:07.017083 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Nov 6 00:25:07.017216 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Nov 6 00:25:07.017333 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Nov 6 00:25:07.017463 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Nov 6 00:25:07.017588 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f]
Nov 6 00:25:07.017704 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff]
Nov 6 00:25:07.017858 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Nov 6 00:25:07.018019 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Nov 6 00:25:07.018031 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 6 00:25:07.018039 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 6 00:25:07.018047 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 6 00:25:07.018055 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 6 00:25:07.018067 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Nov 6 00:25:07.018075 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Nov 6 00:25:07.018083 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Nov 6 00:25:07.018091 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Nov 6 00:25:07.018099 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Nov 6 00:25:07.018106 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Nov 6 00:25:07.018114 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Nov 6 00:25:07.018122 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Nov 6 00:25:07.018130 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Nov 6 00:25:07.018140 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Nov 6 00:25:07.018148 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Nov 6 00:25:07.018156 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Nov 6 00:25:07.018164 kernel: iommu: Default domain type: Translated
Nov 6 00:25:07.018172 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 6 00:25:07.018180 kernel: PCI: Using ACPI for IRQ routing
Nov 6 00:25:07.018188 kernel: PCI: pci_cache_line_size set to 64 bytes
Nov 6 00:25:07.018195 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Nov 6 00:25:07.018203 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Nov 6 00:25:07.018329 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Nov 6 00:25:07.018455 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Nov 6 00:25:07.018572 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 6 00:25:07.018583 kernel: vgaarb: loaded
Nov 6 00:25:07.018591 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Nov 6 00:25:07.018599 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Nov 6 00:25:07.018607 kernel: clocksource: Switched to clocksource kvm-clock
Nov 6 00:25:07.018615 kernel: VFS: Disk quotas dquot_6.6.0
Nov 6 00:25:07.018626 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 6 00:25:07.018634 kernel: pnp: PnP ACPI init
Nov 6 00:25:07.018830 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Nov 6 00:25:07.018846 kernel: pnp: PnP ACPI: found 6 devices
Nov 6 00:25:07.018854 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 6 00:25:07.018862 kernel: NET: Registered PF_INET protocol family
Nov 6 00:25:07.018870 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 6 00:25:07.018878 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Nov 6 00:25:07.018900 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 6 00:25:07.018912 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Nov 6 00:25:07.018920 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Nov 6 00:25:07.018928 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Nov 6 00:25:07.018936 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 6 00:25:07.018944 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 6 00:25:07.018951 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 6 00:25:07.018959 kernel: NET: Registered PF_XDP protocol family
Nov 6 00:25:07.019073 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Nov 6 00:25:07.019184 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Nov 6 00:25:07.019290 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 6 00:25:07.019395 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Nov 6 00:25:07.019502 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Nov 6 00:25:07.019608 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Nov 6 00:25:07.019619 kernel: PCI: CLS 0 bytes, default 64
Nov 6 00:25:07.019627 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Nov 6 00:25:07.019635 kernel: Initialise system trusted keyrings
Nov 6 00:25:07.019643 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Nov 6 00:25:07.019654 kernel: Key type asymmetric registered
Nov 6 00:25:07.019662 kernel: Asymmetric key parser 'x509' registered
Nov 6 00:25:07.019670 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Nov 6 00:25:07.019678 kernel: io scheduler mq-deadline registered
Nov 6 00:25:07.019686 kernel: io scheduler kyber registered
Nov 6 00:25:07.019694 kernel: io scheduler bfq registered
Nov 6 00:25:07.019702 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Nov 6 00:25:07.019711 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Nov 6 00:25:07.019719 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Nov 6 00:25:07.019729 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Nov 6 00:25:07.019737 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 6 00:25:07.019745 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 6 00:25:07.019753 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 6 00:25:07.019761 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 6 00:25:07.019769 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 6 00:25:07.019928 kernel: rtc_cmos 00:04: RTC can wake from S4
Nov 6 00:25:07.019941 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Nov 6 00:25:07.020061 kernel: rtc_cmos 00:04: registered as rtc0
Nov 6 00:25:07.020171 kernel: rtc_cmos 00:04: setting system clock to 2025-11-06T00:25:06 UTC (1762388706)
Nov 6 00:25:07.020281 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Nov 6 00:25:07.020291 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Nov 6 00:25:07.020299 kernel: NET: Registered PF_INET6 protocol family
Nov 6 00:25:07.020307 kernel: Segment Routing with IPv6
Nov 6 00:25:07.020315 kernel: In-situ OAM (IOAM) with IPv6
Nov 6 00:25:07.020323 kernel: NET: Registered PF_PACKET protocol family
Nov 6 00:25:07.020331 kernel: Key type dns_resolver registered
Nov 6 00:25:07.020342 kernel: IPI shorthand broadcast: enabled
Nov 6 00:25:07.020350 kernel: sched_clock: Marking stable (3662185571, 227285590)->(3948499205, -59028044)
Nov 6 00:25:07.020358 kernel: registered taskstats version 1
Nov 6 00:25:07.020366 kernel: Loading compiled-in X.509 certificates
Nov 6 00:25:07.020374 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.54-flatcar: f906521ec29cbf079ae365554bad8eb8ed6ecb31'
Nov 6 00:25:07.020382 kernel: Demotion targets for Node 0: null
Nov 6 00:25:07.020390 kernel: Key type .fscrypt registered
Nov 6 00:25:07.020398 kernel: Key type fscrypt-provisioning registered
Nov 6 00:25:07.020405 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 6 00:25:07.020415 kernel: ima: Allocated hash algorithm: sha1
Nov 6 00:25:07.020423 kernel: ima: No architecture policies found
Nov 6 00:25:07.020431 kernel: clk: Disabling unused clocks
Nov 6 00:25:07.020439 kernel: Warning: unable to open an initial console.
Nov 6 00:25:07.020447 kernel: Freeing unused kernel image (initmem) memory: 45548K
Nov 6 00:25:07.020455 kernel: Write protecting the kernel read-only data: 40960k
Nov 6 00:25:07.020464 kernel: Freeing unused kernel image (rodata/data gap) memory: 576K
Nov 6 00:25:07.020471 kernel: Run /init as init process
Nov 6 00:25:07.020479 kernel: with arguments:
Nov 6 00:25:07.020489 kernel: /init
Nov 6 00:25:07.020497 kernel: with environment:
Nov 6 00:25:07.020505 kernel: HOME=/
Nov 6 00:25:07.020513 kernel: TERM=linux
Nov 6 00:25:07.020522 systemd[1]: Successfully made /usr/ read-only.
Nov 6 00:25:07.020533 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Nov 6 00:25:07.020555 systemd[1]: Detected virtualization kvm.
Nov 6 00:25:07.020563 systemd[1]: Detected architecture x86-64.
Nov 6 00:25:07.020571 systemd[1]: Running in initrd.
Nov 6 00:25:07.020580 systemd[1]: No hostname configured, using default hostname.
Nov 6 00:25:07.020589 systemd[1]: Hostname set to .
Nov 6 00:25:07.020598 systemd[1]: Initializing machine ID from VM UUID.
Nov 6 00:25:07.020606 systemd[1]: Queued start job for default target initrd.target.
Nov 6 00:25:07.020617 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 6 00:25:07.020626 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 6 00:25:07.020635 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Nov 6 00:25:07.020644 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 6 00:25:07.020652 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Nov 6 00:25:07.020662 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Nov 6 00:25:07.020672 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Nov 6 00:25:07.020683 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Nov 6 00:25:07.020692 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 6 00:25:07.020701 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 6 00:25:07.020709 systemd[1]: Reached target paths.target - Path Units.
Nov 6 00:25:07.020718 systemd[1]: Reached target slices.target - Slice Units.
Nov 6 00:25:07.020727 systemd[1]: Reached target swap.target - Swaps.
Nov 6 00:25:07.020735 systemd[1]: Reached target timers.target - Timer Units.
Nov 6 00:25:07.020744 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Nov 6 00:25:07.020753 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 6 00:25:07.020764 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Nov 6 00:25:07.020772 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Nov 6 00:25:07.020781 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 6 00:25:07.020790 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 6 00:25:07.020799 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 6 00:25:07.020818 systemd[1]: Reached target sockets.target - Socket Units. Nov 6 00:25:07.020827 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Nov 6 00:25:07.020838 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 6 00:25:07.020847 systemd[1]: Finished network-cleanup.service - Network Cleanup. Nov 6 00:25:07.020856 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Nov 6 00:25:07.020865 systemd[1]: Starting systemd-fsck-usr.service... Nov 6 00:25:07.020874 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 6 00:25:07.020895 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 6 00:25:07.020906 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 6 00:25:07.020915 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Nov 6 00:25:07.020924 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 6 00:25:07.020933 systemd[1]: Finished systemd-fsck-usr.service. Nov 6 00:25:07.020942 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 6 00:25:07.020975 systemd-journald[201]: Collecting audit messages is disabled. Nov 6 00:25:07.020995 systemd-journald[201]: Journal started Nov 6 00:25:07.021018 systemd-journald[201]: Runtime Journal (/run/log/journal/918d2427bab841b78fe8d2572f3a56b1) is 6M, max 48.3M, 42.2M free. Nov 6 00:25:07.012030 systemd-modules-load[202]: Inserted module 'overlay' Nov 6 00:25:07.101226 systemd[1]: Started systemd-journald.service - Journal Service. Nov 6 00:25:07.101255 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. 
Update your scripts to load br_netfilter if you need this. Nov 6 00:25:07.101268 kernel: Bridge firewalling registered Nov 6 00:25:07.048501 systemd-modules-load[202]: Inserted module 'br_netfilter' Nov 6 00:25:07.099533 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 6 00:25:07.102561 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 6 00:25:07.105581 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 6 00:25:07.112924 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 6 00:25:07.119302 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 6 00:25:07.127692 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 6 00:25:07.130002 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 6 00:25:07.150907 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 6 00:25:07.152122 systemd-tmpfiles[222]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Nov 6 00:25:07.157329 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 6 00:25:07.159603 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 6 00:25:07.160498 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 6 00:25:07.167099 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Nov 6 00:25:07.171984 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Nov 6 00:25:07.196133 dracut-cmdline[242]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=59ca0b9e28689480cec05e5a7a50ffb2fd81e743a9e2986eb3bceb3b87f6702e Nov 6 00:25:07.227581 systemd-resolved[243]: Positive Trust Anchors: Nov 6 00:25:07.227600 systemd-resolved[243]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 6 00:25:07.227640 systemd-resolved[243]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 6 00:25:07.232449 systemd-resolved[243]: Defaulting to hostname 'linux'. Nov 6 00:25:07.233869 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 6 00:25:07.245919 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 6 00:25:07.361949 kernel: SCSI subsystem initialized Nov 6 00:25:07.375042 kernel: Loading iSCSI transport class v2.0-870. Nov 6 00:25:07.388961 kernel: iscsi: registered transport (tcp) Nov 6 00:25:07.412957 kernel: iscsi: registered transport (qla4xxx) Nov 6 00:25:07.413067 kernel: QLogic iSCSI HBA Driver Nov 6 00:25:07.438005 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... 
Nov 6 00:25:07.459040 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 6 00:25:07.460359 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 6 00:25:07.529824 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Nov 6 00:25:07.533482 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Nov 6 00:25:07.602938 kernel: raid6: avx2x4 gen() 28993 MB/s Nov 6 00:25:07.619923 kernel: raid6: avx2x2 gen() 29191 MB/s Nov 6 00:25:07.638060 kernel: raid6: avx2x1 gen() 17768 MB/s Nov 6 00:25:07.638153 kernel: raid6: using algorithm avx2x2 gen() 29191 MB/s Nov 6 00:25:07.656942 kernel: raid6: .... xor() 15965 MB/s, rmw enabled Nov 6 00:25:07.657034 kernel: raid6: using avx2x2 recovery algorithm Nov 6 00:25:07.683921 kernel: xor: automatically using best checksumming function avx Nov 6 00:25:07.874925 kernel: Btrfs loaded, zoned=no, fsverity=no Nov 6 00:25:07.882838 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Nov 6 00:25:07.886351 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 6 00:25:07.926306 systemd-udevd[453]: Using default interface naming scheme 'v255'. Nov 6 00:25:07.933259 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 6 00:25:07.935404 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Nov 6 00:25:07.971727 dracut-pre-trigger[460]: rd.md=0: removing MD RAID activation Nov 6 00:25:08.003622 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Nov 6 00:25:08.009290 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 6 00:25:08.111346 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 6 00:25:08.114559 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Nov 6 00:25:08.153918 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Nov 6 00:25:08.160570 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Nov 6 00:25:08.170998 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Nov 6 00:25:08.171044 kernel: GPT:9289727 != 19775487 Nov 6 00:25:08.171059 kernel: GPT:Alternate GPT header not at the end of the disk. Nov 6 00:25:08.171073 kernel: GPT:9289727 != 19775487 Nov 6 00:25:08.171094 kernel: GPT: Use GNU Parted to correct GPT errors. Nov 6 00:25:08.171107 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 6 00:25:08.172125 kernel: cryptd: max_cpu_qlen set to 1000 Nov 6 00:25:08.187936 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 6 00:25:08.188043 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 6 00:25:08.194499 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Nov 6 00:25:08.200953 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 6 00:25:08.211763 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Nov 6 00:25:08.214550 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Nov 6 00:25:08.216904 kernel: AES CTR mode by8 optimization enabled Nov 6 00:25:08.219467 kernel: libata version 3.00 loaded. Nov 6 00:25:08.260128 kernel: ahci 0000:00:1f.2: version 3.0 Nov 6 00:25:08.260490 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Nov 6 00:25:08.260517 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Nov 6 00:25:08.260877 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Nov 6 00:25:08.261183 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Nov 6 00:25:08.261518 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. 
Nov 6 00:25:08.344486 kernel: scsi host0: ahci Nov 6 00:25:08.344767 kernel: scsi host1: ahci Nov 6 00:25:08.344997 kernel: scsi host2: ahci Nov 6 00:25:08.345166 kernel: scsi host3: ahci Nov 6 00:25:08.345319 kernel: scsi host4: ahci Nov 6 00:25:08.345478 kernel: scsi host5: ahci Nov 6 00:25:08.345619 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 lpm-pol 1 Nov 6 00:25:08.345631 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 lpm-pol 1 Nov 6 00:25:08.345641 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 lpm-pol 1 Nov 6 00:25:08.345653 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 lpm-pol 1 Nov 6 00:25:08.345671 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 lpm-pol 1 Nov 6 00:25:08.345685 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 lpm-pol 1 Nov 6 00:25:08.348459 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 6 00:25:08.378304 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Nov 6 00:25:08.388224 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Nov 6 00:25:08.389260 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Nov 6 00:25:08.403664 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Nov 6 00:25:08.408083 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Nov 6 00:25:08.442497 disk-uuid[615]: Primary Header is updated. Nov 6 00:25:08.442497 disk-uuid[615]: Secondary Entries is updated. Nov 6 00:25:08.442497 disk-uuid[615]: Secondary Header is updated. 
Nov 6 00:25:08.448912 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 6 00:25:08.454915 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 6 00:25:08.576924 kernel: ata4: SATA link down (SStatus 0 SControl 300) Nov 6 00:25:08.576993 kernel: ata2: SATA link down (SStatus 0 SControl 300) Nov 6 00:25:08.577912 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Nov 6 00:25:08.579909 kernel: ata1: SATA link down (SStatus 0 SControl 300) Nov 6 00:25:08.580919 kernel: ata3.00: LPM support broken, forcing max_power Nov 6 00:25:08.582477 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Nov 6 00:25:08.582508 kernel: ata3.00: applying bridge limits Nov 6 00:25:08.583916 kernel: ata6: SATA link down (SStatus 0 SControl 300) Nov 6 00:25:08.584922 kernel: ata5: SATA link down (SStatus 0 SControl 300) Nov 6 00:25:08.588529 kernel: ata3.00: LPM support broken, forcing max_power Nov 6 00:25:08.588558 kernel: ata3.00: configured for UDMA/100 Nov 6 00:25:08.590941 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Nov 6 00:25:08.647568 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Nov 6 00:25:08.647980 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Nov 6 00:25:08.675921 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Nov 6 00:25:09.118419 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Nov 6 00:25:09.121817 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Nov 6 00:25:09.124668 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 6 00:25:09.126944 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 6 00:25:09.131843 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Nov 6 00:25:09.170903 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. 
Nov 6 00:25:09.454908 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 6 00:25:09.455809 disk-uuid[616]: The operation has completed successfully. Nov 6 00:25:09.552773 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 6 00:25:09.552949 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Nov 6 00:25:09.566791 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Nov 6 00:25:09.603503 sh[645]: Success Nov 6 00:25:09.625117 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Nov 6 00:25:09.625226 kernel: device-mapper: uevent: version 1.0.3 Nov 6 00:25:09.626896 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Nov 6 00:25:09.636957 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Nov 6 00:25:09.673205 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Nov 6 00:25:09.676968 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Nov 6 00:25:09.692080 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Nov 6 00:25:09.701981 kernel: BTRFS: device fsid 85d805c5-984c-4a6a-aaeb-49fff3689175 devid 1 transid 38 /dev/mapper/usr (253:0) scanned by mount (657) Nov 6 00:25:09.705908 kernel: BTRFS info (device dm-0): first mount of filesystem 85d805c5-984c-4a6a-aaeb-49fff3689175 Nov 6 00:25:09.706013 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Nov 6 00:25:09.714254 kernel: BTRFS info (device dm-0): disabling log replay at mount time Nov 6 00:25:09.714314 kernel: BTRFS info (device dm-0): enabling free space tree Nov 6 00:25:09.716040 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Nov 6 00:25:09.719690 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. 
Nov 6 00:25:09.723324 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Nov 6 00:25:09.727423 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Nov 6 00:25:09.731469 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Nov 6 00:25:09.762057 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (690) Nov 6 00:25:09.765853 kernel: BTRFS info (device vda6): first mount of filesystem ca2bb832-66d5-4dca-a6d2-cbf7440d9381 Nov 6 00:25:09.765947 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 6 00:25:09.771823 kernel: BTRFS info (device vda6): turning on async discard Nov 6 00:25:09.771899 kernel: BTRFS info (device vda6): enabling free space tree Nov 6 00:25:09.779579 kernel: BTRFS info (device vda6): last unmount of filesystem ca2bb832-66d5-4dca-a6d2-cbf7440d9381 Nov 6 00:25:09.779941 systemd[1]: Finished ignition-setup.service - Ignition (setup). Nov 6 00:25:09.783279 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Nov 6 00:25:09.974444 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 6 00:25:09.978908 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 6 00:25:10.080267 systemd-networkd[828]: lo: Link UP Nov 6 00:25:10.080277 systemd-networkd[828]: lo: Gained carrier Nov 6 00:25:10.081853 systemd-networkd[828]: Enumeration completed Nov 6 00:25:10.081964 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 6 00:25:10.082315 systemd-networkd[828]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 6 00:25:10.082320 systemd-networkd[828]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Nov 6 00:25:10.083792 systemd-networkd[828]: eth0: Link UP Nov 6 00:25:10.084148 systemd-networkd[828]: eth0: Gained carrier Nov 6 00:25:10.084158 systemd-networkd[828]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 6 00:25:10.090453 systemd[1]: Reached target network.target - Network. Nov 6 00:25:10.108351 systemd-networkd[828]: eth0: DHCPv4 address 10.0.0.113/16, gateway 10.0.0.1 acquired from 10.0.0.1 Nov 6 00:25:10.132595 ignition[743]: Ignition 2.22.0 Nov 6 00:25:10.132614 ignition[743]: Stage: fetch-offline Nov 6 00:25:10.132676 ignition[743]: no configs at "/usr/lib/ignition/base.d" Nov 6 00:25:10.132686 ignition[743]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 6 00:25:10.132859 ignition[743]: parsed url from cmdline: "" Nov 6 00:25:10.132867 ignition[743]: no config URL provided Nov 6 00:25:10.132875 ignition[743]: reading system config file "/usr/lib/ignition/user.ign" Nov 6 00:25:10.132908 ignition[743]: no config at "/usr/lib/ignition/user.ign" Nov 6 00:25:10.132951 ignition[743]: op(1): [started] loading QEMU firmware config module Nov 6 00:25:10.132962 ignition[743]: op(1): executing: "modprobe" "qemu_fw_cfg" Nov 6 00:25:10.199330 ignition[743]: op(1): [finished] loading QEMU firmware config module Nov 6 00:25:10.284917 ignition[743]: parsing config with SHA512: fed0bd23cdb68f295a391cd6557e83a432a78af4e677f2d12bc3f06dadd1f6b7e2428e1da2bff69a60dd28bbbf071719340fcf19c5d38f0ce0a59f6e18a7b7cb Nov 6 00:25:10.290906 unknown[743]: fetched base config from "system" Nov 6 00:25:10.292027 unknown[743]: fetched user config from "qemu" Nov 6 00:25:10.292737 ignition[743]: fetch-offline: fetch-offline passed Nov 6 00:25:10.292867 ignition[743]: Ignition finished successfully Nov 6 00:25:10.295536 systemd-resolved[243]: Detected conflict on linux IN A 10.0.0.113 Nov 6 00:25:10.295549 systemd-resolved[243]: Hostname conflict, changing published hostname from 
'linux' to 'linux6'. Nov 6 00:25:10.296830 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Nov 6 00:25:10.297772 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Nov 6 00:25:10.303174 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Nov 6 00:25:10.420069 ignition[842]: Ignition 2.22.0 Nov 6 00:25:10.420092 ignition[842]: Stage: kargs Nov 6 00:25:10.420350 ignition[842]: no configs at "/usr/lib/ignition/base.d" Nov 6 00:25:10.420374 ignition[842]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 6 00:25:10.427322 ignition[842]: kargs: kargs passed Nov 6 00:25:10.428490 ignition[842]: Ignition finished successfully Nov 6 00:25:10.433261 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Nov 6 00:25:10.435490 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Nov 6 00:25:10.472840 ignition[850]: Ignition 2.22.0 Nov 6 00:25:10.472855 ignition[850]: Stage: disks Nov 6 00:25:10.473055 ignition[850]: no configs at "/usr/lib/ignition/base.d" Nov 6 00:25:10.473068 ignition[850]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 6 00:25:10.474114 ignition[850]: disks: disks passed Nov 6 00:25:10.478145 systemd[1]: Finished ignition-disks.service - Ignition (disks). Nov 6 00:25:10.474179 ignition[850]: Ignition finished successfully Nov 6 00:25:10.479920 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Nov 6 00:25:10.483412 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 6 00:25:10.486318 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 6 00:25:10.489623 systemd[1]: Reached target sysinit.target - System Initialization. Nov 6 00:25:10.494067 systemd[1]: Reached target basic.target - Basic System. 
Nov 6 00:25:10.500627 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Nov 6 00:25:10.544406 systemd-fsck[861]: ROOT: clean, 15/553520 files, 52789/553472 blocks Nov 6 00:25:10.552926 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Nov 6 00:25:10.559079 systemd[1]: Mounting sysroot.mount - /sysroot... Nov 6 00:25:10.743934 kernel: EXT4-fs (vda9): mounted filesystem 25ee01aa-0270-4de7-b5da-d8936d968d16 r/w with ordered data mode. Quota mode: none. Nov 6 00:25:10.745144 systemd[1]: Mounted sysroot.mount - /sysroot. Nov 6 00:25:10.746996 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Nov 6 00:25:10.751853 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 6 00:25:10.754858 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Nov 6 00:25:10.757403 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Nov 6 00:25:10.757466 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 6 00:25:10.757500 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Nov 6 00:25:10.778124 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Nov 6 00:25:10.781918 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (869) Nov 6 00:25:10.784549 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Nov 6 00:25:10.920682 kernel: BTRFS info (device vda6): first mount of filesystem ca2bb832-66d5-4dca-a6d2-cbf7440d9381 Nov 6 00:25:10.920740 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 6 00:25:10.920752 kernel: BTRFS info (device vda6): turning on async discard Nov 6 00:25:10.920778 kernel: BTRFS info (device vda6): enabling free space tree Nov 6 00:25:10.919113 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 6 00:25:10.972454 initrd-setup-root[893]: cut: /sysroot/etc/passwd: No such file or directory Nov 6 00:25:10.979602 initrd-setup-root[900]: cut: /sysroot/etc/group: No such file or directory Nov 6 00:25:10.984418 initrd-setup-root[907]: cut: /sysroot/etc/shadow: No such file or directory Nov 6 00:25:10.989605 initrd-setup-root[914]: cut: /sysroot/etc/gshadow: No such file or directory Nov 6 00:25:11.132607 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Nov 6 00:25:11.135192 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Nov 6 00:25:11.138925 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Nov 6 00:25:11.157802 systemd[1]: sysroot-oem.mount: Deactivated successfully. Nov 6 00:25:11.159530 kernel: BTRFS info (device vda6): last unmount of filesystem ca2bb832-66d5-4dca-a6d2-cbf7440d9381 Nov 6 00:25:11.169096 systemd-networkd[828]: eth0: Gained IPv6LL Nov 6 00:25:11.173076 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Nov 6 00:25:11.198566 ignition[982]: INFO : Ignition 2.22.0 Nov 6 00:25:11.198566 ignition[982]: INFO : Stage: mount Nov 6 00:25:11.201287 ignition[982]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 6 00:25:11.201287 ignition[982]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 6 00:25:11.201287 ignition[982]: INFO : mount: mount passed Nov 6 00:25:11.201287 ignition[982]: INFO : Ignition finished successfully Nov 6 00:25:11.210277 systemd[1]: Finished ignition-mount.service - Ignition (mount). 
Nov 6 00:25:11.213580 systemd[1]: Starting ignition-files.service - Ignition (files)... Nov 6 00:25:11.747334 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 6 00:25:11.779927 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (994) Nov 6 00:25:11.783905 kernel: BTRFS info (device vda6): first mount of filesystem ca2bb832-66d5-4dca-a6d2-cbf7440d9381 Nov 6 00:25:11.783978 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 6 00:25:11.790121 kernel: BTRFS info (device vda6): turning on async discard Nov 6 00:25:11.790205 kernel: BTRFS info (device vda6): enabling free space tree Nov 6 00:25:11.792492 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 6 00:25:11.904611 ignition[1011]: INFO : Ignition 2.22.0 Nov 6 00:25:11.904611 ignition[1011]: INFO : Stage: files Nov 6 00:25:11.907633 ignition[1011]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 6 00:25:11.907633 ignition[1011]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 6 00:25:11.913817 ignition[1011]: DEBUG : files: compiled without relabeling support, skipping Nov 6 00:25:11.916015 ignition[1011]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 6 00:25:11.916015 ignition[1011]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 6 00:25:11.921153 ignition[1011]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 6 00:25:11.921153 ignition[1011]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 6 00:25:11.921153 ignition[1011]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 6 00:25:11.919057 unknown[1011]: wrote ssh authorized keys file for user: core Nov 6 00:25:11.930217 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Nov 6 00:25:11.930217 
ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Nov 6 00:25:11.967507 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Nov 6 00:25:12.111645 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Nov 6 00:25:12.111645 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Nov 6 00:25:12.117982 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Nov 6 00:25:12.384222 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Nov 6 00:25:12.683576 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Nov 6 00:25:12.683576 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Nov 6 00:25:12.689737 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Nov 6 00:25:12.692704 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 6 00:25:12.695936 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 6 00:25:12.698854 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 6 00:25:12.702008 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 6 00:25:12.704935 ignition[1011]: INFO : 
files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 6 00:25:12.708093 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 6 00:25:12.715762 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 6 00:25:12.719167 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 6 00:25:12.722694 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Nov 6 00:25:12.727224 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Nov 6 00:25:12.732428 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Nov 6 00:25:12.732428 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Nov 6 00:25:12.976684 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Nov 6 00:25:13.770531 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Nov 6 00:25:13.770531 ignition[1011]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Nov 6 00:25:13.915902 ignition[1011]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 6 00:25:14.315377 ignition[1011]: 
INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 6 00:25:14.315377 ignition[1011]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Nov 6 00:25:14.315377 ignition[1011]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Nov 6 00:25:14.326655 ignition[1011]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Nov 6 00:25:14.326655 ignition[1011]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Nov 6 00:25:14.326655 ignition[1011]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Nov 6 00:25:14.326655 ignition[1011]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Nov 6 00:25:14.348271 ignition[1011]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Nov 6 00:25:14.353477 ignition[1011]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Nov 6 00:25:14.356920 ignition[1011]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Nov 6 00:25:14.356920 ignition[1011]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Nov 6 00:25:14.362395 ignition[1011]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Nov 6 00:25:14.362395 ignition[1011]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 6 00:25:14.362395 ignition[1011]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 6 00:25:14.362395 ignition[1011]: INFO : files: files passed Nov 6 00:25:14.362395 ignition[1011]: INFO : 
Ignition finished successfully Nov 6 00:25:14.367092 systemd[1]: Finished ignition-files.service - Ignition (files). Nov 6 00:25:14.371639 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 6 00:25:14.387966 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Nov 6 00:25:14.391432 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 6 00:25:14.391640 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Nov 6 00:25:14.402309 initrd-setup-root-after-ignition[1040]: grep: /sysroot/oem/oem-release: No such file or directory Nov 6 00:25:14.407681 initrd-setup-root-after-ignition[1042]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 6 00:25:14.407681 initrd-setup-root-after-ignition[1042]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 6 00:25:14.413288 initrd-setup-root-after-ignition[1046]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 6 00:25:14.417433 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 6 00:25:14.418834 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Nov 6 00:25:14.428589 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Nov 6 00:25:14.615454 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 6 00:25:14.615641 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Nov 6 00:25:14.618754 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Nov 6 00:25:14.622250 systemd[1]: Reached target initrd.target - Initrd Default Target. Nov 6 00:25:14.626384 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Nov 6 00:25:14.628701 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... 
Nov 6 00:25:14.674130 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 6 00:25:14.677917 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Nov 6 00:25:14.701676 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Nov 6 00:25:14.702531 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 6 00:25:14.706352 systemd[1]: Stopped target timers.target - Timer Units. Nov 6 00:25:14.709718 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 6 00:25:14.709948 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 6 00:25:14.715242 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 6 00:25:14.716160 systemd[1]: Stopped target basic.target - Basic System. Nov 6 00:25:14.716696 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Nov 6 00:25:14.734972 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Nov 6 00:25:14.738318 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 6 00:25:14.738857 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Nov 6 00:25:14.745390 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Nov 6 00:25:14.748829 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Nov 6 00:25:14.752412 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 6 00:25:14.756023 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 6 00:25:14.759451 systemd[1]: Stopped target swap.target - Swaps. Nov 6 00:25:14.762284 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 6 00:25:14.762431 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 6 00:25:14.767502 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. 
Nov 6 00:25:14.770777 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 6 00:25:14.771653 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 6 00:25:14.777573 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 6 00:25:14.781933 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 6 00:25:14.782156 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 6 00:25:14.786759 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 6 00:25:14.787000 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 6 00:25:14.787961 systemd[1]: Stopped target paths.target - Path Units. Nov 6 00:25:14.794312 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 6 00:25:14.798005 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 6 00:25:14.798810 systemd[1]: Stopped target slices.target - Slice Units. Nov 6 00:25:14.799660 systemd[1]: Stopped target sockets.target - Socket Units. Nov 6 00:25:14.805873 systemd[1]: iscsid.socket: Deactivated successfully. Nov 6 00:25:14.806029 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 6 00:25:14.808569 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 6 00:25:14.808666 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 6 00:25:14.811440 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 6 00:25:14.811562 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 6 00:25:14.814517 systemd[1]: ignition-files.service: Deactivated successfully. Nov 6 00:25:14.814656 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 6 00:25:14.822216 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... 
Nov 6 00:25:14.823634 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 6 00:25:14.829332 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 6 00:25:14.829569 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 6 00:25:14.831152 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 6 00:25:14.831575 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Nov 6 00:25:14.844718 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 6 00:25:14.846299 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Nov 6 00:25:14.868041 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 6 00:25:14.873644 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 6 00:25:14.873787 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 6 00:25:14.881089 ignition[1066]: INFO : Ignition 2.22.0 Nov 6 00:25:14.881089 ignition[1066]: INFO : Stage: umount Nov 6 00:25:14.883516 ignition[1066]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 6 00:25:14.883516 ignition[1066]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 6 00:25:14.883516 ignition[1066]: INFO : umount: umount passed Nov 6 00:25:14.883516 ignition[1066]: INFO : Ignition finished successfully Nov 6 00:25:14.890936 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 6 00:25:14.891088 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 6 00:25:14.895465 systemd[1]: Stopped target network.target - Network. Nov 6 00:25:14.898150 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 6 00:25:14.898230 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 6 00:25:14.903793 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 6 00:25:14.903852 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). 
Nov 6 00:25:14.907583 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 6 00:25:14.907655 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 6 00:25:14.911405 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 6 00:25:14.911460 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Nov 6 00:25:14.914421 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 6 00:25:14.914482 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 6 00:25:14.917532 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 6 00:25:14.917998 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 6 00:25:14.927359 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 6 00:25:14.927550 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 6 00:25:14.934380 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Nov 6 00:25:14.934746 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 6 00:25:14.934802 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 6 00:25:14.940819 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Nov 6 00:25:14.950320 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 6 00:25:14.950496 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 6 00:25:14.956429 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Nov 6 00:25:14.956690 systemd[1]: Stopped target network-pre.target - Preparation for Network. Nov 6 00:25:14.957799 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 6 00:25:14.957843 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 6 00:25:14.967272 systemd[1]: Stopping network-cleanup.service - Network Cleanup... 
Nov 6 00:25:14.970520 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 6 00:25:14.970594 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 6 00:25:14.972984 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 6 00:25:14.973042 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 6 00:25:14.979701 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 6 00:25:14.979814 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 6 00:25:14.980675 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 6 00:25:14.991502 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Nov 6 00:25:15.008828 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 6 00:25:15.058318 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 6 00:25:15.060211 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 6 00:25:15.060279 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 6 00:25:15.064650 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 6 00:25:15.064698 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 6 00:25:15.065580 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 6 00:25:15.065656 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 6 00:25:15.073389 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 6 00:25:15.073485 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 6 00:25:15.078171 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 6 00:25:15.078270 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Nov 6 00:25:15.084877 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 6 00:25:15.086906 systemd[1]: systemd-network-generator.service: Deactivated successfully. Nov 6 00:25:15.086984 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Nov 6 00:25:15.095738 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 6 00:25:15.095850 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 6 00:25:15.100504 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Nov 6 00:25:15.100583 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 6 00:25:15.105180 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 6 00:25:15.105231 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 6 00:25:15.110928 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 6 00:25:15.110984 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 6 00:25:15.118185 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 6 00:25:15.118305 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 6 00:25:15.122945 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 6 00:25:15.123077 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 6 00:25:15.124996 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 6 00:25:15.130472 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 6 00:25:15.159825 systemd[1]: Switching root. Nov 6 00:25:15.197677 systemd-journald[201]: Journal stopped Nov 6 00:25:16.719310 systemd-journald[201]: Received SIGTERM from PID 1 (systemd). 
Nov 6 00:25:16.719417 kernel: SELinux: policy capability network_peer_controls=1 Nov 6 00:25:16.719446 kernel: SELinux: policy capability open_perms=1 Nov 6 00:25:16.719476 kernel: SELinux: policy capability extended_socket_class=1 Nov 6 00:25:16.719503 kernel: SELinux: policy capability always_check_network=0 Nov 6 00:25:16.719528 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 6 00:25:16.719554 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 6 00:25:16.719608 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 6 00:25:16.719636 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 6 00:25:16.719661 kernel: SELinux: policy capability userspace_initial_context=0 Nov 6 00:25:16.719687 kernel: audit: type=1403 audit(1762388715.616:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 6 00:25:16.719721 systemd[1]: Successfully loaded SELinux policy in 72.445ms. Nov 6 00:25:16.719757 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.715ms. Nov 6 00:25:16.719786 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Nov 6 00:25:16.719815 systemd[1]: Detected virtualization kvm. Nov 6 00:25:16.719843 systemd[1]: Detected architecture x86-64. Nov 6 00:25:16.719871 systemd[1]: Detected first boot. Nov 6 00:25:16.719916 systemd[1]: Initializing machine ID from VM UUID. Nov 6 00:25:16.719945 zram_generator::config[1112]: No configuration found. 
Nov 6 00:25:16.719973 kernel: Guest personality initialized and is inactive Nov 6 00:25:16.720002 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Nov 6 00:25:16.720020 kernel: Initialized host personality Nov 6 00:25:16.720034 kernel: NET: Registered PF_VSOCK protocol family Nov 6 00:25:16.720050 systemd[1]: Populated /etc with preset unit settings. Nov 6 00:25:16.720067 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Nov 6 00:25:16.720087 systemd[1]: initrd-switch-root.service: Deactivated successfully. Nov 6 00:25:16.720107 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Nov 6 00:25:16.720123 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Nov 6 00:25:16.720140 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Nov 6 00:25:16.720159 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Nov 6 00:25:16.720176 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Nov 6 00:25:16.720205 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Nov 6 00:25:16.720229 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Nov 6 00:25:16.720246 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Nov 6 00:25:16.720261 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Nov 6 00:25:16.720273 systemd[1]: Created slice user.slice - User and Session Slice. Nov 6 00:25:16.720285 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 6 00:25:16.720298 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 6 00:25:16.720314 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. 
Nov 6 00:25:16.720331 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Nov 6 00:25:16.720343 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Nov 6 00:25:16.720356 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 6 00:25:16.720369 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Nov 6 00:25:16.720381 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 6 00:25:16.720393 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 6 00:25:16.720407 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Nov 6 00:25:16.720419 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Nov 6 00:25:16.720431 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Nov 6 00:25:16.720443 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Nov 6 00:25:16.720458 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 6 00:25:16.720470 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 6 00:25:16.720482 systemd[1]: Reached target slices.target - Slice Units. Nov 6 00:25:16.720494 systemd[1]: Reached target swap.target - Swaps. Nov 6 00:25:16.720506 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Nov 6 00:25:16.720521 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Nov 6 00:25:16.720533 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Nov 6 00:25:16.720545 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 6 00:25:16.720557 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 6 00:25:16.720579 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Nov 6 00:25:16.720593 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Nov 6 00:25:16.720605 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Nov 6 00:25:16.720617 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Nov 6 00:25:16.720629 systemd[1]: Mounting media.mount - External Media Directory... Nov 6 00:25:16.720644 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 6 00:25:16.720659 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Nov 6 00:25:16.720673 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Nov 6 00:25:16.720687 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Nov 6 00:25:16.720700 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 6 00:25:16.720712 systemd[1]: Reached target machines.target - Containers. Nov 6 00:25:16.720724 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Nov 6 00:25:16.720738 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 6 00:25:16.720755 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 6 00:25:16.720769 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Nov 6 00:25:16.720782 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 6 00:25:16.720794 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 6 00:25:16.720806 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 6 00:25:16.720818 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... 
Nov 6 00:25:16.720829 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 6 00:25:16.720842 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 6 00:25:16.720853 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Nov 6 00:25:16.720868 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Nov 6 00:25:16.720880 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Nov 6 00:25:16.720913 systemd[1]: Stopped systemd-fsck-usr.service. Nov 6 00:25:16.720930 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 6 00:25:16.720946 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 6 00:25:16.720959 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 6 00:25:16.720971 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 6 00:25:16.720983 kernel: loop: module loaded Nov 6 00:25:16.720995 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Nov 6 00:25:16.721011 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Nov 6 00:25:16.721024 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 6 00:25:16.721037 kernel: ACPI: bus type drm_connector registered Nov 6 00:25:16.721050 systemd[1]: verity-setup.service: Deactivated successfully. Nov 6 00:25:16.721062 systemd[1]: Stopped verity-setup.service. 
Nov 6 00:25:16.721077 kernel: fuse: init (API version 7.41) Nov 6 00:25:16.721089 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 6 00:25:16.721101 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Nov 6 00:25:16.721113 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Nov 6 00:25:16.721151 systemd-journald[1194]: Collecting audit messages is disabled. Nov 6 00:25:16.721174 systemd[1]: Mounted media.mount - External Media Directory. Nov 6 00:25:16.721187 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Nov 6 00:25:16.721202 systemd-journald[1194]: Journal started Nov 6 00:25:16.721225 systemd-journald[1194]: Runtime Journal (/run/log/journal/918d2427bab841b78fe8d2572f3a56b1) is 6M, max 48.3M, 42.2M free. Nov 6 00:25:16.346155 systemd[1]: Queued start job for default target multi-user.target. Nov 6 00:25:16.360056 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Nov 6 00:25:16.360680 systemd[1]: systemd-journald.service: Deactivated successfully. Nov 6 00:25:16.726026 systemd[1]: Started systemd-journald.service - Journal Service. Nov 6 00:25:16.728332 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Nov 6 00:25:16.730322 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Nov 6 00:25:16.732445 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Nov 6 00:25:16.734902 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 6 00:25:16.737292 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 6 00:25:16.737560 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Nov 6 00:25:16.740038 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 6 00:25:16.740263 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Nov 6 00:25:16.742458 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 6 00:25:16.742709 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 6 00:25:16.744964 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 6 00:25:16.745232 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 6 00:25:16.747703 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 6 00:25:16.748070 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Nov 6 00:25:16.750318 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 6 00:25:16.750608 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 6 00:25:16.752906 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 6 00:25:16.755846 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 6 00:25:16.758703 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Nov 6 00:25:16.761482 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Nov 6 00:25:16.779422 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 6 00:25:16.783683 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Nov 6 00:25:16.787447 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Nov 6 00:25:16.789620 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 6 00:25:16.789674 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 6 00:25:16.792977 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Nov 6 00:25:16.797128 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... 
Nov 6 00:25:16.800120 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 6 00:25:16.804021 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Nov 6 00:25:16.808516 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Nov 6 00:25:16.811427 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 6 00:25:16.814105 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Nov 6 00:25:16.816712 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 6 00:25:16.823296 systemd-journald[1194]: Time spent on flushing to /var/log/journal/918d2427bab841b78fe8d2572f3a56b1 is 33.487ms for 984 entries. Nov 6 00:25:16.823296 systemd-journald[1194]: System Journal (/var/log/journal/918d2427bab841b78fe8d2572f3a56b1) is 8M, max 195.6M, 187.6M free. Nov 6 00:25:16.889029 systemd-journald[1194]: Received client request to flush runtime journal. Nov 6 00:25:16.889107 kernel: loop0: detected capacity change from 0 to 110984 Nov 6 00:25:16.889139 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 6 00:25:16.820044 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 6 00:25:16.823593 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Nov 6 00:25:16.828727 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 6 00:25:16.838173 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 6 00:25:16.841981 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. 
Nov 6 00:25:16.844495 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Nov 6 00:25:16.852255 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Nov 6 00:25:16.862748 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Nov 6 00:25:16.869177 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Nov 6 00:25:16.878635 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 6 00:25:16.890971 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Nov 6 00:25:16.898713 systemd-tmpfiles[1232]: ACLs are not supported, ignoring. Nov 6 00:25:16.898745 systemd-tmpfiles[1232]: ACLs are not supported, ignoring. Nov 6 00:25:16.908488 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 6 00:25:16.915147 systemd[1]: Starting systemd-sysusers.service - Create System Users... Nov 6 00:25:16.926038 kernel: loop1: detected capacity change from 0 to 128016 Nov 6 00:25:16.932573 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Nov 6 00:25:16.981945 kernel: loop2: detected capacity change from 0 to 229808 Nov 6 00:25:16.987706 systemd[1]: Finished systemd-sysusers.service - Create System Users. Nov 6 00:25:16.993259 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 6 00:25:17.037594 systemd-tmpfiles[1252]: ACLs are not supported, ignoring. Nov 6 00:25:17.037616 systemd-tmpfiles[1252]: ACLs are not supported, ignoring. Nov 6 00:25:17.042596 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Nov 6 00:25:17.055933 kernel: loop3: detected capacity change from 0 to 110984 Nov 6 00:25:17.072915 kernel: loop4: detected capacity change from 0 to 128016 Nov 6 00:25:17.090922 kernel: loop5: detected capacity change from 0 to 229808 Nov 6 00:25:17.246737 (sd-merge)[1256]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Nov 6 00:25:17.249032 (sd-merge)[1256]: Merged extensions into '/usr'. Nov 6 00:25:17.255046 systemd[1]: Reload requested from client PID 1231 ('systemd-sysext') (unit systemd-sysext.service)... Nov 6 00:25:17.255074 systemd[1]: Reloading... Nov 6 00:25:17.394014 zram_generator::config[1281]: No configuration found. Nov 6 00:25:17.873331 ldconfig[1226]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 6 00:25:17.906179 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 6 00:25:17.906822 systemd[1]: Reloading finished in 651 ms. Nov 6 00:25:17.939574 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Nov 6 00:25:17.956718 systemd[1]: Starting ensure-sysext.service... Nov 6 00:25:17.959459 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 6 00:25:17.980332 systemd-tmpfiles[1319]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Nov 6 00:25:17.980372 systemd-tmpfiles[1319]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Nov 6 00:25:17.980753 systemd-tmpfiles[1319]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 6 00:25:17.981172 systemd-tmpfiles[1319]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Nov 6 00:25:17.982180 systemd-tmpfiles[1319]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. 
Nov 6 00:25:17.982505 systemd-tmpfiles[1319]: ACLs are not supported, ignoring.
Nov 6 00:25:17.982594 systemd-tmpfiles[1319]: ACLs are not supported, ignoring.
Nov 6 00:25:17.988498 systemd[1]: Reload requested from client PID 1318 ('systemctl') (unit ensure-sysext.service)...
Nov 6 00:25:17.988524 systemd[1]: Reloading...
Nov 6 00:25:18.003303 systemd-tmpfiles[1319]: Detected autofs mount point /boot during canonicalization of boot.
Nov 6 00:25:18.003326 systemd-tmpfiles[1319]: Skipping /boot
Nov 6 00:25:18.015940 systemd-tmpfiles[1319]: Detected autofs mount point /boot during canonicalization of boot.
Nov 6 00:25:18.016076 systemd-tmpfiles[1319]: Skipping /boot
Nov 6 00:25:18.068375 zram_generator::config[1347]: No configuration found.
Nov 6 00:25:18.546586 systemd[1]: Reloading finished in 557 ms.
Nov 6 00:25:18.567399 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Nov 6 00:25:18.569813 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Nov 6 00:25:18.595298 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 6 00:25:18.608709 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Nov 6 00:25:18.612622 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Nov 6 00:25:18.638275 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Nov 6 00:25:18.643784 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 6 00:25:18.648380 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 6 00:25:18.658569 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Nov 6 00:25:18.663971 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 6 00:25:18.664152 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 6 00:25:18.668308 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 6 00:25:18.674841 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 6 00:25:18.680690 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 6 00:25:18.683018 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 6 00:25:18.683135 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 6 00:25:18.685342 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Nov 6 00:25:18.687420 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 6 00:25:18.688662 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 6 00:25:18.690462 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 6 00:25:18.695654 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 6 00:25:18.696622 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 6 00:25:18.699671 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 6 00:25:18.700415 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 6 00:25:18.713739 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Nov 6 00:25:18.716279 systemd-udevd[1395]: Using default interface naming scheme 'v255'.
Nov 6 00:25:18.749169 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 6 00:25:18.749488 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 6 00:25:18.751411 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 6 00:25:18.755313 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 6 00:25:18.758753 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 6 00:25:18.761058 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 6 00:25:18.761236 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 6 00:25:18.761333 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 6 00:25:18.764725 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 6 00:25:18.764986 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 6 00:25:18.772176 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 6 00:25:18.808573 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 6 00:25:18.808764 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 6 00:25:18.808940 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 6 00:25:18.810582 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 6 00:25:18.810848 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 6 00:25:18.813382 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 6 00:25:18.813601 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 6 00:25:18.816029 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Nov 6 00:25:18.818418 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 6 00:25:18.818639 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 6 00:25:18.821640 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 6 00:25:18.821950 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 6 00:25:18.825221 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Nov 6 00:25:18.837080 systemd[1]: Finished ensure-sysext.service.
Nov 6 00:25:18.845445 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Nov 6 00:25:18.849395 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 6 00:25:18.865133 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 6 00:25:18.867075 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 6 00:25:18.867195 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 6 00:25:18.870862 augenrules[1463]: No rules
Nov 6 00:25:18.871869 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Nov 6 00:25:18.876191 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Nov 6 00:25:18.878103 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Nov 6 00:25:18.880904 systemd[1]: audit-rules.service: Deactivated successfully.
Nov 6 00:25:18.883264 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Nov 6 00:25:18.911124 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Nov 6 00:25:18.935617 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Nov 6 00:25:19.018934 kernel: mousedev: PS/2 mouse device common for all mice
Nov 6 00:25:19.083926 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Nov 6 00:25:19.109126 systemd-networkd[1460]: lo: Link UP
Nov 6 00:25:19.109143 systemd-networkd[1460]: lo: Gained carrier
Nov 6 00:25:19.114347 systemd-resolved[1391]: Positive Trust Anchors:
Nov 6 00:25:19.114369 systemd-resolved[1391]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 6 00:25:19.114411 systemd-resolved[1391]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 6 00:25:19.120626 systemd-resolved[1391]: Defaulting to hostname 'linux'.
Nov 6 00:25:19.123079 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 6 00:25:19.125536 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 6 00:25:19.133161 systemd-networkd[1460]: Enumeration completed
Nov 6 00:25:19.133877 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 6 00:25:19.135848 systemd-networkd[1460]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 6 00:25:19.135864 systemd-networkd[1460]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 6 00:25:19.136258 systemd[1]: Reached target network.target - Network.
Nov 6 00:25:19.140398 systemd-networkd[1460]: eth0: Link UP
Nov 6 00:25:19.140664 systemd-networkd[1460]: eth0: Gained carrier
Nov 6 00:25:19.140704 systemd-networkd[1460]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 6 00:25:19.141685 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Nov 6 00:25:19.146914 kernel: ACPI: button: Power Button [PWRF]
Nov 6 00:25:19.152950 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Nov 6 00:25:19.153365 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Nov 6 00:25:19.154694 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Nov 6 00:25:19.157709 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Nov 6 00:25:19.181780 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Nov 6 00:25:19.182706 systemd-networkd[1460]: eth0: DHCPv4 address 10.0.0.113/16, gateway 10.0.0.1 acquired from 10.0.0.1
Nov 6 00:25:19.184538 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 6 00:25:19.186523 systemd-timesyncd[1469]: Network configuration changed, trying to establish connection.
Nov 6 00:25:19.186736 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Nov 6 00:25:19.189304 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Nov 6 00:25:19.191348 systemd-timesyncd[1469]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Nov 6 00:25:19.191843 systemd-timesyncd[1469]: Initial clock synchronization to Thu 2025-11-06 00:25:18.855622 UTC.
Nov 6 00:25:19.192133 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Nov 6 00:25:19.194505 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Nov 6 00:25:19.197139 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Nov 6 00:25:19.197187 systemd[1]: Reached target paths.target - Path Units.
Nov 6 00:25:19.200003 systemd[1]: Reached target time-set.target - System Time Set.
Nov 6 00:25:19.202327 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Nov 6 00:25:19.204592 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Nov 6 00:25:19.207021 systemd[1]: Reached target timers.target - Timer Units.
Nov 6 00:25:19.212143 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Nov 6 00:25:19.217588 systemd[1]: Starting docker.socket - Docker Socket for the API...
Nov 6 00:25:19.226332 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Nov 6 00:25:19.228968 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Nov 6 00:25:19.231481 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Nov 6 00:25:19.245082 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Nov 6 00:25:19.289490 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Nov 6 00:25:19.295124 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Nov 6 00:25:19.298634 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Nov 6 00:25:19.301360 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Nov 6 00:25:19.312379 systemd[1]: Reached target sockets.target - Socket Units.
Nov 6 00:25:19.314221 systemd[1]: Reached target basic.target - Basic System.
Nov 6 00:25:19.316075 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Nov 6 00:25:19.316166 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Nov 6 00:25:19.319144 systemd[1]: Starting containerd.service - containerd container runtime...
Nov 6 00:25:19.327683 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Nov 6 00:25:19.336485 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Nov 6 00:25:19.400097 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Nov 6 00:25:19.403122 kernel: kvm_amd: TSC scaling supported
Nov 6 00:25:19.403158 kernel: kvm_amd: Nested Virtualization enabled
Nov 6 00:25:19.403181 kernel: kvm_amd: Nested Paging enabled
Nov 6 00:25:19.404164 kernel: kvm_amd: LBR virtualization supported
Nov 6 00:25:19.405335 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Nov 6 00:25:19.406545 kernel: kvm_amd: Virtual GIF supported
Nov 6 00:25:19.410262 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Nov 6 00:25:19.412357 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Nov 6 00:25:19.426792 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Nov 6 00:25:19.431054 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Nov 6 00:25:19.433783 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Nov 6 00:25:19.436794 jq[1511]: false
Nov 6 00:25:19.441298 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Nov 6 00:25:19.445922 google_oslogin_nss_cache[1513]: oslogin_cache_refresh[1513]: Refreshing passwd entry cache
Nov 6 00:25:19.445684 oslogin_cache_refresh[1513]: Refreshing passwd entry cache
Nov 6 00:25:19.446317 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Nov 6 00:25:19.454808 systemd[1]: Starting systemd-logind.service - User Login Management...
Nov 6 00:25:19.465743 google_oslogin_nss_cache[1513]: oslogin_cache_refresh[1513]: Failure getting users, quitting
Nov 6 00:25:19.465743 google_oslogin_nss_cache[1513]: oslogin_cache_refresh[1513]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Nov 6 00:25:19.465743 google_oslogin_nss_cache[1513]: oslogin_cache_refresh[1513]: Refreshing group entry cache
Nov 6 00:25:19.465591 oslogin_cache_refresh[1513]: Failure getting users, quitting
Nov 6 00:25:19.465621 oslogin_cache_refresh[1513]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Nov 6 00:25:19.465708 oslogin_cache_refresh[1513]: Refreshing group entry cache
Nov 6 00:25:19.468032 extend-filesystems[1512]: Found /dev/vda6
Nov 6 00:25:19.472967 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 6 00:25:19.482511 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Nov 6 00:25:19.483370 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Nov 6 00:25:19.484532 systemd[1]: Starting update-engine.service - Update Engine...
Nov 6 00:25:19.488136 extend-filesystems[1512]: Found /dev/vda9
Nov 6 00:25:19.485097 oslogin_cache_refresh[1513]: Failure getting groups, quitting
Nov 6 00:25:19.491982 google_oslogin_nss_cache[1513]: oslogin_cache_refresh[1513]: Failure getting groups, quitting
Nov 6 00:25:19.491982 google_oslogin_nss_cache[1513]: oslogin_cache_refresh[1513]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Nov 6 00:25:19.488352 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Nov 6 00:25:19.485117 oslogin_cache_refresh[1513]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Nov 6 00:25:19.495108 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Nov 6 00:25:19.504928 kernel: EDAC MC: Ver: 3.0.0
Nov 6 00:25:19.505362 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Nov 6 00:25:19.508141 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Nov 6 00:25:19.508394 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Nov 6 00:25:19.508926 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Nov 6 00:25:19.509179 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Nov 6 00:25:19.511627 systemd[1]: motdgen.service: Deactivated successfully.
Nov 6 00:25:19.511929 jq[1533]: true
Nov 6 00:25:19.512580 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Nov 6 00:25:19.518397 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Nov 6 00:25:19.524441 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Nov 6 00:25:19.526094 update_engine[1532]: I20251106 00:25:19.525959 1532 main.cc:92] Flatcar Update Engine starting
Nov 6 00:25:19.537859 (ntainerd)[1541]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Nov 6 00:25:19.540225 jq[1540]: true
Nov 6 00:25:19.542165 extend-filesystems[1512]: Checking size of /dev/vda9
Nov 6 00:25:19.550759 tar[1538]: linux-amd64/LICENSE
Nov 6 00:25:19.551283 tar[1538]: linux-amd64/helm
Nov 6 00:25:19.715910 systemd-logind[1523]: Watching system buttons on /dev/input/event2 (Power Button)
Nov 6 00:25:19.715941 systemd-logind[1523]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Nov 6 00:25:19.716389 systemd-logind[1523]: New seat seat0.
Nov 6 00:25:19.719164 systemd[1]: Started systemd-logind.service - User Login Management.
Nov 6 00:25:19.766659 extend-filesystems[1512]: Resized partition /dev/vda9
Nov 6 00:25:19.857307 sshd_keygen[1531]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Nov 6 00:25:19.845875 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Nov 6 00:25:19.861024 systemd[1]: Starting issuegen.service - Generate /run/issue...
Nov 6 00:25:19.894876 dbus-daemon[1508]: [system] SELinux support is enabled
Nov 6 00:25:19.895271 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Nov 6 00:25:19.914199 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Nov 6 00:25:19.914254 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Nov 6 00:25:19.914377 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Nov 6 00:25:19.914391 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Nov 6 00:25:19.926049 dbus-daemon[1508]: [system] Successfully activated service 'org.freedesktop.systemd1'
Nov 6 00:25:19.937189 update_engine[1532]: I20251106 00:25:19.930655 1532 update_check_scheduler.cc:74] Next update check in 7m19s
Nov 6 00:25:19.931330 systemd[1]: Started update-engine.service - Update Engine.
Nov 6 00:25:19.964914 systemd[1]: issuegen.service: Deactivated successfully.
Nov 6 00:25:19.965243 systemd[1]: Finished issuegen.service - Generate /run/issue.
Nov 6 00:25:19.985270 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Nov 6 00:25:19.987945 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Nov 6 00:25:20.016658 extend-filesystems[1585]: resize2fs 1.47.3 (8-Jul-2025)
Nov 6 00:25:20.062172 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Nov 6 00:25:20.172491 systemd[1]: Started getty@tty1.service - Getty on tty1.
Nov 6 00:25:20.196713 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Nov 6 00:25:20.197961 systemd[1]: Reached target getty.target - Login Prompts.
Nov 6 00:25:20.250423 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Nov 6 00:25:20.271539 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 6 00:25:20.315723 tar[1538]: linux-amd64/README.md
Nov 6 00:25:20.377824 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Nov 6 00:25:20.437217 bash[1569]: Updated "/home/core/.ssh/authorized_keys"
Nov 6 00:25:20.438903 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Nov 6 00:25:20.440599 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Nov 6 00:25:21.234330 containerd[1541]: time="2025-11-06T00:25:20Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Nov 6 00:25:20.443691 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Nov 6 00:25:20.566318 locksmithd[1591]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Nov 6 00:25:20.644174 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Nov 6 00:25:21.235709 containerd[1541]: time="2025-11-06T00:25:21.235364215Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5
Nov 6 00:25:20.647554 systemd[1]: Started sshd@0-10.0.0.113:22-10.0.0.1:54540.service - OpenSSH per-connection server daemon (10.0.0.1:54540).
Nov 6 00:25:20.961108 systemd-networkd[1460]: eth0: Gained IPv6LL
Nov 6 00:25:20.965424 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Nov 6 00:25:20.968498 systemd[1]: Reached target network-online.target - Network is Online.
Nov 6 00:25:20.972503 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Nov 6 00:25:21.236557 extend-filesystems[1585]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Nov 6 00:25:21.236557 extend-filesystems[1585]: old_desc_blocks = 1, new_desc_blocks = 1
Nov 6 00:25:21.236557 extend-filesystems[1585]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Nov 6 00:25:21.249221 extend-filesystems[1512]: Resized filesystem in /dev/vda9
Nov 6 00:25:21.237585 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 6 00:25:21.243292 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Nov 6 00:25:21.245706 systemd[1]: extend-filesystems.service: Deactivated successfully.
Nov 6 00:25:21.247074 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Nov 6 00:25:21.261164 containerd[1541]: time="2025-11-06T00:25:21.261079552Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="16.936µs"
Nov 6 00:25:21.261164 containerd[1541]: time="2025-11-06T00:25:21.261134283Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Nov 6 00:25:21.261164 containerd[1541]: time="2025-11-06T00:25:21.261172414Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Nov 6 00:25:21.261502 containerd[1541]: time="2025-11-06T00:25:21.261462862Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Nov 6 00:25:21.261502 containerd[1541]: time="2025-11-06T00:25:21.261493707Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Nov 6 00:25:21.261555 containerd[1541]: time="2025-11-06T00:25:21.261535467Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Nov 6 00:25:21.261654 containerd[1541]: time="2025-11-06T00:25:21.261627683Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Nov 6 00:25:21.261683 containerd[1541]: time="2025-11-06T00:25:21.261651241Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Nov 6 00:25:21.262072 containerd[1541]: time="2025-11-06T00:25:21.262032255Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Nov 6 00:25:21.262072 containerd[1541]: time="2025-11-06T00:25:21.262054510Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Nov 6 00:25:21.262072 containerd[1541]: time="2025-11-06T00:25:21.262065522Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Nov 6 00:25:21.262072 containerd[1541]: time="2025-11-06T00:25:21.262073272Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Nov 6 00:25:21.262222 containerd[1541]: time="2025-11-06T00:25:21.262197201Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Nov 6 00:25:21.262590 containerd[1541]: time="2025-11-06T00:25:21.262554714Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Nov 6 00:25:21.262629 containerd[1541]: time="2025-11-06T00:25:21.262609493Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Nov 6 00:25:21.262629 containerd[1541]: time="2025-11-06T00:25:21.262625070Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Nov 6 00:25:21.262715 containerd[1541]: time="2025-11-06T00:25:21.262683227Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Nov 6 00:25:21.263258 containerd[1541]: time="2025-11-06T00:25:21.263175102Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Nov 6 00:25:21.263464 containerd[1541]: time="2025-11-06T00:25:21.263381721Z" level=info msg="metadata content store policy set" policy=shared
Nov 6 00:25:21.274645 sshd[1612]: Accepted publickey for core from 10.0.0.1 port 54540 ssh2: RSA SHA256:PmSYF5WO1c+PbjRA1Pm6yQw5/JNmNUR55sY7don0Q4E
Nov 6 00:25:21.277202 sshd-session[1612]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 6 00:25:21.278344 containerd[1541]: time="2025-11-06T00:25:21.278292871Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Nov 6 00:25:21.278435 containerd[1541]: time="2025-11-06T00:25:21.278404978Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Nov 6 00:25:21.279134 containerd[1541]: time="2025-11-06T00:25:21.279089642Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Nov 6 00:25:21.279134 containerd[1541]: time="2025-11-06T00:25:21.279127049Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Nov 6 00:25:21.279198 containerd[1541]: time="2025-11-06T00:25:21.279145299Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Nov 6 00:25:21.279198 containerd[1541]: time="2025-11-06T00:25:21.279161831Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Nov 6 00:25:21.279198 containerd[1541]: time="2025-11-06T00:25:21.279181607Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Nov 6 00:25:21.279263 containerd[1541]: time="2025-11-06T00:25:21.279197270Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Nov 6 00:25:21.279263 containerd[1541]: time="2025-11-06T00:25:21.279214633Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Nov 6 00:25:21.279263 containerd[1541]: time="2025-11-06T00:25:21.279230643Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Nov 6 00:25:21.279263 containerd[1541]: time="2025-11-06T00:25:21.279243914Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Nov 6 00:25:21.279263 containerd[1541]: time="2025-11-06T00:25:21.279259635Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Nov 6 00:25:21.279497 containerd[1541]: time="2025-11-06T00:25:21.279463813Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Nov 6 00:25:21.279547 containerd[1541]: time="2025-11-06T00:25:21.279497929Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Nov 6 00:25:21.279547 containerd[1541]: time="2025-11-06T00:25:21.279523871Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Nov 6 00:25:21.279594 containerd[1541]: time="2025-11-06T00:25:21.279548549Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Nov 6 00:25:21.279594 containerd[1541]: time="2025-11-06T00:25:21.279564261Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Nov 6 00:25:21.279594 containerd[1541]: time="2025-11-06T00:25:21.279579065Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Nov 6 00:25:21.279669 containerd[1541]: time="2025-11-06T00:25:21.279594960Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Nov 6 00:25:21.279669 containerd[1541]: time="2025-11-06T00:25:21.279612988Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Nov 6 00:25:21.279669 containerd[1541]: time="2025-11-06T00:25:21.279630168Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Nov 6 00:25:21.279669 containerd[1541]: time="2025-11-06T00:25:21.279644229Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Nov 6 00:25:21.279669 containerd[1541]: time="2025-11-06T00:25:21.279657780Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Nov 6 00:25:21.279808 containerd[1541]: time="2025-11-06T00:25:21.279785801Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Nov 6 00:25:21.279866 containerd[1541]: time="2025-11-06T00:25:21.279814011Z" level=info msg="Start snapshots syncer"
Nov 6 00:25:21.280016 containerd[1541]: time="2025-11-06T00:25:21.279923492Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Nov 6 00:25:21.281121 containerd[1541]: time="2025-11-06T00:25:21.281069447Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":tr
ue,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Nov 6 00:25:21.281359 containerd[1541]: time="2025-11-06T00:25:21.281339010Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Nov 6 00:25:21.281615 containerd[1541]: time="2025-11-06T00:25:21.281596857Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Nov 6 00:25:21.281951 containerd[1541]: time="2025-11-06T00:25:21.281791152Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Nov 6 00:25:21.281951 containerd[1541]: time="2025-11-06T00:25:21.281822614Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Nov 6 00:25:21.281951 containerd[1541]: time="2025-11-06T00:25:21.281836512Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Nov 6 00:25:21.281951 containerd[1541]: time="2025-11-06T00:25:21.281852803Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Nov 6 00:25:21.282089 containerd[1541]: time="2025-11-06T00:25:21.281868321Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Nov 6 00:25:21.282170 containerd[1541]: time="2025-11-06T00:25:21.282152226Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 
Nov 6 00:25:21.282234 containerd[1541]: time="2025-11-06T00:25:21.282220054Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Nov 6 00:25:21.282327 containerd[1541]: time="2025-11-06T00:25:21.282312646Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Nov 6 00:25:21.282394 containerd[1541]: time="2025-11-06T00:25:21.282379498Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Nov 6 00:25:21.282456 containerd[1541]: time="2025-11-06T00:25:21.282441593Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Nov 6 00:25:21.282615 containerd[1541]: time="2025-11-06T00:25:21.282565368Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 6 00:25:21.282738 containerd[1541]: time="2025-11-06T00:25:21.282591889Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 6 00:25:21.282799 containerd[1541]: time="2025-11-06T00:25:21.282785074Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 6 00:25:21.282868 containerd[1541]: time="2025-11-06T00:25:21.282853249Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 6 00:25:21.282964 containerd[1541]: time="2025-11-06T00:25:21.282950049Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Nov 6 00:25:21.283034 containerd[1541]: time="2025-11-06T00:25:21.283020859Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Nov 6 00:25:21.283104 containerd[1541]: time="2025-11-06T00:25:21.283089536Z" 
level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Nov 6 00:25:21.284710 containerd[1541]: time="2025-11-06T00:25:21.284531575Z" level=info msg="runtime interface created" Nov 6 00:25:21.284710 containerd[1541]: time="2025-11-06T00:25:21.284575969Z" level=info msg="created NRI interface" Nov 6 00:25:21.284710 containerd[1541]: time="2025-11-06T00:25:21.284608754Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Nov 6 00:25:21.284937 containerd[1541]: time="2025-11-06T00:25:21.284906035Z" level=info msg="Connect containerd service" Nov 6 00:25:21.285017 containerd[1541]: time="2025-11-06T00:25:21.284993425Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 6 00:25:21.285581 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 6 00:25:21.286181 containerd[1541]: time="2025-11-06T00:25:21.286141726Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 6 00:25:21.292156 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 6 00:25:21.294791 systemd[1]: coreos-metadata.service: Deactivated successfully. Nov 6 00:25:21.295825 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Nov 6 00:25:21.304495 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 6 00:25:21.310810 systemd-logind[1523]: New session 1 of user core. Nov 6 00:25:21.316673 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 6 00:25:21.329085 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 6 00:25:21.335664 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Nov 6 00:25:21.470814 (systemd)[1641]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 6 00:25:21.474799 systemd-logind[1523]: New session c1 of user core. Nov 6 00:25:21.627984 containerd[1541]: time="2025-11-06T00:25:21.627554351Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 6 00:25:21.627984 containerd[1541]: time="2025-11-06T00:25:21.627653237Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 6 00:25:21.627984 containerd[1541]: time="2025-11-06T00:25:21.627704927Z" level=info msg="Start subscribing containerd event" Nov 6 00:25:21.627984 containerd[1541]: time="2025-11-06T00:25:21.627740327Z" level=info msg="Start recovering state" Nov 6 00:25:21.631104 containerd[1541]: time="2025-11-06T00:25:21.631064242Z" level=info msg="Start event monitor" Nov 6 00:25:21.631264 containerd[1541]: time="2025-11-06T00:25:21.631245663Z" level=info msg="Start cni network conf syncer for default" Nov 6 00:25:21.631412 containerd[1541]: time="2025-11-06T00:25:21.631315141Z" level=info msg="Start streaming server" Nov 6 00:25:21.631412 containerd[1541]: time="2025-11-06T00:25:21.631341430Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Nov 6 00:25:21.631412 containerd[1541]: time="2025-11-06T00:25:21.631356187Z" level=info msg="runtime interface starting up..." Nov 6 00:25:21.631608 containerd[1541]: time="2025-11-06T00:25:21.631364197Z" level=info msg="starting plugins..." Nov 6 00:25:21.631608 containerd[1541]: time="2025-11-06T00:25:21.631564369Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Nov 6 00:25:21.634434 containerd[1541]: time="2025-11-06T00:25:21.632268037Z" level=info msg="containerd successfully booted in 1.055328s" Nov 6 00:25:21.632341 systemd[1]: Started containerd.service - containerd container runtime. Nov 6 00:25:21.734144 systemd[1641]: Queued start job for default target default.target. 
Nov 6 00:25:21.840151 systemd[1641]: Created slice app.slice - User Application Slice. Nov 6 00:25:21.840184 systemd[1641]: Reached target paths.target - Paths. Nov 6 00:25:21.840230 systemd[1641]: Reached target timers.target - Timers. Nov 6 00:25:21.842003 systemd[1641]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 6 00:25:21.863482 systemd[1641]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 6 00:25:21.863664 systemd[1641]: Reached target sockets.target - Sockets. Nov 6 00:25:21.865272 systemd[1641]: Reached target basic.target - Basic System. Nov 6 00:25:21.865505 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 6 00:25:21.866065 systemd[1641]: Reached target default.target - Main User Target. Nov 6 00:25:21.866113 systemd[1641]: Startup finished in 361ms. Nov 6 00:25:21.870178 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 6 00:25:21.950277 systemd[1]: Started sshd@1-10.0.0.113:22-10.0.0.1:54550.service - OpenSSH per-connection server daemon (10.0.0.1:54550). Nov 6 00:25:22.229347 sshd[1664]: Accepted publickey for core from 10.0.0.1 port 54550 ssh2: RSA SHA256:PmSYF5WO1c+PbjRA1Pm6yQw5/JNmNUR55sY7don0Q4E Nov 6 00:25:22.231027 sshd-session[1664]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:25:22.236450 systemd-logind[1523]: New session 2 of user core. Nov 6 00:25:22.313219 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 6 00:25:22.414188 sshd[1667]: Connection closed by 10.0.0.1 port 54550 Nov 6 00:25:22.414557 sshd-session[1664]: pam_unix(sshd:session): session closed for user core Nov 6 00:25:22.434080 systemd[1]: sshd@1-10.0.0.113:22-10.0.0.1:54550.service: Deactivated successfully. Nov 6 00:25:22.436333 systemd[1]: session-2.scope: Deactivated successfully. Nov 6 00:25:22.437364 systemd-logind[1523]: Session 2 logged out. Waiting for processes to exit. 
Nov 6 00:25:22.439929 systemd[1]: Started sshd@2-10.0.0.113:22-10.0.0.1:54560.service - OpenSSH per-connection server daemon (10.0.0.1:54560). Nov 6 00:25:22.443719 systemd-logind[1523]: Removed session 2. Nov 6 00:25:22.586047 sshd[1673]: Accepted publickey for core from 10.0.0.1 port 54560 ssh2: RSA SHA256:PmSYF5WO1c+PbjRA1Pm6yQw5/JNmNUR55sY7don0Q4E Nov 6 00:25:22.587781 sshd-session[1673]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:25:22.592571 systemd-logind[1523]: New session 3 of user core. Nov 6 00:25:22.602076 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 6 00:25:22.675197 sshd[1676]: Connection closed by 10.0.0.1 port 54560 Nov 6 00:25:22.675617 sshd-session[1673]: pam_unix(sshd:session): session closed for user core Nov 6 00:25:22.680419 systemd[1]: sshd@2-10.0.0.113:22-10.0.0.1:54560.service: Deactivated successfully. Nov 6 00:25:22.682386 systemd[1]: session-3.scope: Deactivated successfully. Nov 6 00:25:22.683307 systemd-logind[1523]: Session 3 logged out. Waiting for processes to exit. Nov 6 00:25:22.684717 systemd-logind[1523]: Removed session 3. Nov 6 00:25:24.068714 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 00:25:24.071140 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 6 00:25:24.086672 (kubelet)[1686]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 6 00:25:24.088475 systemd[1]: Startup finished in 3.735s (kernel) + 8.895s (initrd) + 8.541s (userspace) = 21.172s. 
Nov 6 00:25:24.959998 kubelet[1686]: E1106 00:25:24.959895 1686 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 6 00:25:24.965176 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 6 00:25:24.965374 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 6 00:25:24.965809 systemd[1]: kubelet.service: Consumed 3.492s CPU time, 267.6M memory peak. Nov 6 00:25:32.525037 systemd[1]: Started sshd@3-10.0.0.113:22-10.0.0.1:39588.service - OpenSSH per-connection server daemon (10.0.0.1:39588). Nov 6 00:25:32.585400 sshd[1699]: Accepted publickey for core from 10.0.0.1 port 39588 ssh2: RSA SHA256:PmSYF5WO1c+PbjRA1Pm6yQw5/JNmNUR55sY7don0Q4E Nov 6 00:25:32.587306 sshd-session[1699]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:25:32.591902 systemd-logind[1523]: New session 4 of user core. Nov 6 00:25:32.610105 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 6 00:25:32.663760 sshd[1702]: Connection closed by 10.0.0.1 port 39588 Nov 6 00:25:32.664291 sshd-session[1699]: pam_unix(sshd:session): session closed for user core Nov 6 00:25:32.677614 systemd[1]: sshd@3-10.0.0.113:22-10.0.0.1:39588.service: Deactivated successfully. Nov 6 00:25:32.679502 systemd[1]: session-4.scope: Deactivated successfully. Nov 6 00:25:32.680351 systemd-logind[1523]: Session 4 logged out. Waiting for processes to exit. Nov 6 00:25:32.683043 systemd[1]: Started sshd@4-10.0.0.113:22-10.0.0.1:39600.service - OpenSSH per-connection server daemon (10.0.0.1:39600). Nov 6 00:25:32.683825 systemd-logind[1523]: Removed session 4. 
Nov 6 00:25:32.738768 sshd[1708]: Accepted publickey for core from 10.0.0.1 port 39600 ssh2: RSA SHA256:PmSYF5WO1c+PbjRA1Pm6yQw5/JNmNUR55sY7don0Q4E Nov 6 00:25:32.740453 sshd-session[1708]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:25:32.745041 systemd-logind[1523]: New session 5 of user core. Nov 6 00:25:32.754044 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 6 00:25:32.804029 sshd[1711]: Connection closed by 10.0.0.1 port 39600 Nov 6 00:25:32.804552 sshd-session[1708]: pam_unix(sshd:session): session closed for user core Nov 6 00:25:32.819242 systemd[1]: sshd@4-10.0.0.113:22-10.0.0.1:39600.service: Deactivated successfully. Nov 6 00:25:32.821173 systemd[1]: session-5.scope: Deactivated successfully. Nov 6 00:25:32.821974 systemd-logind[1523]: Session 5 logged out. Waiting for processes to exit. Nov 6 00:25:32.824683 systemd[1]: Started sshd@5-10.0.0.113:22-10.0.0.1:39610.service - OpenSSH per-connection server daemon (10.0.0.1:39610). Nov 6 00:25:32.825593 systemd-logind[1523]: Removed session 5. Nov 6 00:25:32.884314 sshd[1717]: Accepted publickey for core from 10.0.0.1 port 39610 ssh2: RSA SHA256:PmSYF5WO1c+PbjRA1Pm6yQw5/JNmNUR55sY7don0Q4E Nov 6 00:25:32.886139 sshd-session[1717]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:25:32.890703 systemd-logind[1523]: New session 6 of user core. Nov 6 00:25:32.900115 systemd[1]: Started session-6.scope - Session 6 of User core. Nov 6 00:25:32.956908 sshd[1721]: Connection closed by 10.0.0.1 port 39610 Nov 6 00:25:32.957350 sshd-session[1717]: pam_unix(sshd:session): session closed for user core Nov 6 00:25:32.973858 systemd[1]: sshd@5-10.0.0.113:22-10.0.0.1:39610.service: Deactivated successfully. Nov 6 00:25:32.975804 systemd[1]: session-6.scope: Deactivated successfully. Nov 6 00:25:32.976671 systemd-logind[1523]: Session 6 logged out. Waiting for processes to exit. 
Nov 6 00:25:32.979395 systemd[1]: Started sshd@6-10.0.0.113:22-10.0.0.1:39612.service - OpenSSH per-connection server daemon (10.0.0.1:39612). Nov 6 00:25:32.980111 systemd-logind[1523]: Removed session 6. Nov 6 00:25:33.031357 sshd[1727]: Accepted publickey for core from 10.0.0.1 port 39612 ssh2: RSA SHA256:PmSYF5WO1c+PbjRA1Pm6yQw5/JNmNUR55sY7don0Q4E Nov 6 00:25:33.033304 sshd-session[1727]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:25:33.039048 systemd-logind[1523]: New session 7 of user core. Nov 6 00:25:33.054128 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 6 00:25:33.114253 sudo[1731]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 6 00:25:33.114556 sudo[1731]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 6 00:25:33.129210 sudo[1731]: pam_unix(sudo:session): session closed for user root Nov 6 00:25:33.131299 sshd[1730]: Connection closed by 10.0.0.1 port 39612 Nov 6 00:25:33.131797 sshd-session[1727]: pam_unix(sshd:session): session closed for user core Nov 6 00:25:33.147139 systemd[1]: sshd@6-10.0.0.113:22-10.0.0.1:39612.service: Deactivated successfully. Nov 6 00:25:33.149166 systemd[1]: session-7.scope: Deactivated successfully. Nov 6 00:25:33.149929 systemd-logind[1523]: Session 7 logged out. Waiting for processes to exit. Nov 6 00:25:33.152677 systemd[1]: Started sshd@7-10.0.0.113:22-10.0.0.1:39622.service - OpenSSH per-connection server daemon (10.0.0.1:39622). Nov 6 00:25:33.153453 systemd-logind[1523]: Removed session 7. Nov 6 00:25:33.209375 sshd[1737]: Accepted publickey for core from 10.0.0.1 port 39622 ssh2: RSA SHA256:PmSYF5WO1c+PbjRA1Pm6yQw5/JNmNUR55sY7don0Q4E Nov 6 00:25:33.211329 sshd-session[1737]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:25:33.216669 systemd-logind[1523]: New session 8 of user core. 
Nov 6 00:25:33.230177 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 6 00:25:33.286320 sudo[1742]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 6 00:25:33.286719 sudo[1742]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 6 00:25:33.328928 sudo[1742]: pam_unix(sudo:session): session closed for user root Nov 6 00:25:33.338303 sudo[1741]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Nov 6 00:25:33.338756 sudo[1741]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 6 00:25:33.353820 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 6 00:25:33.416653 augenrules[1764]: No rules Nov 6 00:25:33.418424 systemd[1]: audit-rules.service: Deactivated successfully. Nov 6 00:25:33.418703 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 6 00:25:33.420075 sudo[1741]: pam_unix(sudo:session): session closed for user root Nov 6 00:25:33.422205 sshd[1740]: Connection closed by 10.0.0.1 port 39622 Nov 6 00:25:33.422618 sshd-session[1737]: pam_unix(sshd:session): session closed for user core Nov 6 00:25:33.436229 systemd[1]: sshd@7-10.0.0.113:22-10.0.0.1:39622.service: Deactivated successfully. Nov 6 00:25:33.438207 systemd[1]: session-8.scope: Deactivated successfully. Nov 6 00:25:33.439115 systemd-logind[1523]: Session 8 logged out. Waiting for processes to exit. Nov 6 00:25:33.442827 systemd[1]: Started sshd@8-10.0.0.113:22-10.0.0.1:39628.service - OpenSSH per-connection server daemon (10.0.0.1:39628). Nov 6 00:25:33.443572 systemd-logind[1523]: Removed session 8. 
Nov 6 00:25:33.509530 sshd[1773]: Accepted publickey for core from 10.0.0.1 port 39628 ssh2: RSA SHA256:PmSYF5WO1c+PbjRA1Pm6yQw5/JNmNUR55sY7don0Q4E Nov 6 00:25:33.511784 sshd-session[1773]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:25:33.518503 systemd-logind[1523]: New session 9 of user core. Nov 6 00:25:33.529134 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 6 00:25:33.585564 sudo[1777]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 6 00:25:33.585961 sudo[1777]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 6 00:25:34.333024 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 6 00:25:34.355202 (dockerd)[1797]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 6 00:25:34.839669 dockerd[1797]: time="2025-11-06T00:25:34.839604191Z" level=info msg="Starting up" Nov 6 00:25:34.840584 dockerd[1797]: time="2025-11-06T00:25:34.840557314Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Nov 6 00:25:34.863794 dockerd[1797]: time="2025-11-06T00:25:34.863737490Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Nov 6 00:25:35.067033 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 6 00:25:35.068763 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 00:25:35.439238 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Nov 6 00:25:35.444712 (kubelet)[1830]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 6 00:25:35.775473 kubelet[1830]: E1106 00:25:35.775278 1830 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 6 00:25:35.782216 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 6 00:25:35.782435 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 6 00:25:35.782928 systemd[1]: kubelet.service: Consumed 319ms CPU time, 110.4M memory peak. Nov 6 00:25:37.017024 dockerd[1797]: time="2025-11-06T00:25:37.016959115Z" level=info msg="Loading containers: start." Nov 6 00:25:37.226915 kernel: Initializing XFRM netlink socket Nov 6 00:25:37.538268 systemd-networkd[1460]: docker0: Link UP Nov 6 00:25:37.566729 dockerd[1797]: time="2025-11-06T00:25:37.566648699Z" level=info msg="Loading containers: done." Nov 6 00:25:37.584438 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1592886063-merged.mount: Deactivated successfully. 
Nov 6 00:25:37.603025 dockerd[1797]: time="2025-11-06T00:25:37.602854565Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 6 00:25:37.603200 dockerd[1797]: time="2025-11-06T00:25:37.603073025Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Nov 6 00:25:37.603242 dockerd[1797]: time="2025-11-06T00:25:37.603207972Z" level=info msg="Initializing buildkit" Nov 6 00:25:38.146129 dockerd[1797]: time="2025-11-06T00:25:38.146056900Z" level=info msg="Completed buildkit initialization" Nov 6 00:25:38.151779 dockerd[1797]: time="2025-11-06T00:25:38.151731953Z" level=info msg="Daemon has completed initialization" Nov 6 00:25:38.151875 dockerd[1797]: time="2025-11-06T00:25:38.151822049Z" level=info msg="API listen on /run/docker.sock" Nov 6 00:25:38.152125 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 6 00:25:39.647606 containerd[1541]: time="2025-11-06T00:25:39.647540887Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\"" Nov 6 00:25:41.937232 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1662438558.mount: Deactivated successfully. 
Nov 6 00:25:43.529571 containerd[1541]: time="2025-11-06T00:25:43.529460418Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:25:43.534454 containerd[1541]: time="2025-11-06T00:25:43.534370486Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.5: active requests=0, bytes read=30114893" Nov 6 00:25:43.561815 containerd[1541]: time="2025-11-06T00:25:43.561715639Z" level=info msg="ImageCreate event name:\"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:25:43.621612 containerd[1541]: time="2025-11-06T00:25:43.621519487Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:25:43.622972 containerd[1541]: time="2025-11-06T00:25:43.622899081Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.5\" with image id \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\", size \"30111492\" in 3.975290063s" Nov 6 00:25:43.622972 containerd[1541]: time="2025-11-06T00:25:43.622963707Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\"" Nov 6 00:25:43.623913 containerd[1541]: time="2025-11-06T00:25:43.623853635Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\"" Nov 6 00:25:45.585070 containerd[1541]: time="2025-11-06T00:25:45.584987797Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.5\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:25:45.587517 containerd[1541]: time="2025-11-06T00:25:45.587454461Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.5: active requests=0, bytes read=26020844" Nov 6 00:25:45.589186 containerd[1541]: time="2025-11-06T00:25:45.589109581Z" level=info msg="ImageCreate event name:\"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:25:45.592695 containerd[1541]: time="2025-11-06T00:25:45.592641277Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:25:45.593842 containerd[1541]: time="2025-11-06T00:25:45.593782109Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.5\" with image id \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\", size \"27681301\" in 1.96989154s" Nov 6 00:25:45.593998 containerd[1541]: time="2025-11-06T00:25:45.593855797Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\"" Nov 6 00:25:45.594659 containerd[1541]: time="2025-11-06T00:25:45.594598685Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\"" Nov 6 00:25:45.817071 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Nov 6 00:25:45.818785 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 00:25:46.051959 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Nov 6 00:25:46.056747 (kubelet)[2103]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 6 00:25:46.267579 kubelet[2103]: E1106 00:25:46.267508 2103 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 6 00:25:46.272073 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 6 00:25:46.272286 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 6 00:25:46.272730 systemd[1]: kubelet.service: Consumed 229ms CPU time, 110.6M memory peak.
Nov 6 00:25:48.013149 containerd[1541]: time="2025-11-06T00:25:48.013048638Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 6 00:25:48.021668 containerd[1541]: time="2025-11-06T00:25:48.021529569Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.5: active requests=0, bytes read=20155568"
Nov 6 00:25:48.026275 containerd[1541]: time="2025-11-06T00:25:48.026158175Z" level=info msg="ImageCreate event name:\"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 6 00:25:48.166152 containerd[1541]: time="2025-11-06T00:25:48.166013399Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 6 00:25:48.167288 containerd[1541]: time="2025-11-06T00:25:48.167243719Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.5\" with image id \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\", size \"21816043\" in 2.572591098s"
Nov 6 00:25:48.167288 containerd[1541]: time="2025-11-06T00:25:48.167286877Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\""
Nov 6 00:25:48.168245 containerd[1541]: time="2025-11-06T00:25:48.168180623Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\""
Nov 6 00:25:49.727648 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4206036616.mount: Deactivated successfully.
Nov 6 00:25:50.998702 containerd[1541]: time="2025-11-06T00:25:50.998626626Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 6 00:25:50.999512 containerd[1541]: time="2025-11-06T00:25:50.999458000Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.5: active requests=0, bytes read=31929469"
Nov 6 00:25:51.000750 containerd[1541]: time="2025-11-06T00:25:51.000660498Z" level=info msg="ImageCreate event name:\"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 6 00:25:51.002945 containerd[1541]: time="2025-11-06T00:25:51.002868032Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 6 00:25:51.003374 containerd[1541]: time="2025-11-06T00:25:51.003333394Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.5\" with image id \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\", repo tag \"registry.k8s.io/kube-proxy:v1.33.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\", size \"31928488\" in 2.835101724s"
Nov 6 00:25:51.003411 containerd[1541]: time="2025-11-06T00:25:51.003376397Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\""
Nov 6 00:25:51.003987 containerd[1541]: time="2025-11-06T00:25:51.003960321Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Nov 6 00:25:51.544634 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3467197544.mount: Deactivated successfully.
Nov 6 00:25:52.612468 containerd[1541]: time="2025-11-06T00:25:52.612391376Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 6 00:25:52.613735 containerd[1541]: time="2025-11-06T00:25:52.613666451Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238"
Nov 6 00:25:52.615145 containerd[1541]: time="2025-11-06T00:25:52.615076210Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 6 00:25:52.618899 containerd[1541]: time="2025-11-06T00:25:52.618842477Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 6 00:25:52.619894 containerd[1541]: time="2025-11-06T00:25:52.619842859Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.615853515s"
Nov 6 00:25:52.619894 containerd[1541]: time="2025-11-06T00:25:52.619874325Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\""
Nov 6 00:25:52.620635 containerd[1541]: time="2025-11-06T00:25:52.620597609Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Nov 6 00:25:53.170242 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1803340970.mount: Deactivated successfully.
Nov 6 00:25:53.180717 containerd[1541]: time="2025-11-06T00:25:53.180622203Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 6 00:25:53.181939 containerd[1541]: time="2025-11-06T00:25:53.181905918Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Nov 6 00:25:53.183138 containerd[1541]: time="2025-11-06T00:25:53.183095084Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 6 00:25:53.185773 containerd[1541]: time="2025-11-06T00:25:53.185532319Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 6 00:25:53.186002 containerd[1541]: time="2025-11-06T00:25:53.185914614Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 565.275648ms"
Nov 6 00:25:53.186002 containerd[1541]: time="2025-11-06T00:25:53.185940509Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Nov 6 00:25:53.186556 containerd[1541]: time="2025-11-06T00:25:53.186520611Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
Nov 6 00:25:54.456841 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2512025977.mount: Deactivated successfully.
Nov 6 00:25:56.317101 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Nov 6 00:25:56.318811 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 6 00:25:56.922800 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 6 00:25:56.926894 (kubelet)[2239]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 6 00:25:56.967179 kubelet[2239]: E1106 00:25:56.967113 2239 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 6 00:25:56.971724 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 6 00:25:56.971975 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 6 00:25:56.972407 systemd[1]: kubelet.service: Consumed 233ms CPU time, 109.9M memory peak.
Nov 6 00:25:58.031924 containerd[1541]: time="2025-11-06T00:25:58.031845484Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 6 00:25:58.037256 containerd[1541]: time="2025-11-06T00:25:58.037173791Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58378433"
Nov 6 00:25:58.044190 containerd[1541]: time="2025-11-06T00:25:58.044095435Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 6 00:25:58.049427 containerd[1541]: time="2025-11-06T00:25:58.049365281Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 6 00:25:58.050556 containerd[1541]: time="2025-11-06T00:25:58.050499678Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 4.86393776s"
Nov 6 00:25:58.050556 containerd[1541]: time="2025-11-06T00:25:58.050540612Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\""
Nov 6 00:26:01.227940 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 6 00:26:01.228205 systemd[1]: kubelet.service: Consumed 233ms CPU time, 109.9M memory peak.
Nov 6 00:26:01.230696 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 6 00:26:01.257123 systemd[1]: Reload requested from client PID 2281 ('systemctl') (unit session-9.scope)...
Nov 6 00:26:01.257141 systemd[1]: Reloading...
Nov 6 00:26:01.354074 zram_generator::config[2326]: No configuration found.
Nov 6 00:26:01.861510 systemd[1]: Reloading finished in 603 ms.
Nov 6 00:26:01.954210 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Nov 6 00:26:01.954362 systemd[1]: kubelet.service: Failed with result 'signal'.
Nov 6 00:26:01.954807 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 6 00:26:01.954904 systemd[1]: kubelet.service: Consumed 160ms CPU time, 98.2M memory peak.
Nov 6 00:26:01.957163 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 6 00:26:02.175207 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 6 00:26:02.197454 (kubelet)[2371]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Nov 6 00:26:02.239027 kubelet[2371]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 6 00:26:02.239027 kubelet[2371]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Nov 6 00:26:02.239027 kubelet[2371]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 6 00:26:02.239027 kubelet[2371]: I1106 00:26:02.238081 2371 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Nov 6 00:26:02.949824 kubelet[2371]: I1106 00:26:02.949736 2371 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Nov 6 00:26:02.949824 kubelet[2371]: I1106 00:26:02.949778 2371 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Nov 6 00:26:02.950564 kubelet[2371]: I1106 00:26:02.950514 2371 server.go:956] "Client rotation is on, will bootstrap in background"
Nov 6 00:26:02.981008 kubelet[2371]: I1106 00:26:02.980943 2371 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Nov 6 00:26:02.981452 kubelet[2371]: E1106 00:26:02.981407 2371 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.113:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.113:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Nov 6 00:26:02.991652 kubelet[2371]: I1106 00:26:02.991621 2371 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Nov 6 00:26:02.998698 kubelet[2371]: I1106 00:26:02.998660 2371 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Nov 6 00:26:02.998990 kubelet[2371]: I1106 00:26:02.998954 2371 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Nov 6 00:26:02.999165 kubelet[2371]: I1106 00:26:02.998982 2371 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Nov 6 00:26:02.999264 kubelet[2371]: I1106 00:26:02.999182 2371 topology_manager.go:138] "Creating topology manager with none policy"
Nov 6 00:26:02.999264 kubelet[2371]: I1106 00:26:02.999197 2371 container_manager_linux.go:303] "Creating device plugin manager"
Nov 6 00:26:02.999363 kubelet[2371]: I1106 00:26:02.999349 2371 state_mem.go:36] "Initialized new in-memory state store"
Nov 6 00:26:03.002208 kubelet[2371]: I1106 00:26:03.002182 2371 kubelet.go:480] "Attempting to sync node with API server"
Nov 6 00:26:03.002208 kubelet[2371]: I1106 00:26:03.002206 2371 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Nov 6 00:26:03.002274 kubelet[2371]: I1106 00:26:03.002251 2371 kubelet.go:386] "Adding apiserver pod source"
Nov 6 00:26:03.004617 kubelet[2371]: I1106 00:26:03.004503 2371 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Nov 6 00:26:03.011408 kubelet[2371]: I1106 00:26:03.011382 2371 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Nov 6 00:26:03.011482 kubelet[2371]: E1106 00:26:03.011405 2371 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.113:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.113:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Nov 6 00:26:03.011482 kubelet[2371]: E1106 00:26:03.011464 2371 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.113:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.113:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Nov 6 00:26:03.012035 kubelet[2371]: I1106 00:26:03.012013 2371 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Nov 6 00:26:03.014539 kubelet[2371]: W1106 00:26:03.014508 2371 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Nov 6 00:26:03.017334 kubelet[2371]: I1106 00:26:03.017289 2371 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Nov 6 00:26:03.017334 kubelet[2371]: I1106 00:26:03.017346 2371 server.go:1289] "Started kubelet"
Nov 6 00:26:04.678910 kubelet[2371]: I1106 00:26:04.678117 2371 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Nov 6 00:26:04.678910 kubelet[2371]: I1106 00:26:04.678753 2371 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Nov 6 00:26:04.680061 kubelet[2371]: E1106 00:26:04.680032 2371 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.113:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.113:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Nov 6 00:26:04.680389 kubelet[2371]: I1106 00:26:04.680298 2371 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Nov 6 00:26:04.682321 kubelet[2371]: E1106 00:26:04.681263 2371 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.113:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.113:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Nov 6 00:26:04.682500 kubelet[2371]: I1106 00:26:04.682429 2371 server.go:317] "Adding debug handlers to kubelet server"
Nov 6 00:26:04.682604 kubelet[2371]: I1106 00:26:04.682575 2371 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Nov 6 00:26:04.683944 kubelet[2371]: I1106 00:26:04.683491 2371 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Nov 6 00:26:04.684829 kubelet[2371]: E1106 00:26:04.684790 2371 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Nov 6 00:26:04.685070 kubelet[2371]: E1106 00:26:04.685031 2371 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Nov 6 00:26:04.685134 kubelet[2371]: I1106 00:26:04.685089 2371 volume_manager.go:297] "Starting Kubelet Volume Manager"
Nov 6 00:26:04.685924 kubelet[2371]: I1106 00:26:04.685277 2371 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Nov 6 00:26:04.685924 kubelet[2371]: I1106 00:26:04.685349 2371 reconciler.go:26] "Reconciler: start to sync state"
Nov 6 00:26:04.685924 kubelet[2371]: E1106 00:26:04.685642 2371 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.113:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.113:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Nov 6 00:26:04.685924 kubelet[2371]: I1106 00:26:04.685841 2371 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Nov 6 00:26:04.686445 kubelet[2371]: E1106 00:26:04.686396 2371 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.113:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.113:6443: connect: connection refused" interval="200ms"
Nov 6 00:26:04.688851 kubelet[2371]: I1106 00:26:04.688821 2371 factory.go:223] Registration of the containerd container factory successfully
Nov 6 00:26:04.688851 kubelet[2371]: I1106 00:26:04.688839 2371 factory.go:223] Registration of the systemd container factory successfully
Nov 6 00:26:04.691295 kubelet[2371]: E1106 00:26:04.689378 2371 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.113:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.113:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1875433e0fc97d64 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-11-06 00:26:03.017313636 +0000 UTC m=+0.815329729,LastTimestamp:2025-11-06 00:26:03.017313636 +0000 UTC m=+0.815329729,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Nov 6 00:26:04.704186 kubelet[2371]: I1106 00:26:04.704150 2371 cpu_manager.go:221] "Starting CPU manager" policy="none"
Nov 6 00:26:04.704186 kubelet[2371]: I1106 00:26:04.704169 2371 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Nov 6 00:26:04.704186 kubelet[2371]: I1106 00:26:04.704191 2371 state_mem.go:36] "Initialized new in-memory state store"
Nov 6 00:26:04.711065 kubelet[2371]: I1106 00:26:04.711036 2371 policy_none.go:49] "None policy: Start"
Nov 6 00:26:04.711206 kubelet[2371]: I1106 00:26:04.711075 2371 memory_manager.go:186] "Starting memorymanager" policy="None"
Nov 6 00:26:04.711206 kubelet[2371]: I1106 00:26:04.711099 2371 state_mem.go:35] "Initializing new in-memory state store"
Nov 6 00:26:04.712778 kubelet[2371]: I1106 00:26:04.712558 2371 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Nov 6 00:26:04.715027 kubelet[2371]: I1106 00:26:04.715003 2371 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Nov 6 00:26:04.715088 kubelet[2371]: I1106 00:26:04.715039 2371 status_manager.go:230] "Starting to sync pod status with apiserver"
Nov 6 00:26:04.715088 kubelet[2371]: I1106 00:26:04.715062 2371 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Nov 6 00:26:04.715088 kubelet[2371]: I1106 00:26:04.715075 2371 kubelet.go:2436] "Starting kubelet main sync loop"
Nov 6 00:26:04.715180 kubelet[2371]: E1106 00:26:04.715119 2371 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Nov 6 00:26:04.718873 kubelet[2371]: E1106 00:26:04.717603 2371 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.113:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.113:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Nov 6 00:26:04.720581 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Nov 6 00:26:04.740485 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Nov 6 00:26:04.744207 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Nov 6 00:26:04.755057 kubelet[2371]: E1106 00:26:04.755017 2371 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Nov 6 00:26:04.755299 kubelet[2371]: I1106 00:26:04.755272 2371 eviction_manager.go:189] "Eviction manager: starting control loop"
Nov 6 00:26:04.755364 kubelet[2371]: I1106 00:26:04.755304 2371 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Nov 6 00:26:04.755771 kubelet[2371]: I1106 00:26:04.755631 2371 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Nov 6 00:26:04.756804 kubelet[2371]: E1106 00:26:04.756774 2371 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Nov 6 00:26:04.756909 kubelet[2371]: E1106 00:26:04.756844 2371 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Nov 6 00:26:04.831605 systemd[1]: Created slice kubepods-burstable-pod638d0d07c8377f6ac0172f027766c80d.slice - libcontainer container kubepods-burstable-pod638d0d07c8377f6ac0172f027766c80d.slice.
Nov 6 00:26:04.853072 kubelet[2371]: E1106 00:26:04.853029 2371 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Nov 6 00:26:04.856972 kubelet[2371]: I1106 00:26:04.856842 2371 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Nov 6 00:26:04.856952 systemd[1]: Created slice kubepods-burstable-pod20c890a246d840d308022312da9174cb.slice - libcontainer container kubepods-burstable-pod20c890a246d840d308022312da9174cb.slice.
Nov 6 00:26:04.857994 kubelet[2371]: E1106 00:26:04.857954 2371 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.113:6443/api/v1/nodes\": dial tcp 10.0.0.113:6443: connect: connection refused" node="localhost"
Nov 6 00:26:04.858835 kubelet[2371]: E1106 00:26:04.858815 2371 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Nov 6 00:26:04.869384 systemd[1]: Created slice kubepods-burstable-podd13d96f639b65e57f439b4396b605564.slice - libcontainer container kubepods-burstable-podd13d96f639b65e57f439b4396b605564.slice.
Nov 6 00:26:04.871310 kubelet[2371]: E1106 00:26:04.871273 2371 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Nov 6 00:26:04.886947 kubelet[2371]: E1106 00:26:04.886916 2371 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.113:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.113:6443: connect: connection refused" interval="400ms"
Nov 6 00:26:04.916277 update_engine[1532]: I20251106 00:26:04.916177 1532 update_attempter.cc:509] Updating boot flags...
Nov 6 00:26:04.989834 kubelet[2371]: I1106 00:26:04.986702 2371 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d13d96f639b65e57f439b4396b605564-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d13d96f639b65e57f439b4396b605564\") " pod="kube-system/kube-scheduler-localhost"
Nov 6 00:26:04.989834 kubelet[2371]: I1106 00:26:04.986743 2371 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/638d0d07c8377f6ac0172f027766c80d-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"638d0d07c8377f6ac0172f027766c80d\") " pod="kube-system/kube-apiserver-localhost"
Nov 6 00:26:04.989834 kubelet[2371]: I1106 00:26:04.986795 2371 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/638d0d07c8377f6ac0172f027766c80d-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"638d0d07c8377f6ac0172f027766c80d\") " pod="kube-system/kube-apiserver-localhost"
Nov 6 00:26:04.989834 kubelet[2371]: I1106 00:26:04.986815 2371 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/638d0d07c8377f6ac0172f027766c80d-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"638d0d07c8377f6ac0172f027766c80d\") " pod="kube-system/kube-apiserver-localhost"
Nov 6 00:26:04.989834 kubelet[2371]: I1106 00:26:04.986844 2371 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost"
Nov 6 00:26:04.990246 kubelet[2371]: I1106 00:26:04.986859 2371 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost"
Nov 6 00:26:04.990246 kubelet[2371]: I1106 00:26:04.986876 2371 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost"
Nov 6 00:26:04.990246 kubelet[2371]: I1106 00:26:04.986908 2371 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost"
Nov 6 00:26:04.990246 kubelet[2371]: I1106 00:26:04.986924 2371 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost"
Nov 6 00:26:05.059692 kubelet[2371]: I1106 00:26:05.059649 2371 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Nov 6 00:26:05.060279 kubelet[2371]: E1106 00:26:05.060242 2371 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.113:6443/api/v1/nodes\": dial tcp 10.0.0.113:6443: connect: connection refused" node="localhost"
Nov 6 00:26:05.061847 kubelet[2371]: E1106 00:26:05.061813 2371 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.113:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.113:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Nov 6 00:26:05.154841 kubelet[2371]: E1106 00:26:05.154762 2371 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 6 00:26:05.155711 containerd[1541]: time="2025-11-06T00:26:05.155649495Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:638d0d07c8377f6ac0172f027766c80d,Namespace:kube-system,Attempt:0,}"
Nov 6 00:26:05.160081 kubelet[2371]: E1106 00:26:05.159984 2371 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 6 00:26:05.160943 containerd[1541]: time="2025-11-06T00:26:05.160869107Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:20c890a246d840d308022312da9174cb,Namespace:kube-system,Attempt:0,}"
Nov 6 00:26:05.172106 kubelet[2371]: E1106 00:26:05.172050 2371 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 6 00:26:05.172744 containerd[1541]: time="2025-11-06T00:26:05.172630132Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d13d96f639b65e57f439b4396b605564,Namespace:kube-system,Attempt:0,}"
Nov 6 00:26:05.196157 containerd[1541]: time="2025-11-06T00:26:05.196095298Z" level=info msg="connecting to shim 882ebfddc35a0e3db84690da346e4a8e25d350e0b68d493c7f4f259a2c5dd9ae" address="unix:///run/containerd/s/d745e2e14858d30fa5d115693ed7805970938adb1378eb056437a1be63960937" namespace=k8s.io protocol=ttrpc version=3
Nov 6 00:26:05.212047 containerd[1541]: time="2025-11-06T00:26:05.211989487Z" level=info msg="connecting to shim ea27c02f7ac4ea0564cc6c88a258c1eee08a3afa288d5746fb2d1f082d57c2aa" address="unix:///run/containerd/s/66400b0ab584bae67c79f0736971b6d8cc52ebc19fa9f1d27a9814a262450851" namespace=k8s.io protocol=ttrpc version=3
Nov 6 00:26:05.215561 containerd[1541]: time="2025-11-06T00:26:05.215509515Z" level=info msg="connecting to shim fb5c13ee0b6eb9154efa19bdae19ecb33f25dbdcf6739d6d878c07a05b720d12" address="unix:///run/containerd/s/52d89da1b5e27564e732a2df66367941a663b361c14dd6909855c2659a7f8785" namespace=k8s.io protocol=ttrpc version=3
Nov 6 00:26:05.262061 systemd[1]: Started cri-containerd-882ebfddc35a0e3db84690da346e4a8e25d350e0b68d493c7f4f259a2c5dd9ae.scope - libcontainer container 882ebfddc35a0e3db84690da346e4a8e25d350e0b68d493c7f4f259a2c5dd9ae.
Nov 6 00:26:05.266762 systemd[1]: Started cri-containerd-ea27c02f7ac4ea0564cc6c88a258c1eee08a3afa288d5746fb2d1f082d57c2aa.scope - libcontainer container ea27c02f7ac4ea0564cc6c88a258c1eee08a3afa288d5746fb2d1f082d57c2aa.
Nov 6 00:26:05.276240 systemd[1]: Started cri-containerd-fb5c13ee0b6eb9154efa19bdae19ecb33f25dbdcf6739d6d878c07a05b720d12.scope - libcontainer container fb5c13ee0b6eb9154efa19bdae19ecb33f25dbdcf6739d6d878c07a05b720d12.
Nov 6 00:26:05.288201 kubelet[2371]: E1106 00:26:05.288120 2371 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.113:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.113:6443: connect: connection refused" interval="800ms"
Nov 6 00:26:05.336227 containerd[1541]: time="2025-11-06T00:26:05.336155208Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:20c890a246d840d308022312da9174cb,Namespace:kube-system,Attempt:0,} returns sandbox id \"ea27c02f7ac4ea0564cc6c88a258c1eee08a3afa288d5746fb2d1f082d57c2aa\""
Nov 6 00:26:05.337212 kubelet[2371]: E1106 00:26:05.337166 2371 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 6 00:26:05.345351 containerd[1541]: time="2025-11-06T00:26:05.345298657Z" level=info msg="CreateContainer within sandbox \"ea27c02f7ac4ea0564cc6c88a258c1eee08a3afa288d5746fb2d1f082d57c2aa\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Nov 6 00:26:05.348367 containerd[1541]: time="2025-11-06T00:26:05.348306673Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:638d0d07c8377f6ac0172f027766c80d,Namespace:kube-system,Attempt:0,} returns sandbox id \"882ebfddc35a0e3db84690da346e4a8e25d350e0b68d493c7f4f259a2c5dd9ae\""
Nov 6 00:26:05.349470 kubelet[2371]: E1106 00:26:05.349431 2371 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 6 00:26:05.356391 containerd[1541]: time="2025-11-06T00:26:05.356347990Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d13d96f639b65e57f439b4396b605564,Namespace:kube-system,Attempt:0,} returns sandbox id \"fb5c13ee0b6eb9154efa19bdae19ecb33f25dbdcf6739d6d878c07a05b720d12\""
Nov 6 00:26:05.356479 containerd[1541]: time="2025-11-06T00:26:05.356456408Z" level=info msg="CreateContainer within sandbox \"882ebfddc35a0e3db84690da346e4a8e25d350e0b68d493c7f4f259a2c5dd9ae\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Nov 6 00:26:05.357055 kubelet[2371]: E1106 00:26:05.357012 2371 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 6 00:26:05.360268 containerd[1541]: time="2025-11-06T00:26:05.360235113Z" level=info msg="Container f4ab3e91ab9f2a94040a850e3100950dfb6c50fa42a5cf058fbd4870895058f4: CDI devices from CRI Config.CDIDevices: []"
Nov 6 00:26:05.361920 containerd[1541]: time="2025-11-06T00:26:05.361870709Z" level=info msg="CreateContainer within sandbox \"fb5c13ee0b6eb9154efa19bdae19ecb33f25dbdcf6739d6d878c07a05b720d12\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Nov 6 00:26:05.368526 containerd[1541]: time="2025-11-06T00:26:05.368478313Z" level=info msg="Container 7f87e2b3b9aadbadad0fad334c273a71d0d446249831cf993ca5f0a05b062904: CDI devices from CRI Config.CDIDevices: []"
Nov 6 00:26:05.382514 containerd[1541]: time="2025-11-06T00:26:05.382449562Z" level=info msg="CreateContainer within sandbox \"882ebfddc35a0e3db84690da346e4a8e25d350e0b68d493c7f4f259a2c5dd9ae\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"7f87e2b3b9aadbadad0fad334c273a71d0d446249831cf993ca5f0a05b062904\""
Nov 6 00:26:05.383704 containerd[1541]: time="2025-11-06T00:26:05.383255161Z" level=info msg="Container 309ded79caa476ec83ede5f4080814056845d4fd17ffc8d7e7907bcd46c7b21e: CDI devices from CRI Config.CDIDevices: []"
Nov 6 00:26:05.384839 containerd[1541]: time="2025-11-06T00:26:05.383617484Z" level=info msg="StartContainer for \"7f87e2b3b9aadbadad0fad334c273a71d0d446249831cf993ca5f0a05b062904\""
Nov 6 00:26:05.387450 containerd[1541]: time="2025-11-06T00:26:05.387380729Z" level=info msg="connecting to shim 7f87e2b3b9aadbadad0fad334c273a71d0d446249831cf993ca5f0a05b062904" address="unix:///run/containerd/s/d745e2e14858d30fa5d115693ed7805970938adb1378eb056437a1be63960937" protocol=ttrpc version=3
Nov 6 00:26:05.387704 containerd[1541]: time="2025-11-06T00:26:05.387649546Z" level=info msg="CreateContainer within sandbox \"ea27c02f7ac4ea0564cc6c88a258c1eee08a3afa288d5746fb2d1f082d57c2aa\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"f4ab3e91ab9f2a94040a850e3100950dfb6c50fa42a5cf058fbd4870895058f4\""
Nov 6 00:26:05.388630 containerd[1541]: time="2025-11-06T00:26:05.388467819Z" level=info msg="StartContainer for \"f4ab3e91ab9f2a94040a850e3100950dfb6c50fa42a5cf058fbd4870895058f4\""
Nov 6 00:26:05.390778 containerd[1541]: time="2025-11-06T00:26:05.390726780Z" level=info msg="connecting to shim f4ab3e91ab9f2a94040a850e3100950dfb6c50fa42a5cf058fbd4870895058f4" address="unix:///run/containerd/s/66400b0ab584bae67c79f0736971b6d8cc52ebc19fa9f1d27a9814a262450851" protocol=ttrpc version=3
Nov 6 00:26:05.396376 containerd[1541]: time="2025-11-06T00:26:05.396296884Z" level=info msg="CreateContainer within sandbox \"fb5c13ee0b6eb9154efa19bdae19ecb33f25dbdcf6739d6d878c07a05b720d12\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"309ded79caa476ec83ede5f4080814056845d4fd17ffc8d7e7907bcd46c7b21e\""
Nov 6 00:26:05.398929 containerd[1541]: time="2025-11-06T00:26:05.397474264Z" level=info msg="StartContainer for \"309ded79caa476ec83ede5f4080814056845d4fd17ffc8d7e7907bcd46c7b21e\""
Nov 6 00:26:05.400331 containerd[1541]: time="2025-11-06T00:26:05.400292252Z" level=info msg="connecting to shim 309ded79caa476ec83ede5f4080814056845d4fd17ffc8d7e7907bcd46c7b21e" address="unix:///run/containerd/s/52d89da1b5e27564e732a2df66367941a663b361c14dd6909855c2659a7f8785" protocol=ttrpc version=3
Nov 6 00:26:05.421065 systemd[1]: Started cri-containerd-7f87e2b3b9aadbadad0fad334c273a71d0d446249831cf993ca5f0a05b062904.scope - libcontainer container 7f87e2b3b9aadbadad0fad334c273a71d0d446249831cf993ca5f0a05b062904.
Nov 6 00:26:05.424306 systemd[1]: Started cri-containerd-f4ab3e91ab9f2a94040a850e3100950dfb6c50fa42a5cf058fbd4870895058f4.scope - libcontainer container f4ab3e91ab9f2a94040a850e3100950dfb6c50fa42a5cf058fbd4870895058f4.
Nov 6 00:26:05.446028 systemd[1]: Started cri-containerd-309ded79caa476ec83ede5f4080814056845d4fd17ffc8d7e7907bcd46c7b21e.scope - libcontainer container 309ded79caa476ec83ede5f4080814056845d4fd17ffc8d7e7907bcd46c7b21e.
Nov 6 00:26:05.462489 kubelet[2371]: I1106 00:26:05.462453 2371 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Nov 6 00:26:05.462773 kubelet[2371]: E1106 00:26:05.462751 2371 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.113:6443/api/v1/nodes\": dial tcp 10.0.0.113:6443: connect: connection refused" node="localhost"
Nov 6 00:26:05.565145 containerd[1541]: time="2025-11-06T00:26:05.564906043Z" level=info msg="StartContainer for \"f4ab3e91ab9f2a94040a850e3100950dfb6c50fa42a5cf058fbd4870895058f4\" returns successfully"
Nov 6 00:26:05.571733 containerd[1541]: time="2025-11-06T00:26:05.571637094Z" level=info msg="StartContainer for \"7f87e2b3b9aadbadad0fad334c273a71d0d446249831cf993ca5f0a05b062904\" returns successfully"
Nov 6 00:26:05.630280 containerd[1541]: time="2025-11-06T00:26:05.630147280Z" level=info msg="StartContainer for \"309ded79caa476ec83ede5f4080814056845d4fd17ffc8d7e7907bcd46c7b21e\" returns successfully"
Nov 6 00:26:05.726714 kubelet[2371]: E1106 00:26:05.726667 2371 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Nov 6 00:26:05.727260 kubelet[2371]: E1106 00:26:05.726795 2371 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 6 00:26:05.730305 kubelet[2371]: E1106 00:26:05.730245 2371 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Nov 6 00:26:05.730539 kubelet[2371]: E1106 00:26:05.730508 2371 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 6 00:26:05.735049 kubelet[2371]: E1106 00:26:05.735013 2371 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Nov 6 00:26:05.735266 kubelet[2371]: E1106 00:26:05.735116 2371 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 6 00:26:06.264383 kubelet[2371]: I1106 00:26:06.264342 2371 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Nov 6 00:26:06.736730 kubelet[2371]: E1106 00:26:06.736694 2371 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Nov 6 00:26:06.737173 kubelet[2371]: E1106 00:26:06.736836 2371 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 6 00:26:06.737317 kubelet[2371]: E1106 00:26:06.737288 2371 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Nov 6 00:26:06.737401 kubelet[2371]: E1106 00:26:06.737385 2371 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 6 00:26:07.431933 kubelet[2371]: E1106 00:26:07.431030 2371 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Nov 6 00:26:07.612569 kubelet[2371]: I1106 00:26:07.612518 2371 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Nov 6 00:26:07.679764 kubelet[2371]: I1106 00:26:07.679701 2371 apiserver.go:52] "Watching apiserver"
Nov 6 00:26:07.685989 kubelet[2371]: I1106 00:26:07.685835 2371 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Nov 6 00:26:07.685989 kubelet[2371]: I1106 00:26:07.685921 2371 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Nov 6 00:26:07.693338 kubelet[2371]: E1106 00:26:07.693294 2371 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Nov 6 00:26:07.693338 kubelet[2371]: I1106 00:26:07.693323 2371 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Nov 6 00:26:07.694789 kubelet[2371]: E1106 00:26:07.694758 2371 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost"
Nov 6 00:26:07.694789 kubelet[2371]: I1106 00:26:07.694780 2371 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Nov 6 00:26:07.696729 kubelet[2371]: E1106 00:26:07.696667 2371 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Nov 6 00:26:08.043487 kubelet[2371]: I1106 00:26:08.043348 2371 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Nov 6 00:26:08.046439 kubelet[2371]: E1106 00:26:08.046411 2371 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Nov 6 00:26:08.046598 kubelet[2371]: E1106 00:26:08.046583 2371 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 6 00:26:10.620144 kubelet[2371]: I1106 00:26:10.620099 2371 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Nov 6 00:26:10.688498 kubelet[2371]: E1106 00:26:10.688455 2371 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 6 00:26:10.742283 kubelet[2371]: E1106 00:26:10.742236 2371 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 6 00:26:10.991703 kubelet[2371]: I1106 00:26:10.991557 2371 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Nov 6 00:26:11.002776 kubelet[2371]: E1106 00:26:11.002744 2371 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 6 00:26:11.743923 kubelet[2371]: E1106 00:26:11.743863 2371 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 6 00:26:12.149029 systemd[1]: Reload requested from client PID 2674 ('systemctl') (unit session-9.scope)...
Nov 6 00:26:12.149050 systemd[1]: Reloading...
Nov 6 00:26:12.255926 zram_generator::config[2717]: No configuration found.
Nov 6 00:26:12.602543 systemd[1]: Reloading finished in 452 ms.
Nov 6 00:26:12.644242 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 6 00:26:12.671696 systemd[1]: kubelet.service: Deactivated successfully.
Nov 6 00:26:12.672196 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 6 00:26:12.672272 systemd[1]: kubelet.service: Consumed 1.569s CPU time, 133.3M memory peak.
Nov 6 00:26:12.675052 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 6 00:26:12.943038 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 6 00:26:12.954453 (kubelet)[2762]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Nov 6 00:26:12.990708 kubelet[2762]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 6 00:26:12.990708 kubelet[2762]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Nov 6 00:26:12.990708 kubelet[2762]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 6 00:26:12.991186 kubelet[2762]: I1106 00:26:12.990734 2762 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Nov 6 00:26:12.999045 kubelet[2762]: I1106 00:26:12.998999 2762 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Nov 6 00:26:12.999045 kubelet[2762]: I1106 00:26:12.999027 2762 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Nov 6 00:26:12.999233 kubelet[2762]: I1106 00:26:12.999225 2762 server.go:956] "Client rotation is on, will bootstrap in background"
Nov 6 00:26:13.000391 kubelet[2762]: I1106 00:26:13.000364 2762 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Nov 6 00:26:13.002484 kubelet[2762]: I1106 00:26:13.002418 2762 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Nov 6 00:26:13.006282 kubelet[2762]: I1106 00:26:13.006253 2762 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Nov 6 00:26:13.012329 kubelet[2762]: I1106 00:26:13.012260 2762 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Nov 6 00:26:13.012736 kubelet[2762]: I1106 00:26:13.012678 2762 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Nov 6 00:26:13.012961 kubelet[2762]: I1106 00:26:13.012724 2762 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Nov 6 00:26:13.013088 kubelet[2762]: I1106 00:26:13.012976 2762 topology_manager.go:138] "Creating topology manager with none policy"
Nov 6 00:26:13.013088 kubelet[2762]: I1106 00:26:13.012990 2762 container_manager_linux.go:303] "Creating device plugin manager"
Nov 6 00:26:13.013088 kubelet[2762]: I1106 00:26:13.013060 2762 state_mem.go:36] "Initialized new in-memory state store"
Nov 6 00:26:13.013304 kubelet[2762]: I1106 00:26:13.013283 2762 kubelet.go:480] "Attempting to sync node with API server"
Nov 6 00:26:13.013378 kubelet[2762]: I1106 00:26:13.013309 2762 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Nov 6 00:26:13.013378 kubelet[2762]: I1106 00:26:13.013341 2762 kubelet.go:386] "Adding apiserver pod source"
Nov 6 00:26:13.013378 kubelet[2762]: I1106 00:26:13.013361 2762 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Nov 6 00:26:13.014797 kubelet[2762]: I1106 00:26:13.014744 2762 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Nov 6 00:26:13.015358 kubelet[2762]: I1106 00:26:13.015329 2762 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Nov 6 00:26:13.021325 kubelet[2762]: I1106 00:26:13.021284 2762 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Nov 6 00:26:13.021325 kubelet[2762]: I1106 00:26:13.021330 2762 server.go:1289] "Started kubelet"
Nov 6 00:26:13.022991 kubelet[2762]: I1106 00:26:13.022828 2762 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Nov 6 00:26:13.024752 kubelet[2762]: I1106 00:26:13.024682 2762 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Nov 6 00:26:13.026650 kubelet[2762]: I1106 00:26:13.025564 2762 server.go:317] "Adding debug handlers to kubelet server"
Nov 6 00:26:13.026650 kubelet[2762]: I1106 00:26:13.025914 2762 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Nov 6 00:26:13.026650 kubelet[2762]: I1106 00:26:13.026283 2762 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Nov 6 00:26:13.026806 kubelet[2762]: I1106 00:26:13.026789 2762 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Nov 6 00:26:13.027300 kubelet[2762]: I1106 00:26:13.027276 2762 volume_manager.go:297] "Starting Kubelet Volume Manager"
Nov 6 00:26:13.027388 kubelet[2762]: I1106 00:26:13.027369 2762 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Nov 6 00:26:13.027639 kubelet[2762]: I1106 00:26:13.027614 2762 reconciler.go:26] "Reconciler: start to sync state"
Nov 6 00:26:13.029436 kubelet[2762]: I1106 00:26:13.029389 2762 factory.go:223] Registration of the systemd container factory successfully
Nov 6 00:26:13.029554 kubelet[2762]: I1106 00:26:13.029502 2762 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Nov 6 00:26:13.031042 kubelet[2762]: I1106 00:26:13.031017 2762 factory.go:223] Registration of the containerd container factory successfully
Nov 6 00:26:13.031708 kubelet[2762]: E1106 00:26:13.031665 2762 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Nov 6 00:26:13.039252 kubelet[2762]: I1106 00:26:13.039190 2762 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Nov 6 00:26:13.042621 kubelet[2762]: I1106 00:26:13.042219 2762 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Nov 6 00:26:13.042621 kubelet[2762]: I1106 00:26:13.042251 2762 status_manager.go:230] "Starting to sync pod status with apiserver"
Nov 6 00:26:13.042621 kubelet[2762]: I1106 00:26:13.042288 2762 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Nov 6 00:26:13.042621 kubelet[2762]: I1106 00:26:13.042298 2762 kubelet.go:2436] "Starting kubelet main sync loop"
Nov 6 00:26:13.042621 kubelet[2762]: E1106 00:26:13.042352 2762 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Nov 6 00:26:13.073912 kubelet[2762]: I1106 00:26:13.073858 2762 cpu_manager.go:221] "Starting CPU manager" policy="none"
Nov 6 00:26:13.073912 kubelet[2762]: I1106 00:26:13.073898 2762 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Nov 6 00:26:13.073912 kubelet[2762]: I1106 00:26:13.073922 2762 state_mem.go:36] "Initialized new in-memory state store"
Nov 6 00:26:13.074127 kubelet[2762]: I1106 00:26:13.074110 2762 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Nov 6 00:26:13.074164 kubelet[2762]: I1106 00:26:13.074130 2762 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Nov 6 00:26:13.074164 kubelet[2762]: I1106 00:26:13.074158 2762 policy_none.go:49] "None policy: Start"
Nov 6 00:26:13.074205 kubelet[2762]: I1106 00:26:13.074170 2762 memory_manager.go:186] "Starting memorymanager" policy="None"
Nov 6 00:26:13.074205 kubelet[2762]: I1106 00:26:13.074183 2762 state_mem.go:35] "Initializing new in-memory state store"
Nov 6 00:26:13.074317 kubelet[2762]: I1106 00:26:13.074302 2762 state_mem.go:75] "Updated machine memory state"
Nov 6 00:26:13.079866 kubelet[2762]: E1106 00:26:13.079653 2762 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Nov 6 00:26:13.079978 kubelet[2762]: I1106 00:26:13.079960 2762 eviction_manager.go:189] "Eviction manager: starting control loop"
Nov 6 00:26:13.080024 kubelet[2762]: I1106 00:26:13.079974 2762 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Nov 6 00:26:13.080235 kubelet[2762]: I1106 00:26:13.080215 2762 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Nov 6 00:26:13.082302 kubelet[2762]: E1106 00:26:13.082276 2762 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Nov 6 00:26:13.143578 kubelet[2762]: I1106 00:26:13.143505 2762 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Nov 6 00:26:13.143578 kubelet[2762]: I1106 00:26:13.143559 2762 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Nov 6 00:26:13.143790 kubelet[2762]: I1106 00:26:13.143613 2762 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Nov 6 00:26:13.190588 kubelet[2762]: I1106 00:26:13.190535 2762 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Nov 6 00:26:13.229445 kubelet[2762]: I1106 00:26:13.229261 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost"
Nov 6 00:26:13.229445 kubelet[2762]: I1106 00:26:13.229304 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost"
Nov 6 00:26:13.229445 kubelet[2762]: I1106 00:26:13.229322 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost"
Nov 6 00:26:13.229445 kubelet[2762]: I1106 00:26:13.229340 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d13d96f639b65e57f439b4396b605564-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d13d96f639b65e57f439b4396b605564\") " pod="kube-system/kube-scheduler-localhost"
Nov 6 00:26:13.229445 kubelet[2762]: I1106 00:26:13.229365 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/638d0d07c8377f6ac0172f027766c80d-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"638d0d07c8377f6ac0172f027766c80d\") " pod="kube-system/kube-apiserver-localhost"
Nov 6 00:26:13.229748 kubelet[2762]: I1106 00:26:13.229408 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/638d0d07c8377f6ac0172f027766c80d-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"638d0d07c8377f6ac0172f027766c80d\") " pod="kube-system/kube-apiserver-localhost"
Nov 6 00:26:13.229748 kubelet[2762]: I1106 00:26:13.229470 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost"
Nov 6 00:26:13.229748 kubelet[2762]: I1106 00:26:13.229553 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost"
Nov 6 00:26:13.229748 kubelet[2762]: I1106 00:26:13.229587 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/638d0d07c8377f6ac0172f027766c80d-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"638d0d07c8377f6ac0172f027766c80d\") " pod="kube-system/kube-apiserver-localhost"
Nov 6 00:26:13.317254 kubelet[2762]: E1106 00:26:13.317163 2762 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Nov 6 00:26:13.317720 kubelet[2762]: E1106 00:26:13.317697 2762 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Nov 6 00:26:13.320996 kubelet[2762]: I1106 00:26:13.320943 2762 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
Nov 6 00:26:13.321137 kubelet[2762]: I1106 00:26:13.321053 2762 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Nov 6 00:26:13.618343 kubelet[2762]: E1106 00:26:13.618289 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 6 00:26:13.618586 kubelet[2762]: E1106 00:26:13.618289 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 6 00:26:13.618586 kubelet[2762]: E1106 00:26:13.618389 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 6 00:26:14.014159 kubelet[2762]: I1106 00:26:14.014018 2762 apiserver.go:52] "Watching apiserver"
Nov 6 00:26:14.027780 kubelet[2762]: I1106 00:26:14.027735 2762 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Nov 6 00:26:14.057445 kubelet[2762]: I1106 00:26:14.057402 2762 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Nov 6 00:26:14.057605 kubelet[2762]: E1106 00:26:14.057496 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 6 00:26:14.057713 kubelet[2762]: E1106 00:26:14.057691 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 6 00:26:14.195195 kubelet[2762]: E1106 00:26:14.195116 2762 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Nov 6 00:26:14.195877 kubelet[2762]: E1106 00:26:14.195393 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 6 00:26:14.257490 sudo[2802]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Nov 6 00:26:14.257849 sudo[2802]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Nov 6 00:26:14.368316 kubelet[2762]: I1106 00:26:14.368231 2762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.368206282 podStartE2EDuration="1.368206282s" podCreationTimestamp="2025-11-06 00:26:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 00:26:14.321853567 +0000 UTC m=+1.362088006" watchObservedRunningTime="2025-11-06 00:26:14.368206282 +0000 UTC m=+1.408440711"
Nov 6 00:26:14.368510 kubelet[2762]: I1106 00:26:14.368356 2762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=4.368352979 podStartE2EDuration="4.368352979s" podCreationTimestamp="2025-11-06 00:26:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 00:26:14.367384625 +0000 UTC m=+1.407619054" watchObservedRunningTime="2025-11-06 00:26:14.368352979 +0000 UTC m=+1.408587408"
Nov 6 00:26:14.502899 kubelet[2762]: I1106 00:26:14.502809 2762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=4.502783658 podStartE2EDuration="4.502783658s" podCreationTimestamp="2025-11-06 00:26:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 00:26:14.502285323 +0000 UTC m=+1.542519752" watchObservedRunningTime="2025-11-06 00:26:14.502783658 +0000 UTC m=+1.543018087"
Nov 6 00:26:14.583944 sudo[2802]: pam_unix(sudo:session): session closed for user root
Nov 6 00:26:15.059288 kubelet[2762]: E1106 00:26:15.059221 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 6 00:26:15.059830 kubelet[2762]: E1106 00:26:15.059347 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 6 00:26:17.336571 sudo[1777]: pam_unix(sudo:session): session closed for user root
Nov 6 00:26:17.345984 sshd[1776]: Connection closed by 10.0.0.1 port 39628
Nov 6 00:26:17.355866 sshd-session[1773]: pam_unix(sshd:session): session closed for user core
Nov 6 00:26:17.380170 systemd[1]: sshd@8-10.0.0.113:22-10.0.0.1:39628.service: Deactivated successfully.
Nov 6 00:26:17.387348 systemd[1]: session-9.scope: Deactivated successfully.
Nov 6 00:26:17.388851 systemd[1]: session-9.scope: Consumed 5.788s CPU time, 264.6M memory peak.
Nov 6 00:26:17.391181 systemd-logind[1523]: Session 9 logged out. Waiting for processes to exit.
Nov 6 00:26:17.396936 systemd-logind[1523]: Removed session 9.
Nov 6 00:26:17.802816 kubelet[2762]: E1106 00:26:17.802294 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 6 00:26:18.080317 kubelet[2762]: E1106 00:26:18.079755 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 6 00:26:18.333706 kubelet[2762]: I1106 00:26:18.333570 2762 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Nov 6 00:26:18.334062 containerd[1541]: time="2025-11-06T00:26:18.334020641Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Nov 6 00:26:18.334515 kubelet[2762]: I1106 00:26:18.334186 2762 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Nov 6 00:26:19.251392 systemd[1]: Created slice kubepods-besteffort-poddbc7d5c4_e427_44e9_a843_e50d922229f9.slice - libcontainer container kubepods-besteffort-poddbc7d5c4_e427_44e9_a843_e50d922229f9.slice.
Nov 6 00:26:19.319914 kubelet[2762]: I1106 00:26:19.317456 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dbc7d5c4-e427-44e9-a843-e50d922229f9-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-w4khw\" (UID: \"dbc7d5c4-e427-44e9-a843-e50d922229f9\") " pod="kube-system/cilium-operator-6c4d7847fc-w4khw"
Nov 6 00:26:19.319914 kubelet[2762]: I1106 00:26:19.317524 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xwjp\" (UniqueName: \"kubernetes.io/projected/dbc7d5c4-e427-44e9-a843-e50d922229f9-kube-api-access-5xwjp\") pod \"cilium-operator-6c4d7847fc-w4khw\" (UID: \"dbc7d5c4-e427-44e9-a843-e50d922229f9\") " pod="kube-system/cilium-operator-6c4d7847fc-w4khw"
Nov 6 00:26:19.578419 systemd[1]: Created slice kubepods-burstable-pod60420992_95c1_4e3f_94a6_8591d0324a99.slice - libcontainer container kubepods-burstable-pod60420992_95c1_4e3f_94a6_8591d0324a99.slice.
Nov 6 00:26:19.600604 kubelet[2762]: E1106 00:26:19.598220 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 6 00:26:19.601223 containerd[1541]: time="2025-11-06T00:26:19.601183533Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-w4khw,Uid:dbc7d5c4-e427-44e9-a843-e50d922229f9,Namespace:kube-system,Attempt:0,}"
Nov 6 00:26:19.606467 systemd[1]: Created slice kubepods-besteffort-pod420b363b_2ac0_44b3_9706_9dd68f91a430.slice - libcontainer container kubepods-besteffort-pod420b363b_2ac0_44b3_9706_9dd68f91a430.slice.
Nov 6 00:26:19.625018 kubelet[2762]: I1106 00:26:19.624597 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/420b363b-2ac0-44b3-9706-9dd68f91a430-kube-proxy\") pod \"kube-proxy-ssf4f\" (UID: \"420b363b-2ac0-44b3-9706-9dd68f91a430\") " pod="kube-system/kube-proxy-ssf4f"
Nov 6 00:26:19.625018 kubelet[2762]: I1106 00:26:19.624648 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/60420992-95c1-4e3f-94a6-8591d0324a99-cilium-run\") pod \"cilium-k5ccd\" (UID: \"60420992-95c1-4e3f-94a6-8591d0324a99\") " pod="kube-system/cilium-k5ccd"
Nov 6 00:26:19.625018 kubelet[2762]: I1106 00:26:19.624670 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/60420992-95c1-4e3f-94a6-8591d0324a99-hostproc\") pod \"cilium-k5ccd\" (UID: \"60420992-95c1-4e3f-94a6-8591d0324a99\") " pod="kube-system/cilium-k5ccd"
Nov 6 00:26:19.625018 kubelet[2762]: I1106 00:26:19.624690 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/60420992-95c1-4e3f-94a6-8591d0324a99-xtables-lock\") pod \"cilium-k5ccd\" (UID: \"60420992-95c1-4e3f-94a6-8591d0324a99\") " pod="kube-system/cilium-k5ccd"
Nov 6 00:26:19.625018 kubelet[2762]: I1106 00:26:19.624715 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/60420992-95c1-4e3f-94a6-8591d0324a99-host-proc-sys-kernel\") pod \"cilium-k5ccd\" (UID: \"60420992-95c1-4e3f-94a6-8591d0324a99\") " pod="kube-system/cilium-k5ccd"
Nov 6 00:26:19.625018 kubelet[2762]: I1106 00:26:19.624756 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/60420992-95c1-4e3f-94a6-8591d0324a99-cilium-config-path\") pod \"cilium-k5ccd\" (UID: \"60420992-95c1-4e3f-94a6-8591d0324a99\") " pod="kube-system/cilium-k5ccd"
Nov 6 00:26:19.625601 kubelet[2762]: I1106 00:26:19.624776 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/60420992-95c1-4e3f-94a6-8591d0324a99-hubble-tls\") pod \"cilium-k5ccd\" (UID: \"60420992-95c1-4e3f-94a6-8591d0324a99\") " pod="kube-system/cilium-k5ccd"
Nov 6 00:26:19.625601 kubelet[2762]: I1106 00:26:19.624796 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lg75h\" (UniqueName: \"kubernetes.io/projected/60420992-95c1-4e3f-94a6-8591d0324a99-kube-api-access-lg75h\") pod \"cilium-k5ccd\" (UID: \"60420992-95c1-4e3f-94a6-8591d0324a99\") " pod="kube-system/cilium-k5ccd"
Nov 6 00:26:19.625601 kubelet[2762]: I1106 00:26:19.624820 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/420b363b-2ac0-44b3-9706-9dd68f91a430-xtables-lock\") pod \"kube-proxy-ssf4f\" (UID: \"420b363b-2ac0-44b3-9706-9dd68f91a430\") " pod="kube-system/kube-proxy-ssf4f"
Nov 6 00:26:19.625601 kubelet[2762]: I1106 00:26:19.624845 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/60420992-95c1-4e3f-94a6-8591d0324a99-cilium-cgroup\") pod \"cilium-k5ccd\" (UID: \"60420992-95c1-4e3f-94a6-8591d0324a99\") " pod="kube-system/cilium-k5ccd"
Nov 6 00:26:19.625601 kubelet[2762]: I1106 00:26:19.624896 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/60420992-95c1-4e3f-94a6-8591d0324a99-etc-cni-netd\") pod \"cilium-k5ccd\" (UID: \"60420992-95c1-4e3f-94a6-8591d0324a99\") " pod="kube-system/cilium-k5ccd"
Nov 6 00:26:19.625601 kubelet[2762]: I1106 00:26:19.624924 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/60420992-95c1-4e3f-94a6-8591d0324a99-bpf-maps\") pod \"cilium-k5ccd\" (UID: \"60420992-95c1-4e3f-94a6-8591d0324a99\") " pod="kube-system/cilium-k5ccd"
Nov 6 00:26:19.626158 kubelet[2762]: I1106 00:26:19.624947 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/60420992-95c1-4e3f-94a6-8591d0324a99-clustermesh-secrets\") pod \"cilium-k5ccd\" (UID: \"60420992-95c1-4e3f-94a6-8591d0324a99\") " pod="kube-system/cilium-k5ccd"
Nov 6 00:26:19.626158 kubelet[2762]: I1106 00:26:19.624975 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/60420992-95c1-4e3f-94a6-8591d0324a99-host-proc-sys-net\") pod \"cilium-k5ccd\" (UID: \"60420992-95c1-4e3f-94a6-8591d0324a99\") " pod="kube-system/cilium-k5ccd"
Nov 6 00:26:19.626158 kubelet[2762]: I1106 00:26:19.624994 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/420b363b-2ac0-44b3-9706-9dd68f91a430-lib-modules\") pod \"kube-proxy-ssf4f\" (UID: \"420b363b-2ac0-44b3-9706-9dd68f91a430\") " pod="kube-system/kube-proxy-ssf4f"
Nov 6 00:26:19.626158 kubelet[2762]: I1106 00:26:19.625013 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/60420992-95c1-4e3f-94a6-8591d0324a99-lib-modules\") pod \"cilium-k5ccd\" (UID: \"60420992-95c1-4e3f-94a6-8591d0324a99\") " pod="kube-system/cilium-k5ccd"
Nov 6 00:26:19.626158 kubelet[2762]: I1106 00:26:19.625034 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbw7k\" (UniqueName: \"kubernetes.io/projected/420b363b-2ac0-44b3-9706-9dd68f91a430-kube-api-access-tbw7k\") pod \"kube-proxy-ssf4f\" (UID: \"420b363b-2ac0-44b3-9706-9dd68f91a430\") " pod="kube-system/kube-proxy-ssf4f"
Nov 6 00:26:19.626322 kubelet[2762]: I1106 00:26:19.625054 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/60420992-95c1-4e3f-94a6-8591d0324a99-cni-path\") pod \"cilium-k5ccd\" (UID: \"60420992-95c1-4e3f-94a6-8591d0324a99\") " pod="kube-system/cilium-k5ccd"
Nov 6 00:26:19.831137 containerd[1541]: time="2025-11-06T00:26:19.829365950Z" level=info msg="connecting to shim b68479c2bc560ed7134dba8b8ad7686853511c88cb71dd5e1faa9950d7bd59b5" address="unix:///run/containerd/s/2480dc8df76d7a7921cb4f329cf851dcabfafddfb3b19b21834a5a196ee42df3" namespace=k8s.io protocol=ttrpc version=3
Nov 6 00:26:19.890346 kubelet[2762]: E1106 00:26:19.890048 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 6 00:26:19.892596 containerd[1541]: time="2025-11-06T00:26:19.891629561Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-k5ccd,Uid:60420992-95c1-4e3f-94a6-8591d0324a99,Namespace:kube-system,Attempt:0,}"
Nov 6 00:26:19.911760 kubelet[2762]: E1106 00:26:19.911681 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 6 00:26:19.917967 containerd[1541]: time="2025-11-06T00:26:19.914151001Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ssf4f,Uid:420b363b-2ac0-44b3-9706-9dd68f91a430,Namespace:kube-system,Attempt:0,}"
Nov 6 00:26:20.015175 systemd[1]: Started cri-containerd-b68479c2bc560ed7134dba8b8ad7686853511c88cb71dd5e1faa9950d7bd59b5.scope - libcontainer container b68479c2bc560ed7134dba8b8ad7686853511c88cb71dd5e1faa9950d7bd59b5.
Nov 6 00:26:20.095984 containerd[1541]: time="2025-11-06T00:26:20.095789880Z" level=info msg="connecting to shim aed88f98f2b53dc44a265839f8722142d78dff34d2e1eb9f9456a830740de968" address="unix:///run/containerd/s/b86c9bce5b7e650b4ec88ab9bc49611105ecb99bdd62786687c6968ede867b45" namespace=k8s.io protocol=ttrpc version=3
Nov 6 00:26:20.113589 containerd[1541]: time="2025-11-06T00:26:20.112844239Z" level=info msg="connecting to shim 5086b9ba72418ebcd66943092abf02b29f9fd8c007d764d9cd2a081fdc91fb08" address="unix:///run/containerd/s/2ac20f0ea3b73b88a7b418afb3a4620b89e3c5bb93b3dacb6093d565f180b278" namespace=k8s.io protocol=ttrpc version=3
Nov 6 00:26:20.279529 systemd[1]: Started cri-containerd-5086b9ba72418ebcd66943092abf02b29f9fd8c007d764d9cd2a081fdc91fb08.scope - libcontainer container 5086b9ba72418ebcd66943092abf02b29f9fd8c007d764d9cd2a081fdc91fb08.
Nov 6 00:26:20.287100 systemd[1]: Started cri-containerd-aed88f98f2b53dc44a265839f8722142d78dff34d2e1eb9f9456a830740de968.scope - libcontainer container aed88f98f2b53dc44a265839f8722142d78dff34d2e1eb9f9456a830740de968.
Nov 6 00:26:20.297264 containerd[1541]: time="2025-11-06T00:26:20.297194757Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-w4khw,Uid:dbc7d5c4-e427-44e9-a843-e50d922229f9,Namespace:kube-system,Attempt:0,} returns sandbox id \"b68479c2bc560ed7134dba8b8ad7686853511c88cb71dd5e1faa9950d7bd59b5\""
Nov 6 00:26:20.302227 kubelet[2762]: E1106 00:26:20.302007 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 6 00:26:20.306541 containerd[1541]: time="2025-11-06T00:26:20.306413817Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Nov 6 00:26:20.358208 containerd[1541]: time="2025-11-06T00:26:20.357819983Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-k5ccd,Uid:60420992-95c1-4e3f-94a6-8591d0324a99,Namespace:kube-system,Attempt:0,} returns sandbox id \"5086b9ba72418ebcd66943092abf02b29f9fd8c007d764d9cd2a081fdc91fb08\""
Nov 6 00:26:20.360480 kubelet[2762]: E1106 00:26:20.360359 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 6 00:26:20.384101 containerd[1541]: time="2025-11-06T00:26:20.383834922Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ssf4f,Uid:420b363b-2ac0-44b3-9706-9dd68f91a430,Namespace:kube-system,Attempt:0,} returns sandbox id \"aed88f98f2b53dc44a265839f8722142d78dff34d2e1eb9f9456a830740de968\""
Nov 6 00:26:20.388147 kubelet[2762]: E1106 00:26:20.386158 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 6 00:26:20.410280 containerd[1541]: time="2025-11-06T00:26:20.409915185Z" level=info msg="CreateContainer within sandbox \"aed88f98f2b53dc44a265839f8722142d78dff34d2e1eb9f9456a830740de968\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Nov 6 00:26:20.446409 containerd[1541]: time="2025-11-06T00:26:20.446309674Z" level=info msg="Container e3c9f0f05121a324b16e28756876a1be463ef5d4ba6cb31d6718b8e982690da9: CDI devices from CRI Config.CDIDevices: []"
Nov 6 00:26:20.461976 containerd[1541]: time="2025-11-06T00:26:20.461909362Z" level=info msg="CreateContainer within sandbox \"aed88f98f2b53dc44a265839f8722142d78dff34d2e1eb9f9456a830740de968\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e3c9f0f05121a324b16e28756876a1be463ef5d4ba6cb31d6718b8e982690da9\""
Nov 6 00:26:20.462991 containerd[1541]: time="2025-11-06T00:26:20.462948326Z" level=info msg="StartContainer for \"e3c9f0f05121a324b16e28756876a1be463ef5d4ba6cb31d6718b8e982690da9\""
Nov 6 00:26:20.469223 containerd[1541]: time="2025-11-06T00:26:20.469113818Z" level=info msg="connecting to shim e3c9f0f05121a324b16e28756876a1be463ef5d4ba6cb31d6718b8e982690da9" address="unix:///run/containerd/s/b86c9bce5b7e650b4ec88ab9bc49611105ecb99bdd62786687c6968ede867b45" protocol=ttrpc version=3
Nov 6 00:26:20.597290 systemd[1]: Started cri-containerd-e3c9f0f05121a324b16e28756876a1be463ef5d4ba6cb31d6718b8e982690da9.scope - libcontainer container e3c9f0f05121a324b16e28756876a1be463ef5d4ba6cb31d6718b8e982690da9.
Nov 6 00:26:20.760817 kubelet[2762]: E1106 00:26:20.760604 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 6 00:26:20.770736 containerd[1541]: time="2025-11-06T00:26:20.769059127Z" level=info msg="StartContainer for \"e3c9f0f05121a324b16e28756876a1be463ef5d4ba6cb31d6718b8e982690da9\" returns successfully"
Nov 6 00:26:21.105207 kubelet[2762]: E1106 00:26:21.102516 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 6 00:26:21.122999 kubelet[2762]: E1106 00:26:21.118165 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 6 00:26:21.231979 kubelet[2762]: I1106 00:26:21.231217 2762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-ssf4f" podStartSLOduration=2.231197348 podStartE2EDuration="2.231197348s" podCreationTimestamp="2025-11-06 00:26:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 00:26:21.231105691 +0000 UTC m=+8.271340130" watchObservedRunningTime="2025-11-06 00:26:21.231197348 +0000 UTC m=+8.271431767"
Nov 6 00:26:22.118176 kubelet[2762]: E1106 00:26:22.117728 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 6 00:26:22.842045 kubelet[2762]: E1106 00:26:22.841917 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 6 00:26:23.121311 kubelet[2762]: E1106 00:26:23.120201 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 6 00:26:24.126124 kubelet[2762]: E1106 00:26:24.122921 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 6 00:26:24.206649 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1261259483.mount: Deactivated successfully.
Nov 6 00:26:25.808735 containerd[1541]: time="2025-11-06T00:26:25.808645348Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 6 00:26:25.810907 containerd[1541]: time="2025-11-06T00:26:25.810816295Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
Nov 6 00:26:25.813934 containerd[1541]: time="2025-11-06T00:26:25.813659378Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 6 00:26:25.819037 containerd[1541]: time="2025-11-06T00:26:25.818954560Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 5.512431462s"
Nov 6 00:26:25.819037 containerd[1541]: time="2025-11-06T00:26:25.819025697Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Nov 6 00:26:25.827660 containerd[1541]: time="2025-11-06T00:26:25.823650256Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Nov 6 00:26:25.839213 containerd[1541]: time="2025-11-06T00:26:25.838794633Z" level=info msg="CreateContainer within sandbox \"b68479c2bc560ed7134dba8b8ad7686853511c88cb71dd5e1faa9950d7bd59b5\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Nov 6 00:26:25.873068 containerd[1541]: time="2025-11-06T00:26:25.871476681Z" level=info msg="Container 367c1374b284454f3546b80da347b02b062f460b67edb09732e5cbcd20eba489: CDI devices from CRI Config.CDIDevices: []"
Nov 6 00:26:25.883437 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3585635016.mount: Deactivated successfully.
Nov 6 00:26:25.894367 containerd[1541]: time="2025-11-06T00:26:25.894282995Z" level=info msg="CreateContainer within sandbox \"b68479c2bc560ed7134dba8b8ad7686853511c88cb71dd5e1faa9950d7bd59b5\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"367c1374b284454f3546b80da347b02b062f460b67edb09732e5cbcd20eba489\""
Nov 6 00:26:25.909210 containerd[1541]: time="2025-11-06T00:26:25.896012380Z" level=info msg="StartContainer for \"367c1374b284454f3546b80da347b02b062f460b67edb09732e5cbcd20eba489\""
Nov 6 00:26:25.909210 containerd[1541]: time="2025-11-06T00:26:25.897178930Z" level=info msg="connecting to shim 367c1374b284454f3546b80da347b02b062f460b67edb09732e5cbcd20eba489" address="unix:///run/containerd/s/2480dc8df76d7a7921cb4f329cf851dcabfafddfb3b19b21834a5a196ee42df3" protocol=ttrpc version=3
Nov 6 00:26:25.991276 systemd[1]: Started cri-containerd-367c1374b284454f3546b80da347b02b062f460b67edb09732e5cbcd20eba489.scope - libcontainer container 367c1374b284454f3546b80da347b02b062f460b67edb09732e5cbcd20eba489.
Nov 6 00:26:26.207575 containerd[1541]: time="2025-11-06T00:26:26.207411151Z" level=info msg="StartContainer for \"367c1374b284454f3546b80da347b02b062f460b67edb09732e5cbcd20eba489\" returns successfully"
Nov 6 00:26:27.214695 kubelet[2762]: E1106 00:26:27.214164 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 6 00:26:28.216932 kubelet[2762]: E1106 00:26:28.216866 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 6 00:26:36.022491 kernel: hrtimer: interrupt took 9645468 ns
Nov 6 00:26:39.734664 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4207942151.mount: Deactivated successfully.
Nov 6 00:26:45.089098 containerd[1541]: time="2025-11-06T00:26:45.088875912Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 6 00:26:45.142636 containerd[1541]: time="2025-11-06T00:26:45.142524037Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503"
Nov 6 00:26:45.201380 containerd[1541]: time="2025-11-06T00:26:45.201270238Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 6 00:26:45.203345 containerd[1541]: time="2025-11-06T00:26:45.203252404Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 19.376071699s"
Nov 6 00:26:45.203345 containerd[1541]: time="2025-11-06T00:26:45.203308400Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Nov 6 00:26:45.330396 containerd[1541]: time="2025-11-06T00:26:45.330337104Z" level=info msg="CreateContainer within sandbox \"5086b9ba72418ebcd66943092abf02b29f9fd8c007d764d9cd2a081fdc91fb08\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Nov 6 00:26:45.454979 containerd[1541]: time="2025-11-06T00:26:45.454843620Z" level=info msg="Container 2568ce2b88f0de9d0d0100333fd51d8618bd044cddf3aa072d89c4b9b0ba4ced: CDI devices from CRI Config.CDIDevices: []"
Nov 6 00:26:45.465909 containerd[1541]: time="2025-11-06T00:26:45.465823128Z" level=info msg="CreateContainer within sandbox \"5086b9ba72418ebcd66943092abf02b29f9fd8c007d764d9cd2a081fdc91fb08\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"2568ce2b88f0de9d0d0100333fd51d8618bd044cddf3aa072d89c4b9b0ba4ced\""
Nov 6 00:26:45.467157 containerd[1541]: time="2025-11-06T00:26:45.467103653Z" level=info msg="StartContainer for \"2568ce2b88f0de9d0d0100333fd51d8618bd044cddf3aa072d89c4b9b0ba4ced\""
Nov 6 00:26:45.468246 containerd[1541]: time="2025-11-06T00:26:45.468217591Z" level=info msg="connecting to shim 2568ce2b88f0de9d0d0100333fd51d8618bd044cddf3aa072d89c4b9b0ba4ced" address="unix:///run/containerd/s/2ac20f0ea3b73b88a7b418afb3a4620b89e3c5bb93b3dacb6093d565f180b278" protocol=ttrpc version=3
Nov 6 00:26:45.498188 systemd[1]: Started cri-containerd-2568ce2b88f0de9d0d0100333fd51d8618bd044cddf3aa072d89c4b9b0ba4ced.scope - libcontainer container 2568ce2b88f0de9d0d0100333fd51d8618bd044cddf3aa072d89c4b9b0ba4ced.
Nov 6 00:26:45.540036 containerd[1541]: time="2025-11-06T00:26:45.539974759Z" level=info msg="StartContainer for \"2568ce2b88f0de9d0d0100333fd51d8618bd044cddf3aa072d89c4b9b0ba4ced\" returns successfully"
Nov 6 00:26:45.552030 systemd[1]: cri-containerd-2568ce2b88f0de9d0d0100333fd51d8618bd044cddf3aa072d89c4b9b0ba4ced.scope: Deactivated successfully.
Nov 6 00:26:45.555142 containerd[1541]: time="2025-11-06T00:26:45.555088542Z" level=info msg="received exit event container_id:\"2568ce2b88f0de9d0d0100333fd51d8618bd044cddf3aa072d89c4b9b0ba4ced\" id:\"2568ce2b88f0de9d0d0100333fd51d8618bd044cddf3aa072d89c4b9b0ba4ced\" pid:3236 exited_at:{seconds:1762388805 nanos:554492765}"
Nov 6 00:26:45.555300 containerd[1541]: time="2025-11-06T00:26:45.555153366Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2568ce2b88f0de9d0d0100333fd51d8618bd044cddf3aa072d89c4b9b0ba4ced\" id:\"2568ce2b88f0de9d0d0100333fd51d8618bd044cddf3aa072d89c4b9b0ba4ced\" pid:3236 exited_at:{seconds:1762388805 nanos:554492765}"
Nov 6 00:26:45.582648 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2568ce2b88f0de9d0d0100333fd51d8618bd044cddf3aa072d89c4b9b0ba4ced-rootfs.mount: Deactivated successfully.
Nov 6 00:26:46.299341 kubelet[2762]: E1106 00:26:46.299283 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 6 00:26:46.306368 containerd[1541]: time="2025-11-06T00:26:46.306320974Z" level=info msg="CreateContainer within sandbox \"5086b9ba72418ebcd66943092abf02b29f9fd8c007d764d9cd2a081fdc91fb08\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Nov 6 00:26:46.360260 containerd[1541]: time="2025-11-06T00:26:46.360186840Z" level=info msg="Container 23cc4fbf9a5d0b2e8e0ef436b4832e9b8bb111266a4829152bc9557f0b0e595b: CDI devices from CRI Config.CDIDevices: []"
Nov 6 00:26:46.362277 kubelet[2762]: I1106 00:26:46.361767 2762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-w4khw" podStartSLOduration=22.844110972 podStartE2EDuration="28.361734434s" podCreationTimestamp="2025-11-06 00:26:18 +0000 UTC" firstStartedPulling="2025-11-06 00:26:20.305828662 +0000 UTC m=+7.346063091" lastFinishedPulling="2025-11-06 00:26:25.823452114 +0000 UTC m=+12.863686553" observedRunningTime="2025-11-06 00:26:27.759903063 +0000 UTC m=+14.800137502" watchObservedRunningTime="2025-11-06 00:26:46.361734434 +0000 UTC m=+33.401968874"
Nov 6 00:26:46.369900 containerd[1541]: time="2025-11-06T00:26:46.369826285Z" level=info msg="CreateContainer within sandbox \"5086b9ba72418ebcd66943092abf02b29f9fd8c007d764d9cd2a081fdc91fb08\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"23cc4fbf9a5d0b2e8e0ef436b4832e9b8bb111266a4829152bc9557f0b0e595b\""
Nov 6 00:26:46.370568 containerd[1541]: time="2025-11-06T00:26:46.370520951Z" level=info msg="StartContainer for \"23cc4fbf9a5d0b2e8e0ef436b4832e9b8bb111266a4829152bc9557f0b0e595b\""
Nov 6 00:26:46.371979 containerd[1541]: time="2025-11-06T00:26:46.371927847Z" level=info msg="connecting to shim 23cc4fbf9a5d0b2e8e0ef436b4832e9b8bb111266a4829152bc9557f0b0e595b" address="unix:///run/containerd/s/2ac20f0ea3b73b88a7b418afb3a4620b89e3c5bb93b3dacb6093d565f180b278" protocol=ttrpc version=3
Nov 6 00:26:46.401314 systemd[1]: Started cri-containerd-23cc4fbf9a5d0b2e8e0ef436b4832e9b8bb111266a4829152bc9557f0b0e595b.scope - libcontainer container 23cc4fbf9a5d0b2e8e0ef436b4832e9b8bb111266a4829152bc9557f0b0e595b.
Nov 6 00:26:46.440714 containerd[1541]: time="2025-11-06T00:26:46.440651651Z" level=info msg="StartContainer for \"23cc4fbf9a5d0b2e8e0ef436b4832e9b8bb111266a4829152bc9557f0b0e595b\" returns successfully"
Nov 6 00:26:46.466067 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 6 00:26:46.466694 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Nov 6 00:26:46.469188 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Nov 6 00:26:46.471927 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 6 00:26:46.473900 containerd[1541]: time="2025-11-06T00:26:46.473779172Z" level=info msg="received exit event container_id:\"23cc4fbf9a5d0b2e8e0ef436b4832e9b8bb111266a4829152bc9557f0b0e595b\" id:\"23cc4fbf9a5d0b2e8e0ef436b4832e9b8bb111266a4829152bc9557f0b0e595b\" pid:3280 exited_at:{seconds:1762388806 nanos:473421770}" Nov 6 00:26:46.474073 containerd[1541]: time="2025-11-06T00:26:46.474035632Z" level=info msg="TaskExit event in podsandbox handler container_id:\"23cc4fbf9a5d0b2e8e0ef436b4832e9b8bb111266a4829152bc9557f0b0e595b\" id:\"23cc4fbf9a5d0b2e8e0ef436b4832e9b8bb111266a4829152bc9557f0b0e595b\" pid:3280 exited_at:{seconds:1762388806 nanos:473421770}" Nov 6 00:26:46.474398 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Nov 6 00:26:46.475297 systemd[1]: cri-containerd-23cc4fbf9a5d0b2e8e0ef436b4832e9b8bb111266a4829152bc9557f0b0e595b.scope: Deactivated successfully. Nov 6 00:26:46.517257 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-23cc4fbf9a5d0b2e8e0ef436b4832e9b8bb111266a4829152bc9557f0b0e595b-rootfs.mount: Deactivated successfully. Nov 6 00:26:46.518720 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Nov 6 00:26:47.303915 kubelet[2762]: E1106 00:26:47.303727 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:26:47.313149 containerd[1541]: time="2025-11-06T00:26:47.313083631Z" level=info msg="CreateContainer within sandbox \"5086b9ba72418ebcd66943092abf02b29f9fd8c007d764d9cd2a081fdc91fb08\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Nov 6 00:26:47.333626 containerd[1541]: time="2025-11-06T00:26:47.333559598Z" level=info msg="Container b02671d13318eac3cf03b249f683e077eab0120d4a813ae647ca834f9820c82e: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:26:47.349618 containerd[1541]: time="2025-11-06T00:26:47.349544712Z" level=info msg="CreateContainer within sandbox \"5086b9ba72418ebcd66943092abf02b29f9fd8c007d764d9cd2a081fdc91fb08\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b02671d13318eac3cf03b249f683e077eab0120d4a813ae647ca834f9820c82e\"" Nov 6 00:26:47.350394 containerd[1541]: time="2025-11-06T00:26:47.350320873Z" level=info msg="StartContainer for \"b02671d13318eac3cf03b249f683e077eab0120d4a813ae647ca834f9820c82e\"" Nov 6 00:26:47.352366 containerd[1541]: time="2025-11-06T00:26:47.352306312Z" level=info msg="connecting to shim b02671d13318eac3cf03b249f683e077eab0120d4a813ae647ca834f9820c82e" address="unix:///run/containerd/s/2ac20f0ea3b73b88a7b418afb3a4620b89e3c5bb93b3dacb6093d565f180b278" protocol=ttrpc version=3 Nov 6 00:26:47.377092 systemd[1]: Started cri-containerd-b02671d13318eac3cf03b249f683e077eab0120d4a813ae647ca834f9820c82e.scope - libcontainer container b02671d13318eac3cf03b249f683e077eab0120d4a813ae647ca834f9820c82e. Nov 6 00:26:47.426254 systemd[1]: cri-containerd-b02671d13318eac3cf03b249f683e077eab0120d4a813ae647ca834f9820c82e.scope: Deactivated successfully. 
Nov 6 00:26:47.427395 containerd[1541]: time="2025-11-06T00:26:47.427326258Z" level=info msg="received exit event container_id:\"b02671d13318eac3cf03b249f683e077eab0120d4a813ae647ca834f9820c82e\" id:\"b02671d13318eac3cf03b249f683e077eab0120d4a813ae647ca834f9820c82e\" pid:3326 exited_at:{seconds:1762388807 nanos:427018830}" Nov 6 00:26:47.427532 containerd[1541]: time="2025-11-06T00:26:47.427501111Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b02671d13318eac3cf03b249f683e077eab0120d4a813ae647ca834f9820c82e\" id:\"b02671d13318eac3cf03b249f683e077eab0120d4a813ae647ca834f9820c82e\" pid:3326 exited_at:{seconds:1762388807 nanos:427018830}" Nov 6 00:26:47.443084 containerd[1541]: time="2025-11-06T00:26:47.442880669Z" level=info msg="StartContainer for \"b02671d13318eac3cf03b249f683e077eab0120d4a813ae647ca834f9820c82e\" returns successfully" Nov 6 00:26:47.469853 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b02671d13318eac3cf03b249f683e077eab0120d4a813ae647ca834f9820c82e-rootfs.mount: Deactivated successfully. Nov 6 00:26:48.310197 kubelet[2762]: E1106 00:26:48.310156 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:26:48.368766 containerd[1541]: time="2025-11-06T00:26:48.368668170Z" level=info msg="CreateContainer within sandbox \"5086b9ba72418ebcd66943092abf02b29f9fd8c007d764d9cd2a081fdc91fb08\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Nov 6 00:26:48.627213 containerd[1541]: time="2025-11-06T00:26:48.627159690Z" level=info msg="Container 85900c7bf821baab339e33895b5a861c3935c98b57cc8723ff373440c2dee911: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:26:48.631960 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3656589455.mount: Deactivated successfully. 
Nov 6 00:26:48.837414 containerd[1541]: time="2025-11-06T00:26:48.837353234Z" level=info msg="CreateContainer within sandbox \"5086b9ba72418ebcd66943092abf02b29f9fd8c007d764d9cd2a081fdc91fb08\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"85900c7bf821baab339e33895b5a861c3935c98b57cc8723ff373440c2dee911\"" Nov 6 00:26:48.838009 containerd[1541]: time="2025-11-06T00:26:48.837980601Z" level=info msg="StartContainer for \"85900c7bf821baab339e33895b5a861c3935c98b57cc8723ff373440c2dee911\"" Nov 6 00:26:48.839063 containerd[1541]: time="2025-11-06T00:26:48.839033731Z" level=info msg="connecting to shim 85900c7bf821baab339e33895b5a861c3935c98b57cc8723ff373440c2dee911" address="unix:///run/containerd/s/2ac20f0ea3b73b88a7b418afb3a4620b89e3c5bb93b3dacb6093d565f180b278" protocol=ttrpc version=3 Nov 6 00:26:48.860102 systemd[1]: Started cri-containerd-85900c7bf821baab339e33895b5a861c3935c98b57cc8723ff373440c2dee911.scope - libcontainer container 85900c7bf821baab339e33895b5a861c3935c98b57cc8723ff373440c2dee911. Nov 6 00:26:48.895444 systemd[1]: cri-containerd-85900c7bf821baab339e33895b5a861c3935c98b57cc8723ff373440c2dee911.scope: Deactivated successfully. 
Nov 6 00:26:48.897096 containerd[1541]: time="2025-11-06T00:26:48.897044715Z" level=info msg="TaskExit event in podsandbox handler container_id:\"85900c7bf821baab339e33895b5a861c3935c98b57cc8723ff373440c2dee911\" id:\"85900c7bf821baab339e33895b5a861c3935c98b57cc8723ff373440c2dee911\" pid:3365 exited_at:{seconds:1762388808 nanos:895836037}" Nov 6 00:26:49.136942 containerd[1541]: time="2025-11-06T00:26:49.136860913Z" level=info msg="received exit event container_id:\"85900c7bf821baab339e33895b5a861c3935c98b57cc8723ff373440c2dee911\" id:\"85900c7bf821baab339e33895b5a861c3935c98b57cc8723ff373440c2dee911\" pid:3365 exited_at:{seconds:1762388808 nanos:895836037}" Nov 6 00:26:49.144770 containerd[1541]: time="2025-11-06T00:26:49.144729718Z" level=info msg="StartContainer for \"85900c7bf821baab339e33895b5a861c3935c98b57cc8723ff373440c2dee911\" returns successfully" Nov 6 00:26:49.157840 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-85900c7bf821baab339e33895b5a861c3935c98b57cc8723ff373440c2dee911-rootfs.mount: Deactivated successfully. 
Nov 6 00:26:49.314835 kubelet[2762]: E1106 00:26:49.314788 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:26:50.321768 kubelet[2762]: E1106 00:26:50.321706 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:26:50.544849 containerd[1541]: time="2025-11-06T00:26:50.544784175Z" level=info msg="CreateContainer within sandbox \"5086b9ba72418ebcd66943092abf02b29f9fd8c007d764d9cd2a081fdc91fb08\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Nov 6 00:26:50.859389 containerd[1541]: time="2025-11-06T00:26:50.859323759Z" level=info msg="Container c612caac88f2161c303529ed7d9694074a6893ba911d7af89a392bd41c338cd3: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:26:50.863867 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2988139844.mount: Deactivated successfully. 
Nov 6 00:26:50.972840 containerd[1541]: time="2025-11-06T00:26:50.972761354Z" level=info msg="CreateContainer within sandbox \"5086b9ba72418ebcd66943092abf02b29f9fd8c007d764d9cd2a081fdc91fb08\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"c612caac88f2161c303529ed7d9694074a6893ba911d7af89a392bd41c338cd3\"" Nov 6 00:26:50.973562 containerd[1541]: time="2025-11-06T00:26:50.973526343Z" level=info msg="StartContainer for \"c612caac88f2161c303529ed7d9694074a6893ba911d7af89a392bd41c338cd3\"" Nov 6 00:26:50.974759 containerd[1541]: time="2025-11-06T00:26:50.974672119Z" level=info msg="connecting to shim c612caac88f2161c303529ed7d9694074a6893ba911d7af89a392bd41c338cd3" address="unix:///run/containerd/s/2ac20f0ea3b73b88a7b418afb3a4620b89e3c5bb93b3dacb6093d565f180b278" protocol=ttrpc version=3 Nov 6 00:26:51.001087 systemd[1]: Started cri-containerd-c612caac88f2161c303529ed7d9694074a6893ba911d7af89a392bd41c338cd3.scope - libcontainer container c612caac88f2161c303529ed7d9694074a6893ba911d7af89a392bd41c338cd3. 
Nov 6 00:26:51.129572 containerd[1541]: time="2025-11-06T00:26:51.129372342Z" level=info msg="StartContainer for \"c612caac88f2161c303529ed7d9694074a6893ba911d7af89a392bd41c338cd3\" returns successfully" Nov 6 00:26:51.254810 containerd[1541]: time="2025-11-06T00:26:51.254604948Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c612caac88f2161c303529ed7d9694074a6893ba911d7af89a392bd41c338cd3\" id:\"01a82640369c32cf2aa8470eb480c222ff113b71e8d5bacbcf0e47acd9942617\" pid:3442 exited_at:{seconds:1762388811 nanos:254140342}" Nov 6 00:26:51.291284 kubelet[2762]: I1106 00:26:51.291241 2762 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Nov 6 00:26:51.329113 kubelet[2762]: E1106 00:26:51.329027 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:26:51.572194 kubelet[2762]: I1106 00:26:51.572119 2762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-k5ccd" podStartSLOduration=7.7310211859999995 podStartE2EDuration="32.572084849s" podCreationTimestamp="2025-11-06 00:26:19 +0000 UTC" firstStartedPulling="2025-11-06 00:26:20.363164714 +0000 UTC m=+7.403399143" lastFinishedPulling="2025-11-06 00:26:45.204228367 +0000 UTC m=+32.244462806" observedRunningTime="2025-11-06 00:26:51.451125804 +0000 UTC m=+38.491360253" watchObservedRunningTime="2025-11-06 00:26:51.572084849 +0000 UTC m=+38.612319278" Nov 6 00:26:51.767674 systemd[1]: Created slice kubepods-burstable-pod687b5ad9_1088_486d_b4ce_6a2efc8de543.slice - libcontainer container kubepods-burstable-pod687b5ad9_1088_486d_b4ce_6a2efc8de543.slice. 
Nov 6 00:26:51.836801 kubelet[2762]: I1106 00:26:51.836596 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5x5w6\" (UniqueName: \"kubernetes.io/projected/687b5ad9-1088-486d-b4ce-6a2efc8de543-kube-api-access-5x5w6\") pod \"coredns-674b8bbfcf-rbz25\" (UID: \"687b5ad9-1088-486d-b4ce-6a2efc8de543\") " pod="kube-system/coredns-674b8bbfcf-rbz25" Nov 6 00:26:51.836801 kubelet[2762]: I1106 00:26:51.836705 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/687b5ad9-1088-486d-b4ce-6a2efc8de543-config-volume\") pod \"coredns-674b8bbfcf-rbz25\" (UID: \"687b5ad9-1088-486d-b4ce-6a2efc8de543\") " pod="kube-system/coredns-674b8bbfcf-rbz25" Nov 6 00:26:51.894559 systemd[1]: Created slice kubepods-burstable-pod36d8cd51_e29a_4732_b2dc_07e945bf3283.slice - libcontainer container kubepods-burstable-pod36d8cd51_e29a_4732_b2dc_07e945bf3283.slice. 
Nov 6 00:26:51.936958 kubelet[2762]: I1106 00:26:51.936906 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vjff\" (UniqueName: \"kubernetes.io/projected/36d8cd51-e29a-4732-b2dc-07e945bf3283-kube-api-access-9vjff\") pod \"coredns-674b8bbfcf-s8kpj\" (UID: \"36d8cd51-e29a-4732-b2dc-07e945bf3283\") " pod="kube-system/coredns-674b8bbfcf-s8kpj" Nov 6 00:26:51.936958 kubelet[2762]: I1106 00:26:51.936961 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/36d8cd51-e29a-4732-b2dc-07e945bf3283-config-volume\") pod \"coredns-674b8bbfcf-s8kpj\" (UID: \"36d8cd51-e29a-4732-b2dc-07e945bf3283\") " pod="kube-system/coredns-674b8bbfcf-s8kpj" Nov 6 00:26:52.072115 kubelet[2762]: E1106 00:26:52.072075 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:26:52.073030 containerd[1541]: time="2025-11-06T00:26:52.072981509Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-rbz25,Uid:687b5ad9-1088-486d-b4ce-6a2efc8de543,Namespace:kube-system,Attempt:0,}" Nov 6 00:26:52.197785 kubelet[2762]: E1106 00:26:52.197430 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:26:52.198511 containerd[1541]: time="2025-11-06T00:26:52.198406895Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-s8kpj,Uid:36d8cd51-e29a-4732-b2dc-07e945bf3283,Namespace:kube-system,Attempt:0,}" Nov 6 00:26:52.330773 kubelet[2762]: E1106 00:26:52.330721 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 
00:26:53.333569 kubelet[2762]: E1106 00:26:53.333526 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:26:53.412873 systemd-networkd[1460]: cilium_host: Link UP Nov 6 00:26:53.413803 systemd-networkd[1460]: cilium_net: Link UP Nov 6 00:26:53.415064 systemd-networkd[1460]: cilium_net: Gained carrier Nov 6 00:26:53.415258 systemd-networkd[1460]: cilium_host: Gained carrier Nov 6 00:26:53.526433 systemd-networkd[1460]: cilium_vxlan: Link UP Nov 6 00:26:53.526444 systemd-networkd[1460]: cilium_vxlan: Gained carrier Nov 6 00:26:53.746934 kernel: NET: Registered PF_ALG protocol family Nov 6 00:26:54.145160 systemd-networkd[1460]: cilium_host: Gained IPv6LL Nov 6 00:26:54.273092 systemd-networkd[1460]: cilium_net: Gained IPv6LL Nov 6 00:26:54.480950 systemd-networkd[1460]: lxc_health: Link UP Nov 6 00:26:54.492473 systemd-networkd[1460]: lxc_health: Gained carrier Nov 6 00:26:54.684195 systemd-networkd[1460]: lxc965666912bac: Link UP Nov 6 00:26:54.697924 kernel: eth0: renamed from tmp9c225 Nov 6 00:26:54.698080 systemd-networkd[1460]: lxc965666912bac: Gained carrier Nov 6 00:26:54.913206 systemd-networkd[1460]: cilium_vxlan: Gained IPv6LL Nov 6 00:26:55.053455 systemd-networkd[1460]: lxc57cbbfb81cc6: Link UP Nov 6 00:26:55.064231 kernel: eth0: renamed from tmp73e3c Nov 6 00:26:55.066270 systemd-networkd[1460]: lxc57cbbfb81cc6: Gained carrier Nov 6 00:26:55.745575 systemd-networkd[1460]: lxc_health: Gained IPv6LL Nov 6 00:26:55.890907 kubelet[2762]: E1106 00:26:55.890613 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:26:56.321205 systemd-networkd[1460]: lxc57cbbfb81cc6: Gained IPv6LL Nov 6 00:26:56.338474 kubelet[2762]: E1106 00:26:56.338428 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:26:56.705175 systemd-networkd[1460]: lxc965666912bac: Gained IPv6LL Nov 6 00:26:57.348583 kubelet[2762]: E1106 00:26:57.344802 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:27:00.873919 containerd[1541]: time="2025-11-06T00:27:00.873376775Z" level=info msg="connecting to shim 73e3c7089b8e05ef76fa964476dc10dbdb12f9331eb1e339859a917be922e394" address="unix:///run/containerd/s/e2acb8c5f5c71d4e2be4e394f0793b2be0af85dd1938e8c31c8c38348078231c" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:27:01.006242 systemd[1]: Started cri-containerd-73e3c7089b8e05ef76fa964476dc10dbdb12f9331eb1e339859a917be922e394.scope - libcontainer container 73e3c7089b8e05ef76fa964476dc10dbdb12f9331eb1e339859a917be922e394. Nov 6 00:27:01.048724 containerd[1541]: time="2025-11-06T00:27:01.048613707Z" level=info msg="connecting to shim 9c225e960591474e336f4dea73241f54649a4219bbd58ab4fbbc569541c97ad6" address="unix:///run/containerd/s/f47b0e13bc1fc63130106415453e535ccd84db90d53490da82ff5eb8afa0ad9d" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:27:01.068018 systemd-resolved[1391]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 6 00:27:01.123112 systemd[1]: Started cri-containerd-9c225e960591474e336f4dea73241f54649a4219bbd58ab4fbbc569541c97ad6.scope - libcontainer container 9c225e960591474e336f4dea73241f54649a4219bbd58ab4fbbc569541c97ad6. 
Nov 6 00:27:01.184378 containerd[1541]: time="2025-11-06T00:27:01.183772349Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-s8kpj,Uid:36d8cd51-e29a-4732-b2dc-07e945bf3283,Namespace:kube-system,Attempt:0,} returns sandbox id \"73e3c7089b8e05ef76fa964476dc10dbdb12f9331eb1e339859a917be922e394\"" Nov 6 00:27:01.184268 systemd-resolved[1391]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 6 00:27:01.197840 kubelet[2762]: E1106 00:27:01.196829 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:27:01.217741 containerd[1541]: time="2025-11-06T00:27:01.217196144Z" level=info msg="CreateContainer within sandbox \"73e3c7089b8e05ef76fa964476dc10dbdb12f9331eb1e339859a917be922e394\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 6 00:27:01.298836 containerd[1541]: time="2025-11-06T00:27:01.298761380Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-rbz25,Uid:687b5ad9-1088-486d-b4ce-6a2efc8de543,Namespace:kube-system,Attempt:0,} returns sandbox id \"9c225e960591474e336f4dea73241f54649a4219bbd58ab4fbbc569541c97ad6\"" Nov 6 00:27:01.301352 kubelet[2762]: E1106 00:27:01.300830 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:27:01.333989 containerd[1541]: time="2025-11-06T00:27:01.331929777Z" level=info msg="Container 1224ca80fb688acff2e101e2d6f420f2a4122ab01acd61eeaa0766c2ea947c3b: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:27:01.357169 containerd[1541]: time="2025-11-06T00:27:01.355270635Z" level=info msg="CreateContainer within sandbox \"9c225e960591474e336f4dea73241f54649a4219bbd58ab4fbbc569541c97ad6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 6 
00:27:01.381726 containerd[1541]: time="2025-11-06T00:27:01.381672703Z" level=info msg="CreateContainer within sandbox \"73e3c7089b8e05ef76fa964476dc10dbdb12f9331eb1e339859a917be922e394\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1224ca80fb688acff2e101e2d6f420f2a4122ab01acd61eeaa0766c2ea947c3b\"" Nov 6 00:27:01.395084 containerd[1541]: time="2025-11-06T00:27:01.392047819Z" level=info msg="StartContainer for \"1224ca80fb688acff2e101e2d6f420f2a4122ab01acd61eeaa0766c2ea947c3b\"" Nov 6 00:27:01.395084 containerd[1541]: time="2025-11-06T00:27:01.393369307Z" level=info msg="connecting to shim 1224ca80fb688acff2e101e2d6f420f2a4122ab01acd61eeaa0766c2ea947c3b" address="unix:///run/containerd/s/e2acb8c5f5c71d4e2be4e394f0793b2be0af85dd1938e8c31c8c38348078231c" protocol=ttrpc version=3 Nov 6 00:27:01.435112 containerd[1541]: time="2025-11-06T00:27:01.433875427Z" level=info msg="Container 2d20b3184a8f38d0bc77e736012298fa503e5db602a37ef32d5823ef61197655: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:27:01.469435 systemd[1]: Started cri-containerd-1224ca80fb688acff2e101e2d6f420f2a4122ab01acd61eeaa0766c2ea947c3b.scope - libcontainer container 1224ca80fb688acff2e101e2d6f420f2a4122ab01acd61eeaa0766c2ea947c3b. 
Nov 6 00:27:01.470311 containerd[1541]: time="2025-11-06T00:27:01.469600366Z" level=info msg="CreateContainer within sandbox \"9c225e960591474e336f4dea73241f54649a4219bbd58ab4fbbc569541c97ad6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2d20b3184a8f38d0bc77e736012298fa503e5db602a37ef32d5823ef61197655\"" Nov 6 00:27:01.471638 containerd[1541]: time="2025-11-06T00:27:01.471600734Z" level=info msg="StartContainer for \"2d20b3184a8f38d0bc77e736012298fa503e5db602a37ef32d5823ef61197655\"" Nov 6 00:27:01.476155 containerd[1541]: time="2025-11-06T00:27:01.475560933Z" level=info msg="connecting to shim 2d20b3184a8f38d0bc77e736012298fa503e5db602a37ef32d5823ef61197655" address="unix:///run/containerd/s/f47b0e13bc1fc63130106415453e535ccd84db90d53490da82ff5eb8afa0ad9d" protocol=ttrpc version=3 Nov 6 00:27:01.546368 systemd[1]: Started cri-containerd-2d20b3184a8f38d0bc77e736012298fa503e5db602a37ef32d5823ef61197655.scope - libcontainer container 2d20b3184a8f38d0bc77e736012298fa503e5db602a37ef32d5823ef61197655. Nov 6 00:27:01.576571 containerd[1541]: time="2025-11-06T00:27:01.576500437Z" level=info msg="StartContainer for \"1224ca80fb688acff2e101e2d6f420f2a4122ab01acd61eeaa0766c2ea947c3b\" returns successfully" Nov 6 00:27:01.647199 containerd[1541]: time="2025-11-06T00:27:01.647035215Z" level=info msg="StartContainer for \"2d20b3184a8f38d0bc77e736012298fa503e5db602a37ef32d5823ef61197655\" returns successfully" Nov 6 00:27:01.857639 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1231866674.mount: Deactivated successfully. 
Nov 6 00:27:02.478719 kubelet[2762]: E1106 00:27:02.473815 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:27:02.484516 kubelet[2762]: E1106 00:27:02.484355 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:27:02.553281 kubelet[2762]: I1106 00:27:02.552275 2762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-rbz25" podStartSLOduration=44.552254404 podStartE2EDuration="44.552254404s" podCreationTimestamp="2025-11-06 00:26:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 00:27:02.547836039 +0000 UTC m=+49.588070488" watchObservedRunningTime="2025-11-06 00:27:02.552254404 +0000 UTC m=+49.592488833" Nov 6 00:27:02.658649 kubelet[2762]: I1106 00:27:02.657238 2762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-s8kpj" podStartSLOduration=43.657211315 podStartE2EDuration="43.657211315s" podCreationTimestamp="2025-11-06 00:26:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 00:27:02.594243799 +0000 UTC m=+49.634478228" watchObservedRunningTime="2025-11-06 00:27:02.657211315 +0000 UTC m=+49.697445754" Nov 6 00:27:03.492709 kubelet[2762]: E1106 00:27:03.488809 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:27:03.492709 kubelet[2762]: E1106 00:27:03.489678 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have 
been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:27:04.489786 kubelet[2762]: E1106 00:27:04.487850 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:27:11.127129 systemd[1]: Started sshd@9-10.0.0.113:22-10.0.0.1:60530.service - OpenSSH per-connection server daemon (10.0.0.1:60530). Nov 6 00:27:11.234611 sshd[4095]: Accepted publickey for core from 10.0.0.1 port 60530 ssh2: RSA SHA256:PmSYF5WO1c+PbjRA1Pm6yQw5/JNmNUR55sY7don0Q4E Nov 6 00:27:11.237669 sshd-session[4095]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:27:11.246933 systemd-logind[1523]: New session 10 of user core. Nov 6 00:27:11.257275 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 6 00:27:11.863012 sshd[4098]: Connection closed by 10.0.0.1 port 60530 Nov 6 00:27:11.863536 sshd-session[4095]: pam_unix(sshd:session): session closed for user core Nov 6 00:27:11.869407 systemd[1]: sshd@9-10.0.0.113:22-10.0.0.1:60530.service: Deactivated successfully. Nov 6 00:27:11.872280 systemd[1]: session-10.scope: Deactivated successfully. Nov 6 00:27:11.873324 systemd-logind[1523]: Session 10 logged out. Waiting for processes to exit. Nov 6 00:27:11.875449 systemd-logind[1523]: Removed session 10. Nov 6 00:27:16.906325 systemd[1]: Started sshd@10-10.0.0.113:22-10.0.0.1:54844.service - OpenSSH per-connection server daemon (10.0.0.1:54844). Nov 6 00:27:17.063261 sshd[4116]: Accepted publickey for core from 10.0.0.1 port 54844 ssh2: RSA SHA256:PmSYF5WO1c+PbjRA1Pm6yQw5/JNmNUR55sY7don0Q4E Nov 6 00:27:17.066026 sshd-session[4116]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:27:17.098969 systemd-logind[1523]: New session 11 of user core. Nov 6 00:27:17.126978 systemd[1]: Started session-11.scope - Session 11 of User core. 
Nov 6 00:27:17.467832 sshd[4119]: Connection closed by 10.0.0.1 port 54844 Nov 6 00:27:17.467327 sshd-session[4116]: pam_unix(sshd:session): session closed for user core Nov 6 00:27:17.478947 systemd[1]: sshd@10-10.0.0.113:22-10.0.0.1:54844.service: Deactivated successfully. Nov 6 00:27:17.483259 systemd[1]: session-11.scope: Deactivated successfully. Nov 6 00:27:17.493979 systemd-logind[1523]: Session 11 logged out. Waiting for processes to exit. Nov 6 00:27:17.498398 systemd-logind[1523]: Removed session 11. Nov 6 00:27:22.500636 systemd[1]: Started sshd@11-10.0.0.113:22-10.0.0.1:54852.service - OpenSSH per-connection server daemon (10.0.0.1:54852). Nov 6 00:27:22.718226 sshd[4135]: Accepted publickey for core from 10.0.0.1 port 54852 ssh2: RSA SHA256:PmSYF5WO1c+PbjRA1Pm6yQw5/JNmNUR55sY7don0Q4E Nov 6 00:27:22.720777 sshd-session[4135]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:27:22.733928 systemd-logind[1523]: New session 12 of user core. Nov 6 00:27:22.743666 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 6 00:27:22.953905 sshd[4138]: Connection closed by 10.0.0.1 port 54852 Nov 6 00:27:22.954784 sshd-session[4135]: pam_unix(sshd:session): session closed for user core Nov 6 00:27:22.963269 systemd[1]: sshd@11-10.0.0.113:22-10.0.0.1:54852.service: Deactivated successfully. Nov 6 00:27:22.967424 systemd[1]: session-12.scope: Deactivated successfully. Nov 6 00:27:22.972912 systemd-logind[1523]: Session 12 logged out. Waiting for processes to exit. Nov 6 00:27:22.976157 systemd-logind[1523]: Removed session 12. Nov 6 00:27:25.091326 kubelet[2762]: E1106 00:27:25.091263 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:27:28.001175 systemd[1]: Started sshd@12-10.0.0.113:22-10.0.0.1:55060.service - OpenSSH per-connection server daemon (10.0.0.1:55060). 
Nov 6 00:27:28.179237 sshd[4152]: Accepted publickey for core from 10.0.0.1 port 55060 ssh2: RSA SHA256:PmSYF5WO1c+PbjRA1Pm6yQw5/JNmNUR55sY7don0Q4E Nov 6 00:27:28.182724 sshd-session[4152]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:27:28.216026 systemd-logind[1523]: New session 13 of user core. Nov 6 00:27:28.234057 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 6 00:27:28.483750 sshd[4155]: Connection closed by 10.0.0.1 port 55060 Nov 6 00:27:28.481569 sshd-session[4152]: pam_unix(sshd:session): session closed for user core Nov 6 00:27:28.485730 systemd[1]: sshd@12-10.0.0.113:22-10.0.0.1:55060.service: Deactivated successfully. Nov 6 00:27:28.491400 systemd[1]: session-13.scope: Deactivated successfully. Nov 6 00:27:28.504507 systemd-logind[1523]: Session 13 logged out. Waiting for processes to exit. Nov 6 00:27:28.512031 systemd-logind[1523]: Removed session 13. Nov 6 00:27:32.052791 kubelet[2762]: E1106 00:27:32.044178 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:27:33.519744 systemd[1]: Started sshd@13-10.0.0.113:22-10.0.0.1:55068.service - OpenSSH per-connection server daemon (10.0.0.1:55068). Nov 6 00:27:33.611297 sshd[4171]: Accepted publickey for core from 10.0.0.1 port 55068 ssh2: RSA SHA256:PmSYF5WO1c+PbjRA1Pm6yQw5/JNmNUR55sY7don0Q4E Nov 6 00:27:33.618572 sshd-session[4171]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:27:33.637026 systemd-logind[1523]: New session 14 of user core. Nov 6 00:27:33.652269 systemd[1]: Started session-14.scope - Session 14 of User core. 
Nov 6 00:27:33.939082 sshd[4174]: Connection closed by 10.0.0.1 port 55068 Nov 6 00:27:33.938287 sshd-session[4171]: pam_unix(sshd:session): session closed for user core Nov 6 00:27:33.965409 systemd[1]: sshd@13-10.0.0.113:22-10.0.0.1:55068.service: Deactivated successfully. Nov 6 00:27:33.970746 systemd[1]: session-14.scope: Deactivated successfully. Nov 6 00:27:33.982645 systemd-logind[1523]: Session 14 logged out. Waiting for processes to exit. Nov 6 00:27:33.984176 systemd-logind[1523]: Removed session 14. Nov 6 00:27:38.967581 systemd[1]: Started sshd@14-10.0.0.113:22-10.0.0.1:37610.service - OpenSSH per-connection server daemon (10.0.0.1:37610). Nov 6 00:27:39.126454 sshd[4189]: Accepted publickey for core from 10.0.0.1 port 37610 ssh2: RSA SHA256:PmSYF5WO1c+PbjRA1Pm6yQw5/JNmNUR55sY7don0Q4E Nov 6 00:27:39.129318 sshd-session[4189]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:27:39.140669 systemd-logind[1523]: New session 15 of user core. Nov 6 00:27:39.153295 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 6 00:27:39.437844 sshd[4192]: Connection closed by 10.0.0.1 port 37610 Nov 6 00:27:39.438383 sshd-session[4189]: pam_unix(sshd:session): session closed for user core Nov 6 00:27:39.458327 systemd[1]: sshd@14-10.0.0.113:22-10.0.0.1:37610.service: Deactivated successfully. Nov 6 00:27:39.471564 systemd[1]: session-15.scope: Deactivated successfully. Nov 6 00:27:39.482110 systemd-logind[1523]: Session 15 logged out. Waiting for processes to exit. Nov 6 00:27:39.490566 systemd-logind[1523]: Removed session 15. Nov 6 00:27:44.488740 systemd[1]: Started sshd@15-10.0.0.113:22-10.0.0.1:37622.service - OpenSSH per-connection server daemon (10.0.0.1:37622). 
Nov 6 00:27:44.619991 sshd[4207]: Accepted publickey for core from 10.0.0.1 port 37622 ssh2: RSA SHA256:PmSYF5WO1c+PbjRA1Pm6yQw5/JNmNUR55sY7don0Q4E
Nov 6 00:27:44.623049 sshd-session[4207]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 6 00:27:44.632517 systemd-logind[1523]: New session 16 of user core.
Nov 6 00:27:44.651289 systemd[1]: Started session-16.scope - Session 16 of User core.
Nov 6 00:27:44.982926 sshd[4210]: Connection closed by 10.0.0.1 port 37622
Nov 6 00:27:44.989078 sshd-session[4207]: pam_unix(sshd:session): session closed for user core
Nov 6 00:27:45.004362 systemd[1]: sshd@15-10.0.0.113:22-10.0.0.1:37622.service: Deactivated successfully.
Nov 6 00:27:45.010978 systemd[1]: session-16.scope: Deactivated successfully.
Nov 6 00:27:45.018735 systemd-logind[1523]: Session 16 logged out. Waiting for processes to exit.
Nov 6 00:27:45.026466 systemd-logind[1523]: Removed session 16.
Nov 6 00:27:48.044762 kubelet[2762]: E1106 00:27:48.044684 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 6 00:27:50.029922 systemd[1]: Started sshd@16-10.0.0.113:22-10.0.0.1:37212.service - OpenSSH per-connection server daemon (10.0.0.1:37212).
Nov 6 00:27:50.175790 sshd[4225]: Accepted publickey for core from 10.0.0.1 port 37212 ssh2: RSA SHA256:PmSYF5WO1c+PbjRA1Pm6yQw5/JNmNUR55sY7don0Q4E
Nov 6 00:27:50.182562 sshd-session[4225]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 6 00:27:50.207154 systemd-logind[1523]: New session 17 of user core.
Nov 6 00:27:50.212187 systemd[1]: Started session-17.scope - Session 17 of User core.
Nov 6 00:27:50.478385 sshd[4229]: Connection closed by 10.0.0.1 port 37212
Nov 6 00:27:50.481299 sshd-session[4225]: pam_unix(sshd:session): session closed for user core
Nov 6 00:27:50.499015 systemd[1]: sshd@16-10.0.0.113:22-10.0.0.1:37212.service: Deactivated successfully.
Nov 6 00:27:50.506685 systemd[1]: session-17.scope: Deactivated successfully.
Nov 6 00:27:50.513647 systemd-logind[1523]: Session 17 logged out. Waiting for processes to exit.
Nov 6 00:27:50.519574 systemd[1]: Started sshd@17-10.0.0.113:22-10.0.0.1:37222.service - OpenSSH per-connection server daemon (10.0.0.1:37222).
Nov 6 00:27:50.536103 systemd-logind[1523]: Removed session 17.
Nov 6 00:27:50.648106 sshd[4243]: Accepted publickey for core from 10.0.0.1 port 37222 ssh2: RSA SHA256:PmSYF5WO1c+PbjRA1Pm6yQw5/JNmNUR55sY7don0Q4E
Nov 6 00:27:50.655966 sshd-session[4243]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 6 00:27:50.667944 systemd-logind[1523]: New session 18 of user core.
Nov 6 00:27:50.688716 systemd[1]: Started session-18.scope - Session 18 of User core.
Nov 6 00:27:51.053689 kubelet[2762]: E1106 00:27:51.053585 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 6 00:27:51.127850 sshd[4246]: Connection closed by 10.0.0.1 port 37222
Nov 6 00:27:51.134068 sshd-session[4243]: pam_unix(sshd:session): session closed for user core
Nov 6 00:27:51.157680 systemd[1]: sshd@17-10.0.0.113:22-10.0.0.1:37222.service: Deactivated successfully.
Nov 6 00:27:51.163610 systemd[1]: session-18.scope: Deactivated successfully.
Nov 6 00:27:51.175578 systemd-logind[1523]: Session 18 logged out. Waiting for processes to exit.
Nov 6 00:27:51.182624 systemd[1]: Started sshd@18-10.0.0.113:22-10.0.0.1:37228.service - OpenSSH per-connection server daemon (10.0.0.1:37228).
Nov 6 00:27:51.196319 systemd-logind[1523]: Removed session 18.
Nov 6 00:27:51.363859 sshd[4257]: Accepted publickey for core from 10.0.0.1 port 37228 ssh2: RSA SHA256:PmSYF5WO1c+PbjRA1Pm6yQw5/JNmNUR55sY7don0Q4E
Nov 6 00:27:51.370920 sshd-session[4257]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 6 00:27:51.387256 systemd-logind[1523]: New session 19 of user core.
Nov 6 00:27:51.402853 systemd[1]: Started session-19.scope - Session 19 of User core.
Nov 6 00:27:51.658268 sshd[4262]: Connection closed by 10.0.0.1 port 37228
Nov 6 00:27:51.659207 sshd-session[4257]: pam_unix(sshd:session): session closed for user core
Nov 6 00:27:51.669731 systemd-logind[1523]: Session 19 logged out. Waiting for processes to exit.
Nov 6 00:27:51.670363 systemd[1]: sshd@18-10.0.0.113:22-10.0.0.1:37228.service: Deactivated successfully.
Nov 6 00:27:51.673914 systemd[1]: session-19.scope: Deactivated successfully.
Nov 6 00:27:51.676870 systemd-logind[1523]: Removed session 19.
Nov 6 00:27:52.051032 kubelet[2762]: E1106 00:27:52.050857 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 6 00:27:56.717567 systemd[1]: Started sshd@19-10.0.0.113:22-10.0.0.1:50820.service - OpenSSH per-connection server daemon (10.0.0.1:50820).
Nov 6 00:27:56.922709 sshd[4275]: Accepted publickey for core from 10.0.0.1 port 50820 ssh2: RSA SHA256:PmSYF5WO1c+PbjRA1Pm6yQw5/JNmNUR55sY7don0Q4E
Nov 6 00:27:56.922461 sshd-session[4275]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 6 00:27:56.945575 systemd-logind[1523]: New session 20 of user core.
Nov 6 00:27:56.971474 systemd[1]: Started session-20.scope - Session 20 of User core.
Nov 6 00:27:57.373209 sshd[4278]: Connection closed by 10.0.0.1 port 50820
Nov 6 00:27:57.374280 sshd-session[4275]: pam_unix(sshd:session): session closed for user core
Nov 6 00:27:57.381067 systemd[1]: sshd@19-10.0.0.113:22-10.0.0.1:50820.service: Deactivated successfully.
Nov 6 00:27:57.388636 systemd[1]: session-20.scope: Deactivated successfully.
Nov 6 00:27:57.395211 systemd-logind[1523]: Session 20 logged out. Waiting for processes to exit.
Nov 6 00:27:57.400634 systemd-logind[1523]: Removed session 20.
Nov 6 00:28:02.415952 systemd[1]: Started sshd@20-10.0.0.113:22-10.0.0.1:50826.service - OpenSSH per-connection server daemon (10.0.0.1:50826).
Nov 6 00:28:02.538080 sshd[4291]: Accepted publickey for core from 10.0.0.1 port 50826 ssh2: RSA SHA256:PmSYF5WO1c+PbjRA1Pm6yQw5/JNmNUR55sY7don0Q4E
Nov 6 00:28:02.542712 sshd-session[4291]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 6 00:28:02.570539 systemd-logind[1523]: New session 21 of user core.
Nov 6 00:28:02.577442 systemd[1]: Started session-21.scope - Session 21 of User core.
Nov 6 00:28:02.823305 sshd[4294]: Connection closed by 10.0.0.1 port 50826
Nov 6 00:28:02.821317 sshd-session[4291]: pam_unix(sshd:session): session closed for user core
Nov 6 00:28:02.843558 systemd[1]: sshd@20-10.0.0.113:22-10.0.0.1:50826.service: Deactivated successfully.
Nov 6 00:28:02.853740 systemd[1]: session-21.scope: Deactivated successfully.
Nov 6 00:28:02.857224 systemd-logind[1523]: Session 21 logged out. Waiting for processes to exit.
Nov 6 00:28:02.866933 systemd-logind[1523]: Removed session 21.
Nov 6 00:28:07.855705 systemd[1]: Started sshd@21-10.0.0.113:22-10.0.0.1:37254.service - OpenSSH per-connection server daemon (10.0.0.1:37254).
Nov 6 00:28:08.007405 sshd[4307]: Accepted publickey for core from 10.0.0.1 port 37254 ssh2: RSA SHA256:PmSYF5WO1c+PbjRA1Pm6yQw5/JNmNUR55sY7don0Q4E
Nov 6 00:28:08.012570 sshd-session[4307]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 6 00:28:08.021807 systemd-logind[1523]: New session 22 of user core.
Nov 6 00:28:08.032225 systemd[1]: Started session-22.scope - Session 22 of User core.
Nov 6 00:28:08.325450 sshd[4310]: Connection closed by 10.0.0.1 port 37254
Nov 6 00:28:08.326063 sshd-session[4307]: pam_unix(sshd:session): session closed for user core
Nov 6 00:28:08.338141 systemd[1]: sshd@21-10.0.0.113:22-10.0.0.1:37254.service: Deactivated successfully.
Nov 6 00:28:08.350069 systemd[1]: session-22.scope: Deactivated successfully.
Nov 6 00:28:08.355524 systemd-logind[1523]: Session 22 logged out. Waiting for processes to exit.
Nov 6 00:28:08.359511 systemd-logind[1523]: Removed session 22.
Nov 6 00:28:11.055367 kubelet[2762]: E1106 00:28:11.054403 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 6 00:28:13.384569 systemd[1]: Started sshd@22-10.0.0.113:22-10.0.0.1:37256.service - OpenSSH per-connection server daemon (10.0.0.1:37256).
Nov 6 00:28:13.520066 sshd[4326]: Accepted publickey for core from 10.0.0.1 port 37256 ssh2: RSA SHA256:PmSYF5WO1c+PbjRA1Pm6yQw5/JNmNUR55sY7don0Q4E
Nov 6 00:28:13.523087 sshd-session[4326]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 6 00:28:13.532966 systemd-logind[1523]: New session 23 of user core.
Nov 6 00:28:13.550315 systemd[1]: Started session-23.scope - Session 23 of User core.
Nov 6 00:28:13.966315 sshd[4329]: Connection closed by 10.0.0.1 port 37256
Nov 6 00:28:13.965732 sshd-session[4326]: pam_unix(sshd:session): session closed for user core
Nov 6 00:28:13.985274 systemd[1]: sshd@22-10.0.0.113:22-10.0.0.1:37256.service: Deactivated successfully.
Nov 6 00:28:13.993346 systemd[1]: session-23.scope: Deactivated successfully.
Nov 6 00:28:14.000056 systemd-logind[1523]: Session 23 logged out. Waiting for processes to exit.
Nov 6 00:28:14.008926 systemd-logind[1523]: Removed session 23.
Nov 6 00:28:18.991044 systemd[1]: Started sshd@23-10.0.0.113:22-10.0.0.1:39576.service - OpenSSH per-connection server daemon (10.0.0.1:39576).
Nov 6 00:28:19.123472 sshd[4342]: Accepted publickey for core from 10.0.0.1 port 39576 ssh2: RSA SHA256:PmSYF5WO1c+PbjRA1Pm6yQw5/JNmNUR55sY7don0Q4E
Nov 6 00:28:19.128075 sshd-session[4342]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 6 00:28:19.154772 systemd-logind[1523]: New session 24 of user core.
Nov 6 00:28:19.168258 systemd[1]: Started session-24.scope - Session 24 of User core.
Nov 6 00:28:19.426062 sshd[4345]: Connection closed by 10.0.0.1 port 39576
Nov 6 00:28:19.426993 sshd-session[4342]: pam_unix(sshd:session): session closed for user core
Nov 6 00:28:19.450427 systemd[1]: sshd@23-10.0.0.113:22-10.0.0.1:39576.service: Deactivated successfully.
Nov 6 00:28:19.454602 systemd[1]: session-24.scope: Deactivated successfully.
Nov 6 00:28:19.461971 systemd-logind[1523]: Session 24 logged out. Waiting for processes to exit.
Nov 6 00:28:19.475493 systemd[1]: Started sshd@24-10.0.0.113:22-10.0.0.1:39586.service - OpenSSH per-connection server daemon (10.0.0.1:39586).
Nov 6 00:28:19.480340 systemd-logind[1523]: Removed session 24.
Nov 6 00:28:19.596253 sshd[4359]: Accepted publickey for core from 10.0.0.1 port 39586 ssh2: RSA SHA256:PmSYF5WO1c+PbjRA1Pm6yQw5/JNmNUR55sY7don0Q4E
Nov 6 00:28:19.601074 sshd-session[4359]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 6 00:28:19.617772 systemd-logind[1523]: New session 25 of user core.
Nov 6 00:28:19.636723 systemd[1]: Started session-25.scope - Session 25 of User core.
Nov 6 00:28:20.256146 sshd[4362]: Connection closed by 10.0.0.1 port 39586
Nov 6 00:28:20.259701 sshd-session[4359]: pam_unix(sshd:session): session closed for user core
Nov 6 00:28:20.276714 systemd[1]: sshd@24-10.0.0.113:22-10.0.0.1:39586.service: Deactivated successfully.
Nov 6 00:28:20.284865 systemd[1]: session-25.scope: Deactivated successfully.
Nov 6 00:28:20.290630 systemd-logind[1523]: Session 25 logged out. Waiting for processes to exit.
Nov 6 00:28:20.306235 systemd[1]: Started sshd@25-10.0.0.113:22-10.0.0.1:39596.service - OpenSSH per-connection server daemon (10.0.0.1:39596).
Nov 6 00:28:20.310747 systemd-logind[1523]: Removed session 25.
Nov 6 00:28:20.427472 sshd[4374]: Accepted publickey for core from 10.0.0.1 port 39596 ssh2: RSA SHA256:PmSYF5WO1c+PbjRA1Pm6yQw5/JNmNUR55sY7don0Q4E
Nov 6 00:28:20.429752 sshd-session[4374]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 6 00:28:20.447119 systemd-logind[1523]: New session 26 of user core.
Nov 6 00:28:20.457318 systemd[1]: Started session-26.scope - Session 26 of User core.
Nov 6 00:28:21.046941 kubelet[2762]: E1106 00:28:21.044087 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 6 00:28:21.963948 sshd[4377]: Connection closed by 10.0.0.1 port 39596
Nov 6 00:28:21.966640 sshd-session[4374]: pam_unix(sshd:session): session closed for user core
Nov 6 00:28:22.012559 systemd[1]: sshd@25-10.0.0.113:22-10.0.0.1:39596.service: Deactivated successfully.
Nov 6 00:28:22.041986 systemd[1]: session-26.scope: Deactivated successfully.
Nov 6 00:28:22.048945 systemd-logind[1523]: Session 26 logged out. Waiting for processes to exit.
Nov 6 00:28:22.075463 systemd[1]: Started sshd@26-10.0.0.113:22-10.0.0.1:39604.service - OpenSSH per-connection server daemon (10.0.0.1:39604).
Nov 6 00:28:22.088991 systemd-logind[1523]: Removed session 26.
Nov 6 00:28:22.188935 sshd[4398]: Accepted publickey for core from 10.0.0.1 port 39604 ssh2: RSA SHA256:PmSYF5WO1c+PbjRA1Pm6yQw5/JNmNUR55sY7don0Q4E
Nov 6 00:28:22.189744 sshd-session[4398]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 6 00:28:22.216025 systemd-logind[1523]: New session 27 of user core.
Nov 6 00:28:22.236270 systemd[1]: Started session-27.scope - Session 27 of User core.
Nov 6 00:28:22.854653 sshd[4403]: Connection closed by 10.0.0.1 port 39604
Nov 6 00:28:22.849855 sshd-session[4398]: pam_unix(sshd:session): session closed for user core
Nov 6 00:28:22.894211 systemd[1]: sshd@26-10.0.0.113:22-10.0.0.1:39604.service: Deactivated successfully.
Nov 6 00:28:22.898552 systemd[1]: session-27.scope: Deactivated successfully.
Nov 6 00:28:22.915182 systemd-logind[1523]: Session 27 logged out. Waiting for processes to exit.
Nov 6 00:28:22.920963 systemd[1]: Started sshd@27-10.0.0.113:22-10.0.0.1:39612.service - OpenSSH per-connection server daemon (10.0.0.1:39612).
Nov 6 00:28:22.924198 systemd-logind[1523]: Removed session 27.
Nov 6 00:28:23.000050 sshd[4415]: Accepted publickey for core from 10.0.0.1 port 39612 ssh2: RSA SHA256:PmSYF5WO1c+PbjRA1Pm6yQw5/JNmNUR55sY7don0Q4E
Nov 6 00:28:23.002677 sshd-session[4415]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 6 00:28:23.027365 systemd-logind[1523]: New session 28 of user core.
Nov 6 00:28:23.043869 systemd[1]: Started session-28.scope - Session 28 of User core.
Nov 6 00:28:23.390820 sshd[4418]: Connection closed by 10.0.0.1 port 39612
Nov 6 00:28:23.384153 sshd-session[4415]: pam_unix(sshd:session): session closed for user core
Nov 6 00:28:23.412249 systemd[1]: sshd@27-10.0.0.113:22-10.0.0.1:39612.service: Deactivated successfully.
Nov 6 00:28:23.423639 systemd[1]: session-28.scope: Deactivated successfully.
Nov 6 00:28:23.427851 systemd-logind[1523]: Session 28 logged out. Waiting for processes to exit.
Nov 6 00:28:23.429688 systemd-logind[1523]: Removed session 28.
Nov 6 00:28:26.045844 kubelet[2762]: E1106 00:28:26.043420 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 6 00:28:26.045844 kubelet[2762]: E1106 00:28:26.044319 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 6 00:28:28.411743 systemd[1]: Started sshd@28-10.0.0.113:22-10.0.0.1:43028.service - OpenSSH per-connection server daemon (10.0.0.1:43028).
Nov 6 00:28:28.547352 sshd[4431]: Accepted publickey for core from 10.0.0.1 port 43028 ssh2: RSA SHA256:PmSYF5WO1c+PbjRA1Pm6yQw5/JNmNUR55sY7don0Q4E
Nov 6 00:28:28.550058 sshd-session[4431]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 6 00:28:28.574147 systemd-logind[1523]: New session 29 of user core.
Nov 6 00:28:28.591992 systemd[1]: Started session-29.scope - Session 29 of User core.
Nov 6 00:28:28.936779 sshd[4435]: Connection closed by 10.0.0.1 port 43028
Nov 6 00:28:28.940734 sshd-session[4431]: pam_unix(sshd:session): session closed for user core
Nov 6 00:28:28.957879 systemd[1]: sshd@28-10.0.0.113:22-10.0.0.1:43028.service: Deactivated successfully.
Nov 6 00:28:28.968654 systemd[1]: session-29.scope: Deactivated successfully.
Nov 6 00:28:28.978735 systemd-logind[1523]: Session 29 logged out. Waiting for processes to exit.
Nov 6 00:28:28.985326 systemd-logind[1523]: Removed session 29.
Nov 6 00:28:33.971037 systemd[1]: Started sshd@29-10.0.0.113:22-10.0.0.1:43030.service - OpenSSH per-connection server daemon (10.0.0.1:43030).
Nov 6 00:28:34.062710 sshd[4449]: Accepted publickey for core from 10.0.0.1 port 43030 ssh2: RSA SHA256:PmSYF5WO1c+PbjRA1Pm6yQw5/JNmNUR55sY7don0Q4E
Nov 6 00:28:34.066145 sshd-session[4449]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 6 00:28:34.079325 systemd-logind[1523]: New session 30 of user core.
Nov 6 00:28:34.089254 systemd[1]: Started session-30.scope - Session 30 of User core.
Nov 6 00:28:34.260922 sshd[4452]: Connection closed by 10.0.0.1 port 43030
Nov 6 00:28:34.261723 sshd-session[4449]: pam_unix(sshd:session): session closed for user core
Nov 6 00:28:34.271699 systemd[1]: sshd@29-10.0.0.113:22-10.0.0.1:43030.service: Deactivated successfully.
Nov 6 00:28:34.275652 systemd[1]: session-30.scope: Deactivated successfully.
Nov 6 00:28:34.279856 systemd-logind[1523]: Session 30 logged out. Waiting for processes to exit.
Nov 6 00:28:34.284237 systemd-logind[1523]: Removed session 30.
Nov 6 00:28:39.294417 systemd[1]: Started sshd@30-10.0.0.113:22-10.0.0.1:54804.service - OpenSSH per-connection server daemon (10.0.0.1:54804).
Nov 6 00:28:39.406538 sshd[4467]: Accepted publickey for core from 10.0.0.1 port 54804 ssh2: RSA SHA256:PmSYF5WO1c+PbjRA1Pm6yQw5/JNmNUR55sY7don0Q4E
Nov 6 00:28:39.411135 sshd-session[4467]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 6 00:28:39.439463 systemd-logind[1523]: New session 31 of user core.
Nov 6 00:28:39.467596 systemd[1]: Started session-31.scope - Session 31 of User core.
Nov 6 00:28:39.705268 sshd[4470]: Connection closed by 10.0.0.1 port 54804
Nov 6 00:28:39.707322 sshd-session[4467]: pam_unix(sshd:session): session closed for user core
Nov 6 00:28:39.722391 systemd[1]: sshd@30-10.0.0.113:22-10.0.0.1:54804.service: Deactivated successfully.
Nov 6 00:28:39.732583 systemd[1]: session-31.scope: Deactivated successfully.
Nov 6 00:28:39.733864 systemd-logind[1523]: Session 31 logged out. Waiting for processes to exit.
Nov 6 00:28:39.740759 systemd-logind[1523]: Removed session 31.
Nov 6 00:28:44.745901 systemd[1]: Started sshd@31-10.0.0.113:22-10.0.0.1:54820.service - OpenSSH per-connection server daemon (10.0.0.1:54820).
Nov 6 00:28:44.899200 sshd[4484]: Accepted publickey for core from 10.0.0.1 port 54820 ssh2: RSA SHA256:PmSYF5WO1c+PbjRA1Pm6yQw5/JNmNUR55sY7don0Q4E
Nov 6 00:28:44.900787 sshd-session[4484]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 6 00:28:44.914942 systemd-logind[1523]: New session 32 of user core.
Nov 6 00:28:44.935226 systemd[1]: Started session-32.scope - Session 32 of User core.
Nov 6 00:28:45.144146 sshd[4487]: Connection closed by 10.0.0.1 port 54820
Nov 6 00:28:45.144584 sshd-session[4484]: pam_unix(sshd:session): session closed for user core
Nov 6 00:28:45.154115 systemd[1]: sshd@31-10.0.0.113:22-10.0.0.1:54820.service: Deactivated successfully.
Nov 6 00:28:45.162603 systemd[1]: session-32.scope: Deactivated successfully.
Nov 6 00:28:45.168561 systemd-logind[1523]: Session 32 logged out. Waiting for processes to exit.
Nov 6 00:28:45.174768 systemd-logind[1523]: Removed session 32.
Nov 6 00:28:49.053127 kubelet[2762]: E1106 00:28:49.053056 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 6 00:28:50.183901 systemd[1]: Started sshd@32-10.0.0.113:22-10.0.0.1:58264.service - OpenSSH per-connection server daemon (10.0.0.1:58264).
Nov 6 00:28:50.304238 sshd[4501]: Accepted publickey for core from 10.0.0.1 port 58264 ssh2: RSA SHA256:PmSYF5WO1c+PbjRA1Pm6yQw5/JNmNUR55sY7don0Q4E
Nov 6 00:28:50.310383 sshd-session[4501]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 6 00:28:50.329652 systemd-logind[1523]: New session 33 of user core.
Nov 6 00:28:50.345594 systemd[1]: Started session-33.scope - Session 33 of User core.
Nov 6 00:28:50.752465 sshd[4504]: Connection closed by 10.0.0.1 port 58264
Nov 6 00:28:50.752290 sshd-session[4501]: pam_unix(sshd:session): session closed for user core
Nov 6 00:28:50.764207 systemd[1]: sshd@32-10.0.0.113:22-10.0.0.1:58264.service: Deactivated successfully.
Nov 6 00:28:50.769471 systemd[1]: session-33.scope: Deactivated successfully.
Nov 6 00:28:50.776448 systemd-logind[1523]: Session 33 logged out. Waiting for processes to exit.
Nov 6 00:28:50.782700 systemd[1]: Started sshd@33-10.0.0.113:22-10.0.0.1:58270.service - OpenSSH per-connection server daemon (10.0.0.1:58270).
Nov 6 00:28:50.785517 systemd-logind[1523]: Removed session 33.
Nov 6 00:28:50.877926 sshd[4517]: Accepted publickey for core from 10.0.0.1 port 58270 ssh2: RSA SHA256:PmSYF5WO1c+PbjRA1Pm6yQw5/JNmNUR55sY7don0Q4E
Nov 6 00:28:50.878539 sshd-session[4517]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 6 00:28:50.906942 systemd-logind[1523]: New session 34 of user core.
Nov 6 00:28:50.922862 systemd[1]: Started session-34.scope - Session 34 of User core.
Nov 6 00:28:52.856330 containerd[1541]: time="2025-11-06T00:28:52.856191314Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Nov 6 00:28:52.859178 containerd[1541]: time="2025-11-06T00:28:52.859036892Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c612caac88f2161c303529ed7d9694074a6893ba911d7af89a392bd41c338cd3\" id:\"ab3947dc5a1c2fd0e8db14e59703f0f98816cd50105fd670f32241fbd8b51725\" pid:4545 exited_at:{seconds:1762388932 nanos:858032049}"
Nov 6 00:28:52.930005 containerd[1541]: time="2025-11-06T00:28:52.929823083Z" level=info msg="StopContainer for \"c612caac88f2161c303529ed7d9694074a6893ba911d7af89a392bd41c338cd3\" with timeout 2 (s)"
Nov 6 00:28:52.951042 containerd[1541]: time="2025-11-06T00:28:52.950861417Z" level=info msg="Stop container \"c612caac88f2161c303529ed7d9694074a6893ba911d7af89a392bd41c338cd3\" with signal terminated"
Nov 6 00:28:52.980218 systemd-networkd[1460]: lxc_health: Link DOWN
Nov 6 00:28:52.980232 systemd-networkd[1460]: lxc_health: Lost carrier
Nov 6 00:28:53.046501 systemd[1]: cri-containerd-c612caac88f2161c303529ed7d9694074a6893ba911d7af89a392bd41c338cd3.scope: Deactivated successfully.
Nov 6 00:28:53.047215 systemd[1]: cri-containerd-c612caac88f2161c303529ed7d9694074a6893ba911d7af89a392bd41c338cd3.scope: Consumed 9.768s CPU time, 124.9M memory peak, 204K read from disk, 13.3M written to disk.
Nov 6 00:28:53.066012 containerd[1541]: time="2025-11-06T00:28:53.052171345Z" level=info msg="received exit event container_id:\"c612caac88f2161c303529ed7d9694074a6893ba911d7af89a392bd41c338cd3\" id:\"c612caac88f2161c303529ed7d9694074a6893ba911d7af89a392bd41c338cd3\" pid:3403 exited_at:{seconds:1762388933 nanos:48735612}"
Nov 6 00:28:53.066012 containerd[1541]: time="2025-11-06T00:28:53.058231435Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c612caac88f2161c303529ed7d9694074a6893ba911d7af89a392bd41c338cd3\" id:\"c612caac88f2161c303529ed7d9694074a6893ba911d7af89a392bd41c338cd3\" pid:3403 exited_at:{seconds:1762388933 nanos:48735612}"
Nov 6 00:28:53.160183 containerd[1541]: time="2025-11-06T00:28:53.159807609Z" level=info msg="StopContainer for \"367c1374b284454f3546b80da347b02b062f460b67edb09732e5cbcd20eba489\" with timeout 30 (s)"
Nov 6 00:28:53.169000 containerd[1541]: time="2025-11-06T00:28:53.168584633Z" level=info msg="Stop container \"367c1374b284454f3546b80da347b02b062f460b67edb09732e5cbcd20eba489\" with signal terminated"
Nov 6 00:28:53.211043 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c612caac88f2161c303529ed7d9694074a6893ba911d7af89a392bd41c338cd3-rootfs.mount: Deactivated successfully.
Nov 6 00:28:53.230790 kubelet[2762]: E1106 00:28:53.229452 2762 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Nov 6 00:28:53.242636 systemd[1]: cri-containerd-367c1374b284454f3546b80da347b02b062f460b67edb09732e5cbcd20eba489.scope: Deactivated successfully.
Nov 6 00:28:53.250995 containerd[1541]: time="2025-11-06T00:28:53.249649566Z" level=info msg="received exit event container_id:\"367c1374b284454f3546b80da347b02b062f460b67edb09732e5cbcd20eba489\" id:\"367c1374b284454f3546b80da347b02b062f460b67edb09732e5cbcd20eba489\" pid:3173 exited_at:{seconds:1762388933 nanos:249012383}"
Nov 6 00:28:53.264541 containerd[1541]: time="2025-11-06T00:28:53.250356252Z" level=info msg="TaskExit event in podsandbox handler container_id:\"367c1374b284454f3546b80da347b02b062f460b67edb09732e5cbcd20eba489\" id:\"367c1374b284454f3546b80da347b02b062f460b67edb09732e5cbcd20eba489\" pid:3173 exited_at:{seconds:1762388933 nanos:249012383}"
Nov 6 00:28:53.343687 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-367c1374b284454f3546b80da347b02b062f460b67edb09732e5cbcd20eba489-rootfs.mount: Deactivated successfully.
Nov 6 00:28:53.412422 containerd[1541]: time="2025-11-06T00:28:53.407445892Z" level=info msg="StopContainer for \"c612caac88f2161c303529ed7d9694074a6893ba911d7af89a392bd41c338cd3\" returns successfully"
Nov 6 00:28:53.428123 containerd[1541]: time="2025-11-06T00:28:53.427543123Z" level=info msg="StopPodSandbox for \"5086b9ba72418ebcd66943092abf02b29f9fd8c007d764d9cd2a081fdc91fb08\""
Nov 6 00:28:53.428123 containerd[1541]: time="2025-11-06T00:28:53.427685594Z" level=info msg="Container to stop \"23cc4fbf9a5d0b2e8e0ef436b4832e9b8bb111266a4829152bc9557f0b0e595b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Nov 6 00:28:53.428123 containerd[1541]: time="2025-11-06T00:28:53.427702718Z" level=info msg="Container to stop \"b02671d13318eac3cf03b249f683e077eab0120d4a813ae647ca834f9820c82e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Nov 6 00:28:53.428123 containerd[1541]: time="2025-11-06T00:28:53.427715532Z" level=info msg="Container to stop \"85900c7bf821baab339e33895b5a861c3935c98b57cc8723ff373440c2dee911\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Nov 6 00:28:53.428123 containerd[1541]: time="2025-11-06T00:28:53.427727134Z" level=info msg="Container to stop \"c612caac88f2161c303529ed7d9694074a6893ba911d7af89a392bd41c338cd3\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Nov 6 00:28:53.428123 containerd[1541]: time="2025-11-06T00:28:53.427738967Z" level=info msg="Container to stop \"2568ce2b88f0de9d0d0100333fd51d8618bd044cddf3aa072d89c4b9b0ba4ced\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Nov 6 00:28:53.428123 containerd[1541]: time="2025-11-06T00:28:53.427780926Z" level=info msg="StopContainer for \"367c1374b284454f3546b80da347b02b062f460b67edb09732e5cbcd20eba489\" returns successfully"
Nov 6 00:28:53.428545 containerd[1541]: time="2025-11-06T00:28:53.428351213Z" level=info msg="StopPodSandbox for \"b68479c2bc560ed7134dba8b8ad7686853511c88cb71dd5e1faa9950d7bd59b5\""
Nov 6 00:28:53.435672 containerd[1541]: time="2025-11-06T00:28:53.435576792Z" level=info msg="Container to stop \"367c1374b284454f3546b80da347b02b062f460b67edb09732e5cbcd20eba489\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Nov 6 00:28:53.454919 containerd[1541]: time="2025-11-06T00:28:53.454579700Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5086b9ba72418ebcd66943092abf02b29f9fd8c007d764d9cd2a081fdc91fb08\" id:\"5086b9ba72418ebcd66943092abf02b29f9fd8c007d764d9cd2a081fdc91fb08\" pid:2956 exit_status:137 exited_at:{seconds:1762388933 nanos:454070179}"
Nov 6 00:28:53.461950 systemd[1]: cri-containerd-5086b9ba72418ebcd66943092abf02b29f9fd8c007d764d9cd2a081fdc91fb08.scope: Deactivated successfully.
Nov 6 00:28:53.509734 systemd[1]: cri-containerd-b68479c2bc560ed7134dba8b8ad7686853511c88cb71dd5e1faa9950d7bd59b5.scope: Deactivated successfully.
Nov 6 00:28:53.618674 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5086b9ba72418ebcd66943092abf02b29f9fd8c007d764d9cd2a081fdc91fb08-rootfs.mount: Deactivated successfully.
Nov 6 00:28:53.627286 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b68479c2bc560ed7134dba8b8ad7686853511c88cb71dd5e1faa9950d7bd59b5-rootfs.mount: Deactivated successfully.
Nov 6 00:28:53.670574 containerd[1541]: time="2025-11-06T00:28:53.670419578Z" level=info msg="shim disconnected" id=b68479c2bc560ed7134dba8b8ad7686853511c88cb71dd5e1faa9950d7bd59b5 namespace=k8s.io
Nov 6 00:28:53.670926 containerd[1541]: time="2025-11-06T00:28:53.670772220Z" level=warning msg="cleaning up after shim disconnected" id=b68479c2bc560ed7134dba8b8ad7686853511c88cb71dd5e1faa9950d7bd59b5 namespace=k8s.io
Nov 6 00:28:53.753433 containerd[1541]: time="2025-11-06T00:28:53.670798550Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 6 00:28:53.753433 containerd[1541]: time="2025-11-06T00:28:53.670572079Z" level=info msg="shim disconnected" id=5086b9ba72418ebcd66943092abf02b29f9fd8c007d764d9cd2a081fdc91fb08 namespace=k8s.io
Nov 6 00:28:53.753433 containerd[1541]: time="2025-11-06T00:28:53.750384508Z" level=warning msg="cleaning up after shim disconnected" id=5086b9ba72418ebcd66943092abf02b29f9fd8c007d764d9cd2a081fdc91fb08 namespace=k8s.io
Nov 6 00:28:53.753433 containerd[1541]: time="2025-11-06T00:28:53.750402602Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 6 00:28:53.850854 containerd[1541]: time="2025-11-06T00:28:53.850796003Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b68479c2bc560ed7134dba8b8ad7686853511c88cb71dd5e1faa9950d7bd59b5\" id:\"b68479c2bc560ed7134dba8b8ad7686853511c88cb71dd5e1faa9950d7bd59b5\" pid:2881 exit_status:137 exited_at:{seconds:1762388933 nanos:514875944}"
Nov 6 00:28:53.851991 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b68479c2bc560ed7134dba8b8ad7686853511c88cb71dd5e1faa9950d7bd59b5-shm.mount: Deactivated successfully.
Nov 6 00:28:53.864593 containerd[1541]: time="2025-11-06T00:28:53.861803165Z" level=info msg="TearDown network for sandbox \"5086b9ba72418ebcd66943092abf02b29f9fd8c007d764d9cd2a081fdc91fb08\" successfully"
Nov 6 00:28:53.864593 containerd[1541]: time="2025-11-06T00:28:53.861868209Z" level=info msg="StopPodSandbox for \"5086b9ba72418ebcd66943092abf02b29f9fd8c007d764d9cd2a081fdc91fb08\" returns successfully"
Nov 6 00:28:53.867165 containerd[1541]: time="2025-11-06T00:28:53.866136958Z" level=info msg="received exit event sandbox_id:\"b68479c2bc560ed7134dba8b8ad7686853511c88cb71dd5e1faa9950d7bd59b5\" exit_status:137 exited_at:{seconds:1762388933 nanos:514875944}"
Nov 6 00:28:53.867165 containerd[1541]: time="2025-11-06T00:28:53.866733805Z" level=info msg="received exit event sandbox_id:\"5086b9ba72418ebcd66943092abf02b29f9fd8c007d764d9cd2a081fdc91fb08\" exit_status:137 exited_at:{seconds:1762388933 nanos:454070179}"
Nov 6 00:28:53.874735 containerd[1541]: time="2025-11-06T00:28:53.874669837Z" level=info msg="TearDown network for sandbox \"b68479c2bc560ed7134dba8b8ad7686853511c88cb71dd5e1faa9950d7bd59b5\" successfully"
Nov 6 00:28:53.875102 containerd[1541]: time="2025-11-06T00:28:53.874966532Z" level=info msg="StopPodSandbox for \"b68479c2bc560ed7134dba8b8ad7686853511c88cb71dd5e1faa9950d7bd59b5\" returns successfully"
Nov 6 00:28:54.023556 kubelet[2762]: I1106 00:28:54.018524 2762 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/60420992-95c1-4e3f-94a6-8591d0324a99-cni-path\") pod \"60420992-95c1-4e3f-94a6-8591d0324a99\" (UID: \"60420992-95c1-4e3f-94a6-8591d0324a99\") "
Nov 6 00:28:54.023556 kubelet[2762]: I1106 00:28:54.018601 2762 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dbc7d5c4-e427-44e9-a843-e50d922229f9-cilium-config-path\") pod \"dbc7d5c4-e427-44e9-a843-e50d922229f9\" (UID: \"dbc7d5c4-e427-44e9-a843-e50d922229f9\") "
Nov 6 00:28:54.023556 kubelet[2762]: I1106 00:28:54.018627 2762 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/60420992-95c1-4e3f-94a6-8591d0324a99-cilium-run\") pod \"60420992-95c1-4e3f-94a6-8591d0324a99\" (UID: \"60420992-95c1-4e3f-94a6-8591d0324a99\") "
Nov 6 00:28:54.023556 kubelet[2762]: I1106 00:28:54.018649 2762 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/60420992-95c1-4e3f-94a6-8591d0324a99-cilium-config-path\") pod \"60420992-95c1-4e3f-94a6-8591d0324a99\" (UID: \"60420992-95c1-4e3f-94a6-8591d0324a99\") "
Nov 6 00:28:54.023556 kubelet[2762]: I1106 00:28:54.018675 2762 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/60420992-95c1-4e3f-94a6-8591d0324a99-bpf-maps\") pod \"60420992-95c1-4e3f-94a6-8591d0324a99\" (UID: \"60420992-95c1-4e3f-94a6-8591d0324a99\") "
Nov 6 00:28:54.023556 kubelet[2762]: I1106 00:28:54.018700 2762 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/60420992-95c1-4e3f-94a6-8591d0324a99-hubble-tls\") pod \"60420992-95c1-4e3f-94a6-8591d0324a99\" (UID: \"60420992-95c1-4e3f-94a6-8591d0324a99\") "
Nov 6 00:28:54.024001 kubelet[2762]: I1106 00:28:54.018720 2762 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/60420992-95c1-4e3f-94a6-8591d0324a99-hostproc\") pod \"60420992-95c1-4e3f-94a6-8591d0324a99\" (UID: \"60420992-95c1-4e3f-94a6-8591d0324a99\") "
Nov 6 00:28:54.024001 kubelet[2762]: I1106 00:28:54.018745 2762 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lg75h\" (UniqueName: \"kubernetes.io/projected/60420992-95c1-4e3f-94a6-8591d0324a99-kube-api-access-lg75h\") pod \"60420992-95c1-4e3f-94a6-8591d0324a99\" (UID: \"60420992-95c1-4e3f-94a6-8591d0324a99\") "
Nov 6 00:28:54.024001 kubelet[2762]: I1106 00:28:54.018768 2762 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/60420992-95c1-4e3f-94a6-8591d0324a99-xtables-lock\") pod \"60420992-95c1-4e3f-94a6-8591d0324a99\" (UID: \"60420992-95c1-4e3f-94a6-8591d0324a99\") "
Nov 6 00:28:54.024001 kubelet[2762]: I1106 00:28:54.018794 2762 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5xwjp\" (UniqueName: \"kubernetes.io/projected/dbc7d5c4-e427-44e9-a843-e50d922229f9-kube-api-access-5xwjp\") pod \"dbc7d5c4-e427-44e9-a843-e50d922229f9\" (UID: \"dbc7d5c4-e427-44e9-a843-e50d922229f9\") "
Nov 6 00:28:54.024001 kubelet[2762]: I1106 00:28:54.018824 2762 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/60420992-95c1-4e3f-94a6-8591d0324a99-host-proc-sys-net\") pod \"60420992-95c1-4e3f-94a6-8591d0324a99\" (UID: \"60420992-95c1-4e3f-94a6-8591d0324a99\") "
Nov 6 00:28:54.024001 kubelet[2762]: I1106 00:28:54.018849 2762 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/60420992-95c1-4e3f-94a6-8591d0324a99-lib-modules\") pod \"60420992-95c1-4e3f-94a6-8591d0324a99\" (UID: \"60420992-95c1-4e3f-94a6-8591d0324a99\") "
Nov 6 00:28:54.024234 kubelet[2762]: I1106 00:28:54.018911 2762 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/60420992-95c1-4e3f-94a6-8591d0324a99-clustermesh-secrets\") pod 
\"60420992-95c1-4e3f-94a6-8591d0324a99\" (UID: \"60420992-95c1-4e3f-94a6-8591d0324a99\") " Nov 6 00:28:54.024234 kubelet[2762]: I1106 00:28:54.018936 2762 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/60420992-95c1-4e3f-94a6-8591d0324a99-host-proc-sys-kernel\") pod \"60420992-95c1-4e3f-94a6-8591d0324a99\" (UID: \"60420992-95c1-4e3f-94a6-8591d0324a99\") " Nov 6 00:28:54.024234 kubelet[2762]: I1106 00:28:54.018959 2762 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/60420992-95c1-4e3f-94a6-8591d0324a99-cilium-cgroup\") pod \"60420992-95c1-4e3f-94a6-8591d0324a99\" (UID: \"60420992-95c1-4e3f-94a6-8591d0324a99\") " Nov 6 00:28:54.024234 kubelet[2762]: I1106 00:28:54.018980 2762 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/60420992-95c1-4e3f-94a6-8591d0324a99-etc-cni-netd\") pod \"60420992-95c1-4e3f-94a6-8591d0324a99\" (UID: \"60420992-95c1-4e3f-94a6-8591d0324a99\") " Nov 6 00:28:54.024234 kubelet[2762]: I1106 00:28:54.019128 2762 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/60420992-95c1-4e3f-94a6-8591d0324a99-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "60420992-95c1-4e3f-94a6-8591d0324a99" (UID: "60420992-95c1-4e3f-94a6-8591d0324a99"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 6 00:28:54.024427 kubelet[2762]: I1106 00:28:54.019192 2762 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/60420992-95c1-4e3f-94a6-8591d0324a99-cni-path" (OuterVolumeSpecName: "cni-path") pod "60420992-95c1-4e3f-94a6-8591d0324a99" (UID: "60420992-95c1-4e3f-94a6-8591d0324a99"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 6 00:28:54.024427 kubelet[2762]: I1106 00:28:54.022628 2762 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/60420992-95c1-4e3f-94a6-8591d0324a99-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "60420992-95c1-4e3f-94a6-8591d0324a99" (UID: "60420992-95c1-4e3f-94a6-8591d0324a99"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 6 00:28:54.024427 kubelet[2762]: I1106 00:28:54.022698 2762 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/60420992-95c1-4e3f-94a6-8591d0324a99-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "60420992-95c1-4e3f-94a6-8591d0324a99" (UID: "60420992-95c1-4e3f-94a6-8591d0324a99"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 6 00:28:54.024427 kubelet[2762]: I1106 00:28:54.023010 2762 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/60420992-95c1-4e3f-94a6-8591d0324a99-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "60420992-95c1-4e3f-94a6-8591d0324a99" (UID: "60420992-95c1-4e3f-94a6-8591d0324a99"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 6 00:28:54.028020 kubelet[2762]: I1106 00:28:54.026867 2762 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dbc7d5c4-e427-44e9-a843-e50d922229f9-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "dbc7d5c4-e427-44e9-a843-e50d922229f9" (UID: "dbc7d5c4-e427-44e9-a843-e50d922229f9"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 6 00:28:54.031492 kubelet[2762]: I1106 00:28:54.029289 2762 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/60420992-95c1-4e3f-94a6-8591d0324a99-hostproc" (OuterVolumeSpecName: "hostproc") pod "60420992-95c1-4e3f-94a6-8591d0324a99" (UID: "60420992-95c1-4e3f-94a6-8591d0324a99"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 6 00:28:54.032833 kubelet[2762]: I1106 00:28:54.030338 2762 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/60420992-95c1-4e3f-94a6-8591d0324a99-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "60420992-95c1-4e3f-94a6-8591d0324a99" (UID: "60420992-95c1-4e3f-94a6-8591d0324a99"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 6 00:28:54.033004 kubelet[2762]: I1106 00:28:54.032849 2762 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/60420992-95c1-4e3f-94a6-8591d0324a99-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "60420992-95c1-4e3f-94a6-8591d0324a99" (UID: "60420992-95c1-4e3f-94a6-8591d0324a99"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 6 00:28:54.033072 kubelet[2762]: I1106 00:28:54.033016 2762 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/60420992-95c1-4e3f-94a6-8591d0324a99-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "60420992-95c1-4e3f-94a6-8591d0324a99" (UID: "60420992-95c1-4e3f-94a6-8591d0324a99"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 6 00:28:54.033111 kubelet[2762]: I1106 00:28:54.033073 2762 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/60420992-95c1-4e3f-94a6-8591d0324a99-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "60420992-95c1-4e3f-94a6-8591d0324a99" (UID: "60420992-95c1-4e3f-94a6-8591d0324a99"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 6 00:28:54.037388 kubelet[2762]: I1106 00:28:54.037302 2762 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/60420992-95c1-4e3f-94a6-8591d0324a99-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "60420992-95c1-4e3f-94a6-8591d0324a99" (UID: "60420992-95c1-4e3f-94a6-8591d0324a99"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 6 00:28:54.045647 kubelet[2762]: I1106 00:28:54.045581 2762 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60420992-95c1-4e3f-94a6-8591d0324a99-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "60420992-95c1-4e3f-94a6-8591d0324a99" (UID: "60420992-95c1-4e3f-94a6-8591d0324a99"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 6 00:28:54.048433 kubelet[2762]: I1106 00:28:54.048368 2762 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60420992-95c1-4e3f-94a6-8591d0324a99-kube-api-access-lg75h" (OuterVolumeSpecName: "kube-api-access-lg75h") pod "60420992-95c1-4e3f-94a6-8591d0324a99" (UID: "60420992-95c1-4e3f-94a6-8591d0324a99"). InnerVolumeSpecName "kube-api-access-lg75h". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 6 00:28:54.048648 kubelet[2762]: I1106 00:28:54.048418 2762 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60420992-95c1-4e3f-94a6-8591d0324a99-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "60420992-95c1-4e3f-94a6-8591d0324a99" (UID: "60420992-95c1-4e3f-94a6-8591d0324a99"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 6 00:28:54.052743 kubelet[2762]: I1106 00:28:54.052640 2762 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbc7d5c4-e427-44e9-a843-e50d922229f9-kube-api-access-5xwjp" (OuterVolumeSpecName: "kube-api-access-5xwjp") pod "dbc7d5c4-e427-44e9-a843-e50d922229f9" (UID: "dbc7d5c4-e427-44e9-a843-e50d922229f9"). InnerVolumeSpecName "kube-api-access-5xwjp". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 6 00:28:54.120048 kubelet[2762]: I1106 00:28:54.119995 2762 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lg75h\" (UniqueName: \"kubernetes.io/projected/60420992-95c1-4e3f-94a6-8591d0324a99-kube-api-access-lg75h\") on node \"localhost\" DevicePath \"\"" Nov 6 00:28:54.120048 kubelet[2762]: I1106 00:28:54.120045 2762 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/60420992-95c1-4e3f-94a6-8591d0324a99-xtables-lock\") on node \"localhost\" DevicePath \"\"" Nov 6 00:28:54.120048 kubelet[2762]: I1106 00:28:54.120061 2762 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5xwjp\" (UniqueName: \"kubernetes.io/projected/dbc7d5c4-e427-44e9-a843-e50d922229f9-kube-api-access-5xwjp\") on node \"localhost\" DevicePath \"\"" Nov 6 00:28:54.120417 kubelet[2762]: I1106 00:28:54.120073 2762 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/60420992-95c1-4e3f-94a6-8591d0324a99-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Nov 6 00:28:54.120417 kubelet[2762]: I1106 00:28:54.120084 2762 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/60420992-95c1-4e3f-94a6-8591d0324a99-lib-modules\") on node \"localhost\" DevicePath \"\"" Nov 6 00:28:54.120417 kubelet[2762]: I1106 00:28:54.120095 2762 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/60420992-95c1-4e3f-94a6-8591d0324a99-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Nov 6 00:28:54.120417 kubelet[2762]: I1106 00:28:54.120113 2762 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/60420992-95c1-4e3f-94a6-8591d0324a99-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Nov 6 00:28:54.120417 kubelet[2762]: I1106 00:28:54.120128 2762 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/60420992-95c1-4e3f-94a6-8591d0324a99-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Nov 6 00:28:54.120417 kubelet[2762]: I1106 00:28:54.120154 2762 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/60420992-95c1-4e3f-94a6-8591d0324a99-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Nov 6 00:28:54.120417 kubelet[2762]: I1106 00:28:54.120185 2762 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/60420992-95c1-4e3f-94a6-8591d0324a99-cni-path\") on node \"localhost\" DevicePath \"\"" Nov 6 00:28:54.120417 kubelet[2762]: I1106 00:28:54.120218 2762 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dbc7d5c4-e427-44e9-a843-e50d922229f9-cilium-config-path\") on node \"localhost\" 
DevicePath \"\"" Nov 6 00:28:54.120696 kubelet[2762]: I1106 00:28:54.120232 2762 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/60420992-95c1-4e3f-94a6-8591d0324a99-cilium-run\") on node \"localhost\" DevicePath \"\"" Nov 6 00:28:54.120696 kubelet[2762]: I1106 00:28:54.120244 2762 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/60420992-95c1-4e3f-94a6-8591d0324a99-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Nov 6 00:28:54.120696 kubelet[2762]: I1106 00:28:54.120255 2762 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/60420992-95c1-4e3f-94a6-8591d0324a99-bpf-maps\") on node \"localhost\" DevicePath \"\"" Nov 6 00:28:54.120696 kubelet[2762]: I1106 00:28:54.120268 2762 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/60420992-95c1-4e3f-94a6-8591d0324a99-hubble-tls\") on node \"localhost\" DevicePath \"\"" Nov 6 00:28:54.120696 kubelet[2762]: I1106 00:28:54.120278 2762 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/60420992-95c1-4e3f-94a6-8591d0324a99-hostproc\") on node \"localhost\" DevicePath \"\"" Nov 6 00:28:54.194696 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5086b9ba72418ebcd66943092abf02b29f9fd8c007d764d9cd2a081fdc91fb08-shm.mount: Deactivated successfully. Nov 6 00:28:54.195172 systemd[1]: var-lib-kubelet-pods-60420992\x2d95c1\x2d4e3f\x2d94a6\x2d8591d0324a99-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dlg75h.mount: Deactivated successfully. Nov 6 00:28:54.196142 systemd[1]: var-lib-kubelet-pods-60420992\x2d95c1\x2d4e3f\x2d94a6\x2d8591d0324a99-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Nov 6 00:28:54.196261 systemd[1]: var-lib-kubelet-pods-60420992\x2d95c1\x2d4e3f\x2d94a6\x2d8591d0324a99-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Nov 6 00:28:54.196355 systemd[1]: var-lib-kubelet-pods-dbc7d5c4\x2de427\x2d44e9\x2da843\x2de50d922229f9-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5xwjp.mount: Deactivated successfully. Nov 6 00:28:54.333734 kubelet[2762]: I1106 00:28:54.330425 2762 scope.go:117] "RemoveContainer" containerID="c612caac88f2161c303529ed7d9694074a6893ba911d7af89a392bd41c338cd3" Nov 6 00:28:54.349555 containerd[1541]: time="2025-11-06T00:28:54.346029916Z" level=info msg="RemoveContainer for \"c612caac88f2161c303529ed7d9694074a6893ba911d7af89a392bd41c338cd3\"" Nov 6 00:28:54.361104 systemd[1]: Removed slice kubepods-burstable-pod60420992_95c1_4e3f_94a6_8591d0324a99.slice - libcontainer container kubepods-burstable-pod60420992_95c1_4e3f_94a6_8591d0324a99.slice. Nov 6 00:28:54.361396 systemd[1]: kubepods-burstable-pod60420992_95c1_4e3f_94a6_8591d0324a99.slice: Consumed 9.898s CPU time, 125.2M memory peak, 208K read from disk, 13.3M written to disk. Nov 6 00:28:54.379031 systemd[1]: Removed slice kubepods-besteffort-poddbc7d5c4_e427_44e9_a843_e50d922229f9.slice - libcontainer container kubepods-besteffort-poddbc7d5c4_e427_44e9_a843_e50d922229f9.slice. 
Nov 6 00:28:54.410347 containerd[1541]: time="2025-11-06T00:28:54.406963892Z" level=info msg="RemoveContainer for \"c612caac88f2161c303529ed7d9694074a6893ba911d7af89a392bd41c338cd3\" returns successfully" Nov 6 00:28:54.410540 kubelet[2762]: I1106 00:28:54.407775 2762 scope.go:117] "RemoveContainer" containerID="85900c7bf821baab339e33895b5a861c3935c98b57cc8723ff373440c2dee911" Nov 6 00:28:54.414432 containerd[1541]: time="2025-11-06T00:28:54.414387129Z" level=info msg="RemoveContainer for \"85900c7bf821baab339e33895b5a861c3935c98b57cc8723ff373440c2dee911\"" Nov 6 00:28:54.434677 containerd[1541]: time="2025-11-06T00:28:54.433990119Z" level=info msg="RemoveContainer for \"85900c7bf821baab339e33895b5a861c3935c98b57cc8723ff373440c2dee911\" returns successfully" Nov 6 00:28:54.434853 kubelet[2762]: I1106 00:28:54.434352 2762 scope.go:117] "RemoveContainer" containerID="b02671d13318eac3cf03b249f683e077eab0120d4a813ae647ca834f9820c82e" Nov 6 00:28:54.446191 containerd[1541]: time="2025-11-06T00:28:54.441305019Z" level=info msg="RemoveContainer for \"b02671d13318eac3cf03b249f683e077eab0120d4a813ae647ca834f9820c82e\"" Nov 6 00:28:54.476989 containerd[1541]: time="2025-11-06T00:28:54.476933787Z" level=info msg="RemoveContainer for \"b02671d13318eac3cf03b249f683e077eab0120d4a813ae647ca834f9820c82e\" returns successfully" Nov 6 00:28:54.482917 kubelet[2762]: I1106 00:28:54.479471 2762 scope.go:117] "RemoveContainer" containerID="23cc4fbf9a5d0b2e8e0ef436b4832e9b8bb111266a4829152bc9557f0b0e595b" Nov 6 00:28:54.483100 containerd[1541]: time="2025-11-06T00:28:54.482715898Z" level=info msg="RemoveContainer for \"23cc4fbf9a5d0b2e8e0ef436b4832e9b8bb111266a4829152bc9557f0b0e595b\"" Nov 6 00:28:54.509430 containerd[1541]: time="2025-11-06T00:28:54.509210242Z" level=info msg="RemoveContainer for \"23cc4fbf9a5d0b2e8e0ef436b4832e9b8bb111266a4829152bc9557f0b0e595b\" returns successfully" Nov 6 00:28:54.509679 kubelet[2762]: I1106 00:28:54.509577 2762 scope.go:117] "RemoveContainer" 
containerID="2568ce2b88f0de9d0d0100333fd51d8618bd044cddf3aa072d89c4b9b0ba4ced" Nov 6 00:28:54.518909 containerd[1541]: time="2025-11-06T00:28:54.518804873Z" level=info msg="RemoveContainer for \"2568ce2b88f0de9d0d0100333fd51d8618bd044cddf3aa072d89c4b9b0ba4ced\"" Nov 6 00:28:54.534388 sshd[4520]: Connection closed by 10.0.0.1 port 58270 Nov 6 00:28:54.542588 sshd-session[4517]: pam_unix(sshd:session): session closed for user core Nov 6 00:28:54.569080 systemd[1]: sshd@33-10.0.0.113:22-10.0.0.1:58270.service: Deactivated successfully. Nov 6 00:28:54.577878 systemd[1]: session-34.scope: Deactivated successfully. Nov 6 00:28:54.586797 systemd-logind[1523]: Session 34 logged out. Waiting for processes to exit. Nov 6 00:28:54.589693 containerd[1541]: time="2025-11-06T00:28:54.589520016Z" level=info msg="RemoveContainer for \"2568ce2b88f0de9d0d0100333fd51d8618bd044cddf3aa072d89c4b9b0ba4ced\" returns successfully" Nov 6 00:28:54.590204 kubelet[2762]: I1106 00:28:54.589940 2762 scope.go:117] "RemoveContainer" containerID="c612caac88f2161c303529ed7d9694074a6893ba911d7af89a392bd41c338cd3" Nov 6 00:28:54.590306 containerd[1541]: time="2025-11-06T00:28:54.590249957Z" level=error msg="ContainerStatus for \"c612caac88f2161c303529ed7d9694074a6893ba911d7af89a392bd41c338cd3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c612caac88f2161c303529ed7d9694074a6893ba911d7af89a392bd41c338cd3\": not found" Nov 6 00:28:54.592556 systemd[1]: Started sshd@34-10.0.0.113:22-10.0.0.1:58276.service - OpenSSH per-connection server daemon (10.0.0.1:58276). 
Nov 6 00:28:54.596985 kubelet[2762]: E1106 00:28:54.596928 2762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c612caac88f2161c303529ed7d9694074a6893ba911d7af89a392bd41c338cd3\": not found" containerID="c612caac88f2161c303529ed7d9694074a6893ba911d7af89a392bd41c338cd3" Nov 6 00:28:54.597303 kubelet[2762]: I1106 00:28:54.597233 2762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c612caac88f2161c303529ed7d9694074a6893ba911d7af89a392bd41c338cd3"} err="failed to get container status \"c612caac88f2161c303529ed7d9694074a6893ba911d7af89a392bd41c338cd3\": rpc error: code = NotFound desc = an error occurred when try to find container \"c612caac88f2161c303529ed7d9694074a6893ba911d7af89a392bd41c338cd3\": not found" Nov 6 00:28:54.597391 kubelet[2762]: I1106 00:28:54.597368 2762 scope.go:117] "RemoveContainer" containerID="85900c7bf821baab339e33895b5a861c3935c98b57cc8723ff373440c2dee911" Nov 6 00:28:54.597897 containerd[1541]: time="2025-11-06T00:28:54.597828429Z" level=error msg="ContainerStatus for \"85900c7bf821baab339e33895b5a861c3935c98b57cc8723ff373440c2dee911\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"85900c7bf821baab339e33895b5a861c3935c98b57cc8723ff373440c2dee911\": not found" Nov 6 00:28:54.598224 kubelet[2762]: E1106 00:28:54.598109 2762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"85900c7bf821baab339e33895b5a861c3935c98b57cc8723ff373440c2dee911\": not found" containerID="85900c7bf821baab339e33895b5a861c3935c98b57cc8723ff373440c2dee911" Nov 6 00:28:54.598224 kubelet[2762]: I1106 00:28:54.598135 2762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"85900c7bf821baab339e33895b5a861c3935c98b57cc8723ff373440c2dee911"} 
err="failed to get container status \"85900c7bf821baab339e33895b5a861c3935c98b57cc8723ff373440c2dee911\": rpc error: code = NotFound desc = an error occurred when try to find container \"85900c7bf821baab339e33895b5a861c3935c98b57cc8723ff373440c2dee911\": not found" Nov 6 00:28:54.598224 kubelet[2762]: I1106 00:28:54.598154 2762 scope.go:117] "RemoveContainer" containerID="b02671d13318eac3cf03b249f683e077eab0120d4a813ae647ca834f9820c82e" Nov 6 00:28:54.598819 containerd[1541]: time="2025-11-06T00:28:54.598785982Z" level=error msg="ContainerStatus for \"b02671d13318eac3cf03b249f683e077eab0120d4a813ae647ca834f9820c82e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b02671d13318eac3cf03b249f683e077eab0120d4a813ae647ca834f9820c82e\": not found" Nov 6 00:28:54.599097 kubelet[2762]: E1106 00:28:54.598995 2762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b02671d13318eac3cf03b249f683e077eab0120d4a813ae647ca834f9820c82e\": not found" containerID="b02671d13318eac3cf03b249f683e077eab0120d4a813ae647ca834f9820c82e" Nov 6 00:28:54.599097 kubelet[2762]: I1106 00:28:54.599018 2762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b02671d13318eac3cf03b249f683e077eab0120d4a813ae647ca834f9820c82e"} err="failed to get container status \"b02671d13318eac3cf03b249f683e077eab0120d4a813ae647ca834f9820c82e\": rpc error: code = NotFound desc = an error occurred when try to find container \"b02671d13318eac3cf03b249f683e077eab0120d4a813ae647ca834f9820c82e\": not found" Nov 6 00:28:54.599097 kubelet[2762]: I1106 00:28:54.599040 2762 scope.go:117] "RemoveContainer" containerID="23cc4fbf9a5d0b2e8e0ef436b4832e9b8bb111266a4829152bc9557f0b0e595b" Nov 6 00:28:54.599524 containerd[1541]: time="2025-11-06T00:28:54.599490624Z" level=error msg="ContainerStatus for 
\"23cc4fbf9a5d0b2e8e0ef436b4832e9b8bb111266a4829152bc9557f0b0e595b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"23cc4fbf9a5d0b2e8e0ef436b4832e9b8bb111266a4829152bc9557f0b0e595b\": not found" Nov 6 00:28:54.599792 kubelet[2762]: E1106 00:28:54.599688 2762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"23cc4fbf9a5d0b2e8e0ef436b4832e9b8bb111266a4829152bc9557f0b0e595b\": not found" containerID="23cc4fbf9a5d0b2e8e0ef436b4832e9b8bb111266a4829152bc9557f0b0e595b" Nov 6 00:28:54.599792 kubelet[2762]: I1106 00:28:54.599715 2762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"23cc4fbf9a5d0b2e8e0ef436b4832e9b8bb111266a4829152bc9557f0b0e595b"} err="failed to get container status \"23cc4fbf9a5d0b2e8e0ef436b4832e9b8bb111266a4829152bc9557f0b0e595b\": rpc error: code = NotFound desc = an error occurred when try to find container \"23cc4fbf9a5d0b2e8e0ef436b4832e9b8bb111266a4829152bc9557f0b0e595b\": not found" Nov 6 00:28:54.599792 kubelet[2762]: I1106 00:28:54.599733 2762 scope.go:117] "RemoveContainer" containerID="2568ce2b88f0de9d0d0100333fd51d8618bd044cddf3aa072d89c4b9b0ba4ced" Nov 6 00:28:54.600078 containerd[1541]: time="2025-11-06T00:28:54.600050090Z" level=error msg="ContainerStatus for \"2568ce2b88f0de9d0d0100333fd51d8618bd044cddf3aa072d89c4b9b0ba4ced\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2568ce2b88f0de9d0d0100333fd51d8618bd044cddf3aa072d89c4b9b0ba4ced\": not found" Nov 6 00:28:54.601443 kubelet[2762]: E1106 00:28:54.600225 2762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2568ce2b88f0de9d0d0100333fd51d8618bd044cddf3aa072d89c4b9b0ba4ced\": not found" 
containerID="2568ce2b88f0de9d0d0100333fd51d8618bd044cddf3aa072d89c4b9b0ba4ced" Nov 6 00:28:54.601443 kubelet[2762]: I1106 00:28:54.600253 2762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2568ce2b88f0de9d0d0100333fd51d8618bd044cddf3aa072d89c4b9b0ba4ced"} err="failed to get container status \"2568ce2b88f0de9d0d0100333fd51d8618bd044cddf3aa072d89c4b9b0ba4ced\": rpc error: code = NotFound desc = an error occurred when try to find container \"2568ce2b88f0de9d0d0100333fd51d8618bd044cddf3aa072d89c4b9b0ba4ced\": not found" Nov 6 00:28:54.601443 kubelet[2762]: I1106 00:28:54.600270 2762 scope.go:117] "RemoveContainer" containerID="367c1374b284454f3546b80da347b02b062f460b67edb09732e5cbcd20eba489" Nov 6 00:28:54.603245 systemd-logind[1523]: Removed session 34. Nov 6 00:28:54.606514 containerd[1541]: time="2025-11-06T00:28:54.606467480Z" level=info msg="RemoveContainer for \"367c1374b284454f3546b80da347b02b062f460b67edb09732e5cbcd20eba489\"" Nov 6 00:28:54.620584 containerd[1541]: time="2025-11-06T00:28:54.618876621Z" level=info msg="RemoveContainer for \"367c1374b284454f3546b80da347b02b062f460b67edb09732e5cbcd20eba489\" returns successfully" Nov 6 00:28:54.621368 kubelet[2762]: I1106 00:28:54.621338 2762 scope.go:117] "RemoveContainer" containerID="367c1374b284454f3546b80da347b02b062f460b67edb09732e5cbcd20eba489" Nov 6 00:28:54.621944 containerd[1541]: time="2025-11-06T00:28:54.621901602Z" level=error msg="ContainerStatus for \"367c1374b284454f3546b80da347b02b062f460b67edb09732e5cbcd20eba489\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"367c1374b284454f3546b80da347b02b062f460b67edb09732e5cbcd20eba489\": not found" Nov 6 00:28:54.625499 kubelet[2762]: E1106 00:28:54.623560 2762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container 
\"367c1374b284454f3546b80da347b02b062f460b67edb09732e5cbcd20eba489\": not found" containerID="367c1374b284454f3546b80da347b02b062f460b67edb09732e5cbcd20eba489" Nov 6 00:28:54.625776 kubelet[2762]: I1106 00:28:54.625727 2762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"367c1374b284454f3546b80da347b02b062f460b67edb09732e5cbcd20eba489"} err="failed to get container status \"367c1374b284454f3546b80da347b02b062f460b67edb09732e5cbcd20eba489\": rpc error: code = NotFound desc = an error occurred when try to find container \"367c1374b284454f3546b80da347b02b062f460b67edb09732e5cbcd20eba489\": not found" Nov 6 00:28:54.703869 sshd[4674]: Accepted publickey for core from 10.0.0.1 port 58276 ssh2: RSA SHA256:PmSYF5WO1c+PbjRA1Pm6yQw5/JNmNUR55sY7don0Q4E Nov 6 00:28:54.707007 sshd-session[4674]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:28:54.728399 systemd-logind[1523]: New session 35 of user core. Nov 6 00:28:54.744467 systemd[1]: Started session-35.scope - Session 35 of User core. 
Nov 6 00:28:55.069734 kubelet[2762]: I1106 00:28:55.062970 2762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60420992-95c1-4e3f-94a6-8591d0324a99" path="/var/lib/kubelet/pods/60420992-95c1-4e3f-94a6-8591d0324a99/volumes" Nov 6 00:28:55.069734 kubelet[2762]: I1106 00:28:55.066878 2762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dbc7d5c4-e427-44e9-a843-e50d922229f9" path="/var/lib/kubelet/pods/dbc7d5c4-e427-44e9-a843-e50d922229f9/volumes" Nov 6 00:28:56.457499 kubelet[2762]: I1106 00:28:56.457406 2762 setters.go:618] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-06T00:28:56Z","lastTransitionTime":"2025-11-06T00:28:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Nov 6 00:28:56.631515 sshd[4677]: Connection closed by 10.0.0.1 port 58276 Nov 6 00:28:56.634140 sshd-session[4674]: pam_unix(sshd:session): session closed for user core Nov 6 00:28:56.658439 systemd[1]: sshd@34-10.0.0.113:22-10.0.0.1:58276.service: Deactivated successfully. Nov 6 00:28:56.665773 systemd[1]: session-35.scope: Deactivated successfully. Nov 6 00:28:56.666224 systemd[1]: session-35.scope: Consumed 1.139s CPU time, 27.8M memory peak. Nov 6 00:28:56.699227 systemd-logind[1523]: Session 35 logged out. Waiting for processes to exit. Nov 6 00:28:56.703874 systemd[1]: Started sshd@35-10.0.0.113:22-10.0.0.1:40106.service - OpenSSH per-connection server daemon (10.0.0.1:40106). Nov 6 00:28:56.706653 systemd-logind[1523]: Removed session 35. Nov 6 00:28:56.769009 systemd[1]: Created slice kubepods-burstable-pod9063c6f2_4a2c_412c_8e71_e2334df5b114.slice - libcontainer container kubepods-burstable-pod9063c6f2_4a2c_412c_8e71_e2334df5b114.slice. 
Nov 6 00:28:56.818242 sshd[4689]: Accepted publickey for core from 10.0.0.1 port 40106 ssh2: RSA SHA256:PmSYF5WO1c+PbjRA1Pm6yQw5/JNmNUR55sY7don0Q4E
Nov 6 00:28:56.820336 sshd-session[4689]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 6 00:28:56.836694 systemd-logind[1523]: New session 36 of user core.
Nov 6 00:28:56.843281 systemd[1]: Started session-36.scope - Session 36 of User core.
Nov 6 00:28:56.884279 kubelet[2762]: I1106 00:28:56.884162 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9063c6f2-4a2c-412c-8e71-e2334df5b114-cilium-run\") pod \"cilium-xsrz9\" (UID: \"9063c6f2-4a2c-412c-8e71-e2334df5b114\") " pod="kube-system/cilium-xsrz9"
Nov 6 00:28:56.884279 kubelet[2762]: I1106 00:28:56.884245 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4bjg\" (UniqueName: \"kubernetes.io/projected/9063c6f2-4a2c-412c-8e71-e2334df5b114-kube-api-access-p4bjg\") pod \"cilium-xsrz9\" (UID: \"9063c6f2-4a2c-412c-8e71-e2334df5b114\") " pod="kube-system/cilium-xsrz9"
Nov 6 00:28:56.884279 kubelet[2762]: I1106 00:28:56.884274 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9063c6f2-4a2c-412c-8e71-e2334df5b114-bpf-maps\") pod \"cilium-xsrz9\" (UID: \"9063c6f2-4a2c-412c-8e71-e2334df5b114\") " pod="kube-system/cilium-xsrz9"
Nov 6 00:28:56.884279 kubelet[2762]: I1106 00:28:56.884301 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9063c6f2-4a2c-412c-8e71-e2334df5b114-clustermesh-secrets\") pod \"cilium-xsrz9\" (UID: \"9063c6f2-4a2c-412c-8e71-e2334df5b114\") " pod="kube-system/cilium-xsrz9"
Nov 6 00:28:56.884667 kubelet[2762]: I1106 00:28:56.884419 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9063c6f2-4a2c-412c-8e71-e2334df5b114-host-proc-sys-net\") pod \"cilium-xsrz9\" (UID: \"9063c6f2-4a2c-412c-8e71-e2334df5b114\") " pod="kube-system/cilium-xsrz9"
Nov 6 00:28:56.884667 kubelet[2762]: I1106 00:28:56.884510 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9063c6f2-4a2c-412c-8e71-e2334df5b114-etc-cni-netd\") pod \"cilium-xsrz9\" (UID: \"9063c6f2-4a2c-412c-8e71-e2334df5b114\") " pod="kube-system/cilium-xsrz9"
Nov 6 00:28:56.884667 kubelet[2762]: I1106 00:28:56.884574 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9063c6f2-4a2c-412c-8e71-e2334df5b114-xtables-lock\") pod \"cilium-xsrz9\" (UID: \"9063c6f2-4a2c-412c-8e71-e2334df5b114\") " pod="kube-system/cilium-xsrz9"
Nov 6 00:28:56.884667 kubelet[2762]: I1106 00:28:56.884628 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9063c6f2-4a2c-412c-8e71-e2334df5b114-hostproc\") pod \"cilium-xsrz9\" (UID: \"9063c6f2-4a2c-412c-8e71-e2334df5b114\") " pod="kube-system/cilium-xsrz9"
Nov 6 00:28:56.884667 kubelet[2762]: I1106 00:28:56.884654 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9063c6f2-4a2c-412c-8e71-e2334df5b114-lib-modules\") pod \"cilium-xsrz9\" (UID: \"9063c6f2-4a2c-412c-8e71-e2334df5b114\") " pod="kube-system/cilium-xsrz9"
Nov 6 00:28:56.884824 kubelet[2762]: I1106 00:28:56.884678 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/9063c6f2-4a2c-412c-8e71-e2334df5b114-cilium-ipsec-secrets\") pod \"cilium-xsrz9\" (UID: \"9063c6f2-4a2c-412c-8e71-e2334df5b114\") " pod="kube-system/cilium-xsrz9"
Nov 6 00:28:56.884824 kubelet[2762]: I1106 00:28:56.884705 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9063c6f2-4a2c-412c-8e71-e2334df5b114-host-proc-sys-kernel\") pod \"cilium-xsrz9\" (UID: \"9063c6f2-4a2c-412c-8e71-e2334df5b114\") " pod="kube-system/cilium-xsrz9"
Nov 6 00:28:56.884824 kubelet[2762]: I1106 00:28:56.884734 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9063c6f2-4a2c-412c-8e71-e2334df5b114-cilium-cgroup\") pod \"cilium-xsrz9\" (UID: \"9063c6f2-4a2c-412c-8e71-e2334df5b114\") " pod="kube-system/cilium-xsrz9"
Nov 6 00:28:56.884824 kubelet[2762]: I1106 00:28:56.884762 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9063c6f2-4a2c-412c-8e71-e2334df5b114-cni-path\") pod \"cilium-xsrz9\" (UID: \"9063c6f2-4a2c-412c-8e71-e2334df5b114\") " pod="kube-system/cilium-xsrz9"
Nov 6 00:28:56.885065 kubelet[2762]: I1106 00:28:56.884834 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9063c6f2-4a2c-412c-8e71-e2334df5b114-cilium-config-path\") pod \"cilium-xsrz9\" (UID: \"9063c6f2-4a2c-412c-8e71-e2334df5b114\") " pod="kube-system/cilium-xsrz9"
Nov 6 00:28:56.885065 kubelet[2762]: I1106 00:28:56.884864 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9063c6f2-4a2c-412c-8e71-e2334df5b114-hubble-tls\") pod \"cilium-xsrz9\" (UID: \"9063c6f2-4a2c-412c-8e71-e2334df5b114\") " pod="kube-system/cilium-xsrz9"
Nov 6 00:28:56.909736 sshd[4692]: Connection closed by 10.0.0.1 port 40106
Nov 6 00:28:56.910093 sshd-session[4689]: pam_unix(sshd:session): session closed for user core
Nov 6 00:28:56.928651 systemd[1]: sshd@35-10.0.0.113:22-10.0.0.1:40106.service: Deactivated successfully.
Nov 6 00:28:56.931489 systemd[1]: session-36.scope: Deactivated successfully.
Nov 6 00:28:56.934506 systemd-logind[1523]: Session 36 logged out. Waiting for processes to exit.
Nov 6 00:28:56.938073 systemd[1]: Started sshd@36-10.0.0.113:22-10.0.0.1:40114.service - OpenSSH per-connection server daemon (10.0.0.1:40114).
Nov 6 00:28:56.947940 systemd-logind[1523]: Removed session 36.
Nov 6 00:28:57.058649 sshd[4699]: Accepted publickey for core from 10.0.0.1 port 40114 ssh2: RSA SHA256:PmSYF5WO1c+PbjRA1Pm6yQw5/JNmNUR55sY7don0Q4E
Nov 6 00:28:57.064547 sshd-session[4699]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 6 00:28:57.086542 systemd-logind[1523]: New session 37 of user core.
Nov 6 00:28:57.089828 kubelet[2762]: E1106 00:28:57.089765 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 6 00:28:57.090590 containerd[1541]: time="2025-11-06T00:28:57.090531782Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xsrz9,Uid:9063c6f2-4a2c-412c-8e71-e2334df5b114,Namespace:kube-system,Attempt:0,}"
Nov 6 00:28:57.109276 systemd[1]: Started session-37.scope - Session 37 of User core.
Nov 6 00:28:57.192994 containerd[1541]: time="2025-11-06T00:28:57.192060694Z" level=info msg="connecting to shim 9a544f63fb6937a2a8898f3f86746bb3f36c97445d752233d9d066d4a4e2ac72" address="unix:///run/containerd/s/6fa4e7d016cf619ee21abd97c409b5f0ffd469f6ffe67fc3f8e7dd15af3fb08f" namespace=k8s.io protocol=ttrpc version=3
Nov 6 00:28:57.266921 systemd[1]: Started cri-containerd-9a544f63fb6937a2a8898f3f86746bb3f36c97445d752233d9d066d4a4e2ac72.scope - libcontainer container 9a544f63fb6937a2a8898f3f86746bb3f36c97445d752233d9d066d4a4e2ac72.
Nov 6 00:28:57.394822 containerd[1541]: time="2025-11-06T00:28:57.394743530Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xsrz9,Uid:9063c6f2-4a2c-412c-8e71-e2334df5b114,Namespace:kube-system,Attempt:0,} returns sandbox id \"9a544f63fb6937a2a8898f3f86746bb3f36c97445d752233d9d066d4a4e2ac72\""
Nov 6 00:28:57.399501 kubelet[2762]: E1106 00:28:57.397946 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 6 00:28:57.498012 containerd[1541]: time="2025-11-06T00:28:57.497819977Z" level=info msg="CreateContainer within sandbox \"9a544f63fb6937a2a8898f3f86746bb3f36c97445d752233d9d066d4a4e2ac72\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Nov 6 00:28:57.587384 containerd[1541]: time="2025-11-06T00:28:57.586349453Z" level=info msg="Container 0d9bd0975616dfe133950095236b6b1466d02303ed472d4fb60dd2eb6909c43a: CDI devices from CRI Config.CDIDevices: []"
Nov 6 00:28:57.621857 containerd[1541]: time="2025-11-06T00:28:57.621776607Z" level=info msg="CreateContainer within sandbox \"9a544f63fb6937a2a8898f3f86746bb3f36c97445d752233d9d066d4a4e2ac72\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"0d9bd0975616dfe133950095236b6b1466d02303ed472d4fb60dd2eb6909c43a\""
Nov 6 00:28:57.632086 containerd[1541]: time="2025-11-06T00:28:57.631966501Z" level=info msg="StartContainer for \"0d9bd0975616dfe133950095236b6b1466d02303ed472d4fb60dd2eb6909c43a\""
Nov 6 00:28:57.634867 containerd[1541]: time="2025-11-06T00:28:57.634297541Z" level=info msg="connecting to shim 0d9bd0975616dfe133950095236b6b1466d02303ed472d4fb60dd2eb6909c43a" address="unix:///run/containerd/s/6fa4e7d016cf619ee21abd97c409b5f0ffd469f6ffe67fc3f8e7dd15af3fb08f" protocol=ttrpc version=3
Nov 6 00:28:57.711690 systemd[1]: Started cri-containerd-0d9bd0975616dfe133950095236b6b1466d02303ed472d4fb60dd2eb6909c43a.scope - libcontainer container 0d9bd0975616dfe133950095236b6b1466d02303ed472d4fb60dd2eb6909c43a.
Nov 6 00:28:57.861474 containerd[1541]: time="2025-11-06T00:28:57.859510079Z" level=info msg="StartContainer for \"0d9bd0975616dfe133950095236b6b1466d02303ed472d4fb60dd2eb6909c43a\" returns successfully"
Nov 6 00:28:57.900418 systemd[1]: cri-containerd-0d9bd0975616dfe133950095236b6b1466d02303ed472d4fb60dd2eb6909c43a.scope: Deactivated successfully.
Nov 6 00:28:57.910479 containerd[1541]: time="2025-11-06T00:28:57.910231428Z" level=info msg="received exit event container_id:\"0d9bd0975616dfe133950095236b6b1466d02303ed472d4fb60dd2eb6909c43a\" id:\"0d9bd0975616dfe133950095236b6b1466d02303ed472d4fb60dd2eb6909c43a\" pid:4770 exited_at:{seconds:1762388937 nanos:909754580}"
Nov 6 00:28:57.910805 containerd[1541]: time="2025-11-06T00:28:57.910751118Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0d9bd0975616dfe133950095236b6b1466d02303ed472d4fb60dd2eb6909c43a\" id:\"0d9bd0975616dfe133950095236b6b1466d02303ed472d4fb60dd2eb6909c43a\" pid:4770 exited_at:{seconds:1762388937 nanos:909754580}"
Nov 6 00:28:58.232727 kubelet[2762]: E1106 00:28:58.232508 2762 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Nov 6 00:28:58.427608 kubelet[2762]: E1106 00:28:58.427532 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 6 00:28:58.457078 containerd[1541]: time="2025-11-06T00:28:58.456764501Z" level=info msg="CreateContainer within sandbox \"9a544f63fb6937a2a8898f3f86746bb3f36c97445d752233d9d066d4a4e2ac72\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Nov 6 00:28:58.517849 containerd[1541]: time="2025-11-06T00:28:58.516490156Z" level=info msg="Container c672a7e25c2f8a70b0ad77bbfc37035e0d598edfd104e49879ce7550d29643b5: CDI devices from CRI Config.CDIDevices: []"
Nov 6 00:28:58.531470 containerd[1541]: time="2025-11-06T00:28:58.528576621Z" level=info msg="CreateContainer within sandbox \"9a544f63fb6937a2a8898f3f86746bb3f36c97445d752233d9d066d4a4e2ac72\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"c672a7e25c2f8a70b0ad77bbfc37035e0d598edfd104e49879ce7550d29643b5\""
Nov 6 00:28:58.531470 containerd[1541]: time="2025-11-06T00:28:58.530706538Z" level=info msg="StartContainer for \"c672a7e25c2f8a70b0ad77bbfc37035e0d598edfd104e49879ce7550d29643b5\""
Nov 6 00:28:58.536408 containerd[1541]: time="2025-11-06T00:28:58.535198942Z" level=info msg="connecting to shim c672a7e25c2f8a70b0ad77bbfc37035e0d598edfd104e49879ce7550d29643b5" address="unix:///run/containerd/s/6fa4e7d016cf619ee21abd97c409b5f0ffd469f6ffe67fc3f8e7dd15af3fb08f" protocol=ttrpc version=3
Nov 6 00:28:58.576804 systemd[1]: Started cri-containerd-c672a7e25c2f8a70b0ad77bbfc37035e0d598edfd104e49879ce7550d29643b5.scope - libcontainer container c672a7e25c2f8a70b0ad77bbfc37035e0d598edfd104e49879ce7550d29643b5.
Nov 6 00:28:58.695063 containerd[1541]: time="2025-11-06T00:28:58.693098368Z" level=info msg="StartContainer for \"c672a7e25c2f8a70b0ad77bbfc37035e0d598edfd104e49879ce7550d29643b5\" returns successfully"
Nov 6 00:28:58.695384 systemd[1]: cri-containerd-c672a7e25c2f8a70b0ad77bbfc37035e0d598edfd104e49879ce7550d29643b5.scope: Deactivated successfully.
Nov 6 00:28:58.701081 containerd[1541]: time="2025-11-06T00:28:58.700038594Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c672a7e25c2f8a70b0ad77bbfc37035e0d598edfd104e49879ce7550d29643b5\" id:\"c672a7e25c2f8a70b0ad77bbfc37035e0d598edfd104e49879ce7550d29643b5\" pid:4814 exited_at:{seconds:1762388938 nanos:696694966}"
Nov 6 00:28:58.701081 containerd[1541]: time="2025-11-06T00:28:58.700346290Z" level=info msg="received exit event container_id:\"c672a7e25c2f8a70b0ad77bbfc37035e0d598edfd104e49879ce7550d29643b5\" id:\"c672a7e25c2f8a70b0ad77bbfc37035e0d598edfd104e49879ce7550d29643b5\" pid:4814 exited_at:{seconds:1762388938 nanos:696694966}"
Nov 6 00:28:59.007053 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c672a7e25c2f8a70b0ad77bbfc37035e0d598edfd104e49879ce7550d29643b5-rootfs.mount: Deactivated successfully.
Nov 6 00:28:59.445496 kubelet[2762]: E1106 00:28:59.444674 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 6 00:28:59.475707 containerd[1541]: time="2025-11-06T00:28:59.475617681Z" level=info msg="CreateContainer within sandbox \"9a544f63fb6937a2a8898f3f86746bb3f36c97445d752233d9d066d4a4e2ac72\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Nov 6 00:28:59.503970 containerd[1541]: time="2025-11-06T00:28:59.503873999Z" level=info msg="Container 27741991d99ff6b547c7c488843ac3bec5a6f8cf86ae702a9b53944866831e52: CDI devices from CRI Config.CDIDevices: []"
Nov 6 00:28:59.596501 containerd[1541]: time="2025-11-06T00:28:59.590747901Z" level=info msg="CreateContainer within sandbox \"9a544f63fb6937a2a8898f3f86746bb3f36c97445d752233d9d066d4a4e2ac72\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"27741991d99ff6b547c7c488843ac3bec5a6f8cf86ae702a9b53944866831e52\""
Nov 6 00:28:59.596501 containerd[1541]: time="2025-11-06T00:28:59.592213251Z" level=info msg="StartContainer for \"27741991d99ff6b547c7c488843ac3bec5a6f8cf86ae702a9b53944866831e52\""
Nov 6 00:28:59.604454 containerd[1541]: time="2025-11-06T00:28:59.603532185Z" level=info msg="connecting to shim 27741991d99ff6b547c7c488843ac3bec5a6f8cf86ae702a9b53944866831e52" address="unix:///run/containerd/s/6fa4e7d016cf619ee21abd97c409b5f0ffd469f6ffe67fc3f8e7dd15af3fb08f" protocol=ttrpc version=3
Nov 6 00:28:59.693209 systemd[1]: Started cri-containerd-27741991d99ff6b547c7c488843ac3bec5a6f8cf86ae702a9b53944866831e52.scope - libcontainer container 27741991d99ff6b547c7c488843ac3bec5a6f8cf86ae702a9b53944866831e52.
Nov 6 00:28:59.825690 systemd[1]: cri-containerd-27741991d99ff6b547c7c488843ac3bec5a6f8cf86ae702a9b53944866831e52.scope: Deactivated successfully.
Nov 6 00:28:59.833412 containerd[1541]: time="2025-11-06T00:28:59.833349080Z" level=info msg="received exit event container_id:\"27741991d99ff6b547c7c488843ac3bec5a6f8cf86ae702a9b53944866831e52\" id:\"27741991d99ff6b547c7c488843ac3bec5a6f8cf86ae702a9b53944866831e52\" pid:4858 exited_at:{seconds:1762388939 nanos:832868464}"
Nov 6 00:28:59.833567 containerd[1541]: time="2025-11-06T00:28:59.833355722Z" level=info msg="TaskExit event in podsandbox handler container_id:\"27741991d99ff6b547c7c488843ac3bec5a6f8cf86ae702a9b53944866831e52\" id:\"27741991d99ff6b547c7c488843ac3bec5a6f8cf86ae702a9b53944866831e52\" pid:4858 exited_at:{seconds:1762388939 nanos:832868464}"
Nov 6 00:28:59.835657 containerd[1541]: time="2025-11-06T00:28:59.835622840Z" level=info msg="StartContainer for \"27741991d99ff6b547c7c488843ac3bec5a6f8cf86ae702a9b53944866831e52\" returns successfully"
Nov 6 00:28:59.901524 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-27741991d99ff6b547c7c488843ac3bec5a6f8cf86ae702a9b53944866831e52-rootfs.mount: Deactivated successfully.
Nov 6 00:29:00.460735 kubelet[2762]: E1106 00:29:00.458276 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 6 00:29:00.488215 containerd[1541]: time="2025-11-06T00:29:00.485612926Z" level=info msg="CreateContainer within sandbox \"9a544f63fb6937a2a8898f3f86746bb3f36c97445d752233d9d066d4a4e2ac72\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Nov 6 00:29:00.556084 containerd[1541]: time="2025-11-06T00:29:00.555980785Z" level=info msg="Container d4ef1192b9eb6b27b9111a22c375ea822a622ea56231749808ac89824b434582: CDI devices from CRI Config.CDIDevices: []"
Nov 6 00:29:00.586592 containerd[1541]: time="2025-11-06T00:29:00.586440879Z" level=info msg="CreateContainer within sandbox \"9a544f63fb6937a2a8898f3f86746bb3f36c97445d752233d9d066d4a4e2ac72\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"d4ef1192b9eb6b27b9111a22c375ea822a622ea56231749808ac89824b434582\""
Nov 6 00:29:00.588288 containerd[1541]: time="2025-11-06T00:29:00.587579096Z" level=info msg="StartContainer for \"d4ef1192b9eb6b27b9111a22c375ea822a622ea56231749808ac89824b434582\""
Nov 6 00:29:00.589070 containerd[1541]: time="2025-11-06T00:29:00.589040278Z" level=info msg="connecting to shim d4ef1192b9eb6b27b9111a22c375ea822a622ea56231749808ac89824b434582" address="unix:///run/containerd/s/6fa4e7d016cf619ee21abd97c409b5f0ffd469f6ffe67fc3f8e7dd15af3fb08f" protocol=ttrpc version=3
Nov 6 00:29:00.662644 systemd[1]: Started cri-containerd-d4ef1192b9eb6b27b9111a22c375ea822a622ea56231749808ac89824b434582.scope - libcontainer container d4ef1192b9eb6b27b9111a22c375ea822a622ea56231749808ac89824b434582.
Nov 6 00:29:00.800094 systemd[1]: cri-containerd-d4ef1192b9eb6b27b9111a22c375ea822a622ea56231749808ac89824b434582.scope: Deactivated successfully.
Nov 6 00:29:00.804001 containerd[1541]: time="2025-11-06T00:29:00.800068993Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d4ef1192b9eb6b27b9111a22c375ea822a622ea56231749808ac89824b434582\" id:\"d4ef1192b9eb6b27b9111a22c375ea822a622ea56231749808ac89824b434582\" pid:4897 exited_at:{seconds:1762388940 nanos:799682658}"
Nov 6 00:29:00.806103 containerd[1541]: time="2025-11-06T00:29:00.806006860Z" level=info msg="received exit event container_id:\"d4ef1192b9eb6b27b9111a22c375ea822a622ea56231749808ac89824b434582\" id:\"d4ef1192b9eb6b27b9111a22c375ea822a622ea56231749808ac89824b434582\" pid:4897 exited_at:{seconds:1762388940 nanos:799682658}"
Nov 6 00:29:00.818933 containerd[1541]: time="2025-11-06T00:29:00.812987082Z" level=info msg="StartContainer for \"d4ef1192b9eb6b27b9111a22c375ea822a622ea56231749808ac89824b434582\" returns successfully"
Nov 6 00:29:00.879696 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d4ef1192b9eb6b27b9111a22c375ea822a622ea56231749808ac89824b434582-rootfs.mount: Deactivated successfully.
Nov 6 00:29:01.498610 kubelet[2762]: E1106 00:29:01.498551 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 6 00:29:01.520388 containerd[1541]: time="2025-11-06T00:29:01.520325961Z" level=info msg="CreateContainer within sandbox \"9a544f63fb6937a2a8898f3f86746bb3f36c97445d752233d9d066d4a4e2ac72\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Nov 6 00:29:01.602969 containerd[1541]: time="2025-11-06T00:29:01.602787199Z" level=info msg="Container 73ee3fca1576b852a2b2f58faf96454b01875caae44879b53e9d6197a072b70b: CDI devices from CRI Config.CDIDevices: []"
Nov 6 00:29:01.650625 containerd[1541]: time="2025-11-06T00:29:01.650519724Z" level=info msg="CreateContainer within sandbox \"9a544f63fb6937a2a8898f3f86746bb3f36c97445d752233d9d066d4a4e2ac72\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"73ee3fca1576b852a2b2f58faf96454b01875caae44879b53e9d6197a072b70b\""
Nov 6 00:29:01.652529 containerd[1541]: time="2025-11-06T00:29:01.651676407Z" level=info msg="StartContainer for \"73ee3fca1576b852a2b2f58faf96454b01875caae44879b53e9d6197a072b70b\""
Nov 6 00:29:01.659282 containerd[1541]: time="2025-11-06T00:29:01.659203430Z" level=info msg="connecting to shim 73ee3fca1576b852a2b2f58faf96454b01875caae44879b53e9d6197a072b70b" address="unix:///run/containerd/s/6fa4e7d016cf619ee21abd97c409b5f0ffd469f6ffe67fc3f8e7dd15af3fb08f" protocol=ttrpc version=3
Nov 6 00:29:01.751235 systemd[1]: Started cri-containerd-73ee3fca1576b852a2b2f58faf96454b01875caae44879b53e9d6197a072b70b.scope - libcontainer container 73ee3fca1576b852a2b2f58faf96454b01875caae44879b53e9d6197a072b70b.
Nov 6 00:29:01.869716 containerd[1541]: time="2025-11-06T00:29:01.867702406Z" level=info msg="StartContainer for \"73ee3fca1576b852a2b2f58faf96454b01875caae44879b53e9d6197a072b70b\" returns successfully"
Nov 6 00:29:02.056824 containerd[1541]: time="2025-11-06T00:29:02.055915723Z" level=info msg="TaskExit event in podsandbox handler container_id:\"73ee3fca1576b852a2b2f58faf96454b01875caae44879b53e9d6197a072b70b\" id:\"60508779051b5b1178e524b41e5e7a9ae4ab6e78ca33d1a490aaab2aec75daa8\" pid:4968 exited_at:{seconds:1762388942 nanos:55447942}"
Nov 6 00:29:02.541292 kubelet[2762]: E1106 00:29:02.525580 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 6 00:29:03.095543 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx))
Nov 6 00:29:03.509684 kubelet[2762]: E1106 00:29:03.509535 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 6 00:29:04.448134 containerd[1541]: time="2025-11-06T00:29:04.448038507Z" level=info msg="TaskExit event in podsandbox handler container_id:\"73ee3fca1576b852a2b2f58faf96454b01875caae44879b53e9d6197a072b70b\" id:\"3376c843c9c4b40b620e4b85d458dcbdd3c9e301d6b276c27baa442457caac19\" pid:5074 exit_status:1 exited_at:{seconds:1762388944 nanos:446191419}"
Nov 6 00:29:04.485680 kubelet[2762]: E1106 00:29:04.485539 2762 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:41758->127.0.0.1:37659: write tcp 127.0.0.1:41758->127.0.0.1:37659: write: broken pipe
Nov 6 00:29:04.509647 kubelet[2762]: E1106 00:29:04.509608 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 6 00:29:06.912775 containerd[1541]: time="2025-11-06T00:29:06.912670471Z" level=info msg="TaskExit event in podsandbox handler container_id:\"73ee3fca1576b852a2b2f58faf96454b01875caae44879b53e9d6197a072b70b\" id:\"271694c861908002445762c8880caed0988d473ba25f8ebd0fad3eb9ae1ae903\" pid:5193 exit_status:1 exited_at:{seconds:1762388946 nanos:911903410}"
Nov 6 00:29:09.404733 systemd-networkd[1460]: lxc_health: Link UP
Nov 6 00:29:09.422078 systemd-networkd[1460]: lxc_health: Gained carrier
Nov 6 00:29:09.573322 containerd[1541]: time="2025-11-06T00:29:09.573264303Z" level=info msg="TaskExit event in podsandbox handler container_id:\"73ee3fca1576b852a2b2f58faf96454b01875caae44879b53e9d6197a072b70b\" id:\"30d0bc727a45ad69291519f2c668c436e3a4a72e0cd62bc67aaf66c26013ce3f\" pid:5529 exit_status:1 exited_at:{seconds:1762388949 nanos:572639473}"
Nov 6 00:29:10.046920 kubelet[2762]: E1106 00:29:10.046373 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 6 00:29:10.725641 systemd-networkd[1460]: lxc_health: Gained IPv6LL
Nov 6 00:29:11.095713 kubelet[2762]: E1106 00:29:11.095675 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 6 00:29:11.164830 kubelet[2762]: I1106 00:29:11.164688 2762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-xsrz9" podStartSLOduration=15.164660892 podStartE2EDuration="15.164660892s" podCreationTimestamp="2025-11-06 00:28:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 00:29:02.817346941 +0000 UTC m=+169.857581370" watchObservedRunningTime="2025-11-06 00:29:11.164660892 +0000 UTC m=+178.204895321"
Nov 6 00:29:11.540387 kubelet[2762]: E1106 00:29:11.540049 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 6 00:29:12.047442 containerd[1541]: time="2025-11-06T00:29:12.047370148Z" level=info msg="TaskExit event in podsandbox handler container_id:\"73ee3fca1576b852a2b2f58faf96454b01875caae44879b53e9d6197a072b70b\" id:\"ad5df85d037d75a7118a27960756a373a7b2f2d2328b72831f1fcd5f4deb832a\" pid:5585 exited_at:{seconds:1762388952 nanos:46842464}"
Nov 6 00:29:12.551651 kubelet[2762]: E1106 00:29:12.551148 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 6 00:29:13.059105 containerd[1541]: time="2025-11-06T00:29:13.059035918Z" level=info msg="StopPodSandbox for \"b68479c2bc560ed7134dba8b8ad7686853511c88cb71dd5e1faa9950d7bd59b5\""
Nov 6 00:29:13.059105 containerd[1541]: time="2025-11-06T00:29:13.059253422Z" level=info msg="TearDown network for sandbox \"b68479c2bc560ed7134dba8b8ad7686853511c88cb71dd5e1faa9950d7bd59b5\" successfully"
Nov 6 00:29:13.059105 containerd[1541]: time="2025-11-06T00:29:13.059271427Z" level=info msg="StopPodSandbox for \"b68479c2bc560ed7134dba8b8ad7686853511c88cb71dd5e1faa9950d7bd59b5\" returns successfully"
Nov 6 00:29:13.067531 containerd[1541]: time="2025-11-06T00:29:13.067384275Z" level=info msg="RemovePodSandbox for \"b68479c2bc560ed7134dba8b8ad7686853511c88cb71dd5e1faa9950d7bd59b5\""
Nov 6 00:29:13.067531 containerd[1541]: time="2025-11-06T00:29:13.067469187Z" level=info msg="Forcibly stopping sandbox \"b68479c2bc560ed7134dba8b8ad7686853511c88cb71dd5e1faa9950d7bd59b5\""
Nov 6 00:29:13.067739 containerd[1541]: time="2025-11-06T00:29:13.067634993Z" level=info msg="TearDown network for sandbox \"b68479c2bc560ed7134dba8b8ad7686853511c88cb71dd5e1faa9950d7bd59b5\" successfully"
Nov 6 00:29:13.069759 containerd[1541]: time="2025-11-06T00:29:13.069709963Z" level=info msg="Ensure that sandbox b68479c2bc560ed7134dba8b8ad7686853511c88cb71dd5e1faa9950d7bd59b5 in task-service has been cleanup successfully"
Nov 6 00:29:13.266466 containerd[1541]: time="2025-11-06T00:29:13.266301181Z" level=info msg="RemovePodSandbox \"b68479c2bc560ed7134dba8b8ad7686853511c88cb71dd5e1faa9950d7bd59b5\" returns successfully"
Nov 6 00:29:13.267723 containerd[1541]: time="2025-11-06T00:29:13.267692089Z" level=info msg="StopPodSandbox for \"5086b9ba72418ebcd66943092abf02b29f9fd8c007d764d9cd2a081fdc91fb08\""
Nov 6 00:29:13.268061 containerd[1541]: time="2025-11-06T00:29:13.268003643Z" level=info msg="TearDown network for sandbox \"5086b9ba72418ebcd66943092abf02b29f9fd8c007d764d9cd2a081fdc91fb08\" successfully"
Nov 6 00:29:13.268694 containerd[1541]: time="2025-11-06T00:29:13.268129693Z" level=info msg="StopPodSandbox for \"5086b9ba72418ebcd66943092abf02b29f9fd8c007d764d9cd2a081fdc91fb08\" returns successfully"
Nov 6 00:29:13.268958 containerd[1541]: time="2025-11-06T00:29:13.268840165Z" level=info msg="RemovePodSandbox for \"5086b9ba72418ebcd66943092abf02b29f9fd8c007d764d9cd2a081fdc91fb08\""
Nov 6 00:29:13.269540 containerd[1541]: time="2025-11-06T00:29:13.269499050Z" level=info msg="Forcibly stopping sandbox \"5086b9ba72418ebcd66943092abf02b29f9fd8c007d764d9cd2a081fdc91fb08\""
Nov 6 00:29:13.269637 containerd[1541]: time="2025-11-06T00:29:13.269614730Z" level=info msg="TearDown network for sandbox \"5086b9ba72418ebcd66943092abf02b29f9fd8c007d764d9cd2a081fdc91fb08\" successfully"
Nov 6 00:29:13.277058 containerd[1541]: time="2025-11-06T00:29:13.276912747Z" level=info msg="Ensure that sandbox 5086b9ba72418ebcd66943092abf02b29f9fd8c007d764d9cd2a081fdc91fb08 in task-service has been cleanup successfully"
Nov 6 00:29:13.349638 containerd[1541]: time="2025-11-06T00:29:13.349515597Z" level=info msg="RemovePodSandbox \"5086b9ba72418ebcd66943092abf02b29f9fd8c007d764d9cd2a081fdc91fb08\" returns successfully"
Nov 6 00:29:14.044343 kubelet[2762]: E1106 00:29:14.043639 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 6 00:29:14.496226 containerd[1541]: time="2025-11-06T00:29:14.495806662Z" level=info msg="TaskExit event in podsandbox handler container_id:\"73ee3fca1576b852a2b2f58faf96454b01875caae44879b53e9d6197a072b70b\" id:\"55954fbfff1658f508b53499f67000d14bc4c97b2d1815806e75d77cf50f09c3\" pid:5614 exited_at:{seconds:1762388954 nanos:492481591}"
Nov 6 00:29:14.534525 sshd[4706]: Connection closed by 10.0.0.1 port 40114
Nov 6 00:29:14.534570 sshd-session[4699]: pam_unix(sshd:session): session closed for user core
Nov 6 00:29:14.551743 systemd[1]: sshd@36-10.0.0.113:22-10.0.0.1:40114.service: Deactivated successfully.
Nov 6 00:29:14.556909 systemd[1]: session-37.scope: Deactivated successfully.
Nov 6 00:29:14.559193 systemd-logind[1523]: Session 37 logged out. Waiting for processes to exit.
Nov 6 00:29:14.564235 systemd-logind[1523]: Removed session 37.