Nov 6 23:33:30.943245 kernel: Linux version 6.6.113-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Thu Nov 6 22:02:38 -00 2025
Nov 6 23:33:30.943285 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=1a4810aa24298684dd9efd264f1d9b812e4e16f32429f4615db9ff284dd4ac25
Nov 6 23:33:30.943301 kernel: BIOS-provided physical RAM map:
Nov 6 23:33:30.943312 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Nov 6 23:33:30.943321 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Nov 6 23:33:30.943331 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Nov 6 23:33:30.943339 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable
Nov 6 23:33:30.943346 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved
Nov 6 23:33:30.943353 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 6 23:33:30.943360 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Nov 6 23:33:30.943370 kernel: NX (Execute Disable) protection: active
Nov 6 23:33:30.943377 kernel: APIC: Static calls initialized
Nov 6 23:33:30.943389 kernel: SMBIOS 2.8 present.
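[Editor's note] The BIOS-e820 lines above are the firmware's physical RAM map; entries marked "usable" are the RAM the kernel may actually use. These lines are easy to process mechanically. A minimal sketch in Python, assuming nothing beyond the line format shown above (the function names and regex are my own, not part of any kernel tooling):

```python
import re

# Matches dmesg lines like:
#   BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
E820_RE = re.compile(r"BIOS-e820: \[mem 0x([0-9a-f]+)-0x([0-9a-f]+)\] (\w+)")

def parse_e820(log_text):
    """Return a list of (start, end, type) tuples from dmesg e820 output."""
    regions = []
    for m in E820_RE.finditer(log_text):
        start, end, kind = int(m.group(1), 16), int(m.group(2), 16), m.group(3)
        regions.append((start, end, kind))
    return regions

def usable_bytes(regions):
    """Sum the sizes of all 'usable' regions (end addresses are inclusive)."""
    return sum(end - start + 1 for start, end, kind in regions if kind == "usable")
```

Applied to the map above, the usable regions are 0x0-0x9fbff and 0x100000-0x7ffdafff, a little under 2 GiB, which is consistent with the "Memory: 1969156K/2096612K available" line later in this boot.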
Nov 6 23:33:30.943396 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
Nov 6 23:33:30.943405 kernel: Hypervisor detected: KVM
Nov 6 23:33:30.943413 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 6 23:33:30.943426 kernel: kvm-clock: using sched offset of 3608425874 cycles
Nov 6 23:33:30.943435 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 6 23:33:30.943443 kernel: tsc: Detected 2494.140 MHz processor
Nov 6 23:33:30.943452 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 6 23:33:30.943460 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 6 23:33:30.943468 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
Nov 6 23:33:30.943476 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Nov 6 23:33:30.943484 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Nov 6 23:33:30.943494 kernel: ACPI: Early table checksum verification disabled
Nov 6 23:33:30.943502 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS )
Nov 6 23:33:30.943514 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 6 23:33:30.943526 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 6 23:33:30.943538 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 6 23:33:30.943548 kernel: ACPI: FACS 0x000000007FFE0000 000040
Nov 6 23:33:30.943556 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 6 23:33:30.943564 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 6 23:33:30.943572 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 6 23:33:30.943583 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 6 23:33:30.943591 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
Nov 6 23:33:30.943599 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769]
Nov 6 23:33:30.943607 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Nov 6 23:33:30.943615 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d]
Nov 6 23:33:30.943623 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895]
Nov 6 23:33:30.943631 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d]
Nov 6 23:33:30.943643 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985]
Nov 6 23:33:30.943655 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Nov 6 23:33:30.943668 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Nov 6 23:33:30.943680 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Nov 6 23:33:30.943688 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Nov 6 23:33:30.943700 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00000000-0x7ffdafff]
Nov 6 23:33:30.943708 kernel: NODE_DATA(0) allocated [mem 0x7ffd5000-0x7ffdafff]
Nov 6 23:33:30.943720 kernel: Zone ranges:
Nov 6 23:33:30.943729 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Nov 6 23:33:30.946899 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff]
Nov 6 23:33:30.946916 kernel: Normal empty
Nov 6 23:33:30.946925 kernel: Movable zone start for each node
Nov 6 23:33:30.946933 kernel: Early memory node ranges
Nov 6 23:33:30.946942 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Nov 6 23:33:30.946950 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff]
Nov 6 23:33:30.946959 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff]
Nov 6 23:33:30.946968 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 6 23:33:30.946984 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Nov 6 23:33:30.946998 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges
Nov 6 23:33:30.947006 kernel: ACPI: PM-Timer IO Port: 0x608
Nov 6 23:33:30.947015 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 6 23:33:30.947024 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 6 23:33:30.947032 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 6 23:33:30.947041 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 6 23:33:30.947050 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 6 23:33:30.947058 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 6 23:33:30.947070 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 6 23:33:30.947079 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 6 23:33:30.947087 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Nov 6 23:33:30.947095 kernel: TSC deadline timer available
Nov 6 23:33:30.947104 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Nov 6 23:33:30.947112 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Nov 6 23:33:30.947121 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Nov 6 23:33:30.947132 kernel: Booting paravirtualized kernel on KVM
Nov 6 23:33:30.947141 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 6 23:33:30.947153 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Nov 6 23:33:30.947161 kernel: percpu: Embedded 58 pages/cpu s196712 r8192 d32664 u1048576
Nov 6 23:33:30.947170 kernel: pcpu-alloc: s196712 r8192 d32664 u1048576 alloc=1*2097152
Nov 6 23:33:30.947178 kernel: pcpu-alloc: [0] 0 1
Nov 6 23:33:30.947187 kernel: kvm-guest: PV spinlocks disabled, no host support
Nov 6 23:33:30.947197 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=1a4810aa24298684dd9efd264f1d9b812e4e16f32429f4615db9ff284dd4ac25
Nov 6 23:33:30.947206 kernel: random: crng init done
Nov 6 23:33:30.947214 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 6 23:33:30.947225 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Nov 6 23:33:30.947233 kernel: Fallback order for Node 0: 0
Nov 6 23:33:30.947242 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515803
Nov 6 23:33:30.947250 kernel: Policy zone: DMA32
Nov 6 23:33:30.947258 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 6 23:33:30.947268 kernel: Memory: 1969156K/2096612K available (14336K kernel code, 2288K rwdata, 22872K rodata, 43520K init, 1560K bss, 127196K reserved, 0K cma-reserved)
Nov 6 23:33:30.947276 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Nov 6 23:33:30.947285 kernel: Kernel/User page tables isolation: enabled
Nov 6 23:33:30.947293 kernel: ftrace: allocating 37954 entries in 149 pages
Nov 6 23:33:30.947304 kernel: ftrace: allocated 149 pages with 4 groups
Nov 6 23:33:30.947313 kernel: Dynamic Preempt: voluntary
Nov 6 23:33:30.947321 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 6 23:33:30.947331 kernel: rcu: RCU event tracing is enabled.
Nov 6 23:33:30.947339 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Nov 6 23:33:30.947347 kernel: Trampoline variant of Tasks RCU enabled.
Nov 6 23:33:30.947356 kernel: Rude variant of Tasks RCU enabled.
Nov 6 23:33:30.947364 kernel: Tracing variant of Tasks RCU enabled.
Nov 6 23:33:30.947373 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 6 23:33:30.947384 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Nov 6 23:33:30.947392 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Nov 6 23:33:30.947400 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 6 23:33:30.947409 kernel: Console: colour VGA+ 80x25
Nov 6 23:33:30.947420 kernel: printk: console [tty0] enabled
Nov 6 23:33:30.947428 kernel: printk: console [ttyS0] enabled
Nov 6 23:33:30.947437 kernel: ACPI: Core revision 20230628
Nov 6 23:33:30.947446 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Nov 6 23:33:30.947454 kernel: APIC: Switch to symmetric I/O mode setup
Nov 6 23:33:30.947465 kernel: x2apic enabled
Nov 6 23:33:30.947474 kernel: APIC: Switched APIC routing to: physical x2apic
Nov 6 23:33:30.947482 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Nov 6 23:33:30.947491 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f39a1d859, max_idle_ns: 440795326830 ns
Nov 6 23:33:30.947499 kernel: Calibrating delay loop (skipped) preset value.. 4988.28 BogoMIPS (lpj=2494140)
Nov 6 23:33:30.947507 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Nov 6 23:33:30.947516 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Nov 6 23:33:30.947524 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 6 23:33:30.947544 kernel: Spectre V2 : Mitigation: Retpolines
Nov 6 23:33:30.947553 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Nov 6 23:33:30.947562 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Nov 6 23:33:30.947570 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 6 23:33:30.947582 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 6 23:33:30.947591 kernel: MDS: Mitigation: Clear CPU buffers
Nov 6 23:33:30.947600 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Nov 6 23:33:30.947609 kernel: active return thunk: its_return_thunk
Nov 6 23:33:30.947617 kernel: ITS: Mitigation: Aligned branch/return thunks
Nov 6 23:33:30.947631 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 6 23:33:30.947640 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 6 23:33:30.947649 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 6 23:33:30.947658 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Nov 6 23:33:30.947667 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Nov 6 23:33:30.947676 kernel: Freeing SMP alternatives memory: 32K
Nov 6 23:33:30.947685 kernel: pid_max: default: 32768 minimum: 301
Nov 6 23:33:30.947694 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Nov 6 23:33:30.947703 kernel: landlock: Up and running.
Nov 6 23:33:30.947715 kernel: SELinux: Initializing.
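[Editor's note] The calibration line above reports 4988.28 BogoMIPS with lpj=2494140, and the SMP bring-up later reports a two-CPU total of 9976.56 BogoMIPS. Assuming the kernel's usual relation BogoMIPS = lpj / (500000 / HZ) and CONFIG_HZ=1000 (an assumption; the log does not state HZ), the numbers are self-consistent:

```python
HZ = 1000        # assumed CONFIG_HZ for this build; not stated in the log
lpj = 2494140    # loops-per-jiffy from the calibration line above

bogomips = lpj / (500000 / HZ)   # kernel's BogoMIPS formula (assumed)
per_cpu = round(bogomips, 2)     # should match the 4988.28 printed above
total = round(2 * bogomips, 2)   # should match the 2-CPU total printed later
```

The agreement of per-CPU and total values is just arithmetic, but it is a quick sanity check that both log lines describe the same calibration.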
Nov 6 23:33:30.947724 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Nov 6 23:33:30.947746 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Nov 6 23:33:30.947755 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
Nov 6 23:33:30.947765 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 6 23:33:30.947774 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 6 23:33:30.947783 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 6 23:33:30.947792 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
Nov 6 23:33:30.947804 kernel: signal: max sigframe size: 1776
Nov 6 23:33:30.947813 kernel: rcu: Hierarchical SRCU implementation.
Nov 6 23:33:30.947822 kernel: rcu: Max phase no-delay instances is 400.
Nov 6 23:33:30.947831 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Nov 6 23:33:30.947840 kernel: smp: Bringing up secondary CPUs ...
Nov 6 23:33:30.947848 kernel: smpboot: x86: Booting SMP configuration:
Nov 6 23:33:30.947857 kernel: .... node #0, CPUs: #1
Nov 6 23:33:30.947866 kernel: smp: Brought up 1 node, 2 CPUs
Nov 6 23:33:30.947877 kernel: smpboot: Max logical packages: 1
Nov 6 23:33:30.947886 kernel: smpboot: Total of 2 processors activated (9976.56 BogoMIPS)
Nov 6 23:33:30.947898 kernel: devtmpfs: initialized
Nov 6 23:33:30.947911 kernel: x86/mm: Memory block size: 128MB
Nov 6 23:33:30.947923 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 6 23:33:30.947935 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Nov 6 23:33:30.947949 kernel: pinctrl core: initialized pinctrl subsystem
Nov 6 23:33:30.947963 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 6 23:33:30.947975 kernel: audit: initializing netlink subsys (disabled)
Nov 6 23:33:30.947988 kernel: audit: type=2000 audit(1762472010.287:1): state=initialized audit_enabled=0 res=1
Nov 6 23:33:30.948004 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 6 23:33:30.948018 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 6 23:33:30.948032 kernel: cpuidle: using governor menu
Nov 6 23:33:30.948047 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 6 23:33:30.948062 kernel: dca service started, version 1.12.1
Nov 6 23:33:30.948076 kernel: PCI: Using configuration type 1 for base access
Nov 6 23:33:30.948092 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 6 23:33:30.948106 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 6 23:33:30.948120 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 6 23:33:30.948139 kernel: ACPI: Added _OSI(Module Device)
Nov 6 23:33:30.948153 kernel: ACPI: Added _OSI(Processor Device)
Nov 6 23:33:30.948168 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 6 23:33:30.948182 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 6 23:33:30.948197 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Nov 6 23:33:30.948209 kernel: ACPI: Interpreter enabled
Nov 6 23:33:30.948222 kernel: ACPI: PM: (supports S0 S5)
Nov 6 23:33:30.948237 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 6 23:33:30.948250 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 6 23:33:30.948263 kernel: PCI: Using E820 reservations for host bridge windows
Nov 6 23:33:30.948280 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Nov 6 23:33:30.948294 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 6 23:33:30.948548 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Nov 6 23:33:30.948714 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Nov 6 23:33:30.949801 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Nov 6 23:33:30.949824 kernel: acpiphp: Slot [3] registered
Nov 6 23:33:30.949834 kernel: acpiphp: Slot [4] registered
Nov 6 23:33:30.949849 kernel: acpiphp: Slot [5] registered
Nov 6 23:33:30.949858 kernel: acpiphp: Slot [6] registered
Nov 6 23:33:30.949867 kernel: acpiphp: Slot [7] registered
Nov 6 23:33:30.949877 kernel: acpiphp: Slot [8] registered
Nov 6 23:33:30.949886 kernel: acpiphp: Slot [9] registered
Nov 6 23:33:30.949894 kernel: acpiphp: Slot [10] registered
Nov 6 23:33:30.949904 kernel: acpiphp: Slot [11] registered
Nov 6 23:33:30.949919 kernel: acpiphp: Slot [12] registered
Nov 6 23:33:30.949931 kernel: acpiphp: Slot [13] registered
Nov 6 23:33:30.949949 kernel: acpiphp: Slot [14] registered
Nov 6 23:33:30.949958 kernel: acpiphp: Slot [15] registered
Nov 6 23:33:30.949967 kernel: acpiphp: Slot [16] registered
Nov 6 23:33:30.949976 kernel: acpiphp: Slot [17] registered
Nov 6 23:33:30.949985 kernel: acpiphp: Slot [18] registered
Nov 6 23:33:30.949993 kernel: acpiphp: Slot [19] registered
Nov 6 23:33:30.950002 kernel: acpiphp: Slot [20] registered
Nov 6 23:33:30.950011 kernel: acpiphp: Slot [21] registered
Nov 6 23:33:30.950020 kernel: acpiphp: Slot [22] registered
Nov 6 23:33:30.950028 kernel: acpiphp: Slot [23] registered
Nov 6 23:33:30.950040 kernel: acpiphp: Slot [24] registered
Nov 6 23:33:30.950049 kernel: acpiphp: Slot [25] registered
Nov 6 23:33:30.950058 kernel: acpiphp: Slot [26] registered
Nov 6 23:33:30.950067 kernel: acpiphp: Slot [27] registered
Nov 6 23:33:30.950075 kernel: acpiphp: Slot [28] registered
Nov 6 23:33:30.950084 kernel: acpiphp: Slot [29] registered
Nov 6 23:33:30.950093 kernel: acpiphp: Slot [30] registered
Nov 6 23:33:30.950102 kernel: acpiphp: Slot [31] registered
Nov 6 23:33:30.950111 kernel: PCI host bridge to bus 0000:00
Nov 6 23:33:30.950271 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Nov 6 23:33:30.950369 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Nov 6 23:33:30.950458 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 6 23:33:30.950547 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Nov 6 23:33:30.950636 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Nov 6 23:33:30.950723 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 6 23:33:30.950927 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Nov 6 23:33:30.951059 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Nov 6 23:33:30.951177 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Nov 6 23:33:30.951279 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef]
Nov 6 23:33:30.951379 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Nov 6 23:33:30.951478 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Nov 6 23:33:30.951577 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Nov 6 23:33:30.951683 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Nov 6 23:33:30.954950 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300
Nov 6 23:33:30.955077 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f]
Nov 6 23:33:30.955188 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Nov 6 23:33:30.955291 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Nov 6 23:33:30.955388 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Nov 6 23:33:30.955509 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Nov 6 23:33:30.955625 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Nov 6 23:33:30.955770 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Nov 6 23:33:30.955878 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff]
Nov 6 23:33:30.955975 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Nov 6 23:33:30.956115 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 6 23:33:30.956279 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Nov 6 23:33:30.956388 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf]
Nov 6 23:33:30.956498 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff]
Nov 6 23:33:30.956598 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Nov 6 23:33:30.956712 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Nov 6 23:33:30.958962 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df]
Nov 6 23:33:30.959154 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff]
Nov 6 23:33:30.959307 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Nov 6 23:33:30.959489 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000
Nov 6 23:33:30.959668 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f]
Nov 6 23:33:30.959834 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff]
Nov 6 23:33:30.959993 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Nov 6 23:33:30.960184 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000
Nov 6 23:33:30.960336 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f]
Nov 6 23:33:30.960488 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff]
Nov 6 23:33:30.960654 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Nov 6 23:33:30.962090 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000
Nov 6 23:33:30.962270 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff]
Nov 6 23:33:30.962426 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff]
Nov 6 23:33:30.962577 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref]
Nov 6 23:33:30.962745 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00
Nov 6 23:33:30.962961 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f]
Nov 6 23:33:30.963117 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref]
Nov 6 23:33:30.963137 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 6 23:33:30.963153 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 6 23:33:30.963168 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 6 23:33:30.963184 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 6 23:33:30.963199 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Nov 6 23:33:30.963215 kernel: iommu: Default domain type: Translated
Nov 6 23:33:30.963236 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 6 23:33:30.963261 kernel: PCI: Using ACPI for IRQ routing
Nov 6 23:33:30.963279 kernel: PCI: pci_cache_line_size set to 64 bytes
Nov 6 23:33:30.963297 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Nov 6 23:33:30.963315 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff]
Nov 6 23:33:30.963484 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Nov 6 23:33:30.963649 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Nov 6 23:33:30.966949 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 6 23:33:30.966989 kernel: vgaarb: loaded
Nov 6 23:33:30.967015 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Nov 6 23:33:30.967029 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Nov 6 23:33:30.967043 kernel: clocksource: Switched to clocksource kvm-clock
Nov 6 23:33:30.967056 kernel: VFS: Disk quotas dquot_6.6.0
Nov 6 23:33:30.967070 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 6 23:33:30.967084 kernel: pnp: PnP ACPI init
Nov 6 23:33:30.967097 kernel: pnp: PnP ACPI: found 4 devices
Nov 6 23:33:30.967124 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 6 23:33:30.967138 kernel: NET: Registered PF_INET protocol family
Nov 6 23:33:30.967157 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Nov 6 23:33:30.967171 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Nov 6 23:33:30.967185 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 6 23:33:30.967200 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 6 23:33:30.967213 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Nov 6 23:33:30.967227 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Nov 6 23:33:30.967241 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Nov 6 23:33:30.967256 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Nov 6 23:33:30.967270 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 6 23:33:30.967289 kernel: NET: Registered PF_XDP protocol family
Nov 6 23:33:30.967472 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Nov 6 23:33:30.967624 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Nov 6 23:33:30.967816 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 6 23:33:30.967927 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Nov 6 23:33:30.968042 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Nov 6 23:33:30.968201 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Nov 6 23:33:30.968363 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Nov 6 23:33:30.968393 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Nov 6 23:33:30.968533 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7a0 took 31057 usecs
Nov 6 23:33:30.968547 kernel: PCI: CLS 0 bytes, default 64
Nov 6 23:33:30.968557 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Nov 6 23:33:30.968570 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f39a1d859, max_idle_ns: 440795326830 ns
Nov 6 23:33:30.968584 kernel: Initialise system trusted keyrings
Nov 6 23:33:30.968598 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Nov 6 23:33:30.968611 kernel: Key type asymmetric registered
Nov 6 23:33:30.968631 kernel: Asymmetric key parser 'x509' registered
Nov 6 23:33:30.968645 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Nov 6 23:33:30.968658 kernel: io scheduler mq-deadline registered
Nov 6 23:33:30.968671 kernel: io scheduler kyber registered
Nov 6 23:33:30.968684 kernel: io scheduler bfq registered
Nov 6 23:33:30.968699 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Nov 6 23:33:30.968712 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Nov 6 23:33:30.968726 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Nov 6 23:33:30.970229 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Nov 6 23:33:30.970260 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 6 23:33:30.970276 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 6 23:33:30.970287 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 6 23:33:30.970296 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 6 23:33:30.970305 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 6 23:33:30.970315 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Nov 6 23:33:30.970477 kernel: rtc_cmos 00:03: RTC can wake from S4
Nov 6 23:33:30.970575 kernel: rtc_cmos 00:03: registered as rtc0
Nov 6 23:33:30.970672 kernel: rtc_cmos 00:03: setting system clock to 2025-11-06T23:33:30 UTC (1762472010)
Nov 6 23:33:30.970796 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Nov 6 23:33:30.970814 kernel: intel_pstate: CPU model not supported
Nov 6 23:33:30.970827 kernel: NET: Registered PF_INET6 protocol family
Nov 6 23:33:30.970841 kernel: Segment Routing with IPv6
Nov 6 23:33:30.970854 kernel: In-situ OAM (IOAM) with IPv6
Nov 6 23:33:30.970867 kernel: NET: Registered PF_PACKET protocol family
Nov 6 23:33:30.970881 kernel: Key type dns_resolver registered
Nov 6 23:33:30.970895 kernel: IPI shorthand broadcast: enabled
Nov 6 23:33:30.970915 kernel: sched_clock: Marking stable (906005937, 139501776)->(1157604971, -112097258)
Nov 6 23:33:30.970928 kernel: registered taskstats version 1
Nov 6 23:33:30.970942 kernel: Loading compiled-in X.509 certificates
Nov 6 23:33:30.970953 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.113-flatcar: d06f6bc77ef9183fbb55ec1fc021fe2cce974996'
Nov 6 23:33:30.970961 kernel: Key type .fscrypt registered
Nov 6 23:33:30.970970 kernel: Key type fscrypt-provisioning registered
Nov 6 23:33:30.970979 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 6 23:33:30.970988 kernel: ima: Allocated hash algorithm: sha1
Nov 6 23:33:30.970996 kernel: ima: No architecture policies found
Nov 6 23:33:30.971009 kernel: clk: Disabling unused clocks
Nov 6 23:33:30.971018 kernel: Freeing unused kernel image (initmem) memory: 43520K
Nov 6 23:33:30.971027 kernel: Write protecting the kernel read-only data: 38912k
Nov 6 23:33:30.971036 kernel: Freeing unused kernel image (rodata/data gap) memory: 1704K
Nov 6 23:33:30.971064 kernel: Run /init as init process
Nov 6 23:33:30.971076 kernel: with arguments:
Nov 6 23:33:30.971088 kernel: /init
Nov 6 23:33:30.971097 kernel: with environment:
Nov 6 23:33:30.971106 kernel: HOME=/
Nov 6 23:33:30.971118 kernel: TERM=linux
Nov 6 23:33:30.971130 systemd[1]: Successfully made /usr/ read-only.
Nov 6 23:33:30.971143 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Nov 6 23:33:30.971153 systemd[1]: Detected virtualization kvm.
Nov 6 23:33:30.971162 systemd[1]: Detected architecture x86-64.
Nov 6 23:33:30.971172 systemd[1]: Running in initrd.
Nov 6 23:33:30.971181 systemd[1]: No hostname configured, using default hostname.
Nov 6 23:33:30.971194 systemd[1]: Hostname set to .
Nov 6 23:33:30.971204 systemd[1]: Initializing machine ID from VM UUID.
Nov 6 23:33:30.971213 systemd[1]: Queued start job for default target initrd.target.
Nov 6 23:33:30.971223 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
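[Editor's note] The rtc_cmos line earlier sets the system clock to 2025-11-06T23:33:30 UTC and gives the same instant as the Unix epoch value 1762472010 (the audit record's timestamp, 1762472010.287, agrees). The correspondence between the epoch value and the printed wall-clock time can be checked directly; a quick sketch, nothing kernel-specific:

```python
from datetime import datetime, timezone

epoch = 1762472010  # Unix epoch seconds from the rtc_cmos log line

# Convert to an aware UTC datetime; the kernel printed
# "2025-11-06T23:33:30 UTC" for this same value.
t = datetime.fromtimestamp(epoch, tz=timezone.utc)
stamp = t.strftime("%Y-%m-%dT%H:%M:%S UTC")
```

This is also a handy way to cross-check the dmesg monotonic-style prefixes against the journal's wall-clock times when diagnosing boot ordering.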
Nov 6 23:33:30.971232 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 6 23:33:30.971243 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Nov 6 23:33:30.971259 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 6 23:33:30.971273 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Nov 6 23:33:30.971292 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Nov 6 23:33:30.971308 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Nov 6 23:33:30.971321 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Nov 6 23:33:30.971334 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 6 23:33:30.971348 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 6 23:33:30.971361 systemd[1]: Reached target paths.target - Path Units.
Nov 6 23:33:30.971375 systemd[1]: Reached target slices.target - Slice Units.
Nov 6 23:33:30.971393 systemd[1]: Reached target swap.target - Swaps.
Nov 6 23:33:30.971407 systemd[1]: Reached target timers.target - Timer Units.
Nov 6 23:33:30.971421 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Nov 6 23:33:30.971434 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 6 23:33:30.971448 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Nov 6 23:33:30.971464 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Nov 6 23:33:30.971478 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 6 23:33:30.971488 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 6 23:33:30.971498 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 6 23:33:30.971508 systemd[1]: Reached target sockets.target - Socket Units.
Nov 6 23:33:30.971518 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Nov 6 23:33:30.971528 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 6 23:33:30.971538 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Nov 6 23:33:30.971547 systemd[1]: Starting systemd-fsck-usr.service...
Nov 6 23:33:30.971560 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 6 23:33:30.971570 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 6 23:33:30.971580 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 6 23:33:30.971636 systemd-journald[185]: Collecting audit messages is disabled.
Nov 6 23:33:30.971664 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Nov 6 23:33:30.971674 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 6 23:33:30.971685 systemd[1]: Finished systemd-fsck-usr.service.
Nov 6 23:33:30.971695 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 6 23:33:30.971706 systemd-journald[185]: Journal started
Nov 6 23:33:30.971731 systemd-journald[185]: Runtime Journal (/run/log/journal/d9adf67e9426441f8f8c30922ce2dbfd) is 4.9M, max 39.3M, 34.4M free.
Nov 6 23:33:30.944113 systemd-modules-load[186]: Inserted module 'overlay'
Nov 6 23:33:31.024442 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 6 23:33:31.024470 kernel: Bridge firewalling registered
Nov 6 23:33:30.992653 systemd-modules-load[186]: Inserted module 'br_netfilter'
Nov 6 23:33:31.032793 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 6 23:33:31.033399 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 6 23:33:31.034022 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 6 23:33:31.036194 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 6 23:33:31.047980 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 6 23:33:31.050368 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 6 23:33:31.052955 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 6 23:33:31.063675 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 6 23:33:31.083025 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 6 23:33:31.086444 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 6 23:33:31.096960 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 6 23:33:31.097940 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 6 23:33:31.098941 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 6 23:33:31.103024 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Nov 6 23:33:31.131758 dracut-cmdline[222]: dracut-dracut-053 Nov 6 23:33:31.133410 dracut-cmdline[222]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=1a4810aa24298684dd9efd264f1d9b812e4e16f32429f4615db9ff284dd4ac25 Nov 6 23:33:31.146877 systemd-resolved[219]: Positive Trust Anchors: Nov 6 23:33:31.147868 systemd-resolved[219]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 6 23:33:31.148715 systemd-resolved[219]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 6 23:33:31.155929 systemd-resolved[219]: Defaulting to hostname 'linux'. Nov 6 23:33:31.158332 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 6 23:33:31.159397 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 6 23:33:31.220795 kernel: SCSI subsystem initialized Nov 6 23:33:31.230769 kernel: Loading iSCSI transport class v2.0-870. Nov 6 23:33:31.241780 kernel: iscsi: registered transport (tcp) Nov 6 23:33:31.263784 kernel: iscsi: registered transport (qla4xxx) Nov 6 23:33:31.263895 kernel: QLogic iSCSI HBA Driver Nov 6 23:33:31.317353 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. 
Nov 6 23:33:31.321930 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Nov 6 23:33:31.362065 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Nov 6 23:33:31.362154 kernel: device-mapper: uevent: version 1.0.3 Nov 6 23:33:31.363963 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Nov 6 23:33:31.411775 kernel: raid6: avx2x4 gen() 16434 MB/s Nov 6 23:33:31.427770 kernel: raid6: avx2x2 gen() 17276 MB/s Nov 6 23:33:31.444873 kernel: raid6: avx2x1 gen() 13026 MB/s Nov 6 23:33:31.444947 kernel: raid6: using algorithm avx2x2 gen() 17276 MB/s Nov 6 23:33:31.463768 kernel: raid6: .... xor() 20272 MB/s, rmw enabled Nov 6 23:33:31.463854 kernel: raid6: using avx2x2 recovery algorithm Nov 6 23:33:31.485766 kernel: xor: automatically using best checksumming function avx Nov 6 23:33:31.640782 kernel: Btrfs loaded, zoned=no, fsverity=no Nov 6 23:33:31.654689 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Nov 6 23:33:31.661052 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 6 23:33:31.679006 systemd-udevd[404]: Using default interface naming scheme 'v255'. Nov 6 23:33:31.685947 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 6 23:33:31.692019 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Nov 6 23:33:31.711221 dracut-pre-trigger[408]: rd.md=0: removing MD RAID activation Nov 6 23:33:31.749761 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Nov 6 23:33:31.757944 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 6 23:33:31.818172 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 6 23:33:31.824031 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Nov 6 23:33:31.849419 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Nov 6 23:33:31.852127 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Nov 6 23:33:31.853677 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 6 23:33:31.854850 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 6 23:33:31.860996 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Nov 6 23:33:31.880992 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Nov 6 23:33:31.900759 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues Nov 6 23:33:31.911721 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Nov 6 23:33:31.912022 kernel: scsi host0: Virtio SCSI HBA Nov 6 23:33:31.939693 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Nov 6 23:33:31.939781 kernel: GPT:9289727 != 125829119 Nov 6 23:33:31.939796 kernel: GPT:Alternate GPT header not at the end of the disk. Nov 6 23:33:31.939812 kernel: GPT:9289727 != 125829119 Nov 6 23:33:31.939835 kernel: GPT: Use GNU Parted to correct GPT errors. Nov 6 23:33:31.939854 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 6 23:33:31.942766 kernel: ACPI: bus type USB registered Nov 6 23:33:31.944760 kernel: cryptd: max_cpu_qlen set to 1000 Nov 6 23:33:31.961831 kernel: usbcore: registered new interface driver usbfs Nov 6 23:33:31.964805 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues Nov 6 23:33:31.965035 kernel: usbcore: registered new interface driver hub Nov 6 23:33:31.967229 kernel: virtio_blk virtio5: [vdb] 980 512-byte logical blocks (502 kB/490 KiB) Nov 6 23:33:31.968757 kernel: libata version 3.00 loaded. 
Nov 6 23:33:31.974978 kernel: usbcore: registered new device driver usb Nov 6 23:33:31.981848 kernel: ata_piix 0000:00:01.1: version 2.13 Nov 6 23:33:31.990160 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 6 23:33:31.991594 kernel: scsi host1: ata_piix Nov 6 23:33:31.990582 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 6 23:33:31.998150 kernel: scsi host2: ata_piix Nov 6 23:33:31.998576 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14 Nov 6 23:33:31.998595 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15 Nov 6 23:33:31.993952 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 6 23:33:31.994716 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 6 23:33:31.995078 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 6 23:33:31.995711 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Nov 6 23:33:32.005420 kernel: AVX2 version of gcm_enc/dec engaged. Nov 6 23:33:32.005461 kernel: AES CTR mode by8 optimization enabled Nov 6 23:33:32.007034 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 6 23:33:32.009466 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Nov 6 23:33:32.050820 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by (udev-worker) (450) Nov 6 23:33:32.054840 kernel: BTRFS: device fsid 7e63b391-7474-48b8-9614-cf161680d90d devid 1 transid 36 /dev/vda3 scanned by (udev-worker) (463) Nov 6 23:33:32.072080 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Nov 6 23:33:32.129573 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 6 23:33:32.139333 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. 
Nov 6 23:33:32.148635 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Nov 6 23:33:32.157496 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Nov 6 23:33:32.158663 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Nov 6 23:33:32.164967 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Nov 6 23:33:32.177122 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 6 23:33:32.188731 disk-uuid[537]: Primary Header is updated. Nov 6 23:33:32.188731 disk-uuid[537]: Secondary Entries is updated. Nov 6 23:33:32.188731 disk-uuid[537]: Secondary Header is updated. Nov 6 23:33:32.205052 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 6 23:33:32.215323 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller Nov 6 23:33:32.215538 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1 Nov 6 23:33:32.215669 kernel: uhci_hcd 0000:00:01.2: detected 2 ports Nov 6 23:33:32.215826 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180 Nov 6 23:33:32.216006 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 6 23:33:32.216020 kernel: hub 1-0:1.0: USB hub found Nov 6 23:33:32.216161 kernel: hub 1-0:1.0: 2 ports detected Nov 6 23:33:32.230810 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 6 23:33:33.221659 disk-uuid[544]: The operation has completed successfully. Nov 6 23:33:33.222550 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 6 23:33:33.272947 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 6 23:33:33.273917 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Nov 6 23:33:33.313018 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... 
Nov 6 23:33:33.316415 sh[563]: Success Nov 6 23:33:33.331768 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Nov 6 23:33:33.413660 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Nov 6 23:33:33.414842 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Nov 6 23:33:33.417295 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Nov 6 23:33:33.443784 kernel: BTRFS info (device dm-0): first mount of filesystem 7e63b391-7474-48b8-9614-cf161680d90d Nov 6 23:33:33.443857 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Nov 6 23:33:33.443871 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Nov 6 23:33:33.445766 kernel: BTRFS info (device dm-0): disabling log replay at mount time Nov 6 23:33:33.446918 kernel: BTRFS info (device dm-0): using free space tree Nov 6 23:33:33.456363 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Nov 6 23:33:33.457442 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Nov 6 23:33:33.467085 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Nov 6 23:33:33.471953 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Nov 6 23:33:33.491793 kernel: BTRFS info (device vda6): first mount of filesystem c2193637-3855-459d-ac6d-9b4591136350 Nov 6 23:33:33.491868 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 6 23:33:33.491882 kernel: BTRFS info (device vda6): using free space tree Nov 6 23:33:33.497774 kernel: BTRFS info (device vda6): auto enabling async discard Nov 6 23:33:33.504975 kernel: BTRFS info (device vda6): last unmount of filesystem c2193637-3855-459d-ac6d-9b4591136350 Nov 6 23:33:33.506228 systemd[1]: Finished ignition-setup.service - Ignition (setup). 
Nov 6 23:33:33.511072 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Nov 6 23:33:33.627415 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 6 23:33:33.638984 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 6 23:33:33.645776 ignition[648]: Ignition 2.20.0 Nov 6 23:33:33.645786 ignition[648]: Stage: fetch-offline Nov 6 23:33:33.645824 ignition[648]: no configs at "/usr/lib/ignition/base.d" Nov 6 23:33:33.645833 ignition[648]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Nov 6 23:33:33.645939 ignition[648]: parsed url from cmdline: "" Nov 6 23:33:33.649839 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Nov 6 23:33:33.645943 ignition[648]: no config URL provided Nov 6 23:33:33.645948 ignition[648]: reading system config file "/usr/lib/ignition/user.ign" Nov 6 23:33:33.645956 ignition[648]: no config at "/usr/lib/ignition/user.ign" Nov 6 23:33:33.645961 ignition[648]: failed to fetch config: resource requires networking Nov 6 23:33:33.646184 ignition[648]: Ignition finished successfully Nov 6 23:33:33.673351 systemd-networkd[748]: lo: Link UP Nov 6 23:33:33.673364 systemd-networkd[748]: lo: Gained carrier Nov 6 23:33:33.675867 systemd-networkd[748]: Enumeration completed Nov 6 23:33:33.676002 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 6 23:33:33.676251 systemd-networkd[748]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. Nov 6 23:33:33.676256 systemd-networkd[748]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network. Nov 6 23:33:33.677083 systemd-networkd[748]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Nov 6 23:33:33.677088 systemd-networkd[748]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 6 23:33:33.678374 systemd-networkd[748]: eth0: Link UP Nov 6 23:33:33.678380 systemd-networkd[748]: eth0: Gained carrier Nov 6 23:33:33.678392 systemd-networkd[748]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. Nov 6 23:33:33.678977 systemd[1]: Reached target network.target - Network. Nov 6 23:33:33.682143 systemd-networkd[748]: eth1: Link UP Nov 6 23:33:33.682148 systemd-networkd[748]: eth1: Gained carrier Nov 6 23:33:33.682161 systemd-networkd[748]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 6 23:33:33.689011 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Nov 6 23:33:33.700826 systemd-networkd[748]: eth1: DHCPv4 address 10.124.0.27/20 acquired from 169.254.169.253 Nov 6 23:33:33.704863 systemd-networkd[748]: eth0: DHCPv4 address 147.182.203.129/20, gateway 147.182.192.1 acquired from 169.254.169.253 Nov 6 23:33:33.705921 ignition[753]: Ignition 2.20.0 Nov 6 23:33:33.705931 ignition[753]: Stage: fetch Nov 6 23:33:33.706173 ignition[753]: no configs at "/usr/lib/ignition/base.d" Nov 6 23:33:33.706190 ignition[753]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Nov 6 23:33:33.706313 ignition[753]: parsed url from cmdline: "" Nov 6 23:33:33.706317 ignition[753]: no config URL provided Nov 6 23:33:33.706323 ignition[753]: reading system config file "/usr/lib/ignition/user.ign" Nov 6 23:33:33.706333 ignition[753]: no config at "/usr/lib/ignition/user.ign" Nov 6 23:33:33.706358 ignition[753]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1 Nov 6 23:33:33.706539 ignition[753]: GET error: Get "http://169.254.169.254/metadata/v1/user-data": dial tcp 169.254.169.254:80: connect: network is unreachable Nov 6 23:33:33.906723 
ignition[753]: GET http://169.254.169.254/metadata/v1/user-data: attempt #2 Nov 6 23:33:33.937482 ignition[753]: GET result: OK Nov 6 23:33:33.937673 ignition[753]: parsing config with SHA512: de4af302bcc307df30b003766bf7d6675f953888a0a93c4244313dddb691f368cbe8dc5177ffea6d3e8bbff168ac2ca73bd0684fd462486f9d79d1ca36870686 Nov 6 23:33:33.945078 unknown[753]: fetched base config from "system" Nov 6 23:33:33.945622 ignition[753]: fetch: fetch complete Nov 6 23:33:33.945094 unknown[753]: fetched base config from "system" Nov 6 23:33:33.945629 ignition[753]: fetch: fetch passed Nov 6 23:33:33.945104 unknown[753]: fetched user config from "digitalocean" Nov 6 23:33:33.945689 ignition[753]: Ignition finished successfully Nov 6 23:33:33.949509 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Nov 6 23:33:33.955976 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Nov 6 23:33:33.987352 ignition[760]: Ignition 2.20.0 Nov 6 23:33:33.987369 ignition[760]: Stage: kargs Nov 6 23:33:33.987675 ignition[760]: no configs at "/usr/lib/ignition/base.d" Nov 6 23:33:33.987693 ignition[760]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Nov 6 23:33:33.989218 ignition[760]: kargs: kargs passed Nov 6 23:33:33.991048 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Nov 6 23:33:33.989293 ignition[760]: Ignition finished successfully Nov 6 23:33:33.997000 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Nov 6 23:33:34.016640 ignition[766]: Ignition 2.20.0 Nov 6 23:33:34.016653 ignition[766]: Stage: disks Nov 6 23:33:34.016861 ignition[766]: no configs at "/usr/lib/ignition/base.d" Nov 6 23:33:34.022668 systemd[1]: Finished ignition-disks.service - Ignition (disks). 
Nov 6 23:33:34.016872 ignition[766]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Nov 6 23:33:34.017729 ignition[766]: disks: disks passed Nov 6 23:33:34.024059 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Nov 6 23:33:34.017791 ignition[766]: Ignition finished successfully Nov 6 23:33:34.025019 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 6 23:33:34.025908 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 6 23:33:34.026858 systemd[1]: Reached target sysinit.target - System Initialization. Nov 6 23:33:34.027598 systemd[1]: Reached target basic.target - Basic System. Nov 6 23:33:34.035443 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Nov 6 23:33:34.052389 systemd-fsck[774]: ROOT: clean, 14/553520 files, 52654/553472 blocks Nov 6 23:33:34.054677 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Nov 6 23:33:34.062892 systemd[1]: Mounting sysroot.mount - /sysroot... Nov 6 23:33:34.171784 kernel: EXT4-fs (vda9): mounted filesystem 2abcf372-764b-46c0-a870-42c779c5f871 r/w with ordered data mode. Quota mode: none. Nov 6 23:33:34.172605 systemd[1]: Mounted sysroot.mount - /sysroot. Nov 6 23:33:34.173589 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Nov 6 23:33:34.179866 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 6 23:33:34.182864 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Nov 6 23:33:34.184988 systemd[1]: Starting flatcar-afterburn-network.service - Flatcar Afterburn network service... Nov 6 23:33:34.190857 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... 
Nov 6 23:33:34.196822 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by mount (782) Nov 6 23:33:34.192994 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 6 23:33:34.193033 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Nov 6 23:33:34.202826 kernel: BTRFS info (device vda6): first mount of filesystem c2193637-3855-459d-ac6d-9b4591136350 Nov 6 23:33:34.202894 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 6 23:33:34.202908 kernel: BTRFS info (device vda6): using free space tree Nov 6 23:33:34.210744 kernel: BTRFS info (device vda6): auto enabling async discard Nov 6 23:33:34.209054 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Nov 6 23:33:34.216997 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Nov 6 23:33:34.221504 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Nov 6 23:33:34.302778 coreos-metadata[785]: Nov 06 23:33:34.301 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Nov 6 23:33:34.309301 initrd-setup-root[812]: cut: /sysroot/etc/passwd: No such file or directory Nov 6 23:33:34.310943 coreos-metadata[784]: Nov 06 23:33:34.310 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Nov 6 23:33:34.316090 initrd-setup-root[819]: cut: /sysroot/etc/group: No such file or directory Nov 6 23:33:34.321786 initrd-setup-root[826]: cut: /sysroot/etc/shadow: No such file or directory Nov 6 23:33:34.327730 initrd-setup-root[833]: cut: /sysroot/etc/gshadow: No such file or directory Nov 6 23:33:34.329654 coreos-metadata[785]: Nov 06 23:33:34.329 INFO Fetch successful Nov 6 23:33:34.331066 coreos-metadata[784]: Nov 06 23:33:34.330 INFO Fetch successful Nov 6 23:33:34.338343 coreos-metadata[785]: Nov 06 23:33:34.338 INFO wrote hostname ci-4230.2.4-n-4ba84db3ac to /sysroot/etc/hostname Nov 6 23:33:34.339408 systemd[1]: flatcar-afterburn-network.service: Deactivated successfully. Nov 6 23:33:34.340372 systemd[1]: Finished flatcar-afterburn-network.service - Flatcar Afterburn network service. Nov 6 23:33:34.341844 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Nov 6 23:33:34.430798 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Nov 6 23:33:34.438910 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Nov 6 23:33:34.440693 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Nov 6 23:33:34.450215 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Nov 6 23:33:34.452801 kernel: BTRFS info (device vda6): last unmount of filesystem c2193637-3855-459d-ac6d-9b4591136350 Nov 6 23:33:34.479833 ignition[902]: INFO : Ignition 2.20.0 Nov 6 23:33:34.479833 ignition[902]: INFO : Stage: mount Nov 6 23:33:34.479833 ignition[902]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 6 23:33:34.479833 ignition[902]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Nov 6 23:33:34.484112 ignition[902]: INFO : mount: mount passed Nov 6 23:33:34.484112 ignition[902]: INFO : Ignition finished successfully Nov 6 23:33:34.481832 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Nov 6 23:33:34.483822 systemd[1]: Finished ignition-mount.service - Ignition (mount). Nov 6 23:33:34.495943 systemd[1]: Starting ignition-files.service - Ignition (files)... Nov 6 23:33:34.504537 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 6 23:33:34.520950 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (914) Nov 6 23:33:34.524336 kernel: BTRFS info (device vda6): first mount of filesystem c2193637-3855-459d-ac6d-9b4591136350 Nov 6 23:33:34.524410 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 6 23:33:34.524425 kernel: BTRFS info (device vda6): using free space tree Nov 6 23:33:34.528773 kernel: BTRFS info (device vda6): auto enabling async discard Nov 6 23:33:34.531414 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Nov 6 23:33:34.563122 ignition[930]: INFO : Ignition 2.20.0 Nov 6 23:33:34.565060 ignition[930]: INFO : Stage: files Nov 6 23:33:34.565060 ignition[930]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 6 23:33:34.565060 ignition[930]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Nov 6 23:33:34.565060 ignition[930]: DEBUG : files: compiled without relabeling support, skipping Nov 6 23:33:34.567523 ignition[930]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 6 23:33:34.567523 ignition[930]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 6 23:33:34.571437 ignition[930]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 6 23:33:34.572284 ignition[930]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 6 23:33:34.573137 unknown[930]: wrote ssh authorized keys file for user: core Nov 6 23:33:34.573968 ignition[930]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 6 23:33:34.576705 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Nov 6 23:33:34.576705 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Nov 6 23:33:34.703621 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Nov 6 23:33:34.767362 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Nov 6 23:33:34.767362 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Nov 6 23:33:34.767362 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET 
https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Nov 6 23:33:34.929001 systemd-networkd[748]: eth0: Gained IPv6LL Nov 6 23:33:34.929439 systemd-networkd[748]: eth1: Gained IPv6LL Nov 6 23:33:35.040427 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Nov 6 23:33:35.825645 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Nov 6 23:33:35.825645 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Nov 6 23:33:35.828099 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Nov 6 23:33:35.828099 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 6 23:33:35.828099 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 6 23:33:35.828099 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 6 23:33:35.828099 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 6 23:33:35.828099 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 6 23:33:35.828099 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 6 23:33:35.828099 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 6 23:33:35.828099 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file 
"/sysroot/etc/flatcar/update.conf"
Nov 6 23:33:35.828099 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Nov 6 23:33:35.828099 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Nov 6 23:33:35.828099 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Nov 6 23:33:35.828099 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Nov 6 23:33:36.187284 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Nov 6 23:33:37.712528 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Nov 6 23:33:37.712528 ignition[930]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Nov 6 23:33:37.716208 ignition[930]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 6 23:33:37.716208 ignition[930]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 6 23:33:37.716208 ignition[930]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Nov 6 23:33:37.716208 ignition[930]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Nov 6 23:33:37.716208 ignition[930]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Nov 6 23:33:37.716208 ignition[930]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Nov 6 23:33:37.716208 ignition[930]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Nov 6 23:33:37.716208 ignition[930]: INFO : files: files passed
Nov 6 23:33:37.716208 ignition[930]: INFO : Ignition finished successfully
Nov 6 23:33:37.716539 systemd[1]: Finished ignition-files.service - Ignition (files).
Nov 6 23:33:37.726119 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Nov 6 23:33:37.729984 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Nov 6 23:33:37.736038 systemd[1]: ignition-quench.service: Deactivated successfully.
Nov 6 23:33:37.736240 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Nov 6 23:33:37.762199 initrd-setup-root-after-ignition[959]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 6 23:33:37.762199 initrd-setup-root-after-ignition[959]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Nov 6 23:33:37.763990 initrd-setup-root-after-ignition[963]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 6 23:33:37.764707 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 6 23:33:37.766837 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Nov 6 23:33:37.774054 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Nov 6 23:33:37.813877 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 6 23:33:37.814068 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Nov 6 23:33:37.816337 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Nov 6 23:33:37.817070 systemd[1]: Reached target initrd.target - Initrd Default Target.
Nov 6 23:33:37.818197 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Nov 6 23:33:37.829134 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Nov 6 23:33:37.848806 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 6 23:33:37.856085 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Nov 6 23:33:37.878979 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Nov 6 23:33:37.880637 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 6 23:33:37.881546 systemd[1]: Stopped target timers.target - Timer Units.
Nov 6 23:33:37.882543 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 6 23:33:37.882829 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 6 23:33:37.884407 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Nov 6 23:33:37.885707 systemd[1]: Stopped target basic.target - Basic System.
Nov 6 23:33:37.886756 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Nov 6 23:33:37.887915 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 6 23:33:37.889020 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Nov 6 23:33:37.890231 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Nov 6 23:33:37.891428 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 6 23:33:37.892556 systemd[1]: Stopped target sysinit.target - System Initialization.
Nov 6 23:33:37.893668 systemd[1]: Stopped target local-fs.target - Local File Systems.
Nov 6 23:33:37.894768 systemd[1]: Stopped target swap.target - Swaps.
Nov 6 23:33:37.895727 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 6 23:33:37.895967 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Nov 6 23:33:37.897051 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Nov 6 23:33:37.897835 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 6 23:33:37.898883 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Nov 6 23:33:37.899010 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 6 23:33:37.900076 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 6 23:33:37.900281 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Nov 6 23:33:37.901658 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Nov 6 23:33:37.901893 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 6 23:33:37.903209 systemd[1]: ignition-files.service: Deactivated successfully.
Nov 6 23:33:37.903397 systemd[1]: Stopped ignition-files.service - Ignition (files).
Nov 6 23:33:37.904188 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Nov 6 23:33:37.904374 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Nov 6 23:33:37.913142 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Nov 6 23:33:37.918092 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Nov 6 23:33:37.918819 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 6 23:33:37.919075 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 6 23:33:37.922406 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 6 23:33:37.922634 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 6 23:33:37.935189 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 6 23:33:37.935298 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Nov 6 23:33:37.945622 ignition[983]: INFO : Ignition 2.20.0
Nov 6 23:33:37.945622 ignition[983]: INFO : Stage: umount
Nov 6 23:33:37.954938 ignition[983]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 6 23:33:37.954938 ignition[983]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Nov 6 23:33:37.954938 ignition[983]: INFO : umount: umount passed
Nov 6 23:33:37.954938 ignition[983]: INFO : Ignition finished successfully
Nov 6 23:33:37.954452 systemd[1]: ignition-mount.service: Deactivated successfully.
Nov 6 23:33:37.955777 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Nov 6 23:33:37.957280 systemd[1]: ignition-disks.service: Deactivated successfully.
Nov 6 23:33:37.957440 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Nov 6 23:33:37.960982 systemd[1]: ignition-kargs.service: Deactivated successfully.
Nov 6 23:33:37.961084 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Nov 6 23:33:37.963942 systemd[1]: ignition-fetch.service: Deactivated successfully.
Nov 6 23:33:37.964037 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Nov 6 23:33:37.964675 systemd[1]: Stopped target network.target - Network.
Nov 6 23:33:37.967927 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Nov 6 23:33:37.968050 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 6 23:33:37.971000 systemd[1]: Stopped target paths.target - Path Units.
Nov 6 23:33:37.974067 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 6 23:33:37.978509 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 6 23:33:37.979517 systemd[1]: Stopped target slices.target - Slice Units.
Nov 6 23:33:37.983041 systemd[1]: Stopped target sockets.target - Socket Units.
Nov 6 23:33:37.983654 systemd[1]: iscsid.socket: Deactivated successfully.
Nov 6 23:33:37.983728 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Nov 6 23:33:37.984336 systemd[1]: iscsiuio.socket: Deactivated successfully.
Nov 6 23:33:37.984402 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 6 23:33:37.986941 systemd[1]: ignition-setup.service: Deactivated successfully.
Nov 6 23:33:37.987017 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Nov 6 23:33:37.987785 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Nov 6 23:33:37.987833 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Nov 6 23:33:37.988594 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Nov 6 23:33:37.992244 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Nov 6 23:33:37.996296 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Nov 6 23:33:38.000930 systemd[1]: systemd-resolved.service: Deactivated successfully.
Nov 6 23:33:38.001114 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Nov 6 23:33:38.008538 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Nov 6 23:33:38.009995 systemd[1]: sysroot-boot.service: Deactivated successfully.
Nov 6 23:33:38.010874 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Nov 6 23:33:38.012435 systemd[1]: systemd-networkd.service: Deactivated successfully.
Nov 6 23:33:38.012642 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Nov 6 23:33:38.015836 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Nov 6 23:33:38.018181 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Nov 6 23:33:38.018270 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Nov 6 23:33:38.019466 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Nov 6 23:33:38.019565 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Nov 6 23:33:38.026938 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Nov 6 23:33:38.027555 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Nov 6 23:33:38.027660 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 6 23:33:38.028393 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 6 23:33:38.028473 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Nov 6 23:33:38.029632 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 6 23:33:38.029706 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Nov 6 23:33:38.030552 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 6 23:33:38.030626 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 6 23:33:38.031850 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 6 23:33:38.035296 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Nov 6 23:33:38.035410 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Nov 6 23:33:38.051173 systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 6 23:33:38.051443 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 6 23:33:38.052791 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 6 23:33:38.052865 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Nov 6 23:33:38.053581 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 6 23:33:38.053636 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 6 23:33:38.055043 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 6 23:33:38.055127 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Nov 6 23:33:38.056868 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 6 23:33:38.056952 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Nov 6 23:33:38.057974 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 6 23:33:38.058055 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 6 23:33:38.065114 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Nov 6 23:33:38.065829 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 6 23:33:38.065947 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 6 23:33:38.068316 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Nov 6 23:33:38.068407 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 6 23:33:38.069652 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 6 23:33:38.069731 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 6 23:33:38.071373 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 6 23:33:38.071450 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 6 23:33:38.074106 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Nov 6 23:33:38.074203 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Nov 6 23:33:38.077176 systemd[1]: network-cleanup.service: Deactivated successfully.
Nov 6 23:33:38.077338 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Nov 6 23:33:38.087043 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 6 23:33:38.087234 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Nov 6 23:33:38.089503 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Nov 6 23:33:38.095067 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Nov 6 23:33:38.106679 systemd[1]: Switching root.
Nov 6 23:33:38.152805 systemd-journald[185]: Journal stopped
Nov 6 23:33:39.466146 systemd-journald[185]: Received SIGTERM from PID 1 (systemd).
Nov 6 23:33:39.466257 kernel: SELinux: policy capability network_peer_controls=1
Nov 6 23:33:39.466284 kernel: SELinux: policy capability open_perms=1
Nov 6 23:33:39.466305 kernel: SELinux: policy capability extended_socket_class=1
Nov 6 23:33:39.466333 kernel: SELinux: policy capability always_check_network=0
Nov 6 23:33:39.466359 kernel: SELinux: policy capability cgroup_seclabel=1
Nov 6 23:33:39.466393 kernel: SELinux: policy capability nnp_nosuid_transition=1
Nov 6 23:33:39.466414 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Nov 6 23:33:39.466433 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Nov 6 23:33:39.466454 kernel: audit: type=1403 audit(1762472018.284:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Nov 6 23:33:39.466478 systemd[1]: Successfully loaded SELinux policy in 48.854ms.
Nov 6 23:33:39.466513 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 13.415ms.
Nov 6 23:33:39.466538 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Nov 6 23:33:39.466571 systemd[1]: Detected virtualization kvm.
Nov 6 23:33:39.466592 systemd[1]: Detected architecture x86-64.
Nov 6 23:33:39.466612 systemd[1]: Detected first boot.
Nov 6 23:33:39.466633 systemd[1]: Hostname set to .
Nov 6 23:33:39.466654 systemd[1]: Initializing machine ID from VM UUID.
Nov 6 23:33:39.466676 zram_generator::config[1027]: No configuration found.
Nov 6 23:33:39.466715 kernel: Guest personality initialized and is inactive
Nov 6 23:33:39.466775 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Nov 6 23:33:39.466807 kernel: Initialized host personality
Nov 6 23:33:39.466827 kernel: NET: Registered PF_VSOCK protocol family
Nov 6 23:33:39.466847 systemd[1]: Populated /etc with preset unit settings.
Nov 6 23:33:39.466873 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Nov 6 23:33:39.466896 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Nov 6 23:33:39.466921 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Nov 6 23:33:39.466943 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Nov 6 23:33:39.466965 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Nov 6 23:33:39.466988 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Nov 6 23:33:39.467014 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Nov 6 23:33:39.467035 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Nov 6 23:33:39.467057 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Nov 6 23:33:39.467087 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Nov 6 23:33:39.467109 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Nov 6 23:33:39.467136 systemd[1]: Created slice user.slice - User and Session Slice.
Nov 6 23:33:39.467158 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 6 23:33:39.467179 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 6 23:33:39.467201 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Nov 6 23:33:39.467228 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Nov 6 23:33:39.467251 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Nov 6 23:33:39.467275 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 6 23:33:39.467297 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Nov 6 23:33:39.467320 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 6 23:33:39.467342 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Nov 6 23:33:39.467369 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Nov 6 23:33:39.467392 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Nov 6 23:33:39.467414 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Nov 6 23:33:39.467437 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 6 23:33:39.467458 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 6 23:33:39.467479 systemd[1]: Reached target slices.target - Slice Units.
Nov 6 23:33:39.467501 systemd[1]: Reached target swap.target - Swaps.
Nov 6 23:33:39.467523 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Nov 6 23:33:39.467545 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Nov 6 23:33:39.467572 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Nov 6 23:33:39.467593 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 6 23:33:39.467614 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 6 23:33:39.467635 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 6 23:33:39.467657 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Nov 6 23:33:39.467679 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Nov 6 23:33:39.467700 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Nov 6 23:33:39.467721 systemd[1]: Mounting media.mount - External Media Directory...
Nov 6 23:33:39.468803 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 6 23:33:39.468851 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Nov 6 23:33:39.468875 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Nov 6 23:33:39.468897 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Nov 6 23:33:39.468919 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Nov 6 23:33:39.468941 systemd[1]: Reached target machines.target - Containers.
Nov 6 23:33:39.468963 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Nov 6 23:33:39.468985 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 6 23:33:39.469007 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 6 23:33:39.469035 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Nov 6 23:33:39.469059 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 6 23:33:39.469081 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 6 23:33:39.469102 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 6 23:33:39.469124 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Nov 6 23:33:39.469145 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 6 23:33:39.469167 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Nov 6 23:33:39.469189 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Nov 6 23:33:39.469217 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Nov 6 23:33:39.469239 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Nov 6 23:33:39.469261 systemd[1]: Stopped systemd-fsck-usr.service.
Nov 6 23:33:39.469287 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 6 23:33:39.469309 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 6 23:33:39.469331 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 6 23:33:39.469352 kernel: fuse: init (API version 7.39)
Nov 6 23:33:39.469374 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 6 23:33:39.469395 kernel: loop: module loaded
Nov 6 23:33:39.469421 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Nov 6 23:33:39.469443 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Nov 6 23:33:39.469465 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 6 23:33:39.469492 systemd[1]: verity-setup.service: Deactivated successfully.
Nov 6 23:33:39.469516 systemd[1]: Stopped verity-setup.service.
Nov 6 23:33:39.469542 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 6 23:33:39.469564 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Nov 6 23:33:39.469587 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Nov 6 23:33:39.469607 systemd[1]: Mounted media.mount - External Media Directory.
Nov 6 23:33:39.469627 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Nov 6 23:33:39.469653 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Nov 6 23:33:39.469676 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Nov 6 23:33:39.469698 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 6 23:33:39.469721 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 6 23:33:39.474932 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Nov 6 23:33:39.474986 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 6 23:33:39.475010 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 6 23:33:39.475033 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 6 23:33:39.475055 kernel: ACPI: bus type drm_connector registered
Nov 6 23:33:39.475090 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 6 23:33:39.475114 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 6 23:33:39.475136 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 6 23:33:39.475158 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Nov 6 23:33:39.475180 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Nov 6 23:33:39.475201 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 6 23:33:39.475222 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 6 23:33:39.475244 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 6 23:33:39.475267 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Nov 6 23:33:39.475296 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Nov 6 23:33:39.475320 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Nov 6 23:33:39.475343 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Nov 6 23:33:39.475370 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 6 23:33:39.475393 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Nov 6 23:33:39.475416 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Nov 6 23:33:39.475438 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Nov 6 23:33:39.475462 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 6 23:33:39.475484 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Nov 6 23:33:39.475513 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 6 23:33:39.475535 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Nov 6 23:33:39.475606 systemd-journald[1108]: Collecting audit messages is disabled.
Nov 6 23:33:39.475653 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 6 23:33:39.475676 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 6 23:33:39.475699 systemd-journald[1108]: Journal started
Nov 6 23:33:39.479927 systemd-journald[1108]: Runtime Journal (/run/log/journal/d9adf67e9426441f8f8c30922ce2dbfd) is 4.9M, max 39.3M, 34.4M free.
Nov 6 23:33:39.034746 systemd[1]: Queued start job for default target multi-user.target.
Nov 6 23:33:39.043415 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Nov 6 23:33:39.043905 systemd[1]: systemd-journald.service: Deactivated successfully.
Nov 6 23:33:39.488910 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Nov 6 23:33:39.488989 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 6 23:33:39.498838 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 6 23:33:39.499920 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Nov 6 23:33:39.500751 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Nov 6 23:33:39.501487 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Nov 6 23:33:39.502114 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Nov 6 23:33:39.503141 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Nov 6 23:33:39.504048 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Nov 6 23:33:39.533759 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Nov 6 23:33:39.546636 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Nov 6 23:33:39.547509 systemd[1]: Reached target network-pre.target - Preparation for Network.
Nov 6 23:33:39.556350 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Nov 6 23:33:39.559109 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Nov 6 23:33:39.571909 systemd-journald[1108]: Time spent on flushing to /var/log/journal/d9adf67e9426441f8f8c30922ce2dbfd is 119.340ms for 1009 entries.
Nov 6 23:33:39.571909 systemd-journald[1108]: System Journal (/var/log/journal/d9adf67e9426441f8f8c30922ce2dbfd) is 8M, max 195.6M, 187.6M free.
Nov 6 23:33:39.705905 systemd-journald[1108]: Received client request to flush runtime journal.
Nov 6 23:33:39.706002 kernel: loop0: detected capacity change from 0 to 8
Nov 6 23:33:39.706035 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Nov 6 23:33:39.706061 kernel: loop1: detected capacity change from 0 to 138176
Nov 6 23:33:39.706084 kernel: loop2: detected capacity change from 0 to 224512
Nov 6 23:33:39.617349 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Nov 6 23:33:39.623131 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 6 23:33:39.663693 systemd-tmpfiles[1135]: ACLs are not supported, ignoring.
Nov 6 23:33:39.663708 systemd-tmpfiles[1135]: ACLs are not supported, ignoring.
Nov 6 23:33:39.678374 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 6 23:33:39.688063 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Nov 6 23:33:39.708797 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Nov 6 23:33:39.739887 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 6 23:33:39.762981 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Nov 6 23:33:39.766992 kernel: loop3: detected capacity change from 0 to 147912
Nov 6 23:33:39.794986 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Nov 6 23:33:39.811101 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 6 23:33:39.832059 udevadm[1175]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Nov 6 23:33:39.840064 systemd-tmpfiles[1177]: ACLs are not supported, ignoring.
Nov 6 23:33:39.840079 systemd-tmpfiles[1177]: ACLs are not supported, ignoring.
Nov 6 23:33:39.847906 kernel: loop4: detected capacity change from 0 to 8
Nov 6 23:33:39.867136 kernel: loop5: detected capacity change from 0 to 138176
Nov 6 23:33:39.866271 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 6 23:33:39.899642 kernel: loop6: detected capacity change from 0 to 224512
Nov 6 23:33:39.926958 kernel: loop7: detected capacity change from 0 to 147912
Nov 6 23:33:39.967054 (sd-merge)[1180]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'.
Nov 6 23:33:39.969335 (sd-merge)[1180]: Merged extensions into '/usr'.
Nov 6 23:33:39.979921 systemd[1]: Reload requested from client PID 1134 ('systemd-sysext') (unit systemd-sysext.service)...
Nov 6 23:33:39.979950 systemd[1]: Reloading...
Nov 6 23:33:40.163812 zram_generator::config[1210]: No configuration found.
Nov 6 23:33:40.203763 ldconfig[1130]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Nov 6 23:33:40.353871 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 6 23:33:40.443128 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Nov 6 23:33:40.443373 systemd[1]: Reloading finished in 462 ms.
Nov 6 23:33:40.459832 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Nov 6 23:33:40.460842 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Nov 6 23:33:40.474999 systemd[1]: Starting ensure-sysext.service...
Nov 6 23:33:40.482059 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 6 23:33:40.503786 systemd[1]: Reload requested from client PID 1253 ('systemctl') (unit ensure-sysext.service)...
Nov 6 23:33:40.503800 systemd[1]: Reloading...
Nov 6 23:33:40.525055 systemd-tmpfiles[1254]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Nov 6 23:33:40.525454 systemd-tmpfiles[1254]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Nov 6 23:33:40.526665 systemd-tmpfiles[1254]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Nov 6 23:33:40.527165 systemd-tmpfiles[1254]: ACLs are not supported, ignoring.
Nov 6 23:33:40.527240 systemd-tmpfiles[1254]: ACLs are not supported, ignoring.
Nov 6 23:33:40.530843 systemd-tmpfiles[1254]: Detected autofs mount point /boot during canonicalization of boot.
Nov 6 23:33:40.530855 systemd-tmpfiles[1254]: Skipping /boot
Nov 6 23:33:40.547367 systemd-tmpfiles[1254]: Detected autofs mount point /boot during canonicalization of boot.
Nov 6 23:33:40.547382 systemd-tmpfiles[1254]: Skipping /boot
Nov 6 23:33:40.632765 zram_generator::config[1292]: No configuration found.
Nov 6 23:33:40.800159 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 6 23:33:40.870939 systemd[1]: Reloading finished in 366 ms.
Nov 6 23:33:40.887147 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Nov 6 23:33:40.900219 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 6 23:33:40.913118 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Nov 6 23:33:40.915935 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Nov 6 23:33:40.923037 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Nov 6 23:33:40.933091 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 6 23:33:40.938612 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 6 23:33:40.942963 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Nov 6 23:33:40.946490 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 6 23:33:40.946675 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 6 23:33:40.954217 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 6 23:33:40.957024 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 6 23:33:40.967088 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 6 23:33:40.967919 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 6 23:33:40.968054 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 6 23:33:40.972123 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Nov 6 23:33:40.973111 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 6 23:33:40.978127 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 6 23:33:40.979089 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 6 23:33:40.979279 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 6 23:33:40.979360 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 6 23:33:40.979450 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 6 23:33:40.986497 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 6 23:33:40.986832 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 6 23:33:40.995064 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 6 23:33:40.996154 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 6 23:33:40.996305 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 6 23:33:40.996447 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 6 23:33:41.002849 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Nov 6 23:33:41.005078 systemd[1]: Finished ensure-sysext.service. Nov 6 23:33:41.005918 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 6 23:33:41.006106 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Nov 6 23:33:41.015117 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 6 23:33:41.022909 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Nov 6 23:33:41.030070 systemd[1]: Starting systemd-update-done.service - Update is Completed... Nov 6 23:33:41.048517 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Nov 6 23:33:41.049868 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 6 23:33:41.050544 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 6 23:33:41.052250 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 6 23:33:41.052451 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 6 23:33:41.054551 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 6 23:33:41.055903 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 6 23:33:41.070392 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 6 23:33:41.070586 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 6 23:33:41.076687 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Nov 6 23:33:41.094150 systemd[1]: Finished systemd-update-done.service - Update is Completed. Nov 6 23:33:41.095455 systemd[1]: Started systemd-userdbd.service - User Database Manager. Nov 6 23:33:41.113203 augenrules[1373]: No rules Nov 6 23:33:41.117230 systemd[1]: audit-rules.service: Deactivated successfully. Nov 6 23:33:41.117513 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
Nov 6 23:33:41.126400 systemd-udevd[1338]: Using default interface naming scheme 'v255'. Nov 6 23:33:41.178893 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 6 23:33:41.180171 systemd-resolved[1332]: Positive Trust Anchors: Nov 6 23:33:41.180538 systemd-resolved[1332]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 6 23:33:41.180659 systemd-resolved[1332]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 6 23:33:41.189016 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 6 23:33:41.189292 systemd-resolved[1332]: Using system hostname 'ci-4230.2.4-n-4ba84db3ac'. Nov 6 23:33:41.192249 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 6 23:33:41.192835 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 6 23:33:41.267900 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Nov 6 23:33:41.269203 systemd[1]: Reached target time-set.target - System Time Set. Nov 6 23:33:41.301228 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Nov 6 23:33:41.350264 systemd[1]: Condition check resulted in dev-disk-by\x2dlabel-config\x2d2.device - /dev/disk/by-label/config-2 being skipped. Nov 6 23:33:41.360907 systemd[1]: Mounting media-configdrive.mount - /media/configdrive... 
Nov 6 23:33:41.361548 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 6 23:33:41.361824 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 6 23:33:41.375254 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 6 23:33:41.377859 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1397) Nov 6 23:33:41.386963 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 6 23:33:41.403478 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 6 23:33:41.406044 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 6 23:33:41.406096 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 6 23:33:41.406125 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 6 23:33:41.406143 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 6 23:33:41.410572 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 6 23:33:41.412669 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Nov 6 23:33:41.416451 systemd-networkd[1383]: lo: Link UP Nov 6 23:33:41.416461 systemd-networkd[1383]: lo: Gained carrier Nov 6 23:33:41.423609 systemd-networkd[1383]: Enumeration completed Nov 6 23:33:41.438803 kernel: ISO 9660 Extensions: RRIP_1991A Nov 6 23:33:41.437786 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 6 23:33:41.443787 systemd[1]: Mounted media-configdrive.mount - /media/configdrive. Nov 6 23:33:41.444569 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 6 23:33:41.444795 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 6 23:33:41.445569 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 6 23:33:41.445759 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 6 23:33:41.452724 systemd-networkd[1383]: eth1: Configuring with /run/systemd/network/10-66:25:4d:d8:ff:d6.network. Nov 6 23:33:41.455786 systemd-networkd[1383]: eth1: Link UP Nov 6 23:33:41.455796 systemd-networkd[1383]: eth1: Gained carrier Nov 6 23:33:41.457532 systemd[1]: Reached target network.target - Network. Nov 6 23:33:41.461947 systemd-timesyncd[1349]: Network configuration changed, trying to establish connection. Nov 6 23:33:41.465974 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Nov 6 23:33:41.468941 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Nov 6 23:33:41.469826 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 6 23:33:41.469908 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 6 23:33:41.494610 systemd-networkd[1383]: eth0: Configuring with /run/systemd/network/10-b2:fc:88:45:b3:6f.network. 
Nov 6 23:33:41.495271 systemd-networkd[1383]: eth0: Link UP Nov 6 23:33:41.495276 systemd-networkd[1383]: eth0: Gained carrier Nov 6 23:33:41.496904 systemd-timesyncd[1349]: Network configuration changed, trying to establish connection. Nov 6 23:33:41.500203 systemd-timesyncd[1349]: Network configuration changed, trying to establish connection. Nov 6 23:33:41.504334 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Nov 6 23:33:41.533329 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Nov 6 23:33:41.542134 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Nov 6 23:33:41.545777 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Nov 6 23:33:41.550775 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Nov 6 23:33:41.572433 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Nov 6 23:33:41.576780 kernel: ACPI: button: Power Button [PWRF] Nov 6 23:33:41.588762 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Nov 6 23:33:41.641775 kernel: mousedev: PS/2 mouse device common for all mice Nov 6 23:33:41.661114 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Nov 6 23:33:41.674433 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Nov 6 23:33:41.679664 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Nov 6 23:33:41.680967 kernel: Console: switching to colour dummy device 80x25 Nov 6 23:33:41.682283 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Nov 6 23:33:41.682331 kernel: [drm] features: -context_init Nov 6 23:33:41.684781 kernel: [drm] number of scanouts: 1 Nov 6 23:33:41.684834 kernel: [drm] number of cap sets: 0 Nov 6 23:33:41.692810 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0 Nov 6 23:33:41.708361 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Nov 6 23:33:41.708114 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 6 23:33:41.708337 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 6 23:33:41.710974 kernel: Console: switching to colour frame buffer device 128x48 Nov 6 23:33:41.716913 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device Nov 6 23:33:41.753387 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Nov 6 23:33:41.776968 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 6 23:33:41.786371 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 6 23:33:41.789010 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 6 23:33:41.802077 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 6 23:33:41.806764 kernel: EDAC MC: Ver: 3.0.0 Nov 6 23:33:41.835380 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Nov 6 23:33:41.840978 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Nov 6 23:33:41.865717 lvm[1441]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
Nov 6 23:33:41.876893 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 6 23:33:41.906193 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Nov 6 23:33:41.909137 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 6 23:33:41.909327 systemd[1]: Reached target sysinit.target - System Initialization. Nov 6 23:33:41.909643 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Nov 6 23:33:41.910523 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Nov 6 23:33:41.911515 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 6 23:33:41.911691 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Nov 6 23:33:41.911825 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Nov 6 23:33:41.911935 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 6 23:33:41.911975 systemd[1]: Reached target paths.target - Path Units. Nov 6 23:33:41.912038 systemd[1]: Reached target timers.target - Timer Units. Nov 6 23:33:41.913980 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 6 23:33:41.916170 systemd[1]: Starting docker.socket - Docker Socket for the API... Nov 6 23:33:41.921372 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Nov 6 23:33:41.923300 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Nov 6 23:33:41.923874 systemd[1]: Reached target ssh-access.target - SSH Access Available. Nov 6 23:33:41.934795 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. 
Nov 6 23:33:41.936870 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Nov 6 23:33:41.952102 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Nov 6 23:33:41.955181 systemd[1]: Listening on docker.socket - Docker Socket for the API. Nov 6 23:33:41.958520 systemd[1]: Reached target sockets.target - Socket Units. Nov 6 23:33:41.959383 lvm[1448]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 6 23:33:41.959854 systemd[1]: Reached target basic.target - Basic System. Nov 6 23:33:41.960582 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Nov 6 23:33:41.960632 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Nov 6 23:33:41.968894 systemd[1]: Starting containerd.service - containerd container runtime... Nov 6 23:33:41.975992 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Nov 6 23:33:41.978610 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Nov 6 23:33:41.995059 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Nov 6 23:33:42.003042 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Nov 6 23:33:42.006587 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Nov 6 23:33:42.013561 jq[1452]: false Nov 6 23:33:42.013038 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Nov 6 23:33:42.026281 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Nov 6 23:33:42.040887 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Nov 6 23:33:42.047303 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Nov 6 23:33:42.060084 systemd[1]: Starting systemd-logind.service - User Login Management... Nov 6 23:33:42.061411 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 6 23:33:42.063446 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 6 23:33:42.064871 systemd[1]: Starting update-engine.service - Update Engine... Nov 6 23:33:42.066891 coreos-metadata[1450]: Nov 06 23:33:42.066 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Nov 6 23:33:42.073878 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Nov 6 23:33:42.077509 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Nov 6 23:33:42.082850 coreos-metadata[1450]: Nov 06 23:33:42.082 INFO Fetch successful Nov 6 23:33:42.086634 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 6 23:33:42.087304 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Nov 6 23:33:42.091292 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 6 23:33:42.091505 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Nov 6 23:33:42.100789 dbus-daemon[1451]: [system] SELinux support is enabled Nov 6 23:33:42.120066 jq[1464]: true Nov 6 23:33:42.105019 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Nov 6 23:33:42.132760 update_engine[1463]: I20251106 23:33:42.132604 1463 main.cc:92] Flatcar Update Engine starting Nov 6 23:33:42.144326 jq[1480]: true Nov 6 23:33:42.154767 extend-filesystems[1455]: Found loop4 Nov 6 23:33:42.154767 extend-filesystems[1455]: Found loop5 Nov 6 23:33:42.154767 extend-filesystems[1455]: Found loop6 Nov 6 23:33:42.154767 extend-filesystems[1455]: Found loop7 Nov 6 23:33:42.154767 extend-filesystems[1455]: Found vda Nov 6 23:33:42.154767 extend-filesystems[1455]: Found vda1 Nov 6 23:33:42.154767 extend-filesystems[1455]: Found vda2 Nov 6 23:33:42.154767 extend-filesystems[1455]: Found vda3 Nov 6 23:33:42.154767 extend-filesystems[1455]: Found usr Nov 6 23:33:42.154767 extend-filesystems[1455]: Found vda4 Nov 6 23:33:42.154767 extend-filesystems[1455]: Found vda6 Nov 6 23:33:42.154767 extend-filesystems[1455]: Found vda7 Nov 6 23:33:42.154767 extend-filesystems[1455]: Found vda9 Nov 6 23:33:42.220714 extend-filesystems[1455]: Checking size of /dev/vda9 Nov 6 23:33:42.222658 tar[1469]: linux-amd64/LICENSE Nov 6 23:33:42.222658 tar[1469]: linux-amd64/helm Nov 6 23:33:42.229172 update_engine[1463]: I20251106 23:33:42.161091 1463 update_check_scheduler.cc:74] Next update check in 9m52s Nov 6 23:33:42.156560 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 6 23:33:42.156600 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Nov 6 23:33:42.167511 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). 
Nov 6 23:33:42.167615 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean). Nov 6 23:33:42.167640 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 6 23:33:42.180298 systemd[1]: Started update-engine.service - Update Engine. Nov 6 23:33:42.180477 (ntainerd)[1481]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Nov 6 23:33:42.186095 systemd[1]: motdgen.service: Deactivated successfully. Nov 6 23:33:42.186330 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Nov 6 23:33:42.199096 systemd[1]: Started locksmithd.service - Cluster reboot manager. Nov 6 23:33:42.251938 extend-filesystems[1455]: Resized partition /dev/vda9 Nov 6 23:33:42.249310 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Nov 6 23:33:42.251582 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 6 23:33:42.261774 extend-filesystems[1512]: resize2fs 1.47.1 (20-May-2024) Nov 6 23:33:42.271486 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks Nov 6 23:33:42.281950 systemd-logind[1460]: New seat seat0. Nov 6 23:33:42.317509 systemd-logind[1460]: Watching system buttons on /dev/input/event1 (Power Button) Nov 6 23:33:42.317536 systemd-logind[1460]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Nov 6 23:33:42.317855 systemd[1]: Started systemd-logind.service - User Login Management. Nov 6 23:33:42.332768 bash[1513]: Updated "/home/core/.ssh/authorized_keys" Nov 6 23:33:42.335770 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 6 23:33:42.349052 systemd[1]: Starting sshkeys.service... 
Nov 6 23:33:42.380302 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Nov 6 23:33:42.393451 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Nov 6 23:33:42.410766 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Nov 6 23:33:42.423042 extend-filesystems[1512]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Nov 6 23:33:42.423042 extend-filesystems[1512]: old_desc_blocks = 1, new_desc_blocks = 8 Nov 6 23:33:42.423042 extend-filesystems[1512]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Nov 6 23:33:42.433459 extend-filesystems[1455]: Resized filesystem in /dev/vda9 Nov 6 23:33:42.433459 extend-filesystems[1455]: Found vdb Nov 6 23:33:42.424113 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 6 23:33:42.425180 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Nov 6 23:33:42.453760 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1391) Nov 6 23:33:42.520771 coreos-metadata[1517]: Nov 06 23:33:42.513 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Nov 6 23:33:42.529366 coreos-metadata[1517]: Nov 06 23:33:42.528 INFO Fetch successful Nov 6 23:33:42.601939 unknown[1517]: wrote ssh authorized keys file for user: core Nov 6 23:33:42.657840 update-ssh-keys[1530]: Updated "/home/core/.ssh/authorized_keys" Nov 6 23:33:42.659220 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Nov 6 23:33:42.667200 systemd[1]: Finished sshkeys.service. Nov 6 23:33:42.673330 systemd-networkd[1383]: eth1: Gained IPv6LL Nov 6 23:33:42.674833 systemd-timesyncd[1349]: Network configuration changed, trying to establish connection. Nov 6 23:33:42.683891 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. 
Nov 6 23:33:42.689285 systemd[1]: Reached target network-online.target - Network is Online. Nov 6 23:33:42.701028 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 23:33:42.711181 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 6 23:33:42.732755 locksmithd[1498]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 6 23:33:42.807410 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 6 23:33:42.832361 containerd[1481]: time="2025-11-06T23:33:42.832207124Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Nov 6 23:33:42.885633 containerd[1481]: time="2025-11-06T23:33:42.884169321Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Nov 6 23:33:42.890617 containerd[1481]: time="2025-11-06T23:33:42.889898010Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.113-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Nov 6 23:33:42.890617 containerd[1481]: time="2025-11-06T23:33:42.889946466Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Nov 6 23:33:42.890617 containerd[1481]: time="2025-11-06T23:33:42.889966433Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Nov 6 23:33:42.890617 containerd[1481]: time="2025-11-06T23:33:42.890130256Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Nov 6 23:33:42.890617 containerd[1481]: time="2025-11-06T23:33:42.890144566Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
type=io.containerd.snapshotter.v1 Nov 6 23:33:42.890617 containerd[1481]: time="2025-11-06T23:33:42.890201211Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Nov 6 23:33:42.890617 containerd[1481]: time="2025-11-06T23:33:42.890214765Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Nov 6 23:33:42.890617 containerd[1481]: time="2025-11-06T23:33:42.890457185Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 6 23:33:42.890617 containerd[1481]: time="2025-11-06T23:33:42.890476032Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Nov 6 23:33:42.890617 containerd[1481]: time="2025-11-06T23:33:42.890489072Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Nov 6 23:33:42.890617 containerd[1481]: time="2025-11-06T23:33:42.890499467Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Nov 6 23:33:42.890995 containerd[1481]: time="2025-11-06T23:33:42.890575160Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Nov 6 23:33:42.890995 containerd[1481]: time="2025-11-06T23:33:42.890946718Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Nov 6 23:33:42.891757 containerd[1481]: time="2025-11-06T23:33:42.891117355Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 6 23:33:42.891757 containerd[1481]: time="2025-11-06T23:33:42.891136496Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Nov 6 23:33:42.891757 containerd[1481]: time="2025-11-06T23:33:42.891216343Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Nov 6 23:33:42.891757 containerd[1481]: time="2025-11-06T23:33:42.891259414Z" level=info msg="metadata content store policy set" policy=shared Nov 6 23:33:42.897089 containerd[1481]: time="2025-11-06T23:33:42.896669107Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Nov 6 23:33:42.897089 containerd[1481]: time="2025-11-06T23:33:42.896786678Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Nov 6 23:33:42.899405 containerd[1481]: time="2025-11-06T23:33:42.896815054Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Nov 6 23:33:42.899405 containerd[1481]: time="2025-11-06T23:33:42.898872807Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Nov 6 23:33:42.899405 containerd[1481]: time="2025-11-06T23:33:42.898896836Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Nov 6 23:33:42.899405 containerd[1481]: time="2025-11-06T23:33:42.899114097Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Nov 6 23:33:42.899405 containerd[1481]: time="2025-11-06T23:33:42.899373534Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Nov 6 23:33:42.899640 containerd[1481]: time="2025-11-06T23:33:42.899527295Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Nov 6 23:33:42.899640 containerd[1481]: time="2025-11-06T23:33:42.899544518Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Nov 6 23:33:42.899640 containerd[1481]: time="2025-11-06T23:33:42.899559573Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Nov 6 23:33:42.899640 containerd[1481]: time="2025-11-06T23:33:42.899576263Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Nov 6 23:33:42.899640 containerd[1481]: time="2025-11-06T23:33:42.899589260Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Nov 6 23:33:42.899640 containerd[1481]: time="2025-11-06T23:33:42.899601330Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Nov 6 23:33:42.899640 containerd[1481]: time="2025-11-06T23:33:42.899615605Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Nov 6 23:33:42.899640 containerd[1481]: time="2025-11-06T23:33:42.899631717Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Nov 6 23:33:42.899812 containerd[1481]: time="2025-11-06T23:33:42.899644075Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Nov 6 23:33:42.899812 containerd[1481]: time="2025-11-06T23:33:42.899656402Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 Nov 6 23:33:42.899812 containerd[1481]: time="2025-11-06T23:33:42.899667841Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Nov 6 23:33:42.899812 containerd[1481]: time="2025-11-06T23:33:42.899687635Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Nov 6 23:33:42.899812 containerd[1481]: time="2025-11-06T23:33:42.899701863Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Nov 6 23:33:42.899812 containerd[1481]: time="2025-11-06T23:33:42.899713522Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Nov 6 23:33:42.899812 containerd[1481]: time="2025-11-06T23:33:42.899725715Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Nov 6 23:33:42.899812 containerd[1481]: time="2025-11-06T23:33:42.899764214Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Nov 6 23:33:42.899812 containerd[1481]: time="2025-11-06T23:33:42.899778964Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Nov 6 23:33:42.899812 containerd[1481]: time="2025-11-06T23:33:42.899790400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Nov 6 23:33:42.899812 containerd[1481]: time="2025-11-06T23:33:42.899802967Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Nov 6 23:33:42.900052 containerd[1481]: time="2025-11-06T23:33:42.899816606Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Nov 6 23:33:42.900052 containerd[1481]: time="2025-11-06T23:33:42.899830698Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." 
type=io.containerd.grpc.v1 Nov 6 23:33:42.900052 containerd[1481]: time="2025-11-06T23:33:42.899861597Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Nov 6 23:33:42.900052 containerd[1481]: time="2025-11-06T23:33:42.899876856Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Nov 6 23:33:42.900052 containerd[1481]: time="2025-11-06T23:33:42.899889608Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Nov 6 23:33:42.900052 containerd[1481]: time="2025-11-06T23:33:42.899903612Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Nov 6 23:33:42.900052 containerd[1481]: time="2025-11-06T23:33:42.899927079Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Nov 6 23:33:42.900052 containerd[1481]: time="2025-11-06T23:33:42.899945200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Nov 6 23:33:42.900052 containerd[1481]: time="2025-11-06T23:33:42.899984299Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Nov 6 23:33:42.900052 containerd[1481]: time="2025-11-06T23:33:42.900036980Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Nov 6 23:33:42.900248 containerd[1481]: time="2025-11-06T23:33:42.900055453Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Nov 6 23:33:42.900248 containerd[1481]: time="2025-11-06T23:33:42.900067583Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." 
type=io.containerd.internal.v1 Nov 6 23:33:42.900248 containerd[1481]: time="2025-11-06T23:33:42.900079178Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Nov 6 23:33:42.900248 containerd[1481]: time="2025-11-06T23:33:42.900088034Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Nov 6 23:33:42.900248 containerd[1481]: time="2025-11-06T23:33:42.900100018Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Nov 6 23:33:42.900248 containerd[1481]: time="2025-11-06T23:33:42.900109463Z" level=info msg="NRI interface is disabled by configuration." Nov 6 23:33:42.900248 containerd[1481]: time="2025-11-06T23:33:42.900118831Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Nov 6 23:33:42.900446 containerd[1481]: time="2025-11-06T23:33:42.900381970Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: 
SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Nov 6 23:33:42.900446 containerd[1481]: time="2025-11-06T23:33:42.900429145Z" level=info msg="Connect containerd service" Nov 6 23:33:42.900681 containerd[1481]: time="2025-11-06T23:33:42.900464481Z" level=info msg="using legacy CRI server" Nov 6 23:33:42.900681 containerd[1481]: time="2025-11-06T23:33:42.900471184Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 6 23:33:42.900681 containerd[1481]: time="2025-11-06T23:33:42.900602261Z" 
level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Nov 6 23:33:42.904766 containerd[1481]: time="2025-11-06T23:33:42.903551167Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 6 23:33:42.905469 containerd[1481]: time="2025-11-06T23:33:42.904933734Z" level=info msg="Start subscribing containerd event" Nov 6 23:33:42.905469 containerd[1481]: time="2025-11-06T23:33:42.905024773Z" level=info msg="Start recovering state" Nov 6 23:33:42.905469 containerd[1481]: time="2025-11-06T23:33:42.905105680Z" level=info msg="Start event monitor" Nov 6 23:33:42.905469 containerd[1481]: time="2025-11-06T23:33:42.905132325Z" level=info msg="Start snapshots syncer" Nov 6 23:33:42.905469 containerd[1481]: time="2025-11-06T23:33:42.905142226Z" level=info msg="Start cni network conf syncer for default" Nov 6 23:33:42.905469 containerd[1481]: time="2025-11-06T23:33:42.905150692Z" level=info msg="Start streaming server" Nov 6 23:33:42.905870 containerd[1481]: time="2025-11-06T23:33:42.905841209Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 6 23:33:42.905928 containerd[1481]: time="2025-11-06T23:33:42.905916709Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 6 23:33:42.906156 systemd[1]: Started containerd.service - containerd container runtime. Nov 6 23:33:42.914453 containerd[1481]: time="2025-11-06T23:33:42.913606705Z" level=info msg="containerd successfully booted in 0.084516s" Nov 6 23:33:42.954761 sshd_keygen[1486]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 6 23:33:42.990461 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 6 23:33:43.004905 systemd[1]: Starting issuegen.service - Generate /run/issue... 
Nov 6 23:33:43.017016 systemd[1]: issuegen.service: Deactivated successfully. Nov 6 23:33:43.017240 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 6 23:33:43.027398 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 6 23:33:43.047946 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 6 23:33:43.058572 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 6 23:33:43.071760 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Nov 6 23:33:43.074828 systemd[1]: Reached target getty.target - Login Prompts. Nov 6 23:33:43.355056 tar[1469]: linux-amd64/README.md Nov 6 23:33:43.365817 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 6 23:33:43.377012 systemd-networkd[1383]: eth0: Gained IPv6LL Nov 6 23:33:43.377581 systemd-timesyncd[1349]: Network configuration changed, trying to establish connection. Nov 6 23:33:43.760725 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 6 23:33:43.770271 systemd[1]: Started sshd@0-147.182.203.129:22-147.75.109.163:46280.service - OpenSSH per-connection server daemon (147.75.109.163:46280). Nov 6 23:33:43.863158 sshd[1571]: Accepted publickey for core from 147.75.109.163 port 46280 ssh2: RSA SHA256:Z2OWs1J4lzJbb37sVDKV9BLSPLNt86phBIrOXH84t9A Nov 6 23:33:43.864880 sshd-session[1571]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:33:43.874727 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 6 23:33:43.885085 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 6 23:33:43.901748 systemd-logind[1460]: New session 1 of user core. Nov 6 23:33:43.910491 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 6 23:33:43.922266 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Nov 6 23:33:43.933684 (systemd)[1575]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 6 23:33:43.938641 systemd-logind[1460]: New session c1 of user core. Nov 6 23:33:43.998569 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 23:33:44.003926 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 6 23:33:44.009510 (kubelet)[1586]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 6 23:33:44.112197 systemd[1575]: Queued start job for default target default.target. Nov 6 23:33:44.116045 systemd[1575]: Created slice app.slice - User Application Slice. Nov 6 23:33:44.116080 systemd[1575]: Reached target paths.target - Paths. Nov 6 23:33:44.116127 systemd[1575]: Reached target timers.target - Timers. Nov 6 23:33:44.118483 systemd[1575]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 6 23:33:44.141793 systemd[1575]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 6 23:33:44.141925 systemd[1575]: Reached target sockets.target - Sockets. Nov 6 23:33:44.141978 systemd[1575]: Reached target basic.target - Basic System. Nov 6 23:33:44.142019 systemd[1575]: Reached target default.target - Main User Target. Nov 6 23:33:44.142052 systemd[1575]: Startup finished in 190ms. Nov 6 23:33:44.142182 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 6 23:33:44.151034 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 6 23:33:44.153476 systemd[1]: Startup finished in 1.048s (kernel) + 7.568s (initrd) + 5.916s (userspace) = 14.533s. Nov 6 23:33:44.246386 systemd[1]: Started sshd@1-147.182.203.129:22-147.75.109.163:46284.service - OpenSSH per-connection server daemon (147.75.109.163:46284). 
Nov 6 23:33:44.314807 sshd[1600]: Accepted publickey for core from 147.75.109.163 port 46284 ssh2: RSA SHA256:Z2OWs1J4lzJbb37sVDKV9BLSPLNt86phBIrOXH84t9A Nov 6 23:33:44.317587 sshd-session[1600]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:33:44.325989 systemd-logind[1460]: New session 2 of user core. Nov 6 23:33:44.335092 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 6 23:33:44.399849 sshd[1602]: Connection closed by 147.75.109.163 port 46284 Nov 6 23:33:44.401000 sshd-session[1600]: pam_unix(sshd:session): session closed for user core Nov 6 23:33:44.415258 systemd[1]: sshd@1-147.182.203.129:22-147.75.109.163:46284.service: Deactivated successfully. Nov 6 23:33:44.419232 systemd[1]: session-2.scope: Deactivated successfully. Nov 6 23:33:44.420227 systemd-logind[1460]: Session 2 logged out. Waiting for processes to exit. Nov 6 23:33:44.430058 systemd[1]: Started sshd@2-147.182.203.129:22-147.75.109.163:46288.service - OpenSSH per-connection server daemon (147.75.109.163:46288). Nov 6 23:33:44.431947 systemd-logind[1460]: Removed session 2. Nov 6 23:33:44.481445 sshd[1607]: Accepted publickey for core from 147.75.109.163 port 46288 ssh2: RSA SHA256:Z2OWs1J4lzJbb37sVDKV9BLSPLNt86phBIrOXH84t9A Nov 6 23:33:44.483970 sshd-session[1607]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:33:44.491386 systemd-logind[1460]: New session 3 of user core. Nov 6 23:33:44.494962 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 6 23:33:44.553366 sshd[1611]: Connection closed by 147.75.109.163 port 46288 Nov 6 23:33:44.554028 sshd-session[1607]: pam_unix(sshd:session): session closed for user core Nov 6 23:33:44.567149 systemd[1]: sshd@2-147.182.203.129:22-147.75.109.163:46288.service: Deactivated successfully. Nov 6 23:33:44.570091 systemd[1]: session-3.scope: Deactivated successfully. Nov 6 23:33:44.575587 systemd-logind[1460]: Session 3 logged out. 
Waiting for processes to exit. Nov 6 23:33:44.582420 systemd[1]: Started sshd@3-147.182.203.129:22-147.75.109.163:46300.service - OpenSSH per-connection server daemon (147.75.109.163:46300). Nov 6 23:33:44.586296 systemd-logind[1460]: Removed session 3. Nov 6 23:33:44.639775 sshd[1616]: Accepted publickey for core from 147.75.109.163 port 46300 ssh2: RSA SHA256:Z2OWs1J4lzJbb37sVDKV9BLSPLNt86phBIrOXH84t9A Nov 6 23:33:44.642364 sshd-session[1616]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:33:44.651154 systemd-logind[1460]: New session 4 of user core. Nov 6 23:33:44.665008 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 6 23:33:44.701435 kubelet[1586]: E1106 23:33:44.701352 1586 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 6 23:33:44.704452 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 6 23:33:44.704626 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 6 23:33:44.705260 systemd[1]: kubelet.service: Consumed 1.160s CPU time, 265.7M memory peak. Nov 6 23:33:44.731646 sshd[1619]: Connection closed by 147.75.109.163 port 46300 Nov 6 23:33:44.732242 sshd-session[1616]: pam_unix(sshd:session): session closed for user core Nov 6 23:33:44.747837 systemd[1]: sshd@3-147.182.203.129:22-147.75.109.163:46300.service: Deactivated successfully. Nov 6 23:33:44.750772 systemd[1]: session-4.scope: Deactivated successfully. Nov 6 23:33:44.753962 systemd-logind[1460]: Session 4 logged out. Waiting for processes to exit. Nov 6 23:33:44.760227 systemd[1]: Started sshd@4-147.182.203.129:22-147.75.109.163:46310.service - OpenSSH per-connection server daemon (147.75.109.163:46310). 
Nov 6 23:33:44.762176 systemd-logind[1460]: Removed session 4. Nov 6 23:33:44.814381 sshd[1625]: Accepted publickey for core from 147.75.109.163 port 46310 ssh2: RSA SHA256:Z2OWs1J4lzJbb37sVDKV9BLSPLNt86phBIrOXH84t9A Nov 6 23:33:44.816046 sshd-session[1625]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:33:44.822195 systemd-logind[1460]: New session 5 of user core. Nov 6 23:33:44.829025 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 6 23:33:44.902241 sudo[1629]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 6 23:33:44.903450 sudo[1629]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 6 23:33:44.916612 sudo[1629]: pam_unix(sudo:session): session closed for user root Nov 6 23:33:44.919806 sshd[1628]: Connection closed by 147.75.109.163 port 46310 Nov 6 23:33:44.920519 sshd-session[1625]: pam_unix(sshd:session): session closed for user core Nov 6 23:33:44.933910 systemd[1]: sshd@4-147.182.203.129:22-147.75.109.163:46310.service: Deactivated successfully. Nov 6 23:33:44.935906 systemd[1]: session-5.scope: Deactivated successfully. Nov 6 23:33:44.938831 systemd-logind[1460]: Session 5 logged out. Waiting for processes to exit. Nov 6 23:33:44.943670 systemd[1]: Started sshd@5-147.182.203.129:22-147.75.109.163:46314.service - OpenSSH per-connection server daemon (147.75.109.163:46314). Nov 6 23:33:44.945324 systemd-logind[1460]: Removed session 5. Nov 6 23:33:45.003252 sshd[1634]: Accepted publickey for core from 147.75.109.163 port 46314 ssh2: RSA SHA256:Z2OWs1J4lzJbb37sVDKV9BLSPLNt86phBIrOXH84t9A Nov 6 23:33:45.004847 sshd-session[1634]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:33:45.012524 systemd-logind[1460]: New session 6 of user core. Nov 6 23:33:45.018028 systemd[1]: Started session-6.scope - Session 6 of User core. 
Nov 6 23:33:45.078622 sudo[1639]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 6 23:33:45.079105 sudo[1639]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 6 23:33:45.083675 sudo[1639]: pam_unix(sudo:session): session closed for user root Nov 6 23:33:45.091486 sudo[1638]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Nov 6 23:33:45.091983 sudo[1638]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 6 23:33:45.112674 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 6 23:33:45.149545 augenrules[1661]: No rules Nov 6 23:33:45.151452 systemd[1]: audit-rules.service: Deactivated successfully. Nov 6 23:33:45.151809 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 6 23:33:45.153481 sudo[1638]: pam_unix(sudo:session): session closed for user root Nov 6 23:33:45.157557 sshd[1637]: Connection closed by 147.75.109.163 port 46314 Nov 6 23:33:45.158998 sshd-session[1634]: pam_unix(sshd:session): session closed for user core Nov 6 23:33:45.172616 systemd[1]: sshd@5-147.182.203.129:22-147.75.109.163:46314.service: Deactivated successfully. Nov 6 23:33:45.174718 systemd[1]: session-6.scope: Deactivated successfully. Nov 6 23:33:45.175512 systemd-logind[1460]: Session 6 logged out. Waiting for processes to exit. Nov 6 23:33:45.181130 systemd[1]: Started sshd@6-147.182.203.129:22-147.75.109.163:46324.service - OpenSSH per-connection server daemon (147.75.109.163:46324). Nov 6 23:33:45.186867 systemd-logind[1460]: Removed session 6. 
Nov 6 23:33:45.256460 sshd[1669]: Accepted publickey for core from 147.75.109.163 port 46324 ssh2: RSA SHA256:Z2OWs1J4lzJbb37sVDKV9BLSPLNt86phBIrOXH84t9A Nov 6 23:33:45.258552 sshd-session[1669]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:33:45.267680 systemd-logind[1460]: New session 7 of user core. Nov 6 23:33:45.275081 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 6 23:33:45.335160 sudo[1673]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 6 23:33:45.335460 sudo[1673]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 6 23:33:45.775080 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 6 23:33:45.776811 (dockerd)[1689]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 6 23:33:46.247797 dockerd[1689]: time="2025-11-06T23:33:46.247418743Z" level=info msg="Starting up" Nov 6 23:33:46.350672 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1879288839-merged.mount: Deactivated successfully. Nov 6 23:33:46.455342 dockerd[1689]: time="2025-11-06T23:33:46.455059580Z" level=info msg="Loading containers: start." Nov 6 23:33:46.664768 kernel: Initializing XFRM netlink socket Nov 6 23:33:46.697414 systemd-timesyncd[1349]: Network configuration changed, trying to establish connection. Nov 6 23:33:46.715207 systemd-timesyncd[1349]: Network configuration changed, trying to establish connection. Nov 6 23:33:46.768794 systemd-networkd[1383]: docker0: Link UP Nov 6 23:33:46.769408 systemd-timesyncd[1349]: Network configuration changed, trying to establish connection. Nov 6 23:33:46.800116 dockerd[1689]: time="2025-11-06T23:33:46.800070812Z" level=info msg="Loading containers: done." 
Nov 6 23:33:46.819082 dockerd[1689]: time="2025-11-06T23:33:46.819024633Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 6 23:33:46.819286 dockerd[1689]: time="2025-11-06T23:33:46.819140651Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Nov 6 23:33:46.819286 dockerd[1689]: time="2025-11-06T23:33:46.819250321Z" level=info msg="Daemon has completed initialization" Nov 6 23:33:46.852432 dockerd[1689]: time="2025-11-06T23:33:46.852350715Z" level=info msg="API listen on /run/docker.sock" Nov 6 23:33:46.852838 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 6 23:33:47.745914 containerd[1481]: time="2025-11-06T23:33:47.745403991Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\"" Nov 6 23:33:48.408518 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount156530973.mount: Deactivated successfully. 
Nov 6 23:33:49.522770 containerd[1481]: time="2025-11-06T23:33:49.521263467Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:33:49.523770 containerd[1481]: time="2025-11-06T23:33:49.523667717Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.9: active requests=0, bytes read=28837916" Nov 6 23:33:49.524475 containerd[1481]: time="2025-11-06T23:33:49.524407193Z" level=info msg="ImageCreate event name:\"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:33:49.528691 containerd[1481]: time="2025-11-06T23:33:49.528614559Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:33:49.531769 containerd[1481]: time="2025-11-06T23:33:49.530554335Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.9\" with image id \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\", size \"28834515\" in 1.785098073s" Nov 6 23:33:49.531769 containerd[1481]: time="2025-11-06T23:33:49.530645715Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\"" Nov 6 23:33:49.532393 containerd[1481]: time="2025-11-06T23:33:49.532361851Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\"" Nov 6 23:33:50.927958 containerd[1481]: time="2025-11-06T23:33:50.927896166Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.9\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:33:50.929166 containerd[1481]: time="2025-11-06T23:33:50.929082432Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.9: active requests=0, bytes read=24787027" Nov 6 23:33:50.929702 containerd[1481]: time="2025-11-06T23:33:50.929677876Z" level=info msg="ImageCreate event name:\"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:33:50.932758 containerd[1481]: time="2025-11-06T23:33:50.932381081Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:33:50.933601 containerd[1481]: time="2025-11-06T23:33:50.933568478Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.9\" with image id \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\", size \"26421706\" in 1.401170834s" Nov 6 23:33:50.933702 containerd[1481]: time="2025-11-06T23:33:50.933687183Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\"" Nov 6 23:33:50.934291 containerd[1481]: time="2025-11-06T23:33:50.934265183Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\"" Nov 6 23:33:52.020025 containerd[1481]: time="2025-11-06T23:33:52.019969920Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:33:52.021456 containerd[1481]: time="2025-11-06T23:33:52.021398005Z" level=info 
msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.9: active requests=0, bytes read=19176289" Nov 6 23:33:52.022171 containerd[1481]: time="2025-11-06T23:33:52.022136847Z" level=info msg="ImageCreate event name:\"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:33:52.024780 containerd[1481]: time="2025-11-06T23:33:52.024709859Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:33:52.026753 containerd[1481]: time="2025-11-06T23:33:52.026692466Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.9\" with image id \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\", size \"20810986\" in 1.092397007s" Nov 6 23:33:52.026753 containerd[1481]: time="2025-11-06T23:33:52.026755141Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\"" Nov 6 23:33:52.027237 containerd[1481]: time="2025-11-06T23:33:52.027206175Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\"" Nov 6 23:33:53.146250 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount484644322.mount: Deactivated successfully. 
Nov 6 23:33:53.639196 containerd[1481]: time="2025-11-06T23:33:53.638928804Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:33:53.640075 containerd[1481]: time="2025-11-06T23:33:53.640017949Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.9: active requests=0, bytes read=30924206" Nov 6 23:33:53.640661 containerd[1481]: time="2025-11-06T23:33:53.640628711Z" level=info msg="ImageCreate event name:\"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:33:53.642675 containerd[1481]: time="2025-11-06T23:33:53.642640047Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:33:53.643194 containerd[1481]: time="2025-11-06T23:33:53.643166191Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.9\" with image id \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\", repo tag \"registry.k8s.io/kube-proxy:v1.32.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\", size \"30923225\" in 1.615930554s" Nov 6 23:33:53.643257 containerd[1481]: time="2025-11-06T23:33:53.643201382Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\"" Nov 6 23:33:53.643832 containerd[1481]: time="2025-11-06T23:33:53.643796677Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Nov 6 23:33:54.082517 systemd-resolved[1332]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2. 
Nov 6 23:33:54.294337 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1481958731.mount: Deactivated successfully. Nov 6 23:33:54.955334 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 6 23:33:54.965024 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 23:33:55.135867 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 23:33:55.144441 (kubelet)[2019]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 6 23:33:55.217284 kubelet[2019]: E1106 23:33:55.217153 2019 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 6 23:33:55.221358 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 6 23:33:55.221505 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 6 23:33:55.223127 systemd[1]: kubelet.service: Consumed 195ms CPU time, 110.1M memory peak. 
Nov 6 23:33:55.295767 containerd[1481]: time="2025-11-06T23:33:55.295626917Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:33:55.296784 containerd[1481]: time="2025-11-06T23:33:55.296448843Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Nov 6 23:33:55.297761 containerd[1481]: time="2025-11-06T23:33:55.297537343Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:33:55.304773 containerd[1481]: time="2025-11-06T23:33:55.302988153Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:33:55.307754 containerd[1481]: time="2025-11-06T23:33:55.307303832Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.663454684s" Nov 6 23:33:55.307894 containerd[1481]: time="2025-11-06T23:33:55.307761692Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Nov 6 23:33:55.311649 containerd[1481]: time="2025-11-06T23:33:55.311609937Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Nov 6 23:33:55.917304 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3895559617.mount: Deactivated successfully. 
Nov 6 23:33:55.922648 containerd[1481]: time="2025-11-06T23:33:55.922582427Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:33:55.923300 containerd[1481]: time="2025-11-06T23:33:55.922842159Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Nov 6 23:33:55.924075 containerd[1481]: time="2025-11-06T23:33:55.924046666Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:33:55.926759 containerd[1481]: time="2025-11-06T23:33:55.926692594Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:33:55.927962 containerd[1481]: time="2025-11-06T23:33:55.927921048Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 616.275314ms" Nov 6 23:33:55.927962 containerd[1481]: time="2025-11-06T23:33:55.927954729Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Nov 6 23:33:55.929076 containerd[1481]: time="2025-11-06T23:33:55.929045948Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Nov 6 23:33:56.512856 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount645416791.mount: Deactivated successfully. Nov 6 23:33:57.137022 systemd-resolved[1332]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. 
Nov 6 23:33:58.380030 containerd[1481]: time="2025-11-06T23:33:58.379959861Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:33:58.381906 containerd[1481]: time="2025-11-06T23:33:58.381796792Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682056" Nov 6 23:33:58.382779 containerd[1481]: time="2025-11-06T23:33:58.382700516Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:33:58.387892 containerd[1481]: time="2025-11-06T23:33:58.387809718Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:33:58.389665 containerd[1481]: time="2025-11-06T23:33:58.389492057Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.460291443s" Nov 6 23:33:58.389665 containerd[1481]: time="2025-11-06T23:33:58.389534985Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Nov 6 23:34:01.669641 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 23:34:01.669824 systemd[1]: kubelet.service: Consumed 195ms CPU time, 110.1M memory peak. Nov 6 23:34:01.676043 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 23:34:01.714834 systemd[1]: Reload requested from client PID 2112 ('systemctl') (unit session-7.scope)... 
Nov 6 23:34:01.714854 systemd[1]: Reloading... Nov 6 23:34:01.864841 zram_generator::config[2156]: No configuration found. Nov 6 23:34:02.010174 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 6 23:34:02.129453 systemd[1]: Reloading finished in 414 ms. Nov 6 23:34:02.190166 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 23:34:02.202613 (kubelet)[2200]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 6 23:34:02.206143 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 23:34:02.208216 systemd[1]: kubelet.service: Deactivated successfully. Nov 6 23:34:02.208603 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 23:34:02.208697 systemd[1]: kubelet.service: Consumed 124ms CPU time, 98.3M memory peak. Nov 6 23:34:02.215212 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 23:34:02.379064 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 23:34:02.387993 (kubelet)[2213]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 6 23:34:02.448773 kubelet[2213]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 6 23:34:02.448773 kubelet[2213]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Nov 6 23:34:02.448773 kubelet[2213]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 6 23:34:02.448773 kubelet[2213]: I1106 23:34:02.447662 2213 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 6 23:34:02.766703 kubelet[2213]: I1106 23:34:02.766106 2213 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Nov 6 23:34:02.767850 kubelet[2213]: I1106 23:34:02.767793 2213 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 6 23:34:02.768769 kubelet[2213]: I1106 23:34:02.768509 2213 server.go:954] "Client rotation is on, will bootstrap in background" Nov 6 23:34:02.798023 kubelet[2213]: I1106 23:34:02.797663 2213 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 6 23:34:02.802925 kubelet[2213]: E1106 23:34:02.802821 2213 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://147.182.203.129:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 147.182.203.129:6443: connect: connection refused" logger="UnhandledError" Nov 6 23:34:02.809219 kubelet[2213]: E1106 23:34:02.809172 2213 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 6 23:34:02.809219 kubelet[2213]: I1106 23:34:02.809221 2213 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." 
Nov 6 23:34:02.813560 kubelet[2213]: I1106 23:34:02.813509 2213 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Nov 6 23:34:02.815433 kubelet[2213]: I1106 23:34:02.815326 2213 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Nov 6 23:34:02.815714 kubelet[2213]: I1106 23:34:02.815426 2213 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230.2.4-n-4ba84db3ac","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Nov 6 23:34:02.815866 kubelet[2213]: I1106 23:34:02.815729 2213 topology_manager.go:138] "Creating topology manager with none policy"
Nov 6 23:34:02.815866 kubelet[2213]: I1106 23:34:02.815765 2213 container_manager_linux.go:304] "Creating device plugin manager"
Nov 6 23:34:02.817329 kubelet[2213]: I1106 23:34:02.817292 2213 state_mem.go:36] "Initialized new in-memory state store"
Nov 6 23:34:02.821964 kubelet[2213]: I1106 23:34:02.821882 2213 kubelet.go:446] "Attempting to sync node with API server"
Nov 6 23:34:02.822104 kubelet[2213]: I1106 23:34:02.821977 2213 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Nov 6 23:34:02.822104 kubelet[2213]: I1106 23:34:02.822013 2213 kubelet.go:352] "Adding apiserver pod source"
Nov 6 23:34:02.822104 kubelet[2213]: I1106 23:34:02.822029 2213 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Nov 6 23:34:02.828792 kubelet[2213]: W1106 23:34:02.828400 2213 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://147.182.203.129:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.2.4-n-4ba84db3ac&limit=500&resourceVersion=0": dial tcp 147.182.203.129:6443: connect: connection refused
Nov 6 23:34:02.828792 kubelet[2213]: E1106 23:34:02.828475 2213 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://147.182.203.129:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.2.4-n-4ba84db3ac&limit=500&resourceVersion=0\": dial tcp 147.182.203.129:6443: connect: connection refused" logger="UnhandledError"
Nov 6 23:34:02.829192 kubelet[2213]: W1106 23:34:02.829152 2213 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://147.182.203.129:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 147.182.203.129:6443: connect: connection refused
Nov 6 23:34:02.829284 kubelet[2213]: E1106 23:34:02.829270 2213 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://147.182.203.129:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 147.182.203.129:6443: connect: connection refused" logger="UnhandledError"
Nov 6 23:34:02.829430 kubelet[2213]: I1106 23:34:02.829417 2213 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Nov 6 23:34:02.835973 kubelet[2213]: I1106 23:34:02.835923 2213 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Nov 6 23:34:02.836560 kubelet[2213]: W1106 23:34:02.836518 2213 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Nov 6 23:34:02.840364 kubelet[2213]: I1106 23:34:02.840016 2213 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Nov 6 23:34:02.840364 kubelet[2213]: I1106 23:34:02.840069 2213 server.go:1287] "Started kubelet"
Nov 6 23:34:02.849164 kubelet[2213]: I1106 23:34:02.848847 2213 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Nov 6 23:34:02.851763 kubelet[2213]: I1106 23:34:02.849988 2213 server.go:479] "Adding debug handlers to kubelet server"
Nov 6 23:34:02.851763 kubelet[2213]: I1106 23:34:02.851566 2213 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Nov 6 23:34:02.851966 kubelet[2213]: I1106 23:34:02.851854 2213 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Nov 6 23:34:02.852446 kubelet[2213]: I1106 23:34:02.852424 2213 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Nov 6 23:34:02.854192 kubelet[2213]: E1106 23:34:02.852956 2213 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://147.182.203.129:6443/api/v1/namespaces/default/events\": dial tcp 147.182.203.129:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4230.2.4-n-4ba84db3ac.18758efc2809a251 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230.2.4-n-4ba84db3ac,UID:ci-4230.2.4-n-4ba84db3ac,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230.2.4-n-4ba84db3ac,},FirstTimestamp:2025-11-06 23:34:02.840040017 +0000 UTC m=+0.446428821,LastTimestamp:2025-11-06 23:34:02.840040017 +0000 UTC m=+0.446428821,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230.2.4-n-4ba84db3ac,}"
Nov 6 23:34:02.854720 kubelet[2213]: I1106 23:34:02.854701 2213 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Nov 6 23:34:02.857301 kubelet[2213]: E1106 23:34:02.856940 2213 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.4-n-4ba84db3ac\" not found"
Nov 6 23:34:02.857301 kubelet[2213]: I1106 23:34:02.856984 2213 volume_manager.go:297] "Starting Kubelet Volume Manager"
Nov 6 23:34:02.857301 kubelet[2213]: I1106 23:34:02.857184 2213 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Nov 6 23:34:02.857301 kubelet[2213]: I1106 23:34:02.857288 2213 reconciler.go:26] "Reconciler: start to sync state"
Nov 6 23:34:02.858698 kubelet[2213]: W1106 23:34:02.857721 2213 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://147.182.203.129:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 147.182.203.129:6443: connect: connection refused
Nov 6 23:34:02.858698 kubelet[2213]: E1106 23:34:02.857784 2213 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://147.182.203.129:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 147.182.203.129:6443: connect: connection refused" logger="UnhandledError"
Nov 6 23:34:02.858698 kubelet[2213]: E1106 23:34:02.858002 2213 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.182.203.129:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.2.4-n-4ba84db3ac?timeout=10s\": dial tcp 147.182.203.129:6443: connect: connection refused" interval="200ms"
Nov 6 23:34:02.864235 kubelet[2213]: I1106 23:34:02.864210 2213 factory.go:221] Registration of the containerd container factory successfully
Nov 6 23:34:02.864397 kubelet[2213]: I1106 23:34:02.864386 2213 factory.go:221] Registration of the systemd container factory successfully
Nov 6 23:34:02.864579 kubelet[2213]: I1106 23:34:02.864563 2213 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Nov 6 23:34:02.879616 kubelet[2213]: I1106 23:34:02.879559 2213 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Nov 6 23:34:02.881157 kubelet[2213]: I1106 23:34:02.881120 2213 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Nov 6 23:34:02.881316 kubelet[2213]: I1106 23:34:02.881306 2213 status_manager.go:227] "Starting to sync pod status with apiserver"
Nov 6 23:34:02.881386 kubelet[2213]: I1106 23:34:02.881377 2213 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Nov 6 23:34:02.881430 kubelet[2213]: I1106 23:34:02.881424 2213 kubelet.go:2382] "Starting kubelet main sync loop" Nov 6 23:34:02.881533 kubelet[2213]: E1106 23:34:02.881516 2213 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 6 23:34:02.891334 kubelet[2213]: W1106 23:34:02.891277 2213 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://147.182.203.129:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 147.182.203.129:6443: connect: connection refused Nov 6 23:34:02.891648 kubelet[2213]: E1106 23:34:02.891537 2213 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://147.182.203.129:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 147.182.203.129:6443: connect: connection refused" logger="UnhandledError" Nov 6 23:34:02.896799 kubelet[2213]: I1106 23:34:02.896711 2213 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 6 23:34:02.897108 kubelet[2213]: I1106 23:34:02.896975 2213 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 6 23:34:02.897108 kubelet[2213]: I1106 23:34:02.897005 2213 state_mem.go:36] "Initialized new in-memory state store" Nov 6 23:34:02.898819 kubelet[2213]: I1106 23:34:02.898688 2213 policy_none.go:49] "None policy: Start" Nov 6 23:34:02.898819 kubelet[2213]: I1106 23:34:02.898709 2213 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 6 23:34:02.898819 kubelet[2213]: I1106 23:34:02.898723 2213 state_mem.go:35] "Initializing new in-memory state store" Nov 6 23:34:02.906200 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Nov 6 23:34:02.918955 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Nov 6 23:34:02.930037 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Nov 6 23:34:02.931560 kubelet[2213]: I1106 23:34:02.931524 2213 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 6 23:34:02.932501 kubelet[2213]: I1106 23:34:02.931718 2213 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 6 23:34:02.932501 kubelet[2213]: I1106 23:34:02.931754 2213 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 6 23:34:02.932501 kubelet[2213]: I1106 23:34:02.932288 2213 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 6 23:34:02.933393 kubelet[2213]: E1106 23:34:02.933374 2213 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 6 23:34:02.933512 kubelet[2213]: E1106 23:34:02.933502 2213 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4230.2.4-n-4ba84db3ac\" not found" Nov 6 23:34:02.992285 systemd[1]: Created slice kubepods-burstable-pod2c54bc2e8ee1dbcad563163cad9c4be2.slice - libcontainer container kubepods-burstable-pod2c54bc2e8ee1dbcad563163cad9c4be2.slice. Nov 6 23:34:03.013200 kubelet[2213]: E1106 23:34:03.012959 2213 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.4-n-4ba84db3ac\" not found" node="ci-4230.2.4-n-4ba84db3ac" Nov 6 23:34:03.017152 systemd[1]: Created slice kubepods-burstable-podadc8d0f4d2ebaa6fe5c515b1100604dc.slice - libcontainer container kubepods-burstable-podadc8d0f4d2ebaa6fe5c515b1100604dc.slice. 
Nov 6 23:34:03.022272 kubelet[2213]: E1106 23:34:03.021900 2213 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.4-n-4ba84db3ac\" not found" node="ci-4230.2.4-n-4ba84db3ac"
Nov 6 23:34:03.025314 systemd[1]: Created slice kubepods-burstable-podb2049d7b487aad8f10a0078febb7a888.slice - libcontainer container kubepods-burstable-podb2049d7b487aad8f10a0078febb7a888.slice.
Nov 6 23:34:03.030767 kubelet[2213]: E1106 23:34:03.029962 2213 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.4-n-4ba84db3ac\" not found" node="ci-4230.2.4-n-4ba84db3ac"
Nov 6 23:34:03.035383 kubelet[2213]: I1106 23:34:03.035340 2213 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.4-n-4ba84db3ac"
Nov 6 23:34:03.036090 kubelet[2213]: E1106 23:34:03.036047 2213 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://147.182.203.129:6443/api/v1/nodes\": dial tcp 147.182.203.129:6443: connect: connection refused" node="ci-4230.2.4-n-4ba84db3ac"
Nov 6 23:34:03.058834 kubelet[2213]: E1106 23:34:03.058766 2213 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.182.203.129:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.2.4-n-4ba84db3ac?timeout=10s\": dial tcp 147.182.203.129:6443: connect: connection refused" interval="400ms"
Nov 6 23:34:03.059111 kubelet[2213]: I1106 23:34:03.058930 2213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/adc8d0f4d2ebaa6fe5c515b1100604dc-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230.2.4-n-4ba84db3ac\" (UID: \"adc8d0f4d2ebaa6fe5c515b1100604dc\") " pod="kube-system/kube-controller-manager-ci-4230.2.4-n-4ba84db3ac"
Nov 6 23:34:03.059111 kubelet[2213]: I1106 23:34:03.058957 2213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2c54bc2e8ee1dbcad563163cad9c4be2-k8s-certs\") pod \"kube-apiserver-ci-4230.2.4-n-4ba84db3ac\" (UID: \"2c54bc2e8ee1dbcad563163cad9c4be2\") " pod="kube-system/kube-apiserver-ci-4230.2.4-n-4ba84db3ac"
Nov 6 23:34:03.059111 kubelet[2213]: I1106 23:34:03.058982 2213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2c54bc2e8ee1dbcad563163cad9c4be2-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230.2.4-n-4ba84db3ac\" (UID: \"2c54bc2e8ee1dbcad563163cad9c4be2\") " pod="kube-system/kube-apiserver-ci-4230.2.4-n-4ba84db3ac"
Nov 6 23:34:03.059111 kubelet[2213]: I1106 23:34:03.058999 2213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/adc8d0f4d2ebaa6fe5c515b1100604dc-ca-certs\") pod \"kube-controller-manager-ci-4230.2.4-n-4ba84db3ac\" (UID: \"adc8d0f4d2ebaa6fe5c515b1100604dc\") " pod="kube-system/kube-controller-manager-ci-4230.2.4-n-4ba84db3ac"
Nov 6 23:34:03.059111 kubelet[2213]: I1106 23:34:03.059017 2213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/adc8d0f4d2ebaa6fe5c515b1100604dc-kubeconfig\") pod \"kube-controller-manager-ci-4230.2.4-n-4ba84db3ac\" (UID: \"adc8d0f4d2ebaa6fe5c515b1100604dc\") " pod="kube-system/kube-controller-manager-ci-4230.2.4-n-4ba84db3ac"
Nov 6 23:34:03.059258 kubelet[2213]: I1106 23:34:03.059033 2213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2c54bc2e8ee1dbcad563163cad9c4be2-ca-certs\") pod \"kube-apiserver-ci-4230.2.4-n-4ba84db3ac\" (UID: \"2c54bc2e8ee1dbcad563163cad9c4be2\") " pod="kube-system/kube-apiserver-ci-4230.2.4-n-4ba84db3ac"
Nov 6 23:34:03.059258 kubelet[2213]: I1106 23:34:03.059049 2213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/adc8d0f4d2ebaa6fe5c515b1100604dc-flexvolume-dir\") pod \"kube-controller-manager-ci-4230.2.4-n-4ba84db3ac\" (UID: \"adc8d0f4d2ebaa6fe5c515b1100604dc\") " pod="kube-system/kube-controller-manager-ci-4230.2.4-n-4ba84db3ac"
Nov 6 23:34:03.059258 kubelet[2213]: I1106 23:34:03.059065 2213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/adc8d0f4d2ebaa6fe5c515b1100604dc-k8s-certs\") pod \"kube-controller-manager-ci-4230.2.4-n-4ba84db3ac\" (UID: \"adc8d0f4d2ebaa6fe5c515b1100604dc\") " pod="kube-system/kube-controller-manager-ci-4230.2.4-n-4ba84db3ac"
Nov 6 23:34:03.059258 kubelet[2213]: I1106 23:34:03.059080 2213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b2049d7b487aad8f10a0078febb7a888-kubeconfig\") pod \"kube-scheduler-ci-4230.2.4-n-4ba84db3ac\" (UID: \"b2049d7b487aad8f10a0078febb7a888\") " pod="kube-system/kube-scheduler-ci-4230.2.4-n-4ba84db3ac"
Nov 6 23:34:03.239351 kubelet[2213]: I1106 23:34:03.239303 2213 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.4-n-4ba84db3ac"
Nov 6 23:34:03.240001 kubelet[2213]: E1106 23:34:03.239948 2213 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://147.182.203.129:6443/api/v1/nodes\": dial tcp 147.182.203.129:6443: connect: connection refused" node="ci-4230.2.4-n-4ba84db3ac"
Nov 6 23:34:03.314292 kubelet[2213]: E1106 23:34:03.314128 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 6 23:34:03.315447 containerd[1481]: time="2025-11-06T23:34:03.315400550Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230.2.4-n-4ba84db3ac,Uid:2c54bc2e8ee1dbcad563163cad9c4be2,Namespace:kube-system,Attempt:0,}"
Nov 6 23:34:03.318228 systemd-resolved[1332]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.3.
Nov 6 23:34:03.322708 kubelet[2213]: E1106 23:34:03.322675 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 6 23:34:03.323498 containerd[1481]: time="2025-11-06T23:34:03.323404016Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230.2.4-n-4ba84db3ac,Uid:adc8d0f4d2ebaa6fe5c515b1100604dc,Namespace:kube-system,Attempt:0,}"
Nov 6 23:34:03.332197 kubelet[2213]: E1106 23:34:03.331881 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 6 23:34:03.333328 containerd[1481]: time="2025-11-06T23:34:03.333287974Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230.2.4-n-4ba84db3ac,Uid:b2049d7b487aad8f10a0078febb7a888,Namespace:kube-system,Attempt:0,}"
Nov 6 23:34:03.460245 kubelet[2213]: E1106 23:34:03.460182 2213 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.182.203.129:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.2.4-n-4ba84db3ac?timeout=10s\": dial tcp 147.182.203.129:6443: connect: connection refused" interval="800ms"
Nov 6 23:34:03.642170 kubelet[2213]: I1106 23:34:03.641331 2213 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.4-n-4ba84db3ac"
Nov 6 23:34:03.642170 kubelet[2213]: E1106 23:34:03.641820 2213 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://147.182.203.129:6443/api/v1/nodes\": dial tcp 147.182.203.129:6443: connect: connection refused" node="ci-4230.2.4-n-4ba84db3ac"
Nov 6 23:34:03.767517 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3289178500.mount: Deactivated successfully.
Nov 6 23:34:03.772809 containerd[1481]: time="2025-11-06T23:34:03.772718174Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 6 23:34:03.774664 containerd[1481]: time="2025-11-06T23:34:03.774623842Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 6 23:34:03.776156 containerd[1481]: time="2025-11-06T23:34:03.776103241Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Nov 6 23:34:03.776841 containerd[1481]: time="2025-11-06T23:34:03.776639882Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Nov 6 23:34:03.778699 containerd[1481]: time="2025-11-06T23:34:03.778601278Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Nov 6 23:34:03.778829 containerd[1481]: time="2025-11-06T23:34:03.778706501Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 6 23:34:03.786177 containerd[1481]: time="2025-11-06T23:34:03.785862532Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 6 23:34:03.787642 containerd[1481]: time="2025-11-06T23:34:03.787109647Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 453.720264ms"
Nov 6 23:34:03.789443 containerd[1481]: time="2025-11-06T23:34:03.789181062Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 465.526307ms"
Nov 6 23:34:03.792149 containerd[1481]: time="2025-11-06T23:34:03.791954545Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 6 23:34:03.793169 containerd[1481]: time="2025-11-06T23:34:03.793123296Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 476.402346ms"
Nov 6 23:34:03.947599 containerd[1481]: time="2025-11-06T23:34:03.944000185Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 6 23:34:03.947599 containerd[1481]: time="2025-11-06T23:34:03.944089532Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..."
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 6 23:34:03.947599 containerd[1481]: time="2025-11-06T23:34:03.944108403Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 6 23:34:03.947599 containerd[1481]: time="2025-11-06T23:34:03.944216229Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 6 23:34:03.952323 containerd[1481]: time="2025-11-06T23:34:03.951640639Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 6 23:34:03.952323 containerd[1481]: time="2025-11-06T23:34:03.951717320Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 6 23:34:03.952323 containerd[1481]: time="2025-11-06T23:34:03.951753755Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 6 23:34:03.952323 containerd[1481]: time="2025-11-06T23:34:03.951841594Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 6 23:34:03.952323 containerd[1481]: time="2025-11-06T23:34:03.951923328Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 6 23:34:03.952323 containerd[1481]: time="2025-11-06T23:34:03.952034106Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 6 23:34:03.952323 containerd[1481]: time="2025-11-06T23:34:03.952061226Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 6 23:34:03.952323 containerd[1481]: time="2025-11-06T23:34:03.952223172Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 6 23:34:03.982246 systemd[1]: Started cri-containerd-2613078e5844f169c958ce73dfcfeaeb1a8a49bdd3219826df3b3acfd8f8f0a7.scope - libcontainer container 2613078e5844f169c958ce73dfcfeaeb1a8a49bdd3219826df3b3acfd8f8f0a7. Nov 6 23:34:03.993286 systemd[1]: Started cri-containerd-97bfeddfd5ee4192d8796ed233d515f759d5b275725b7988d0d6486e56919611.scope - libcontainer container 97bfeddfd5ee4192d8796ed233d515f759d5b275725b7988d0d6486e56919611. Nov 6 23:34:04.006917 systemd[1]: Started cri-containerd-4b403d53d455319611c3fd4fa7b809052130719ca568039e813b117200a95f59.scope - libcontainer container 4b403d53d455319611c3fd4fa7b809052130719ca568039e813b117200a95f59. Nov 6 23:34:04.085597 containerd[1481]: time="2025-11-06T23:34:04.083249327Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230.2.4-n-4ba84db3ac,Uid:b2049d7b487aad8f10a0078febb7a888,Namespace:kube-system,Attempt:0,} returns sandbox id \"97bfeddfd5ee4192d8796ed233d515f759d5b275725b7988d0d6486e56919611\"" Nov 6 23:34:04.087911 kubelet[2213]: E1106 23:34:04.086954 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 23:34:04.096820 containerd[1481]: time="2025-11-06T23:34:04.096774237Z" level=info msg="CreateContainer within sandbox \"97bfeddfd5ee4192d8796ed233d515f759d5b275725b7988d0d6486e56919611\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 6 23:34:04.110104 containerd[1481]: time="2025-11-06T23:34:04.109911210Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-ci-4230.2.4-n-4ba84db3ac,Uid:adc8d0f4d2ebaa6fe5c515b1100604dc,Namespace:kube-system,Attempt:0,} returns sandbox id \"4b403d53d455319611c3fd4fa7b809052130719ca568039e813b117200a95f59\"" Nov 6 23:34:04.112783 kubelet[2213]: E1106 23:34:04.112560 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 23:34:04.118242 containerd[1481]: time="2025-11-06T23:34:04.118131263Z" level=info msg="CreateContainer within sandbox \"97bfeddfd5ee4192d8796ed233d515f759d5b275725b7988d0d6486e56919611\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"59b69f39afda0706c31aa0219726fdf377a394de233ae4c802eca58d3e2c4f8a\"" Nov 6 23:34:04.121104 containerd[1481]: time="2025-11-06T23:34:04.119834519Z" level=info msg="StartContainer for \"59b69f39afda0706c31aa0219726fdf377a394de233ae4c802eca58d3e2c4f8a\"" Nov 6 23:34:04.121402 containerd[1481]: time="2025-11-06T23:34:04.121376142Z" level=info msg="CreateContainer within sandbox \"4b403d53d455319611c3fd4fa7b809052130719ca568039e813b117200a95f59\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 6 23:34:04.130724 containerd[1481]: time="2025-11-06T23:34:04.130667859Z" level=info msg="CreateContainer within sandbox \"4b403d53d455319611c3fd4fa7b809052130719ca568039e813b117200a95f59\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"2e93182afd0fd4902a994adfb37ed37a24037df4c4d725b5ff9e53c508e4d1dd\"" Nov 6 23:34:04.131475 containerd[1481]: time="2025-11-06T23:34:04.131447582Z" level=info msg="StartContainer for \"2e93182afd0fd4902a994adfb37ed37a24037df4c4d725b5ff9e53c508e4d1dd\"" Nov 6 23:34:04.142527 containerd[1481]: time="2025-11-06T23:34:04.142482040Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-ci-4230.2.4-n-4ba84db3ac,Uid:2c54bc2e8ee1dbcad563163cad9c4be2,Namespace:kube-system,Attempt:0,} returns sandbox id \"2613078e5844f169c958ce73dfcfeaeb1a8a49bdd3219826df3b3acfd8f8f0a7\"" Nov 6 23:34:04.143692 kubelet[2213]: E1106 23:34:04.143649 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 23:34:04.147662 containerd[1481]: time="2025-11-06T23:34:04.147627948Z" level=info msg="CreateContainer within sandbox \"2613078e5844f169c958ce73dfcfeaeb1a8a49bdd3219826df3b3acfd8f8f0a7\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 6 23:34:04.164520 containerd[1481]: time="2025-11-06T23:34:04.164477211Z" level=info msg="CreateContainer within sandbox \"2613078e5844f169c958ce73dfcfeaeb1a8a49bdd3219826df3b3acfd8f8f0a7\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"5b2662fad778002c440dbfafa6282bdda025110c75cfe3390acd801d2ca80376\"" Nov 6 23:34:04.167220 containerd[1481]: time="2025-11-06T23:34:04.167185957Z" level=info msg="StartContainer for \"5b2662fad778002c440dbfafa6282bdda025110c75cfe3390acd801d2ca80376\"" Nov 6 23:34:04.190067 systemd[1]: Started cri-containerd-59b69f39afda0706c31aa0219726fdf377a394de233ae4c802eca58d3e2c4f8a.scope - libcontainer container 59b69f39afda0706c31aa0219726fdf377a394de233ae4c802eca58d3e2c4f8a. 
Nov 6 23:34:04.212174 kubelet[2213]: W1106 23:34:04.211941 2213 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://147.182.203.129:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 147.182.203.129:6443: connect: connection refused
Nov 6 23:34:04.213436 systemd[1]: Started cri-containerd-2e93182afd0fd4902a994adfb37ed37a24037df4c4d725b5ff9e53c508e4d1dd.scope - libcontainer container 2e93182afd0fd4902a994adfb37ed37a24037df4c4d725b5ff9e53c508e4d1dd.
Nov 6 23:34:04.214267 kubelet[2213]: E1106 23:34:04.214176 2213 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://147.182.203.129:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 147.182.203.129:6443: connect: connection refused" logger="UnhandledError"
Nov 6 23:34:04.240932 systemd[1]: Started cri-containerd-5b2662fad778002c440dbfafa6282bdda025110c75cfe3390acd801d2ca80376.scope - libcontainer container 5b2662fad778002c440dbfafa6282bdda025110c75cfe3390acd801d2ca80376.
Nov 6 23:34:04.263315 kubelet[2213]: E1106 23:34:04.263218 2213 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.182.203.129:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.2.4-n-4ba84db3ac?timeout=10s\": dial tcp 147.182.203.129:6443: connect: connection refused" interval="1.6s"
Nov 6 23:34:04.317633 kubelet[2213]: W1106 23:34:04.317233 2213 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://147.182.203.129:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.2.4-n-4ba84db3ac&limit=500&resourceVersion=0": dial tcp 147.182.203.129:6443: connect: connection refused
Nov 6 23:34:04.317633 kubelet[2213]: E1106 23:34:04.317335 2213 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://147.182.203.129:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.2.4-n-4ba84db3ac&limit=500&resourceVersion=0\": dial tcp 147.182.203.129:6443: connect: connection refused" logger="UnhandledError"
Nov 6 23:34:04.326608 containerd[1481]: time="2025-11-06T23:34:04.326553569Z" level=info msg="StartContainer for \"59b69f39afda0706c31aa0219726fdf377a394de233ae4c802eca58d3e2c4f8a\" returns successfully"
Nov 6 23:34:04.333520 containerd[1481]: time="2025-11-06T23:34:04.333268613Z" level=info msg="StartContainer for \"5b2662fad778002c440dbfafa6282bdda025110c75cfe3390acd801d2ca80376\" returns successfully"
Nov 6 23:34:04.338880 containerd[1481]: time="2025-11-06T23:34:04.338421157Z" level=info msg="StartContainer for \"2e93182afd0fd4902a994adfb37ed37a24037df4c4d725b5ff9e53c508e4d1dd\" returns successfully"
Nov 6 23:34:04.345565 kubelet[2213]: W1106 23:34:04.345391 2213 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://147.182.203.129:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 147.182.203.129:6443: connect: connection refused
Nov 6 23:34:04.345713 kubelet[2213]: E1106 23:34:04.345617 2213 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://147.182.203.129:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 147.182.203.129:6443: connect: connection refused" logger="UnhandledError"
Nov 6 23:34:04.411512 kubelet[2213]: W1106 23:34:04.411413 2213 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://147.182.203.129:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 147.182.203.129:6443: connect: connection refused
Nov 6 23:34:04.411512 kubelet[2213]: E1106 23:34:04.411513 2213 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://147.182.203.129:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 147.182.203.129:6443: connect: connection refused" logger="UnhandledError"
Nov 6 23:34:04.444434 kubelet[2213]: I1106 23:34:04.444398 2213 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.4-n-4ba84db3ac"
Nov 6 23:34:04.447028 kubelet[2213]: E1106 23:34:04.446957 2213 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://147.182.203.129:6443/api/v1/nodes\": dial tcp 147.182.203.129:6443: connect: connection refused" node="ci-4230.2.4-n-4ba84db3ac"
Nov 6 23:34:04.911575 kubelet[2213]: E1106 23:34:04.911545 2213 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.4-n-4ba84db3ac\" not found" node="ci-4230.2.4-n-4ba84db3ac"
Nov 6 23:34:04.912137 kubelet[2213]: E1106 23:34:04.911673 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 6 23:34:04.916241 kubelet[2213]: E1106 23:34:04.916213 2213 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.4-n-4ba84db3ac\" not found" node="ci-4230.2.4-n-4ba84db3ac"
Nov 6 23:34:04.916391 kubelet[2213]: E1106 23:34:04.916342 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 6 23:34:04.918048 kubelet[2213]: E1106 23:34:04.918022 2213 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.4-n-4ba84db3ac\" not found" node="ci-4230.2.4-n-4ba84db3ac"
Nov 6 23:34:04.918197 kubelet[2213]: E1106 23:34:04.918127 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 6 23:34:05.920115 kubelet[2213]: E1106 23:34:05.920082 2213 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.4-n-4ba84db3ac\" not found" node="ci-4230.2.4-n-4ba84db3ac"
Nov 6 23:34:05.920618 kubelet[2213]: E1106 23:34:05.920208 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 6 23:34:05.920618 kubelet[2213]: E1106 23:34:05.920420 2213 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.4-n-4ba84db3ac\" not found" node="ci-4230.2.4-n-4ba84db3ac"
Nov 6 23:34:05.920618 kubelet[2213]: E1106 23:34:05.920550 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 6 23:34:06.048691 kubelet[2213]: I1106 23:34:06.048661 2213 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.4-n-4ba84db3ac"
Nov 6 23:34:06.348565 kubelet[2213]: E1106 23:34:06.348413 2213 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4230.2.4-n-4ba84db3ac\" not found" node="ci-4230.2.4-n-4ba84db3ac"
Nov 6 23:34:06.464530 kubelet[2213]: I1106 23:34:06.464484 2213 kubelet_node_status.go:78] "Successfully registered node" node="ci-4230.2.4-n-4ba84db3ac"
Nov 6 23:34:06.464530 kubelet[2213]: E1106 23:34:06.464530 2213 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4230.2.4-n-4ba84db3ac\": node \"ci-4230.2.4-n-4ba84db3ac\" not found"
Nov 6 23:34:06.558265 kubelet[2213]: I1106 23:34:06.558212 2213 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230.2.4-n-4ba84db3ac"
Nov 6 23:34:06.564855 kubelet[2213]: E1106 23:34:06.564818 2213 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4230.2.4-n-4ba84db3ac\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4230.2.4-n-4ba84db3ac"
Nov 6 23:34:06.564855 kubelet[2213]: I1106 23:34:06.564852 2213 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230.2.4-n-4ba84db3ac"
Nov 6 23:34:06.576955 kubelet[2213]: E1106 23:34:06.575907 2213 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4230.2.4-n-4ba84db3ac\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4230.2.4-n-4ba84db3ac"
Nov 6 23:34:06.576955 kubelet[2213]: I1106 23:34:06.575947 2213 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4230.2.4-n-4ba84db3ac"
Nov 6 23:34:06.579605 kubelet[2213]: E1106 23:34:06.579568 2213 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4230.2.4-n-4ba84db3ac\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4230.2.4-n-4ba84db3ac"
Nov 6 23:34:06.833125 kubelet[2213]: I1106 23:34:06.832973 2213 apiserver.go:52] "Watching apiserver"
Nov 6 23:34:06.858295 kubelet[2213]: I1106 23:34:06.858232 2213 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Nov 6 23:34:06.921199 kubelet[2213]: I1106 23:34:06.921168 2213 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230.2.4-n-4ba84db3ac"
Nov 6 23:34:06.923656 kubelet[2213]: E1106 23:34:06.923622 2213 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4230.2.4-n-4ba84db3ac\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4230.2.4-n-4ba84db3ac"
Nov 6 23:34:06.923880 kubelet[2213]: E1106 23:34:06.923863 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 6 23:34:08.354867 systemd[1]: Reload requested from client PID 2486 ('systemctl') (unit session-7.scope)...
Nov 6 23:34:08.354891 systemd[1]: Reloading...
Nov 6 23:34:08.483849 zram_generator::config[2530]: No configuration found.
Nov 6 23:34:08.650551 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 6 23:34:08.795619 systemd[1]: Reloading finished in 440 ms.
Nov 6 23:34:08.817761 kubelet[2213]: I1106 23:34:08.814707 2213 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4230.2.4-n-4ba84db3ac"
Nov 6 23:34:08.823911 kubelet[2213]: W1106 23:34:08.823849 2213 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Nov 6 23:34:08.824150 kubelet[2213]: E1106 23:34:08.824130 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 6 23:34:08.829222 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 6 23:34:08.842780 systemd[1]: kubelet.service: Deactivated successfully.
Nov 6 23:34:08.843210 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 6 23:34:08.843303 systemd[1]: kubelet.service: Consumed 870ms CPU time, 131.5M memory peak.
Nov 6 23:34:08.851097 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 6 23:34:09.001216 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 6 23:34:09.015367 (kubelet)[2581]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Nov 6 23:34:09.086276 kubelet[2581]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 6 23:34:09.086841 kubelet[2581]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Nov 6 23:34:09.086911 kubelet[2581]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 6 23:34:09.087152 kubelet[2581]: I1106 23:34:09.087068 2581 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Nov 6 23:34:09.097046 kubelet[2581]: I1106 23:34:09.096988 2581 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Nov 6 23:34:09.097812 kubelet[2581]: I1106 23:34:09.097252 2581 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Nov 6 23:34:09.098034 kubelet[2581]: I1106 23:34:09.098016 2581 server.go:954] "Client rotation is on, will bootstrap in background"
Nov 6 23:34:09.103694 kubelet[2581]: I1106 23:34:09.103656 2581 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Nov 6 23:34:09.117608 kubelet[2581]: I1106 23:34:09.117482 2581 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Nov 6 23:34:09.122458 kubelet[2581]: E1106 23:34:09.122344 2581 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Nov 6 23:34:09.122458 kubelet[2581]: I1106 23:34:09.122462 2581 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Nov 6 23:34:09.128234 kubelet[2581]: I1106 23:34:09.128158 2581 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Nov 6 23:34:09.128511 kubelet[2581]: I1106 23:34:09.128468 2581 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Nov 6 23:34:09.128785 kubelet[2581]: I1106 23:34:09.128521 2581 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230.2.4-n-4ba84db3ac","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Nov 6 23:34:09.128888 kubelet[2581]: I1106 23:34:09.128828 2581 topology_manager.go:138] "Creating topology manager with none policy"
Nov 6 23:34:09.128888 kubelet[2581]: I1106 23:34:09.128842 2581 container_manager_linux.go:304] "Creating device plugin manager"
Nov 6 23:34:09.128946 kubelet[2581]: I1106 23:34:09.128906 2581 state_mem.go:36] "Initialized new in-memory state store"
Nov 6 23:34:09.129079 kubelet[2581]: I1106 23:34:09.129065 2581 kubelet.go:446] "Attempting to sync node with API server"
Nov 6 23:34:09.129118 kubelet[2581]: I1106 23:34:09.129087 2581 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Nov 6 23:34:09.129118 kubelet[2581]: I1106 23:34:09.129110 2581 kubelet.go:352] "Adding apiserver pod source"
Nov 6 23:34:09.130917 kubelet[2581]: I1106 23:34:09.129121 2581 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Nov 6 23:34:09.131415 kubelet[2581]: I1106 23:34:09.131393 2581 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Nov 6 23:34:09.132052 kubelet[2581]: I1106 23:34:09.132018 2581 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Nov 6 23:34:09.132702 kubelet[2581]: I1106 23:34:09.132687 2581 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Nov 6 23:34:09.132861 kubelet[2581]: I1106 23:34:09.132852 2581 server.go:1287] "Started kubelet"
Nov 6 23:34:09.136313 kubelet[2581]: I1106 23:34:09.136229 2581 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Nov 6 23:34:09.139013 kubelet[2581]: I1106 23:34:09.138985 2581 server.go:479] "Adding debug handlers to kubelet server"
Nov 6 23:34:09.143166 kubelet[2581]: I1106 23:34:09.143100 2581 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Nov 6 23:34:09.143592 kubelet[2581]: I1106 23:34:09.143573 2581 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Nov 6 23:34:09.144713 kubelet[2581]: I1106 23:34:09.144421 2581 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Nov 6 23:34:09.155125 kubelet[2581]: I1106 23:34:09.155045 2581 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Nov 6 23:34:09.157382 kubelet[2581]: I1106 23:34:09.156985 2581 volume_manager.go:297] "Starting Kubelet Volume Manager"
Nov 6 23:34:09.157382 kubelet[2581]: E1106 23:34:09.157336 2581 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.4-n-4ba84db3ac\" not found"
Nov 6 23:34:09.159197 kubelet[2581]: I1106 23:34:09.158781 2581 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Nov 6 23:34:09.159197 kubelet[2581]: I1106 23:34:09.158928 2581 reconciler.go:26] "Reconciler: start to sync state"
Nov 6 23:34:09.170109 kubelet[2581]: I1106 23:34:09.170074 2581 factory.go:221] Registration of the systemd container factory successfully
Nov 6 23:34:09.170428 kubelet[2581]: I1106 23:34:09.170377 2581 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Nov 6 23:34:09.176373 kubelet[2581]: I1106 23:34:09.176344 2581 factory.go:221] Registration of the containerd container factory successfully
Nov 6 23:34:09.178268 kubelet[2581]: I1106 23:34:09.177673 2581 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Nov 6 23:34:09.179148 kubelet[2581]: I1106 23:34:09.179115 2581 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Nov 6 23:34:09.179148 kubelet[2581]: I1106 23:34:09.179155 2581 status_manager.go:227] "Starting to sync pod status with apiserver"
Nov 6 23:34:09.179267 kubelet[2581]: I1106 23:34:09.179178 2581 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Nov 6 23:34:09.179267 kubelet[2581]: I1106 23:34:09.179187 2581 kubelet.go:2382] "Starting kubelet main sync loop"
Nov 6 23:34:09.179267 kubelet[2581]: E1106 23:34:09.179240 2581 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Nov 6 23:34:09.193571 kubelet[2581]: E1106 23:34:09.193536 2581 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Nov 6 23:34:09.249640 kubelet[2581]: I1106 23:34:09.249603 2581 cpu_manager.go:221] "Starting CPU manager" policy="none"
Nov 6 23:34:09.249640 kubelet[2581]: I1106 23:34:09.249624 2581 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Nov 6 23:34:09.249640 kubelet[2581]: I1106 23:34:09.249650 2581 state_mem.go:36] "Initialized new in-memory state store"
Nov 6 23:34:09.250006 kubelet[2581]: I1106 23:34:09.249870 2581 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Nov 6 23:34:09.250006 kubelet[2581]: I1106 23:34:09.249880 2581 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Nov 6 23:34:09.250006 kubelet[2581]: I1106 23:34:09.249899 2581 policy_none.go:49] "None policy: Start"
Nov 6 23:34:09.250006 kubelet[2581]: I1106 23:34:09.249909 2581 memory_manager.go:186] "Starting memorymanager" policy="None"
Nov 6 23:34:09.250006 kubelet[2581]: I1106 23:34:09.249922 2581 state_mem.go:35] "Initializing new in-memory state store"
Nov 6 23:34:09.250640 kubelet[2581]: I1106 23:34:09.250160 2581 state_mem.go:75] "Updated machine memory state"
Nov 6 23:34:09.256058 kubelet[2581]: I1106 23:34:09.255947 2581 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Nov 6 23:34:09.257571 kubelet[2581]: I1106 23:34:09.257551 2581 eviction_manager.go:189] "Eviction manager: starting control loop"
Nov 6 23:34:09.258135 kubelet[2581]: I1106 23:34:09.258086 2581 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Nov 6 23:34:09.258559 kubelet[2581]: I1106 23:34:09.258539 2581 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Nov 6 23:34:09.265374 kubelet[2581]: E1106 23:34:09.265337 2581 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Nov 6 23:34:09.283177 kubelet[2581]: I1106 23:34:09.283144 2581 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230.2.4-n-4ba84db3ac"
Nov 6 23:34:09.286016 kubelet[2581]: I1106 23:34:09.285963 2581 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230.2.4-n-4ba84db3ac"
Nov 6 23:34:09.286824 kubelet[2581]: I1106 23:34:09.286732 2581 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4230.2.4-n-4ba84db3ac"
Nov 6 23:34:09.292126 kubelet[2581]: W1106 23:34:09.291544 2581 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Nov 6 23:34:09.295434 kubelet[2581]: W1106 23:34:09.295405 2581 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Nov 6 23:34:09.296672 kubelet[2581]: W1106 23:34:09.296643 2581 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Nov 6 23:34:09.296866 kubelet[2581]: E1106 23:34:09.296714 2581 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4230.2.4-n-4ba84db3ac\" already exists" pod="kube-system/kube-controller-manager-ci-4230.2.4-n-4ba84db3ac"
Nov 6 23:34:09.361037 kubelet[2581]: I1106 23:34:09.360159 2581 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2c54bc2e8ee1dbcad563163cad9c4be2-k8s-certs\") pod \"kube-apiserver-ci-4230.2.4-n-4ba84db3ac\" (UID: \"2c54bc2e8ee1dbcad563163cad9c4be2\") " pod="kube-system/kube-apiserver-ci-4230.2.4-n-4ba84db3ac"
Nov 6 23:34:09.361037 kubelet[2581]: I1106 23:34:09.360225 2581 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2c54bc2e8ee1dbcad563163cad9c4be2-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230.2.4-n-4ba84db3ac\" (UID: \"2c54bc2e8ee1dbcad563163cad9c4be2\") " pod="kube-system/kube-apiserver-ci-4230.2.4-n-4ba84db3ac"
Nov 6 23:34:09.361037 kubelet[2581]: I1106 23:34:09.360275 2581 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/adc8d0f4d2ebaa6fe5c515b1100604dc-kubeconfig\") pod \"kube-controller-manager-ci-4230.2.4-n-4ba84db3ac\" (UID: \"adc8d0f4d2ebaa6fe5c515b1100604dc\") " pod="kube-system/kube-controller-manager-ci-4230.2.4-n-4ba84db3ac"
Nov 6 23:34:09.361037 kubelet[2581]: I1106 23:34:09.360299 2581 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b2049d7b487aad8f10a0078febb7a888-kubeconfig\") pod \"kube-scheduler-ci-4230.2.4-n-4ba84db3ac\" (UID: \"b2049d7b487aad8f10a0078febb7a888\") " pod="kube-system/kube-scheduler-ci-4230.2.4-n-4ba84db3ac"
Nov 6 23:34:09.361037 kubelet[2581]: I1106 23:34:09.360319 2581 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2c54bc2e8ee1dbcad563163cad9c4be2-ca-certs\") pod \"kube-apiserver-ci-4230.2.4-n-4ba84db3ac\" (UID: \"2c54bc2e8ee1dbcad563163cad9c4be2\")
" pod="kube-system/kube-apiserver-ci-4230.2.4-n-4ba84db3ac" Nov 6 23:34:09.361366 kubelet[2581]: I1106 23:34:09.360337 2581 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/adc8d0f4d2ebaa6fe5c515b1100604dc-flexvolume-dir\") pod \"kube-controller-manager-ci-4230.2.4-n-4ba84db3ac\" (UID: \"adc8d0f4d2ebaa6fe5c515b1100604dc\") " pod="kube-system/kube-controller-manager-ci-4230.2.4-n-4ba84db3ac" Nov 6 23:34:09.361366 kubelet[2581]: I1106 23:34:09.360363 2581 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/adc8d0f4d2ebaa6fe5c515b1100604dc-k8s-certs\") pod \"kube-controller-manager-ci-4230.2.4-n-4ba84db3ac\" (UID: \"adc8d0f4d2ebaa6fe5c515b1100604dc\") " pod="kube-system/kube-controller-manager-ci-4230.2.4-n-4ba84db3ac" Nov 6 23:34:09.361366 kubelet[2581]: I1106 23:34:09.360386 2581 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/adc8d0f4d2ebaa6fe5c515b1100604dc-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230.2.4-n-4ba84db3ac\" (UID: \"adc8d0f4d2ebaa6fe5c515b1100604dc\") " pod="kube-system/kube-controller-manager-ci-4230.2.4-n-4ba84db3ac" Nov 6 23:34:09.361366 kubelet[2581]: I1106 23:34:09.360410 2581 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/adc8d0f4d2ebaa6fe5c515b1100604dc-ca-certs\") pod \"kube-controller-manager-ci-4230.2.4-n-4ba84db3ac\" (UID: \"adc8d0f4d2ebaa6fe5c515b1100604dc\") " pod="kube-system/kube-controller-manager-ci-4230.2.4-n-4ba84db3ac" Nov 6 23:34:09.362187 kubelet[2581]: I1106 23:34:09.362157 2581 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.4-n-4ba84db3ac" Nov 6 23:34:09.371897 sudo[2614]: 
root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Nov 6 23:34:09.372307 sudo[2614]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Nov 6 23:34:09.374993 kubelet[2581]: I1106 23:34:09.374661 2581 kubelet_node_status.go:124] "Node was previously registered" node="ci-4230.2.4-n-4ba84db3ac" Nov 6 23:34:09.374993 kubelet[2581]: I1106 23:34:09.374791 2581 kubelet_node_status.go:78] "Successfully registered node" node="ci-4230.2.4-n-4ba84db3ac" Nov 6 23:34:09.593356 kubelet[2581]: E1106 23:34:09.593222 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 23:34:09.597031 kubelet[2581]: E1106 23:34:09.596906 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 23:34:09.597241 kubelet[2581]: E1106 23:34:09.597045 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 23:34:10.041597 sudo[2614]: pam_unix(sudo:session): session closed for user root Nov 6 23:34:10.132509 kubelet[2581]: I1106 23:34:10.129873 2581 apiserver.go:52] "Watching apiserver" Nov 6 23:34:10.159674 kubelet[2581]: I1106 23:34:10.159565 2581 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 6 23:34:10.220083 kubelet[2581]: I1106 23:34:10.220049 2581 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230.2.4-n-4ba84db3ac" Nov 6 23:34:10.220928 kubelet[2581]: E1106 23:34:10.220902 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 23:34:10.221349 kubelet[2581]: E1106 23:34:10.221294 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 23:34:10.237906 kubelet[2581]: W1106 23:34:10.237746 2581 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 6 23:34:10.237906 kubelet[2581]: E1106 23:34:10.237831 2581 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4230.2.4-n-4ba84db3ac\" already exists" pod="kube-system/kube-apiserver-ci-4230.2.4-n-4ba84db3ac" Nov 6 23:34:10.242159 kubelet[2581]: E1106 23:34:10.242012 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 23:34:10.291373 kubelet[2581]: I1106 23:34:10.291137 2581 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4230.2.4-n-4ba84db3ac" podStartSLOduration=1.291109258 podStartE2EDuration="1.291109258s" podCreationTimestamp="2025-11-06 23:34:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 23:34:10.266705322 +0000 UTC m=+1.241271953" watchObservedRunningTime="2025-11-06 23:34:10.291109258 +0000 UTC m=+1.265675895" Nov 6 23:34:10.308102 kubelet[2581]: I1106 23:34:10.307931 2581 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4230.2.4-n-4ba84db3ac" podStartSLOduration=1.3079120419999999 podStartE2EDuration="1.307912042s" podCreationTimestamp="2025-11-06 23:34:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-11-06 23:34:10.292594395 +0000 UTC m=+1.267161025" watchObservedRunningTime="2025-11-06 23:34:10.307912042 +0000 UTC m=+1.282478665" Nov 6 23:34:10.323136 kubelet[2581]: I1106 23:34:10.323055 2581 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4230.2.4-n-4ba84db3ac" podStartSLOduration=2.323033287 podStartE2EDuration="2.323033287s" podCreationTimestamp="2025-11-06 23:34:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 23:34:10.310039178 +0000 UTC m=+1.284605807" watchObservedRunningTime="2025-11-06 23:34:10.323033287 +0000 UTC m=+1.297599927" Nov 6 23:34:11.223784 kubelet[2581]: E1106 23:34:11.223633 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 23:34:11.223784 kubelet[2581]: E1106 23:34:11.223646 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 23:34:11.753588 sudo[1673]: pam_unix(sudo:session): session closed for user root Nov 6 23:34:11.757796 sshd[1672]: Connection closed by 147.75.109.163 port 46324 Nov 6 23:34:11.758498 sshd-session[1669]: pam_unix(sshd:session): session closed for user core Nov 6 23:34:11.763070 systemd[1]: sshd@6-147.182.203.129:22-147.75.109.163:46324.service: Deactivated successfully. Nov 6 23:34:11.766181 systemd[1]: session-7.scope: Deactivated successfully. Nov 6 23:34:11.766747 systemd[1]: session-7.scope: Consumed 5.460s CPU time, 219.2M memory peak. Nov 6 23:34:11.768445 systemd-logind[1460]: Session 7 logged out. Waiting for processes to exit. Nov 6 23:34:11.769505 systemd-logind[1460]: Removed session 7. 
Nov 6 23:34:12.226659 kubelet[2581]: E1106 23:34:12.225099 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 23:34:12.226659 kubelet[2581]: E1106 23:34:12.225354 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 23:34:13.293789 kubelet[2581]: I1106 23:34:13.293571 2581 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 6 23:34:13.294867 containerd[1481]: time="2025-11-06T23:34:13.294814583Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Nov 6 23:34:13.295564 kubelet[2581]: I1106 23:34:13.295536 2581 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 6 23:34:13.873271 kubelet[2581]: W1106 23:34:13.872982 2581 reflector.go:569] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-4230.2.4-n-4ba84db3ac" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4230.2.4-n-4ba84db3ac' and this object Nov 6 23:34:13.873271 kubelet[2581]: E1106 23:34:13.873034 2581 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"hubble-server-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"hubble-server-certs\" is forbidden: User \"system:node:ci-4230.2.4-n-4ba84db3ac\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4230.2.4-n-4ba84db3ac' and this object" logger="UnhandledError" Nov 6 23:34:13.873271 kubelet[2581]: W1106 23:34:13.873095 2581 reflector.go:569] 
object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-4230.2.4-n-4ba84db3ac" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4230.2.4-n-4ba84db3ac' and this object Nov 6 23:34:13.873271 kubelet[2581]: E1106 23:34:13.873108 2581 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-clustermesh\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:ci-4230.2.4-n-4ba84db3ac\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4230.2.4-n-4ba84db3ac' and this object" logger="UnhandledError" Nov 6 23:34:13.873271 kubelet[2581]: W1106 23:34:13.873148 2581 reflector.go:569] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-4230.2.4-n-4ba84db3ac" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4230.2.4-n-4ba84db3ac' and this object Nov 6 23:34:13.873515 kubelet[2581]: E1106 23:34:13.873158 2581 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cilium-config\" is forbidden: User \"system:node:ci-4230.2.4-n-4ba84db3ac\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4230.2.4-n-4ba84db3ac' and this object" logger="UnhandledError" Nov 6 23:34:13.886077 systemd[1]: Created slice kubepods-besteffort-pod232af6db_8e8b_4715_b615_33594644509b.slice - libcontainer container kubepods-besteffort-pod232af6db_8e8b_4715_b615_33594644509b.slice. 
Nov 6 23:34:13.886534 kubelet[2581]: I1106 23:34:13.886464 2581 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/af22054c-b470-4728-8365-16fe6fec5721-host-proc-sys-net\") pod \"cilium-2fdcw\" (UID: \"af22054c-b470-4728-8365-16fe6fec5721\") " pod="kube-system/cilium-2fdcw" Nov 6 23:34:13.886534 kubelet[2581]: I1106 23:34:13.886494 2581 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/232af6db-8e8b-4715-b615-33594644509b-lib-modules\") pod \"kube-proxy-5dczg\" (UID: \"232af6db-8e8b-4715-b615-33594644509b\") " pod="kube-system/kube-proxy-5dczg" Nov 6 23:34:13.886534 kubelet[2581]: I1106 23:34:13.886511 2581 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/af22054c-b470-4728-8365-16fe6fec5721-etc-cni-netd\") pod \"cilium-2fdcw\" (UID: \"af22054c-b470-4728-8365-16fe6fec5721\") " pod="kube-system/cilium-2fdcw" Nov 6 23:34:13.886534 kubelet[2581]: I1106 23:34:13.886528 2581 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/af22054c-b470-4728-8365-16fe6fec5721-lib-modules\") pod \"cilium-2fdcw\" (UID: \"af22054c-b470-4728-8365-16fe6fec5721\") " pod="kube-system/cilium-2fdcw" Nov 6 23:34:13.886774 kubelet[2581]: I1106 23:34:13.886546 2581 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/af22054c-b470-4728-8365-16fe6fec5721-clustermesh-secrets\") pod \"cilium-2fdcw\" (UID: \"af22054c-b470-4728-8365-16fe6fec5721\") " pod="kube-system/cilium-2fdcw" Nov 6 23:34:13.886774 kubelet[2581]: I1106 23:34:13.886560 2581 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/af22054c-b470-4728-8365-16fe6fec5721-cilium-config-path\") pod \"cilium-2fdcw\" (UID: \"af22054c-b470-4728-8365-16fe6fec5721\") " pod="kube-system/cilium-2fdcw" Nov 6 23:34:13.886774 kubelet[2581]: I1106 23:34:13.886575 2581 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtsfh\" (UniqueName: \"kubernetes.io/projected/af22054c-b470-4728-8365-16fe6fec5721-kube-api-access-gtsfh\") pod \"cilium-2fdcw\" (UID: \"af22054c-b470-4728-8365-16fe6fec5721\") " pod="kube-system/cilium-2fdcw" Nov 6 23:34:13.886774 kubelet[2581]: I1106 23:34:13.886593 2581 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/af22054c-b470-4728-8365-16fe6fec5721-host-proc-sys-kernel\") pod \"cilium-2fdcw\" (UID: \"af22054c-b470-4728-8365-16fe6fec5721\") " pod="kube-system/cilium-2fdcw" Nov 6 23:34:13.886774 kubelet[2581]: I1106 23:34:13.886608 2581 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/232af6db-8e8b-4715-b615-33594644509b-xtables-lock\") pod \"kube-proxy-5dczg\" (UID: \"232af6db-8e8b-4715-b615-33594644509b\") " pod="kube-system/kube-proxy-5dczg" Nov 6 23:34:13.887019 kubelet[2581]: I1106 23:34:13.886623 2581 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/af22054c-b470-4728-8365-16fe6fec5721-cilium-cgroup\") pod \"cilium-2fdcw\" (UID: \"af22054c-b470-4728-8365-16fe6fec5721\") " pod="kube-system/cilium-2fdcw" Nov 6 23:34:13.887019 kubelet[2581]: I1106 23:34:13.886637 2581 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" 
(UniqueName: \"kubernetes.io/host-path/af22054c-b470-4728-8365-16fe6fec5721-cni-path\") pod \"cilium-2fdcw\" (UID: \"af22054c-b470-4728-8365-16fe6fec5721\") " pod="kube-system/cilium-2fdcw" Nov 6 23:34:13.887019 kubelet[2581]: I1106 23:34:13.886663 2581 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/af22054c-b470-4728-8365-16fe6fec5721-xtables-lock\") pod \"cilium-2fdcw\" (UID: \"af22054c-b470-4728-8365-16fe6fec5721\") " pod="kube-system/cilium-2fdcw" Nov 6 23:34:13.887019 kubelet[2581]: I1106 23:34:13.886683 2581 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpmxs\" (UniqueName: \"kubernetes.io/projected/232af6db-8e8b-4715-b615-33594644509b-kube-api-access-fpmxs\") pod \"kube-proxy-5dczg\" (UID: \"232af6db-8e8b-4715-b615-33594644509b\") " pod="kube-system/kube-proxy-5dczg" Nov 6 23:34:13.887019 kubelet[2581]: I1106 23:34:13.886700 2581 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/af22054c-b470-4728-8365-16fe6fec5721-hostproc\") pod \"cilium-2fdcw\" (UID: \"af22054c-b470-4728-8365-16fe6fec5721\") " pod="kube-system/cilium-2fdcw" Nov 6 23:34:13.887019 kubelet[2581]: I1106 23:34:13.886716 2581 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/af22054c-b470-4728-8365-16fe6fec5721-cilium-run\") pod \"cilium-2fdcw\" (UID: \"af22054c-b470-4728-8365-16fe6fec5721\") " pod="kube-system/cilium-2fdcw" Nov 6 23:34:13.887279 kubelet[2581]: I1106 23:34:13.886730 2581 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/af22054c-b470-4728-8365-16fe6fec5721-bpf-maps\") pod \"cilium-2fdcw\" (UID: 
\"af22054c-b470-4728-8365-16fe6fec5721\") " pod="kube-system/cilium-2fdcw" Nov 6 23:34:13.887279 kubelet[2581]: I1106 23:34:13.886814 2581 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/af22054c-b470-4728-8365-16fe6fec5721-hubble-tls\") pod \"cilium-2fdcw\" (UID: \"af22054c-b470-4728-8365-16fe6fec5721\") " pod="kube-system/cilium-2fdcw" Nov 6 23:34:13.887279 kubelet[2581]: I1106 23:34:13.886832 2581 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/232af6db-8e8b-4715-b615-33594644509b-kube-proxy\") pod \"kube-proxy-5dczg\" (UID: \"232af6db-8e8b-4715-b615-33594644509b\") " pod="kube-system/kube-proxy-5dczg" Nov 6 23:34:13.899706 systemd[1]: Created slice kubepods-burstable-podaf22054c_b470_4728_8365_16fe6fec5721.slice - libcontainer container kubepods-burstable-podaf22054c_b470_4728_8365_16fe6fec5721.slice. Nov 6 23:34:13.996836 kubelet[2581]: E1106 23:34:13.996787 2581 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Nov 6 23:34:13.996836 kubelet[2581]: E1106 23:34:13.996835 2581 projected.go:194] Error preparing data for projected volume kube-api-access-fpmxs for pod kube-system/kube-proxy-5dczg: configmap "kube-root-ca.crt" not found Nov 6 23:34:13.997161 kubelet[2581]: E1106 23:34:13.996906 2581 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/232af6db-8e8b-4715-b615-33594644509b-kube-api-access-fpmxs podName:232af6db-8e8b-4715-b615-33594644509b nodeName:}" failed. No retries permitted until 2025-11-06 23:34:14.496883753 +0000 UTC m=+5.471450376 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-fpmxs" (UniqueName: "kubernetes.io/projected/232af6db-8e8b-4715-b615-33594644509b-kube-api-access-fpmxs") pod "kube-proxy-5dczg" (UID: "232af6db-8e8b-4715-b615-33594644509b") : configmap "kube-root-ca.crt" not found Nov 6 23:34:13.999398 kubelet[2581]: E1106 23:34:13.999366 2581 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Nov 6 23:34:13.999398 kubelet[2581]: E1106 23:34:13.999394 2581 projected.go:194] Error preparing data for projected volume kube-api-access-gtsfh for pod kube-system/cilium-2fdcw: configmap "kube-root-ca.crt" not found Nov 6 23:34:13.999601 kubelet[2581]: E1106 23:34:13.999445 2581 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/af22054c-b470-4728-8365-16fe6fec5721-kube-api-access-gtsfh podName:af22054c-b470-4728-8365-16fe6fec5721 nodeName:}" failed. No retries permitted until 2025-11-06 23:34:14.499429154 +0000 UTC m=+5.473995762 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gtsfh" (UniqueName: "kubernetes.io/projected/af22054c-b470-4728-8365-16fe6fec5721-kube-api-access-gtsfh") pod "cilium-2fdcw" (UID: "af22054c-b470-4728-8365-16fe6fec5721") : configmap "kube-root-ca.crt" not found Nov 6 23:34:14.005787 kubelet[2581]: E1106 23:34:14.005206 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 23:34:14.228703 kubelet[2581]: E1106 23:34:14.228555 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 23:34:14.367537 systemd[1]: Created slice kubepods-besteffort-poddc36399f_6964_4c2f_9460_d33defdaabae.slice - libcontainer container kubepods-besteffort-poddc36399f_6964_4c2f_9460_d33defdaabae.slice. Nov 6 23:34:14.490847 kubelet[2581]: I1106 23:34:14.490654 2581 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ccjbm\" (UniqueName: \"kubernetes.io/projected/dc36399f-6964-4c2f-9460-d33defdaabae-kube-api-access-ccjbm\") pod \"cilium-operator-6c4d7847fc-7hp98\" (UID: \"dc36399f-6964-4c2f-9460-d33defdaabae\") " pod="kube-system/cilium-operator-6c4d7847fc-7hp98" Nov 6 23:34:14.490847 kubelet[2581]: I1106 23:34:14.490757 2581 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dc36399f-6964-4c2f-9460-d33defdaabae-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-7hp98\" (UID: \"dc36399f-6964-4c2f-9460-d33defdaabae\") " pod="kube-system/cilium-operator-6c4d7847fc-7hp98" Nov 6 23:34:14.800029 kubelet[2581]: E1106 23:34:14.799892 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 23:34:14.801120 containerd[1481]: time="2025-11-06T23:34:14.801063819Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5dczg,Uid:232af6db-8e8b-4715-b615-33594644509b,Namespace:kube-system,Attempt:0,}" Nov 6 23:34:14.830216 containerd[1481]: time="2025-11-06T23:34:14.829180055Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 6 23:34:14.830216 containerd[1481]: time="2025-11-06T23:34:14.829282054Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 6 23:34:14.830216 containerd[1481]: time="2025-11-06T23:34:14.829305652Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 6 23:34:14.830216 containerd[1481]: time="2025-11-06T23:34:14.829447875Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 6 23:34:14.865058 systemd[1]: Started cri-containerd-9792694bfa2c6f814f16c3d9865fbf4192d9e5eaf974ece00165cfd9e1f44822.scope - libcontainer container 9792694bfa2c6f814f16c3d9865fbf4192d9e5eaf974ece00165cfd9e1f44822. 
Nov 6 23:34:14.902062 containerd[1481]: time="2025-11-06T23:34:14.901988610Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5dczg,Uid:232af6db-8e8b-4715-b615-33594644509b,Namespace:kube-system,Attempt:0,} returns sandbox id \"9792694bfa2c6f814f16c3d9865fbf4192d9e5eaf974ece00165cfd9e1f44822\"" Nov 6 23:34:14.905021 kubelet[2581]: E1106 23:34:14.903470 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 23:34:14.909696 containerd[1481]: time="2025-11-06T23:34:14.909641446Z" level=info msg="CreateContainer within sandbox \"9792694bfa2c6f814f16c3d9865fbf4192d9e5eaf974ece00165cfd9e1f44822\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 6 23:34:14.931308 containerd[1481]: time="2025-11-06T23:34:14.931256394Z" level=info msg="CreateContainer within sandbox \"9792694bfa2c6f814f16c3d9865fbf4192d9e5eaf974ece00165cfd9e1f44822\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2bb45b9606f8226fe14df7fccd56dc47bdb6359e7de7f5bb595ab6ed5b250a35\"" Nov 6 23:34:14.932971 containerd[1481]: time="2025-11-06T23:34:14.932935980Z" level=info msg="StartContainer for \"2bb45b9606f8226fe14df7fccd56dc47bdb6359e7de7f5bb595ab6ed5b250a35\"" Nov 6 23:34:14.966955 systemd[1]: Started cri-containerd-2bb45b9606f8226fe14df7fccd56dc47bdb6359e7de7f5bb595ab6ed5b250a35.scope - libcontainer container 2bb45b9606f8226fe14df7fccd56dc47bdb6359e7de7f5bb595ab6ed5b250a35. 
Nov 6 23:34:14.971561 kubelet[2581]: E1106 23:34:14.970898 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 6 23:34:14.972836 containerd[1481]: time="2025-11-06T23:34:14.972685253Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-7hp98,Uid:dc36399f-6964-4c2f-9460-d33defdaabae,Namespace:kube-system,Attempt:0,}"
Nov 6 23:34:14.988490 kubelet[2581]: E1106 23:34:14.988152 2581 secret.go:189] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition
Nov 6 23:34:14.988490 kubelet[2581]: E1106 23:34:14.988253 2581 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/af22054c-b470-4728-8365-16fe6fec5721-clustermesh-secrets podName:af22054c-b470-4728-8365-16fe6fec5721 nodeName:}" failed. No retries permitted until 2025-11-06 23:34:15.488234612 +0000 UTC m=+6.462801234 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/af22054c-b470-4728-8365-16fe6fec5721-clustermesh-secrets") pod "cilium-2fdcw" (UID: "af22054c-b470-4728-8365-16fe6fec5721") : failed to sync secret cache: timed out waiting for the condition
Nov 6 23:34:15.014482 containerd[1481]: time="2025-11-06T23:34:15.014265508Z" level=info msg="StartContainer for \"2bb45b9606f8226fe14df7fccd56dc47bdb6359e7de7f5bb595ab6ed5b250a35\" returns successfully"
Nov 6 23:34:15.021708 containerd[1481]: time="2025-11-06T23:34:15.021420171Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 6 23:34:15.022805 containerd[1481]: time="2025-11-06T23:34:15.022711889Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 6 23:34:15.022940 containerd[1481]: time="2025-11-06T23:34:15.022826626Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 6 23:34:15.023168 containerd[1481]: time="2025-11-06T23:34:15.023133991Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 6 23:34:15.055051 systemd[1]: Started cri-containerd-628149c0522cf7ee6b4b82e9f3e712f26e4de9134a60848388865bfd81710887.scope - libcontainer container 628149c0522cf7ee6b4b82e9f3e712f26e4de9134a60848388865bfd81710887.
Nov 6 23:34:15.110716 containerd[1481]: time="2025-11-06T23:34:15.110676591Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-7hp98,Uid:dc36399f-6964-4c2f-9460-d33defdaabae,Namespace:kube-system,Attempt:0,} returns sandbox id \"628149c0522cf7ee6b4b82e9f3e712f26e4de9134a60848388865bfd81710887\""
Nov 6 23:34:15.113758 kubelet[2581]: E1106 23:34:15.113011 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 6 23:34:15.115332 containerd[1481]: time="2025-11-06T23:34:15.115159055Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Nov 6 23:34:15.233022 kubelet[2581]: E1106 23:34:15.232887 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 6 23:34:15.245861 kubelet[2581]: I1106 23:34:15.245552 2581 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-5dczg" podStartSLOduration=2.245522368 podStartE2EDuration="2.245522368s" podCreationTimestamp="2025-11-06 23:34:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 23:34:15.245294525 +0000 UTC m=+6.219861154" watchObservedRunningTime="2025-11-06 23:34:15.245522368 +0000 UTC m=+6.220088996"
Nov 6 23:34:15.705768 kubelet[2581]: E1106 23:34:15.705316 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 6 23:34:15.706289 containerd[1481]: time="2025-11-06T23:34:15.706081595Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2fdcw,Uid:af22054c-b470-4728-8365-16fe6fec5721,Namespace:kube-system,Attempt:0,}"
Nov 6 23:34:15.740438 containerd[1481]: time="2025-11-06T23:34:15.740010572Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 6 23:34:15.740438 containerd[1481]: time="2025-11-06T23:34:15.740088688Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 6 23:34:15.740438 containerd[1481]: time="2025-11-06T23:34:15.740105327Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 6 23:34:15.740438 containerd[1481]: time="2025-11-06T23:34:15.740233675Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 6 23:34:15.768907 systemd[1]: run-containerd-runc-k8s.io-a9b6121669cf8c3e79958b080c61f91209340ddc749fa189e76f517269107c44-runc.oMPakH.mount: Deactivated successfully.
Nov 6 23:34:15.778058 systemd[1]: Started cri-containerd-a9b6121669cf8c3e79958b080c61f91209340ddc749fa189e76f517269107c44.scope - libcontainer container a9b6121669cf8c3e79958b080c61f91209340ddc749fa189e76f517269107c44.
Nov 6 23:34:15.808215 containerd[1481]: time="2025-11-06T23:34:15.808174869Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2fdcw,Uid:af22054c-b470-4728-8365-16fe6fec5721,Namespace:kube-system,Attempt:0,} returns sandbox id \"a9b6121669cf8c3e79958b080c61f91209340ddc749fa189e76f517269107c44\""
Nov 6 23:34:15.809847 kubelet[2581]: E1106 23:34:15.809815 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 6 23:34:18.114149 systemd-timesyncd[1349]: Contacted time server 193.187.181.6:123 (2.flatcar.pool.ntp.org).
Nov 6 23:34:18.114182 systemd-resolved[1332]: Clock change detected. Flushing caches.
Nov 6 23:34:18.114231 systemd-timesyncd[1349]: Initial clock synchronization to Thu 2025-11-06 23:34:18.113663 UTC.
Nov 6 23:34:18.230672 containerd[1481]: time="2025-11-06T23:34:18.230599924Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 6 23:34:18.232015 containerd[1481]: time="2025-11-06T23:34:18.231962700Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
Nov 6 23:34:18.232611 containerd[1481]: time="2025-11-06T23:34:18.232567915Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 6 23:34:18.234386 containerd[1481]: time="2025-11-06T23:34:18.234337414Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.299963719s"
Nov 6 23:34:18.234386 containerd[1481]: time="2025-11-06T23:34:18.234373055Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Nov 6 23:34:18.236350 containerd[1481]: time="2025-11-06T23:34:18.235959154Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Nov 6 23:34:18.241808 containerd[1481]: time="2025-11-06T23:34:18.241108933Z" level=info msg="CreateContainer within sandbox \"628149c0522cf7ee6b4b82e9f3e712f26e4de9134a60848388865bfd81710887\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Nov 6 23:34:18.256464 containerd[1481]: time="2025-11-06T23:34:18.256418876Z" level=info msg="CreateContainer within sandbox \"628149c0522cf7ee6b4b82e9f3e712f26e4de9134a60848388865bfd81710887\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"0552e62e685f7fde67164e29030155c650dff620f0145b69465ae1c0c85e1d5e\""
Nov 6 23:34:18.257315 containerd[1481]: time="2025-11-06T23:34:18.257164544Z" level=info msg="StartContainer for \"0552e62e685f7fde67164e29030155c650dff620f0145b69465ae1c0c85e1d5e\""
Nov 6 23:34:18.302108 systemd[1]: Started cri-containerd-0552e62e685f7fde67164e29030155c650dff620f0145b69465ae1c0c85e1d5e.scope - libcontainer container 0552e62e685f7fde67164e29030155c650dff620f0145b69465ae1c0c85e1d5e.
Nov 6 23:34:18.340393 containerd[1481]: time="2025-11-06T23:34:18.340195915Z" level=info msg="StartContainer for \"0552e62e685f7fde67164e29030155c650dff620f0145b69465ae1c0c85e1d5e\" returns successfully"
Nov 6 23:34:18.421550 systemd[1]: run-containerd-runc-k8s.io-0552e62e685f7fde67164e29030155c650dff620f0145b69465ae1c0c85e1d5e-runc.W1SbcN.mount: Deactivated successfully.
Nov 6 23:34:19.066785 kubelet[2581]: E1106 23:34:19.066744 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 6 23:34:20.077820 kubelet[2581]: E1106 23:34:20.077364 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 6 23:34:22.534938 kubelet[2581]: E1106 23:34:22.534735 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 6 23:34:22.577930 kubelet[2581]: I1106 23:34:22.576286 2581 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-7hp98" podStartSLOduration=6.272981218 podStartE2EDuration="8.575536438s" podCreationTimestamp="2025-11-06 23:34:14 +0000 UTC" firstStartedPulling="2025-11-06 23:34:15.114022311 +0000 UTC m=+6.088588918" lastFinishedPulling="2025-11-06 23:34:18.235634982 +0000 UTC m=+8.391144138" observedRunningTime="2025-11-06 23:34:19.15664516 +0000 UTC m=+9.312154327" watchObservedRunningTime="2025-11-06 23:34:22.575536438 +0000 UTC m=+12.731045619"
Nov 6 23:34:22.698048 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount237874335.mount: Deactivated successfully.
Nov 6 23:34:22.865985 kubelet[2581]: E1106 23:34:22.865545 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 6 23:34:23.095713 kubelet[2581]: E1106 23:34:23.095680 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 6 23:34:24.831679 containerd[1481]: time="2025-11-06T23:34:24.831627333Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 6 23:34:24.833943 containerd[1481]: time="2025-11-06T23:34:24.833883443Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503"
Nov 6 23:34:24.834197 containerd[1481]: time="2025-11-06T23:34:24.834084015Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 6 23:34:24.835837 containerd[1481]: time="2025-11-06T23:34:24.835441049Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 6.599444901s"
Nov 6 23:34:24.835837 containerd[1481]: time="2025-11-06T23:34:24.835475813Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Nov 6 23:34:24.838004 containerd[1481]: time="2025-11-06T23:34:24.837969644Z" level=info msg="CreateContainer within sandbox \"a9b6121669cf8c3e79958b080c61f91209340ddc749fa189e76f517269107c44\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Nov 6 23:34:24.916170 containerd[1481]: time="2025-11-06T23:34:24.916119537Z" level=info msg="CreateContainer within sandbox \"a9b6121669cf8c3e79958b080c61f91209340ddc749fa189e76f517269107c44\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e56ff1f93447042ddc902a3f9371ae4c2007c8718f8a8d3c139708c44d69aca1\""
Nov 6 23:34:24.920260 containerd[1481]: time="2025-11-06T23:34:24.917232252Z" level=info msg="StartContainer for \"e56ff1f93447042ddc902a3f9371ae4c2007c8718f8a8d3c139708c44d69aca1\""
Nov 6 23:34:25.036120 systemd[1]: Started cri-containerd-e56ff1f93447042ddc902a3f9371ae4c2007c8718f8a8d3c139708c44d69aca1.scope - libcontainer container e56ff1f93447042ddc902a3f9371ae4c2007c8718f8a8d3c139708c44d69aca1.
Nov 6 23:34:25.072230 containerd[1481]: time="2025-11-06T23:34:25.072167452Z" level=info msg="StartContainer for \"e56ff1f93447042ddc902a3f9371ae4c2007c8718f8a8d3c139708c44d69aca1\" returns successfully"
Nov 6 23:34:25.091081 systemd[1]: cri-containerd-e56ff1f93447042ddc902a3f9371ae4c2007c8718f8a8d3c139708c44d69aca1.scope: Deactivated successfully.
Nov 6 23:34:25.104922 kubelet[2581]: E1106 23:34:25.104614 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 6 23:34:25.217023 containerd[1481]: time="2025-11-06T23:34:25.199882412Z" level=info msg="shim disconnected" id=e56ff1f93447042ddc902a3f9371ae4c2007c8718f8a8d3c139708c44d69aca1 namespace=k8s.io
Nov 6 23:34:25.217023 containerd[1481]: time="2025-11-06T23:34:25.217012417Z" level=warning msg="cleaning up after shim disconnected" id=e56ff1f93447042ddc902a3f9371ae4c2007c8718f8a8d3c139708c44d69aca1 namespace=k8s.io
Nov 6 23:34:25.217023 containerd[1481]: time="2025-11-06T23:34:25.217039235Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 6 23:34:25.905596 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e56ff1f93447042ddc902a3f9371ae4c2007c8718f8a8d3c139708c44d69aca1-rootfs.mount: Deactivated successfully.
Nov 6 23:34:26.107425 kubelet[2581]: E1106 23:34:26.107108 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 6 23:34:26.112535 containerd[1481]: time="2025-11-06T23:34:26.112490662Z" level=info msg="CreateContainer within sandbox \"a9b6121669cf8c3e79958b080c61f91209340ddc749fa189e76f517269107c44\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Nov 6 23:34:26.128133 containerd[1481]: time="2025-11-06T23:34:26.128080396Z" level=info msg="CreateContainer within sandbox \"a9b6121669cf8c3e79958b080c61f91209340ddc749fa189e76f517269107c44\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"bab1f82452550ead18fbe6e3cd0a6baea237477bd14fc6357fa201859d961b53\""
Nov 6 23:34:26.130112 containerd[1481]: time="2025-11-06T23:34:26.129712086Z" level=info msg="StartContainer for \"bab1f82452550ead18fbe6e3cd0a6baea237477bd14fc6357fa201859d961b53\""
Nov 6 23:34:26.168017 systemd[1]: Started cri-containerd-bab1f82452550ead18fbe6e3cd0a6baea237477bd14fc6357fa201859d961b53.scope - libcontainer container bab1f82452550ead18fbe6e3cd0a6baea237477bd14fc6357fa201859d961b53.
Nov 6 23:34:26.199289 containerd[1481]: time="2025-11-06T23:34:26.199134383Z" level=info msg="StartContainer for \"bab1f82452550ead18fbe6e3cd0a6baea237477bd14fc6357fa201859d961b53\" returns successfully"
Nov 6 23:34:26.218319 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 6 23:34:26.218558 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Nov 6 23:34:26.218752 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Nov 6 23:34:26.225951 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 6 23:34:26.226742 systemd[1]: cri-containerd-bab1f82452550ead18fbe6e3cd0a6baea237477bd14fc6357fa201859d961b53.scope: Deactivated successfully.
Nov 6 23:34:26.256571 containerd[1481]: time="2025-11-06T23:34:26.256306328Z" level=info msg="shim disconnected" id=bab1f82452550ead18fbe6e3cd0a6baea237477bd14fc6357fa201859d961b53 namespace=k8s.io
Nov 6 23:34:26.256571 containerd[1481]: time="2025-11-06T23:34:26.256361624Z" level=warning msg="cleaning up after shim disconnected" id=bab1f82452550ead18fbe6e3cd0a6baea237477bd14fc6357fa201859d961b53 namespace=k8s.io
Nov 6 23:34:26.256571 containerd[1481]: time="2025-11-06T23:34:26.256370164Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 6 23:34:26.278896 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 6 23:34:26.906302 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bab1f82452550ead18fbe6e3cd0a6baea237477bd14fc6357fa201859d961b53-rootfs.mount: Deactivated successfully.
Nov 6 23:34:27.112899 kubelet[2581]: E1106 23:34:27.112858 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 6 23:34:27.119620 containerd[1481]: time="2025-11-06T23:34:27.119125424Z" level=info msg="CreateContainer within sandbox \"a9b6121669cf8c3e79958b080c61f91209340ddc749fa189e76f517269107c44\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Nov 6 23:34:27.169472 containerd[1481]: time="2025-11-06T23:34:27.169133628Z" level=info msg="CreateContainer within sandbox \"a9b6121669cf8c3e79958b080c61f91209340ddc749fa189e76f517269107c44\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"5f4dc0c8b88a456304d55842ee4b59753e9bce5fd25050f4acaa81c9940e815a\""
Nov 6 23:34:27.170584 containerd[1481]: time="2025-11-06T23:34:27.170484740Z" level=info msg="StartContainer for \"5f4dc0c8b88a456304d55842ee4b59753e9bce5fd25050f4acaa81c9940e815a\""
Nov 6 23:34:27.235077 systemd[1]: Started cri-containerd-5f4dc0c8b88a456304d55842ee4b59753e9bce5fd25050f4acaa81c9940e815a.scope - libcontainer container 5f4dc0c8b88a456304d55842ee4b59753e9bce5fd25050f4acaa81c9940e815a.
Nov 6 23:34:27.273771 containerd[1481]: time="2025-11-06T23:34:27.272378279Z" level=info msg="StartContainer for \"5f4dc0c8b88a456304d55842ee4b59753e9bce5fd25050f4acaa81c9940e815a\" returns successfully"
Nov 6 23:34:27.276856 systemd[1]: cri-containerd-5f4dc0c8b88a456304d55842ee4b59753e9bce5fd25050f4acaa81c9940e815a.scope: Deactivated successfully.
Nov 6 23:34:27.306827 containerd[1481]: time="2025-11-06T23:34:27.306744277Z" level=info msg="shim disconnected" id=5f4dc0c8b88a456304d55842ee4b59753e9bce5fd25050f4acaa81c9940e815a namespace=k8s.io
Nov 6 23:34:27.307284 containerd[1481]: time="2025-11-06T23:34:27.307256874Z" level=warning msg="cleaning up after shim disconnected" id=5f4dc0c8b88a456304d55842ee4b59753e9bce5fd25050f4acaa81c9940e815a namespace=k8s.io
Nov 6 23:34:27.307496 containerd[1481]: time="2025-11-06T23:34:27.307395826Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 6 23:34:27.906289 systemd[1]: run-containerd-runc-k8s.io-5f4dc0c8b88a456304d55842ee4b59753e9bce5fd25050f4acaa81c9940e815a-runc.XuQGpT.mount: Deactivated successfully.
Nov 6 23:34:27.906422 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5f4dc0c8b88a456304d55842ee4b59753e9bce5fd25050f4acaa81c9940e815a-rootfs.mount: Deactivated successfully.
Nov 6 23:34:28.119584 kubelet[2581]: E1106 23:34:28.117166 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 6 23:34:28.121459 containerd[1481]: time="2025-11-06T23:34:28.121414123Z" level=info msg="CreateContainer within sandbox \"a9b6121669cf8c3e79958b080c61f91209340ddc749fa189e76f517269107c44\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Nov 6 23:34:28.143861 containerd[1481]: time="2025-11-06T23:34:28.143283941Z" level=info msg="CreateContainer within sandbox \"a9b6121669cf8c3e79958b080c61f91209340ddc749fa189e76f517269107c44\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f11bc25cb8513203869f67a4347fed0ee61aaf33c7ea498725cbff113684f75f\""
Nov 6 23:34:28.148023 containerd[1481]: time="2025-11-06T23:34:28.145559042Z" level=info msg="StartContainer for \"f11bc25cb8513203869f67a4347fed0ee61aaf33c7ea498725cbff113684f75f\""
Nov 6 23:34:28.187079 systemd[1]: Started cri-containerd-f11bc25cb8513203869f67a4347fed0ee61aaf33c7ea498725cbff113684f75f.scope - libcontainer container f11bc25cb8513203869f67a4347fed0ee61aaf33c7ea498725cbff113684f75f.
Nov 6 23:34:28.222846 systemd[1]: cri-containerd-f11bc25cb8513203869f67a4347fed0ee61aaf33c7ea498725cbff113684f75f.scope: Deactivated successfully.
Nov 6 23:34:28.225238 containerd[1481]: time="2025-11-06T23:34:28.225189770Z" level=info msg="StartContainer for \"f11bc25cb8513203869f67a4347fed0ee61aaf33c7ea498725cbff113684f75f\" returns successfully"
Nov 6 23:34:28.251978 containerd[1481]: time="2025-11-06T23:34:28.251903205Z" level=info msg="shim disconnected" id=f11bc25cb8513203869f67a4347fed0ee61aaf33c7ea498725cbff113684f75f namespace=k8s.io
Nov 6 23:34:28.252516 containerd[1481]: time="2025-11-06T23:34:28.252291122Z" level=warning msg="cleaning up after shim disconnected" id=f11bc25cb8513203869f67a4347fed0ee61aaf33c7ea498725cbff113684f75f namespace=k8s.io
Nov 6 23:34:28.252516 containerd[1481]: time="2025-11-06T23:34:28.252318753Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 6 23:34:28.450592 update_engine[1463]: I20251106 23:34:28.450340 1463 update_attempter.cc:509] Updating boot flags...
Nov 6 23:34:28.535094 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (3270)
Nov 6 23:34:28.624995 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (3274)
Nov 6 23:34:28.722828 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (3274)
Nov 6 23:34:28.907542 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f11bc25cb8513203869f67a4347fed0ee61aaf33c7ea498725cbff113684f75f-rootfs.mount: Deactivated successfully.
Nov 6 23:34:29.124027 kubelet[2581]: E1106 23:34:29.123418 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 6 23:34:29.132450 containerd[1481]: time="2025-11-06T23:34:29.128382559Z" level=info msg="CreateContainer within sandbox \"a9b6121669cf8c3e79958b080c61f91209340ddc749fa189e76f517269107c44\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Nov 6 23:34:29.149064 containerd[1481]: time="2025-11-06T23:34:29.148786011Z" level=info msg="CreateContainer within sandbox \"a9b6121669cf8c3e79958b080c61f91209340ddc749fa189e76f517269107c44\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"f81db3c648faa5788374c8e5a8697e7c248f81d797d5be71d86d729cbb29c167\""
Nov 6 23:34:29.150770 containerd[1481]: time="2025-11-06T23:34:29.149708092Z" level=info msg="StartContainer for \"f81db3c648faa5788374c8e5a8697e7c248f81d797d5be71d86d729cbb29c167\""
Nov 6 23:34:29.194040 systemd[1]: Started cri-containerd-f81db3c648faa5788374c8e5a8697e7c248f81d797d5be71d86d729cbb29c167.scope - libcontainer container f81db3c648faa5788374c8e5a8697e7c248f81d797d5be71d86d729cbb29c167.
Nov 6 23:34:29.237466 containerd[1481]: time="2025-11-06T23:34:29.237416957Z" level=info msg="StartContainer for \"f81db3c648faa5788374c8e5a8697e7c248f81d797d5be71d86d729cbb29c167\" returns successfully"
Nov 6 23:34:29.408938 kubelet[2581]: I1106 23:34:29.407711 2581 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Nov 6 23:34:29.471024 systemd[1]: Created slice kubepods-burstable-pode47b176b_d139_4cbf_a90c_caa9fbe1ee55.slice - libcontainer container kubepods-burstable-pode47b176b_d139_4cbf_a90c_caa9fbe1ee55.slice.
Nov 6 23:34:29.483448 systemd[1]: Created slice kubepods-burstable-podf8960a46_1670_4e6c_8c0c_f639647ec2c2.slice - libcontainer container kubepods-burstable-podf8960a46_1670_4e6c_8c0c_f639647ec2c2.slice.
Nov 6 23:34:29.525221 kubelet[2581]: I1106 23:34:29.525163 2581 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e47b176b-d139-4cbf-a90c-caa9fbe1ee55-config-volume\") pod \"coredns-668d6bf9bc-m98hg\" (UID: \"e47b176b-d139-4cbf-a90c-caa9fbe1ee55\") " pod="kube-system/coredns-668d6bf9bc-m98hg"
Nov 6 23:34:29.525221 kubelet[2581]: I1106 23:34:29.525233 2581 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f8960a46-1670-4e6c-8c0c-f639647ec2c2-config-volume\") pod \"coredns-668d6bf9bc-rj4sx\" (UID: \"f8960a46-1670-4e6c-8c0c-f639647ec2c2\") " pod="kube-system/coredns-668d6bf9bc-rj4sx"
Nov 6 23:34:29.525470 kubelet[2581]: I1106 23:34:29.525355 2581 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6zt7\" (UniqueName: \"kubernetes.io/projected/e47b176b-d139-4cbf-a90c-caa9fbe1ee55-kube-api-access-c6zt7\") pod \"coredns-668d6bf9bc-m98hg\" (UID: \"e47b176b-d139-4cbf-a90c-caa9fbe1ee55\") " pod="kube-system/coredns-668d6bf9bc-m98hg"
Nov 6 23:34:29.525470 kubelet[2581]: I1106 23:34:29.525393 2581 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njcds\" (UniqueName: \"kubernetes.io/projected/f8960a46-1670-4e6c-8c0c-f639647ec2c2-kube-api-access-njcds\") pod \"coredns-668d6bf9bc-rj4sx\" (UID: \"f8960a46-1670-4e6c-8c0c-f639647ec2c2\") " pod="kube-system/coredns-668d6bf9bc-rj4sx"
Nov 6 23:34:29.778262 kubelet[2581]: E1106 23:34:29.777345 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 6 23:34:29.780820 containerd[1481]: time="2025-11-06T23:34:29.779992043Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-m98hg,Uid:e47b176b-d139-4cbf-a90c-caa9fbe1ee55,Namespace:kube-system,Attempt:0,}"
Nov 6 23:34:29.791337 kubelet[2581]: E1106 23:34:29.791288 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 6 23:34:29.792297 containerd[1481]: time="2025-11-06T23:34:29.792258486Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rj4sx,Uid:f8960a46-1670-4e6c-8c0c-f639647ec2c2,Namespace:kube-system,Attempt:0,}"
Nov 6 23:34:30.129753 kubelet[2581]: E1106 23:34:30.129676 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 6 23:34:31.131364 kubelet[2581]: E1106 23:34:31.131330 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 6 23:34:31.519842 systemd-networkd[1383]: cilium_host: Link UP
Nov 6 23:34:31.520288 systemd-networkd[1383]: cilium_net: Link UP
Nov 6 23:34:31.520969 systemd-networkd[1383]: cilium_net: Gained carrier
Nov 6 23:34:31.521832 systemd-networkd[1383]: cilium_host: Gained carrier
Nov 6 23:34:31.666147 systemd-networkd[1383]: cilium_vxlan: Link UP
Nov 6 23:34:31.666155 systemd-networkd[1383]: cilium_vxlan: Gained carrier
Nov 6 23:34:32.124013 kernel: NET: Registered PF_ALG protocol family
Nov 6 23:34:32.137210 kubelet[2581]: E1106 23:34:32.135109 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 6 23:34:32.324010 systemd-networkd[1383]: cilium_net: Gained IPv6LL
Nov 6 23:34:32.516992 systemd-networkd[1383]: cilium_host: Gained IPv6LL
Nov 6 23:34:32.961040 systemd-networkd[1383]: lxc_health: Link UP
Nov 6 23:34:32.968975 systemd-networkd[1383]: lxc_health: Gained carrier
Nov 6 23:34:33.142438 kubelet[2581]: E1106 23:34:33.141019 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 6 23:34:33.377655 kernel: eth0: renamed from tmp32e3b
Nov 6 23:34:33.376474 systemd-networkd[1383]: lxcf05303bcc400: Link UP
Nov 6 23:34:33.383339 systemd-networkd[1383]: lxcf05303bcc400: Gained carrier
Nov 6 23:34:33.413095 kernel: eth0: renamed from tmp855be
Nov 6 23:34:33.419424 systemd-networkd[1383]: lxc530ea0916419: Link UP
Nov 6 23:34:33.421536 systemd-networkd[1383]: lxc530ea0916419: Gained carrier
Nov 6 23:34:33.668033 systemd-networkd[1383]: cilium_vxlan: Gained IPv6LL
Nov 6 23:34:34.526959 kubelet[2581]: E1106 23:34:34.526921 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 6 23:34:34.550806 kubelet[2581]: I1106 23:34:34.550706 2581 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-2fdcw" podStartSLOduration=13.343830034 podStartE2EDuration="21.550686501s" podCreationTimestamp="2025-11-06 23:34:13 +0000 UTC" firstStartedPulling="2025-11-06 23:34:15.810603105 +0000 UTC m=+6.785169726" lastFinishedPulling="2025-11-06 23:34:24.83651705 +0000 UTC m=+14.992026193" observedRunningTime="2025-11-06 23:34:30.147886118 +0000 UTC m=+20.303395282" watchObservedRunningTime="2025-11-06 23:34:34.550686501 +0000 UTC m=+24.706195665"
Nov 6 23:34:34.820091 systemd-networkd[1383]: lxc530ea0916419: Gained IPv6LL
Nov 6 23:34:34.884880 systemd-networkd[1383]: lxc_health: Gained IPv6LL
Nov 6 23:34:35.142059 kubelet[2581]: E1106 23:34:35.141924 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 6 23:34:35.334579 systemd-networkd[1383]: lxcf05303bcc400: Gained IPv6LL
Nov 6 23:34:37.573043 containerd[1481]: time="2025-11-06T23:34:37.571037684Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 6 23:34:37.573043 containerd[1481]: time="2025-11-06T23:34:37.571106469Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 6 23:34:37.573043 containerd[1481]: time="2025-11-06T23:34:37.571122757Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 6 23:34:37.573043 containerd[1481]: time="2025-11-06T23:34:37.571219137Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 6 23:34:37.573043 containerd[1481]: time="2025-11-06T23:34:37.570757464Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 6 23:34:37.573043 containerd[1481]: time="2025-11-06T23:34:37.570833514Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 6 23:34:37.573043 containerd[1481]: time="2025-11-06T23:34:37.570850065Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 6 23:34:37.573043 containerd[1481]: time="2025-11-06T23:34:37.570942695Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 6 23:34:37.644484 systemd[1]: Started cri-containerd-32e3ba46135ef69392a90c913165302e87a7792a5f00af0396eec2ceb917d50f.scope - libcontainer container 32e3ba46135ef69392a90c913165302e87a7792a5f00af0396eec2ceb917d50f.
Nov 6 23:34:37.650496 systemd[1]: Started cri-containerd-855be255fe50e518b82f12b75ac50df4ea6b4c41286fcefe7797c6f0ae125f33.scope - libcontainer container 855be255fe50e518b82f12b75ac50df4ea6b4c41286fcefe7797c6f0ae125f33.
Nov 6 23:34:37.752860 containerd[1481]: time="2025-11-06T23:34:37.752787746Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rj4sx,Uid:f8960a46-1670-4e6c-8c0c-f639647ec2c2,Namespace:kube-system,Attempt:0,} returns sandbox id \"855be255fe50e518b82f12b75ac50df4ea6b4c41286fcefe7797c6f0ae125f33\""
Nov 6 23:34:37.756432 kubelet[2581]: E1106 23:34:37.756126 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 6 23:34:37.764985 containerd[1481]: time="2025-11-06T23:34:37.764762038Z" level=info msg="CreateContainer within sandbox \"855be255fe50e518b82f12b75ac50df4ea6b4c41286fcefe7797c6f0ae125f33\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Nov 6 23:34:37.767922 containerd[1481]: time="2025-11-06T23:34:37.767769909Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-m98hg,Uid:e47b176b-d139-4cbf-a90c-caa9fbe1ee55,Namespace:kube-system,Attempt:0,} returns sandbox id \"32e3ba46135ef69392a90c913165302e87a7792a5f00af0396eec2ceb917d50f\""
Nov 6 23:34:37.769644 kubelet[2581]: E1106 23:34:37.769613 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 6 23:34:37.772152 containerd[1481]: time="2025-11-06T23:34:37.772116783Z" level=info msg="CreateContainer within sandbox \"32e3ba46135ef69392a90c913165302e87a7792a5f00af0396eec2ceb917d50f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Nov 6 23:34:37.794400 containerd[1481]: time="2025-11-06T23:34:37.794351002Z" level=info msg="CreateContainer within sandbox \"32e3ba46135ef69392a90c913165302e87a7792a5f00af0396eec2ceb917d50f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f322d1b8ec816221eb312cddcbdbbbf9496b426910b2d05a5ce0e7e5ad827ee8\""
Nov 6 23:34:37.799495 containerd[1481]: time="2025-11-06T23:34:37.795226927Z" level=info msg="StartContainer for \"f322d1b8ec816221eb312cddcbdbbbf9496b426910b2d05a5ce0e7e5ad827ee8\""
Nov 6 23:34:37.802512 containerd[1481]: time="2025-11-06T23:34:37.802356269Z" level=info msg="CreateContainer within sandbox \"855be255fe50e518b82f12b75ac50df4ea6b4c41286fcefe7797c6f0ae125f33\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5cb6168d66ede03ada0d404c08eaf9932c61eed53df086cf2f09eb01f41fe913\""
Nov 6 23:34:37.803377 containerd[1481]: time="2025-11-06T23:34:37.803343400Z" level=info msg="StartContainer for \"5cb6168d66ede03ada0d404c08eaf9932c61eed53df086cf2f09eb01f41fe913\""
Nov 6 23:34:37.840082 systemd[1]: Started cri-containerd-f322d1b8ec816221eb312cddcbdbbbf9496b426910b2d05a5ce0e7e5ad827ee8.scope - libcontainer container f322d1b8ec816221eb312cddcbdbbbf9496b426910b2d05a5ce0e7e5ad827ee8.
Nov 6 23:34:37.854401 systemd[1]: Started cri-containerd-5cb6168d66ede03ada0d404c08eaf9932c61eed53df086cf2f09eb01f41fe913.scope - libcontainer container 5cb6168d66ede03ada0d404c08eaf9932c61eed53df086cf2f09eb01f41fe913.
Nov 6 23:34:37.900391 containerd[1481]: time="2025-11-06T23:34:37.900341997Z" level=info msg="StartContainer for \"f322d1b8ec816221eb312cddcbdbbbf9496b426910b2d05a5ce0e7e5ad827ee8\" returns successfully"
Nov 6 23:34:37.900620 containerd[1481]: time="2025-11-06T23:34:37.900343241Z" level=info msg="StartContainer for \"5cb6168d66ede03ada0d404c08eaf9932c61eed53df086cf2f09eb01f41fe913\" returns successfully"
Nov 6 23:34:38.149671 kubelet[2581]: E1106 23:34:38.149398 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 6 23:34:38.153074 kubelet[2581]: E1106 23:34:38.153042 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 6 23:34:38.175117 kubelet[2581]: I1106 23:34:38.175033 2581 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-rj4sx" podStartSLOduration=24.175008058 podStartE2EDuration="24.175008058s" podCreationTimestamp="2025-11-06 23:34:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 23:34:38.172974366 +0000 UTC m=+28.328483531" watchObservedRunningTime="2025-11-06 23:34:38.175008058 +0000 UTC m=+28.330517223"
Nov 6 23:34:38.195472 kubelet[2581]: I1106 23:34:38.195373 2581 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-m98hg" podStartSLOduration=24.195351923 podStartE2EDuration="24.195351923s" podCreationTimestamp="2025-11-06 23:34:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 23:34:38.191355458 +0000 UTC m=+28.346864622" watchObservedRunningTime="2025-11-06 23:34:38.195351923 +0000 UTC m=+28.350861087"
Nov 6 23:34:38.580810 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2547519667.mount: Deactivated successfully.
Nov 6 23:34:39.155519 kubelet[2581]: E1106 23:34:39.155128 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 6 23:34:39.155519 kubelet[2581]: E1106 23:34:39.155255 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 6 23:34:40.157586 kubelet[2581]: E1106 23:34:40.157433 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 6 23:34:40.157586 kubelet[2581]: E1106 23:34:40.157473 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 6 23:34:53.947216 systemd[1]: Started sshd@7-147.182.203.129:22-147.75.109.163:39404.service - OpenSSH per-connection server daemon (147.75.109.163:39404).
Nov 6 23:34:54.055469 sshd[3968]: Accepted publickey for core from 147.75.109.163 port 39404 ssh2: RSA SHA256:Z2OWs1J4lzJbb37sVDKV9BLSPLNt86phBIrOXH84t9A
Nov 6 23:34:54.059647 sshd-session[3968]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 6 23:34:54.069687 systemd-logind[1460]: New session 8 of user core. Nov 6 23:34:54.075045 systemd[1]: Started session-8.scope - Session 8 of User core.
Nov 6 23:34:54.640261 sshd[3970]: Connection closed by 147.75.109.163 port 39404 Nov 6 23:34:54.641310 sshd-session[3968]: pam_unix(sshd:session): session closed for user core Nov 6 23:34:54.645864 systemd[1]: sshd@7-147.182.203.129:22-147.75.109.163:39404.service: Deactivated successfully. Nov 6 23:34:54.648007 systemd[1]: session-8.scope: Deactivated successfully. Nov 6 23:34:54.648727 systemd-logind[1460]: Session 8 logged out. Waiting for processes to exit. Nov 6 23:34:54.650360 systemd-logind[1460]: Removed session 8. Nov 6 23:34:59.661281 systemd[1]: Started sshd@8-147.182.203.129:22-147.75.109.163:39414.service - OpenSSH per-connection server daemon (147.75.109.163:39414). Nov 6 23:34:59.718467 sshd[3983]: Accepted publickey for core from 147.75.109.163 port 39414 ssh2: RSA SHA256:Z2OWs1J4lzJbb37sVDKV9BLSPLNt86phBIrOXH84t9A Nov 6 23:34:59.720348 sshd-session[3983]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:34:59.725725 systemd-logind[1460]: New session 9 of user core. Nov 6 23:34:59.733024 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 6 23:34:59.864085 sshd[3985]: Connection closed by 147.75.109.163 port 39414 Nov 6 23:34:59.865179 sshd-session[3983]: pam_unix(sshd:session): session closed for user core Nov 6 23:34:59.871556 systemd-logind[1460]: Session 9 logged out. Waiting for processes to exit. Nov 6 23:34:59.872682 systemd[1]: sshd@8-147.182.203.129:22-147.75.109.163:39414.service: Deactivated successfully. Nov 6 23:34:59.875299 systemd[1]: session-9.scope: Deactivated successfully. Nov 6 23:34:59.876575 systemd-logind[1460]: Removed session 9. Nov 6 23:35:04.886227 systemd[1]: Started sshd@9-147.182.203.129:22-147.75.109.163:54416.service - OpenSSH per-connection server daemon (147.75.109.163:54416). 
Nov 6 23:35:04.948698 sshd[3998]: Accepted publickey for core from 147.75.109.163 port 54416 ssh2: RSA SHA256:Z2OWs1J4lzJbb37sVDKV9BLSPLNt86phBIrOXH84t9A Nov 6 23:35:04.951149 sshd-session[3998]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:35:04.959534 systemd-logind[1460]: New session 10 of user core. Nov 6 23:35:04.968184 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 6 23:35:05.125920 sshd[4000]: Connection closed by 147.75.109.163 port 54416 Nov 6 23:35:05.126393 sshd-session[3998]: pam_unix(sshd:session): session closed for user core Nov 6 23:35:05.133216 systemd[1]: sshd@9-147.182.203.129:22-147.75.109.163:54416.service: Deactivated successfully. Nov 6 23:35:05.137592 systemd[1]: session-10.scope: Deactivated successfully. Nov 6 23:35:05.139784 systemd-logind[1460]: Session 10 logged out. Waiting for processes to exit. Nov 6 23:35:05.141312 systemd-logind[1460]: Removed session 10. Nov 6 23:35:10.146229 systemd[1]: Started sshd@10-147.182.203.129:22-147.75.109.163:50326.service - OpenSSH per-connection server daemon (147.75.109.163:50326). Nov 6 23:35:10.200751 sshd[4016]: Accepted publickey for core from 147.75.109.163 port 50326 ssh2: RSA SHA256:Z2OWs1J4lzJbb37sVDKV9BLSPLNt86phBIrOXH84t9A Nov 6 23:35:10.202863 sshd-session[4016]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:35:10.209305 systemd-logind[1460]: New session 11 of user core. Nov 6 23:35:10.216034 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 6 23:35:10.363628 sshd[4018]: Connection closed by 147.75.109.163 port 50326 Nov 6 23:35:10.363422 sshd-session[4016]: pam_unix(sshd:session): session closed for user core Nov 6 23:35:10.369721 systemd[1]: sshd@10-147.182.203.129:22-147.75.109.163:50326.service: Deactivated successfully. Nov 6 23:35:10.373528 systemd[1]: session-11.scope: Deactivated successfully. Nov 6 23:35:10.374760 systemd-logind[1460]: Session 11 logged out. 
Waiting for processes to exit. Nov 6 23:35:10.375977 systemd-logind[1460]: Removed session 11. Nov 6 23:35:15.387168 systemd[1]: Started sshd@11-147.182.203.129:22-147.75.109.163:50332.service - OpenSSH per-connection server daemon (147.75.109.163:50332). Nov 6 23:35:15.441825 sshd[4030]: Accepted publickey for core from 147.75.109.163 port 50332 ssh2: RSA SHA256:Z2OWs1J4lzJbb37sVDKV9BLSPLNt86phBIrOXH84t9A Nov 6 23:35:15.443477 sshd-session[4030]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:35:15.448499 systemd-logind[1460]: New session 12 of user core. Nov 6 23:35:15.455031 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 6 23:35:15.587995 sshd[4032]: Connection closed by 147.75.109.163 port 50332 Nov 6 23:35:15.588891 sshd-session[4030]: pam_unix(sshd:session): session closed for user core Nov 6 23:35:15.598722 systemd[1]: sshd@11-147.182.203.129:22-147.75.109.163:50332.service: Deactivated successfully. Nov 6 23:35:15.601129 systemd[1]: session-12.scope: Deactivated successfully. Nov 6 23:35:15.602115 systemd-logind[1460]: Session 12 logged out. Waiting for processes to exit. Nov 6 23:35:15.612733 systemd[1]: Started sshd@12-147.182.203.129:22-147.75.109.163:50346.service - OpenSSH per-connection server daemon (147.75.109.163:50346). Nov 6 23:35:15.615519 systemd-logind[1460]: Removed session 12. Nov 6 23:35:15.668908 sshd[4043]: Accepted publickey for core from 147.75.109.163 port 50346 ssh2: RSA SHA256:Z2OWs1J4lzJbb37sVDKV9BLSPLNt86phBIrOXH84t9A Nov 6 23:35:15.670694 sshd-session[4043]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:35:15.676399 systemd-logind[1460]: New session 13 of user core. Nov 6 23:35:15.686129 systemd[1]: Started session-13.scope - Session 13 of User core. 
Nov 6 23:35:15.881377 sshd[4046]: Connection closed by 147.75.109.163 port 50346 Nov 6 23:35:15.882574 sshd-session[4043]: pam_unix(sshd:session): session closed for user core Nov 6 23:35:15.898547 systemd[1]: sshd@12-147.182.203.129:22-147.75.109.163:50346.service: Deactivated successfully. Nov 6 23:35:15.903985 systemd[1]: session-13.scope: Deactivated successfully. Nov 6 23:35:15.906619 systemd-logind[1460]: Session 13 logged out. Waiting for processes to exit. Nov 6 23:35:15.914947 systemd[1]: Started sshd@13-147.182.203.129:22-147.75.109.163:50360.service - OpenSSH per-connection server daemon (147.75.109.163:50360). Nov 6 23:35:15.917935 systemd-logind[1460]: Removed session 13. Nov 6 23:35:15.988502 sshd[4056]: Accepted publickey for core from 147.75.109.163 port 50360 ssh2: RSA SHA256:Z2OWs1J4lzJbb37sVDKV9BLSPLNt86phBIrOXH84t9A Nov 6 23:35:15.989283 sshd-session[4056]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:35:15.995555 systemd-logind[1460]: New session 14 of user core. Nov 6 23:35:16.001234 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 6 23:35:16.138558 sshd[4059]: Connection closed by 147.75.109.163 port 50360 Nov 6 23:35:16.139415 sshd-session[4056]: pam_unix(sshd:session): session closed for user core Nov 6 23:35:16.142567 systemd[1]: sshd@13-147.182.203.129:22-147.75.109.163:50360.service: Deactivated successfully. Nov 6 23:35:16.145483 systemd[1]: session-14.scope: Deactivated successfully. Nov 6 23:35:16.147574 systemd-logind[1460]: Session 14 logged out. Waiting for processes to exit. Nov 6 23:35:16.149253 systemd-logind[1460]: Removed session 14. Nov 6 23:35:21.155693 systemd[1]: Started sshd@14-147.182.203.129:22-147.75.109.163:53694.service - OpenSSH per-connection server daemon (147.75.109.163:53694). 
Nov 6 23:35:21.217757 sshd[4075]: Accepted publickey for core from 147.75.109.163 port 53694 ssh2: RSA SHA256:Z2OWs1J4lzJbb37sVDKV9BLSPLNt86phBIrOXH84t9A Nov 6 23:35:21.220176 sshd-session[4075]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:35:21.226005 systemd-logind[1460]: New session 15 of user core. Nov 6 23:35:21.232226 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 6 23:35:21.365532 sshd[4077]: Connection closed by 147.75.109.163 port 53694 Nov 6 23:35:21.364855 sshd-session[4075]: pam_unix(sshd:session): session closed for user core Nov 6 23:35:21.368576 systemd[1]: sshd@14-147.182.203.129:22-147.75.109.163:53694.service: Deactivated successfully. Nov 6 23:35:21.371593 systemd[1]: session-15.scope: Deactivated successfully. Nov 6 23:35:21.374211 systemd-logind[1460]: Session 15 logged out. Waiting for processes to exit. Nov 6 23:35:21.375378 systemd-logind[1460]: Removed session 15. Nov 6 23:35:22.000808 kubelet[2581]: E1106 23:35:22.000369 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 23:35:22.999464 kubelet[2581]: E1106 23:35:22.999393 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 23:35:24.000000 kubelet[2581]: E1106 23:35:23.998887 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 23:35:26.384231 systemd[1]: Started sshd@15-147.182.203.129:22-147.75.109.163:53706.service - OpenSSH per-connection server daemon (147.75.109.163:53706). 
Nov 6 23:35:26.440834 sshd[4088]: Accepted publickey for core from 147.75.109.163 port 53706 ssh2: RSA SHA256:Z2OWs1J4lzJbb37sVDKV9BLSPLNt86phBIrOXH84t9A Nov 6 23:35:26.442588 sshd-session[4088]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:35:26.447951 systemd-logind[1460]: New session 16 of user core. Nov 6 23:35:26.458064 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 6 23:35:26.594398 sshd[4090]: Connection closed by 147.75.109.163 port 53706 Nov 6 23:35:26.595405 sshd-session[4088]: pam_unix(sshd:session): session closed for user core Nov 6 23:35:26.609821 systemd[1]: sshd@15-147.182.203.129:22-147.75.109.163:53706.service: Deactivated successfully. Nov 6 23:35:26.612866 systemd[1]: session-16.scope: Deactivated successfully. Nov 6 23:35:26.614895 systemd-logind[1460]: Session 16 logged out. Waiting for processes to exit. Nov 6 23:35:26.620206 systemd[1]: Started sshd@16-147.182.203.129:22-147.75.109.163:53712.service - OpenSSH per-connection server daemon (147.75.109.163:53712). Nov 6 23:35:26.623245 systemd-logind[1460]: Removed session 16. Nov 6 23:35:26.684323 sshd[4101]: Accepted publickey for core from 147.75.109.163 port 53712 ssh2: RSA SHA256:Z2OWs1J4lzJbb37sVDKV9BLSPLNt86phBIrOXH84t9A Nov 6 23:35:26.685952 sshd-session[4101]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:35:26.693004 systemd-logind[1460]: New session 17 of user core. Nov 6 23:35:26.700084 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 6 23:35:27.048255 sshd[4104]: Connection closed by 147.75.109.163 port 53712 Nov 6 23:35:27.049463 sshd-session[4101]: pam_unix(sshd:session): session closed for user core Nov 6 23:35:27.059011 systemd[1]: sshd@16-147.182.203.129:22-147.75.109.163:53712.service: Deactivated successfully. Nov 6 23:35:27.061560 systemd[1]: session-17.scope: Deactivated successfully. Nov 6 23:35:27.063510 systemd-logind[1460]: Session 17 logged out. 
Waiting for processes to exit. Nov 6 23:35:27.069210 systemd[1]: Started sshd@17-147.182.203.129:22-147.75.109.163:53714.service - OpenSSH per-connection server daemon (147.75.109.163:53714). Nov 6 23:35:27.070920 systemd-logind[1460]: Removed session 17. Nov 6 23:35:27.126065 sshd[4113]: Accepted publickey for core from 147.75.109.163 port 53714 ssh2: RSA SHA256:Z2OWs1J4lzJbb37sVDKV9BLSPLNt86phBIrOXH84t9A Nov 6 23:35:27.127918 sshd-session[4113]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:35:27.136434 systemd-logind[1460]: New session 18 of user core. Nov 6 23:35:27.151228 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 6 23:35:27.916843 sshd[4116]: Connection closed by 147.75.109.163 port 53714 Nov 6 23:35:27.917694 sshd-session[4113]: pam_unix(sshd:session): session closed for user core Nov 6 23:35:27.934551 systemd[1]: sshd@17-147.182.203.129:22-147.75.109.163:53714.service: Deactivated successfully. Nov 6 23:35:27.939148 systemd[1]: session-18.scope: Deactivated successfully. Nov 6 23:35:27.942446 systemd-logind[1460]: Session 18 logged out. Waiting for processes to exit. Nov 6 23:35:27.950207 systemd[1]: Started sshd@18-147.182.203.129:22-147.75.109.163:53720.service - OpenSSH per-connection server daemon (147.75.109.163:53720). Nov 6 23:35:27.951392 systemd-logind[1460]: Removed session 18. Nov 6 23:35:28.043504 sshd[4130]: Accepted publickey for core from 147.75.109.163 port 53720 ssh2: RSA SHA256:Z2OWs1J4lzJbb37sVDKV9BLSPLNt86phBIrOXH84t9A Nov 6 23:35:28.046639 sshd-session[4130]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:35:28.055923 systemd-logind[1460]: New session 19 of user core. Nov 6 23:35:28.060451 systemd[1]: Started session-19.scope - Session 19 of User core. 
Nov 6 23:35:28.377604 sshd[4135]: Connection closed by 147.75.109.163 port 53720 Nov 6 23:35:28.378926 sshd-session[4130]: pam_unix(sshd:session): session closed for user core Nov 6 23:35:28.394344 systemd[1]: sshd@18-147.182.203.129:22-147.75.109.163:53720.service: Deactivated successfully. Nov 6 23:35:28.402910 systemd[1]: session-19.scope: Deactivated successfully. Nov 6 23:35:28.406134 systemd-logind[1460]: Session 19 logged out. Waiting for processes to exit. Nov 6 23:35:28.415316 systemd[1]: Started sshd@19-147.182.203.129:22-147.75.109.163:53736.service - OpenSSH per-connection server daemon (147.75.109.163:53736). Nov 6 23:35:28.417436 systemd-logind[1460]: Removed session 19. Nov 6 23:35:28.470862 sshd[4144]: Accepted publickey for core from 147.75.109.163 port 53736 ssh2: RSA SHA256:Z2OWs1J4lzJbb37sVDKV9BLSPLNt86phBIrOXH84t9A Nov 6 23:35:28.472608 sshd-session[4144]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:35:28.481257 systemd-logind[1460]: New session 20 of user core. Nov 6 23:35:28.484020 systemd[1]: Started session-20.scope - Session 20 of User core. Nov 6 23:35:28.621034 sshd[4147]: Connection closed by 147.75.109.163 port 53736 Nov 6 23:35:28.621880 sshd-session[4144]: pam_unix(sshd:session): session closed for user core Nov 6 23:35:28.627107 systemd[1]: sshd@19-147.182.203.129:22-147.75.109.163:53736.service: Deactivated successfully. Nov 6 23:35:28.630377 systemd[1]: session-20.scope: Deactivated successfully. Nov 6 23:35:28.631444 systemd-logind[1460]: Session 20 logged out. Waiting for processes to exit. Nov 6 23:35:28.632728 systemd-logind[1460]: Removed session 20. 
Nov 6 23:35:30.999883 kubelet[2581]: E1106 23:35:30.999575 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 23:35:32.002661 kubelet[2581]: E1106 23:35:32.002542 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 23:35:33.638023 systemd[1]: Started sshd@20-147.182.203.129:22-147.75.109.163:45474.service - OpenSSH per-connection server daemon (147.75.109.163:45474). Nov 6 23:35:33.701949 sshd[4159]: Accepted publickey for core from 147.75.109.163 port 45474 ssh2: RSA SHA256:Z2OWs1J4lzJbb37sVDKV9BLSPLNt86phBIrOXH84t9A Nov 6 23:35:33.703661 sshd-session[4159]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:35:33.710881 systemd-logind[1460]: New session 21 of user core. Nov 6 23:35:33.717065 systemd[1]: Started session-21.scope - Session 21 of User core. Nov 6 23:35:33.870249 sshd[4161]: Connection closed by 147.75.109.163 port 45474 Nov 6 23:35:33.871078 sshd-session[4159]: pam_unix(sshd:session): session closed for user core Nov 6 23:35:33.876739 systemd[1]: sshd@20-147.182.203.129:22-147.75.109.163:45474.service: Deactivated successfully. Nov 6 23:35:33.879757 systemd[1]: session-21.scope: Deactivated successfully. Nov 6 23:35:33.881336 systemd-logind[1460]: Session 21 logged out. Waiting for processes to exit. Nov 6 23:35:33.882610 systemd-logind[1460]: Removed session 21. Nov 6 23:35:38.891164 systemd[1]: Started sshd@21-147.182.203.129:22-147.75.109.163:45480.service - OpenSSH per-connection server daemon (147.75.109.163:45480). 
Nov 6 23:35:38.947959 sshd[4175]: Accepted publickey for core from 147.75.109.163 port 45480 ssh2: RSA SHA256:Z2OWs1J4lzJbb37sVDKV9BLSPLNt86phBIrOXH84t9A Nov 6 23:35:38.949693 sshd-session[4175]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:35:38.955564 systemd-logind[1460]: New session 22 of user core. Nov 6 23:35:38.961082 systemd[1]: Started session-22.scope - Session 22 of User core. Nov 6 23:35:39.098635 sshd[4177]: Connection closed by 147.75.109.163 port 45480 Nov 6 23:35:39.098500 sshd-session[4175]: pam_unix(sshd:session): session closed for user core Nov 6 23:35:39.105005 systemd[1]: sshd@21-147.182.203.129:22-147.75.109.163:45480.service: Deactivated successfully. Nov 6 23:35:39.108168 systemd[1]: session-22.scope: Deactivated successfully. Nov 6 23:35:39.109775 systemd-logind[1460]: Session 22 logged out. Waiting for processes to exit. Nov 6 23:35:39.111327 systemd-logind[1460]: Removed session 22. Nov 6 23:35:44.118122 systemd[1]: Started sshd@22-147.182.203.129:22-147.75.109.163:39628.service - OpenSSH per-connection server daemon (147.75.109.163:39628). Nov 6 23:35:44.175073 sshd[4189]: Accepted publickey for core from 147.75.109.163 port 39628 ssh2: RSA SHA256:Z2OWs1J4lzJbb37sVDKV9BLSPLNt86phBIrOXH84t9A Nov 6 23:35:44.177106 sshd-session[4189]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:35:44.184711 systemd-logind[1460]: New session 23 of user core. Nov 6 23:35:44.189165 systemd[1]: Started session-23.scope - Session 23 of User core. Nov 6 23:35:44.327925 sshd[4191]: Connection closed by 147.75.109.163 port 39628 Nov 6 23:35:44.327628 sshd-session[4189]: pam_unix(sshd:session): session closed for user core Nov 6 23:35:44.331909 systemd-logind[1460]: Session 23 logged out. Waiting for processes to exit. Nov 6 23:35:44.332245 systemd[1]: sshd@22-147.182.203.129:22-147.75.109.163:39628.service: Deactivated successfully. 
Nov 6 23:35:44.335330 systemd[1]: session-23.scope: Deactivated successfully. Nov 6 23:35:44.338260 systemd-logind[1460]: Removed session 23. Nov 6 23:35:46.000707 kubelet[2581]: E1106 23:35:45.999821 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 23:35:46.999531 kubelet[2581]: E1106 23:35:46.999420 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 23:35:49.348178 systemd[1]: Started sshd@23-147.182.203.129:22-147.75.109.163:39634.service - OpenSSH per-connection server daemon (147.75.109.163:39634). Nov 6 23:35:49.406240 sshd[4205]: Accepted publickey for core from 147.75.109.163 port 39634 ssh2: RSA SHA256:Z2OWs1J4lzJbb37sVDKV9BLSPLNt86phBIrOXH84t9A Nov 6 23:35:49.408418 sshd-session[4205]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:35:49.414700 systemd-logind[1460]: New session 24 of user core. Nov 6 23:35:49.423060 systemd[1]: Started session-24.scope - Session 24 of User core. Nov 6 23:35:49.575918 sshd[4207]: Connection closed by 147.75.109.163 port 39634 Nov 6 23:35:49.574669 sshd-session[4205]: pam_unix(sshd:session): session closed for user core Nov 6 23:35:49.585128 systemd[1]: sshd@23-147.182.203.129:22-147.75.109.163:39634.service: Deactivated successfully. Nov 6 23:35:49.587336 systemd[1]: session-24.scope: Deactivated successfully. Nov 6 23:35:49.588139 systemd-logind[1460]: Session 24 logged out. Waiting for processes to exit. Nov 6 23:35:49.594268 systemd[1]: Started sshd@24-147.182.203.129:22-147.75.109.163:39650.service - OpenSSH per-connection server daemon (147.75.109.163:39650). Nov 6 23:35:49.596527 systemd-logind[1460]: Removed session 24. 
Nov 6 23:35:49.648306 sshd[4217]: Accepted publickey for core from 147.75.109.163 port 39650 ssh2: RSA SHA256:Z2OWs1J4lzJbb37sVDKV9BLSPLNt86phBIrOXH84t9A Nov 6 23:35:49.650310 sshd-session[4217]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:35:49.656016 systemd-logind[1460]: New session 25 of user core. Nov 6 23:35:49.668171 systemd[1]: Started session-25.scope - Session 25 of User core. Nov 6 23:35:51.350040 containerd[1481]: time="2025-11-06T23:35:51.349848145Z" level=info msg="StopContainer for \"0552e62e685f7fde67164e29030155c650dff620f0145b69465ae1c0c85e1d5e\" with timeout 30 (s)" Nov 6 23:35:51.361916 containerd[1481]: time="2025-11-06T23:35:51.361333463Z" level=info msg="Stop container \"0552e62e685f7fde67164e29030155c650dff620f0145b69465ae1c0c85e1d5e\" with signal terminated" Nov 6 23:35:51.385326 containerd[1481]: time="2025-11-06T23:35:51.385065934Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 6 23:35:51.400016 systemd[1]: cri-containerd-0552e62e685f7fde67164e29030155c650dff620f0145b69465ae1c0c85e1d5e.scope: Deactivated successfully. 
Nov 6 23:35:51.411105 containerd[1481]: time="2025-11-06T23:35:51.411069500Z" level=info msg="StopContainer for \"f81db3c648faa5788374c8e5a8697e7c248f81d797d5be71d86d729cbb29c167\" with timeout 2 (s)" Nov 6 23:35:51.411701 containerd[1481]: time="2025-11-06T23:35:51.411672341Z" level=info msg="Stop container \"f81db3c648faa5788374c8e5a8697e7c248f81d797d5be71d86d729cbb29c167\" with signal terminated" Nov 6 23:35:51.422373 systemd-networkd[1383]: lxc_health: Link DOWN Nov 6 23:35:51.422687 systemd-networkd[1383]: lxc_health: Lost carrier Nov 6 23:35:51.445160 systemd[1]: cri-containerd-f81db3c648faa5788374c8e5a8697e7c248f81d797d5be71d86d729cbb29c167.scope: Deactivated successfully. Nov 6 23:35:51.445452 systemd[1]: cri-containerd-f81db3c648faa5788374c8e5a8697e7c248f81d797d5be71d86d729cbb29c167.scope: Consumed 7.578s CPU time, 172.1M memory peak, 49.4M read from disk, 14.6M written to disk. Nov 6 23:35:51.460189 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0552e62e685f7fde67164e29030155c650dff620f0145b69465ae1c0c85e1d5e-rootfs.mount: Deactivated successfully. Nov 6 23:35:51.471447 containerd[1481]: time="2025-11-06T23:35:51.471231916Z" level=info msg="shim disconnected" id=0552e62e685f7fde67164e29030155c650dff620f0145b69465ae1c0c85e1d5e namespace=k8s.io Nov 6 23:35:51.471447 containerd[1481]: time="2025-11-06T23:35:51.471386454Z" level=warning msg="cleaning up after shim disconnected" id=0552e62e685f7fde67164e29030155c650dff620f0145b69465ae1c0c85e1d5e namespace=k8s.io Nov 6 23:35:51.471447 containerd[1481]: time="2025-11-06T23:35:51.471403522Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 6 23:35:51.488774 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f81db3c648faa5788374c8e5a8697e7c248f81d797d5be71d86d729cbb29c167-rootfs.mount: Deactivated successfully. 
Nov 6 23:35:51.492540 containerd[1481]: time="2025-11-06T23:35:51.492480675Z" level=info msg="shim disconnected" id=f81db3c648faa5788374c8e5a8697e7c248f81d797d5be71d86d729cbb29c167 namespace=k8s.io Nov 6 23:35:51.492540 containerd[1481]: time="2025-11-06T23:35:51.492536048Z" level=warning msg="cleaning up after shim disconnected" id=f81db3c648faa5788374c8e5a8697e7c248f81d797d5be71d86d729cbb29c167 namespace=k8s.io Nov 6 23:35:51.492540 containerd[1481]: time="2025-11-06T23:35:51.492544205Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 6 23:35:51.501931 containerd[1481]: time="2025-11-06T23:35:51.501881676Z" level=warning msg="cleanup warnings time=\"2025-11-06T23:35:51Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Nov 6 23:35:51.507050 containerd[1481]: time="2025-11-06T23:35:51.506992329Z" level=info msg="StopContainer for \"0552e62e685f7fde67164e29030155c650dff620f0145b69465ae1c0c85e1d5e\" returns successfully" Nov 6 23:35:51.507763 containerd[1481]: time="2025-11-06T23:35:51.507736118Z" level=info msg="StopPodSandbox for \"628149c0522cf7ee6b4b82e9f3e712f26e4de9134a60848388865bfd81710887\"" Nov 6 23:35:51.521228 containerd[1481]: time="2025-11-06T23:35:51.519219948Z" level=info msg="StopContainer for \"f81db3c648faa5788374c8e5a8697e7c248f81d797d5be71d86d729cbb29c167\" returns successfully" Nov 6 23:35:51.522445 containerd[1481]: time="2025-11-06T23:35:51.521665938Z" level=info msg="StopPodSandbox for \"a9b6121669cf8c3e79958b080c61f91209340ddc749fa189e76f517269107c44\"" Nov 6 23:35:51.522578 containerd[1481]: time="2025-11-06T23:35:51.521761678Z" level=info msg="Container to stop \"f81db3c648faa5788374c8e5a8697e7c248f81d797d5be71d86d729cbb29c167\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 6 23:35:51.522578 containerd[1481]: time="2025-11-06T23:35:51.522489143Z" level=info msg="Container to 
stop \"e56ff1f93447042ddc902a3f9371ae4c2007c8718f8a8d3c139708c44d69aca1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 6 23:35:51.522578 containerd[1481]: time="2025-11-06T23:35:51.522499884Z" level=info msg="Container to stop \"bab1f82452550ead18fbe6e3cd0a6baea237477bd14fc6357fa201859d961b53\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 6 23:35:51.522578 containerd[1481]: time="2025-11-06T23:35:51.522513714Z" level=info msg="Container to stop \"5f4dc0c8b88a456304d55842ee4b59753e9bce5fd25050f4acaa81c9940e815a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 6 23:35:51.522578 containerd[1481]: time="2025-11-06T23:35:51.522533936Z" level=info msg="Container to stop \"f11bc25cb8513203869f67a4347fed0ee61aaf33c7ea498725cbff113684f75f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 6 23:35:51.527986 containerd[1481]: time="2025-11-06T23:35:51.512423942Z" level=info msg="Container to stop \"0552e62e685f7fde67164e29030155c650dff620f0145b69465ae1c0c85e1d5e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 6 23:35:51.527440 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a9b6121669cf8c3e79958b080c61f91209340ddc749fa189e76f517269107c44-shm.mount: Deactivated successfully. Nov 6 23:35:51.533505 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-628149c0522cf7ee6b4b82e9f3e712f26e4de9134a60848388865bfd81710887-shm.mount: Deactivated successfully. Nov 6 23:35:51.539608 systemd[1]: cri-containerd-a9b6121669cf8c3e79958b080c61f91209340ddc749fa189e76f517269107c44.scope: Deactivated successfully. Nov 6 23:35:51.542513 systemd[1]: cri-containerd-628149c0522cf7ee6b4b82e9f3e712f26e4de9134a60848388865bfd81710887.scope: Deactivated successfully. 
Nov 6 23:35:51.574344 containerd[1481]: time="2025-11-06T23:35:51.574042005Z" level=info msg="shim disconnected" id=a9b6121669cf8c3e79958b080c61f91209340ddc749fa189e76f517269107c44 namespace=k8s.io
Nov 6 23:35:51.574344 containerd[1481]: time="2025-11-06T23:35:51.574203737Z" level=warning msg="cleaning up after shim disconnected" id=a9b6121669cf8c3e79958b080c61f91209340ddc749fa189e76f517269107c44 namespace=k8s.io
Nov 6 23:35:51.574344 containerd[1481]: time="2025-11-06T23:35:51.574219579Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 6 23:35:51.582001 containerd[1481]: time="2025-11-06T23:35:51.581469334Z" level=info msg="shim disconnected" id=628149c0522cf7ee6b4b82e9f3e712f26e4de9134a60848388865bfd81710887 namespace=k8s.io
Nov 6 23:35:51.582001 containerd[1481]: time="2025-11-06T23:35:51.581627393Z" level=warning msg="cleaning up after shim disconnected" id=628149c0522cf7ee6b4b82e9f3e712f26e4de9134a60848388865bfd81710887 namespace=k8s.io
Nov 6 23:35:51.582001 containerd[1481]: time="2025-11-06T23:35:51.581747805Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 6 23:35:51.592431 containerd[1481]: time="2025-11-06T23:35:51.592370215Z" level=warning msg="cleanup warnings time=\"2025-11-06T23:35:51Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Nov 6 23:35:51.594487 containerd[1481]: time="2025-11-06T23:35:51.594446707Z" level=info msg="TearDown network for sandbox \"a9b6121669cf8c3e79958b080c61f91209340ddc749fa189e76f517269107c44\" successfully"
Nov 6 23:35:51.595374 containerd[1481]: time="2025-11-06T23:35:51.594865283Z" level=info msg="StopPodSandbox for \"a9b6121669cf8c3e79958b080c61f91209340ddc749fa189e76f517269107c44\" returns successfully"
Nov 6 23:35:51.611662 containerd[1481]: time="2025-11-06T23:35:51.611541553Z" level=info msg="TearDown network for sandbox \"628149c0522cf7ee6b4b82e9f3e712f26e4de9134a60848388865bfd81710887\" successfully"
Nov 6 23:35:51.611662 containerd[1481]: time="2025-11-06T23:35:51.611578857Z" level=info msg="StopPodSandbox for \"628149c0522cf7ee6b4b82e9f3e712f26e4de9134a60848388865bfd81710887\" returns successfully"
Nov 6 23:35:51.702955 kubelet[2581]: I1106 23:35:51.702851 2581 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/af22054c-b470-4728-8365-16fe6fec5721-hostproc\") pod \"af22054c-b470-4728-8365-16fe6fec5721\" (UID: \"af22054c-b470-4728-8365-16fe6fec5721\") "
Nov 6 23:35:51.702955 kubelet[2581]: I1106 23:35:51.702905 2581 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ccjbm\" (UniqueName: \"kubernetes.io/projected/dc36399f-6964-4c2f-9460-d33defdaabae-kube-api-access-ccjbm\") pod \"dc36399f-6964-4c2f-9460-d33defdaabae\" (UID: \"dc36399f-6964-4c2f-9460-d33defdaabae\") "
Nov 6 23:35:51.702955 kubelet[2581]: I1106 23:35:51.702933 2581 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/af22054c-b470-4728-8365-16fe6fec5721-host-proc-sys-net\") pod \"af22054c-b470-4728-8365-16fe6fec5721\" (UID: \"af22054c-b470-4728-8365-16fe6fec5721\") "
Nov 6 23:35:51.702955 kubelet[2581]: I1106 23:35:51.702950 2581 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gtsfh\" (UniqueName: \"kubernetes.io/projected/af22054c-b470-4728-8365-16fe6fec5721-kube-api-access-gtsfh\") pod \"af22054c-b470-4728-8365-16fe6fec5721\" (UID: \"af22054c-b470-4728-8365-16fe6fec5721\") "
Nov 6 23:35:51.702955 kubelet[2581]: I1106 23:35:51.702964 2581 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/af22054c-b470-4728-8365-16fe6fec5721-host-proc-sys-kernel\") pod \"af22054c-b470-4728-8365-16fe6fec5721\" (UID: \"af22054c-b470-4728-8365-16fe6fec5721\") "
Nov 6 23:35:51.704012 kubelet[2581]: I1106 23:35:51.702986 2581 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dc36399f-6964-4c2f-9460-d33defdaabae-cilium-config-path\") pod \"dc36399f-6964-4c2f-9460-d33defdaabae\" (UID: \"dc36399f-6964-4c2f-9460-d33defdaabae\") "
Nov 6 23:35:51.704012 kubelet[2581]: I1106 23:35:51.703004 2581 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/af22054c-b470-4728-8365-16fe6fec5721-cilium-run\") pod \"af22054c-b470-4728-8365-16fe6fec5721\" (UID: \"af22054c-b470-4728-8365-16fe6fec5721\") "
Nov 6 23:35:51.704012 kubelet[2581]: I1106 23:35:51.703018 2581 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/af22054c-b470-4728-8365-16fe6fec5721-bpf-maps\") pod \"af22054c-b470-4728-8365-16fe6fec5721\" (UID: \"af22054c-b470-4728-8365-16fe6fec5721\") "
Nov 6 23:35:51.704012 kubelet[2581]: I1106 23:35:51.703036 2581 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/af22054c-b470-4728-8365-16fe6fec5721-clustermesh-secrets\") pod \"af22054c-b470-4728-8365-16fe6fec5721\" (UID: \"af22054c-b470-4728-8365-16fe6fec5721\") "
Nov 6 23:35:51.704012 kubelet[2581]: I1106 23:35:51.703053 2581 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/af22054c-b470-4728-8365-16fe6fec5721-cilium-cgroup\") pod \"af22054c-b470-4728-8365-16fe6fec5721\" (UID: \"af22054c-b470-4728-8365-16fe6fec5721\") "
Nov 6 23:35:51.704012 kubelet[2581]: I1106 23:35:51.703070 2581 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/af22054c-b470-4728-8365-16fe6fec5721-hubble-tls\") pod \"af22054c-b470-4728-8365-16fe6fec5721\" (UID: \"af22054c-b470-4728-8365-16fe6fec5721\") "
Nov 6 23:35:51.704293 kubelet[2581]: I1106 23:35:51.703083 2581 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/af22054c-b470-4728-8365-16fe6fec5721-etc-cni-netd\") pod \"af22054c-b470-4728-8365-16fe6fec5721\" (UID: \"af22054c-b470-4728-8365-16fe6fec5721\") "
Nov 6 23:35:51.704293 kubelet[2581]: I1106 23:35:51.703099 2581 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/af22054c-b470-4728-8365-16fe6fec5721-lib-modules\") pod \"af22054c-b470-4728-8365-16fe6fec5721\" (UID: \"af22054c-b470-4728-8365-16fe6fec5721\") "
Nov 6 23:35:51.704293 kubelet[2581]: I1106 23:35:51.703112 2581 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/af22054c-b470-4728-8365-16fe6fec5721-xtables-lock\") pod \"af22054c-b470-4728-8365-16fe6fec5721\" (UID: \"af22054c-b470-4728-8365-16fe6fec5721\") "
Nov 6 23:35:51.704293 kubelet[2581]: I1106 23:35:51.703137 2581 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/af22054c-b470-4728-8365-16fe6fec5721-cilium-config-path\") pod \"af22054c-b470-4728-8365-16fe6fec5721\" (UID: \"af22054c-b470-4728-8365-16fe6fec5721\") "
Nov 6 23:35:51.704293 kubelet[2581]: I1106 23:35:51.703155 2581 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/af22054c-b470-4728-8365-16fe6fec5721-cni-path\") pod \"af22054c-b470-4728-8365-16fe6fec5721\" (UID: \"af22054c-b470-4728-8365-16fe6fec5721\") "
Nov 6 23:35:51.704293 kubelet[2581]: I1106 23:35:51.703258 2581 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/af22054c-b470-4728-8365-16fe6fec5721-cni-path" (OuterVolumeSpecName: "cni-path") pod "af22054c-b470-4728-8365-16fe6fec5721" (UID: "af22054c-b470-4728-8365-16fe6fec5721"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 6 23:35:51.704525 kubelet[2581]: I1106 23:35:51.703303 2581 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/af22054c-b470-4728-8365-16fe6fec5721-hostproc" (OuterVolumeSpecName: "hostproc") pod "af22054c-b470-4728-8365-16fe6fec5721" (UID: "af22054c-b470-4728-8365-16fe6fec5721"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 6 23:35:51.706186 kubelet[2581]: I1106 23:35:51.706140 2581 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/af22054c-b470-4728-8365-16fe6fec5721-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "af22054c-b470-4728-8365-16fe6fec5721" (UID: "af22054c-b470-4728-8365-16fe6fec5721"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 6 23:35:51.706617 kubelet[2581]: I1106 23:35:51.706270 2581 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/af22054c-b470-4728-8365-16fe6fec5721-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "af22054c-b470-4728-8365-16fe6fec5721" (UID: "af22054c-b470-4728-8365-16fe6fec5721"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 6 23:35:51.707273 kubelet[2581]: I1106 23:35:51.707242 2581 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/af22054c-b470-4728-8365-16fe6fec5721-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "af22054c-b470-4728-8365-16fe6fec5721" (UID: "af22054c-b470-4728-8365-16fe6fec5721"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 6 23:35:51.707349 kubelet[2581]: I1106 23:35:51.707303 2581 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/af22054c-b470-4728-8365-16fe6fec5721-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "af22054c-b470-4728-8365-16fe6fec5721" (UID: "af22054c-b470-4728-8365-16fe6fec5721"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 6 23:35:51.707349 kubelet[2581]: I1106 23:35:51.707323 2581 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/af22054c-b470-4728-8365-16fe6fec5721-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "af22054c-b470-4728-8365-16fe6fec5721" (UID: "af22054c-b470-4728-8365-16fe6fec5721"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 6 23:35:51.708834 kubelet[2581]: I1106 23:35:51.707255 2581 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/af22054c-b470-4728-8365-16fe6fec5721-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "af22054c-b470-4728-8365-16fe6fec5721" (UID: "af22054c-b470-4728-8365-16fe6fec5721"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 6 23:35:51.709178 kubelet[2581]: I1106 23:35:51.709157 2581 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/af22054c-b470-4728-8365-16fe6fec5721-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "af22054c-b470-4728-8365-16fe6fec5721" (UID: "af22054c-b470-4728-8365-16fe6fec5721"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 6 23:35:51.709298 kubelet[2581]: I1106 23:35:51.709160 2581 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/af22054c-b470-4728-8365-16fe6fec5721-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "af22054c-b470-4728-8365-16fe6fec5721" (UID: "af22054c-b470-4728-8365-16fe6fec5721"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 6 23:35:51.711779 kubelet[2581]: I1106 23:35:51.711743 2581 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc36399f-6964-4c2f-9460-d33defdaabae-kube-api-access-ccjbm" (OuterVolumeSpecName: "kube-api-access-ccjbm") pod "dc36399f-6964-4c2f-9460-d33defdaabae" (UID: "dc36399f-6964-4c2f-9460-d33defdaabae"). InnerVolumeSpecName "kube-api-access-ccjbm". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Nov 6 23:35:51.712074 kubelet[2581]: I1106 23:35:51.711999 2581 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af22054c-b470-4728-8365-16fe6fec5721-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "af22054c-b470-4728-8365-16fe6fec5721" (UID: "af22054c-b470-4728-8365-16fe6fec5721"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Nov 6 23:35:51.714035 kubelet[2581]: I1106 23:35:51.714003 2581 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af22054c-b470-4728-8365-16fe6fec5721-kube-api-access-gtsfh" (OuterVolumeSpecName: "kube-api-access-gtsfh") pod "af22054c-b470-4728-8365-16fe6fec5721" (UID: "af22054c-b470-4728-8365-16fe6fec5721"). InnerVolumeSpecName "kube-api-access-gtsfh". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Nov 6 23:35:51.714520 kubelet[2581]: I1106 23:35:51.714488 2581 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af22054c-b470-4728-8365-16fe6fec5721-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "af22054c-b470-4728-8365-16fe6fec5721" (UID: "af22054c-b470-4728-8365-16fe6fec5721"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Nov 6 23:35:51.716058 kubelet[2581]: I1106 23:35:51.716029 2581 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af22054c-b470-4728-8365-16fe6fec5721-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "af22054c-b470-4728-8365-16fe6fec5721" (UID: "af22054c-b470-4728-8365-16fe6fec5721"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Nov 6 23:35:51.716244 kubelet[2581]: I1106 23:35:51.716209 2581 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc36399f-6964-4c2f-9460-d33defdaabae-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "dc36399f-6964-4c2f-9460-d33defdaabae" (UID: "dc36399f-6964-4c2f-9460-d33defdaabae"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Nov 6 23:35:51.803511 kubelet[2581]: I1106 23:35:51.803388 2581 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/af22054c-b470-4728-8365-16fe6fec5721-cni-path\") on node \"ci-4230.2.4-n-4ba84db3ac\" DevicePath \"\""
Nov 6 23:35:51.803511 kubelet[2581]: I1106 23:35:51.803431 2581 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/af22054c-b470-4728-8365-16fe6fec5721-host-proc-sys-net\") on node \"ci-4230.2.4-n-4ba84db3ac\" DevicePath \"\""
Nov 6 23:35:51.803511 kubelet[2581]: I1106 23:35:51.803446 2581 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gtsfh\" (UniqueName: \"kubernetes.io/projected/af22054c-b470-4728-8365-16fe6fec5721-kube-api-access-gtsfh\") on node \"ci-4230.2.4-n-4ba84db3ac\" DevicePath \"\""
Nov 6 23:35:51.803511 kubelet[2581]: I1106 23:35:51.803460 2581 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/af22054c-b470-4728-8365-16fe6fec5721-host-proc-sys-kernel\") on node \"ci-4230.2.4-n-4ba84db3ac\" DevicePath \"\""
Nov 6 23:35:51.803511 kubelet[2581]: I1106 23:35:51.803469 2581 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/af22054c-b470-4728-8365-16fe6fec5721-hostproc\") on node \"ci-4230.2.4-n-4ba84db3ac\" DevicePath \"\""
Nov 6 23:35:51.803511 kubelet[2581]: I1106 23:35:51.803480 2581 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ccjbm\" (UniqueName: \"kubernetes.io/projected/dc36399f-6964-4c2f-9460-d33defdaabae-kube-api-access-ccjbm\") on node \"ci-4230.2.4-n-4ba84db3ac\" DevicePath \"\""
Nov 6 23:35:51.803511 kubelet[2581]: I1106 23:35:51.803489 2581 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dc36399f-6964-4c2f-9460-d33defdaabae-cilium-config-path\") on node \"ci-4230.2.4-n-4ba84db3ac\" DevicePath \"\""
Nov 6 23:35:51.803511 kubelet[2581]: I1106 23:35:51.803497 2581 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/af22054c-b470-4728-8365-16fe6fec5721-cilium-run\") on node \"ci-4230.2.4-n-4ba84db3ac\" DevicePath \"\""
Nov 6 23:35:51.803910 kubelet[2581]: I1106 23:35:51.803505 2581 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/af22054c-b470-4728-8365-16fe6fec5721-bpf-maps\") on node \"ci-4230.2.4-n-4ba84db3ac\" DevicePath \"\""
Nov 6 23:35:51.803910 kubelet[2581]: I1106 23:35:51.803514 2581 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/af22054c-b470-4728-8365-16fe6fec5721-cilium-cgroup\") on node \"ci-4230.2.4-n-4ba84db3ac\" DevicePath \"\""
Nov 6 23:35:51.803910 kubelet[2581]: I1106 23:35:51.803526 2581 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/af22054c-b470-4728-8365-16fe6fec5721-hubble-tls\") on node \"ci-4230.2.4-n-4ba84db3ac\" DevicePath \"\""
Nov 6 23:35:51.803910 kubelet[2581]: I1106 23:35:51.803534 2581 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/af22054c-b470-4728-8365-16fe6fec5721-clustermesh-secrets\") on node \"ci-4230.2.4-n-4ba84db3ac\" DevicePath \"\""
Nov 6 23:35:51.803910 kubelet[2581]: I1106 23:35:51.803542 2581 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/af22054c-b470-4728-8365-16fe6fec5721-lib-modules\") on node \"ci-4230.2.4-n-4ba84db3ac\" DevicePath \"\""
Nov 6 23:35:51.803910 kubelet[2581]: I1106 23:35:51.803551 2581 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/af22054c-b470-4728-8365-16fe6fec5721-xtables-lock\") on node \"ci-4230.2.4-n-4ba84db3ac\" DevicePath \"\""
Nov 6 23:35:51.803910 kubelet[2581]: I1106 23:35:51.803558 2581 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/af22054c-b470-4728-8365-16fe6fec5721-etc-cni-netd\") on node \"ci-4230.2.4-n-4ba84db3ac\" DevicePath \"\""
Nov 6 23:35:51.803910 kubelet[2581]: I1106 23:35:51.803567 2581 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/af22054c-b470-4728-8365-16fe6fec5721-cilium-config-path\") on node \"ci-4230.2.4-n-4ba84db3ac\" DevicePath \"\""
Nov 6 23:35:52.007933 systemd[1]: Removed slice kubepods-besteffort-poddc36399f_6964_4c2f_9460_d33defdaabae.slice - libcontainer container kubepods-besteffort-poddc36399f_6964_4c2f_9460_d33defdaabae.slice.
Nov 6 23:35:52.011780 systemd[1]: Removed slice kubepods-burstable-podaf22054c_b470_4728_8365_16fe6fec5721.slice - libcontainer container kubepods-burstable-podaf22054c_b470_4728_8365_16fe6fec5721.slice.
Nov 6 23:35:52.012003 systemd[1]: kubepods-burstable-podaf22054c_b470_4728_8365_16fe6fec5721.slice: Consumed 7.680s CPU time, 172.4M memory peak, 49.4M read from disk, 14.8M written to disk.
Nov 6 23:35:52.331426 kubelet[2581]: I1106 23:35:52.329449 2581 scope.go:117] "RemoveContainer" containerID="f81db3c648faa5788374c8e5a8697e7c248f81d797d5be71d86d729cbb29c167"
Nov 6 23:35:52.339701 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a9b6121669cf8c3e79958b080c61f91209340ddc749fa189e76f517269107c44-rootfs.mount: Deactivated successfully.
Nov 6 23:35:52.339851 systemd[1]: var-lib-kubelet-pods-af22054c\x2db470\x2d4728\x2d8365\x2d16fe6fec5721-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Nov 6 23:35:52.339940 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-628149c0522cf7ee6b4b82e9f3e712f26e4de9134a60848388865bfd81710887-rootfs.mount: Deactivated successfully.
Nov 6 23:35:52.340001 systemd[1]: var-lib-kubelet-pods-af22054c\x2db470\x2d4728\x2d8365\x2d16fe6fec5721-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Nov 6 23:35:52.340069 systemd[1]: var-lib-kubelet-pods-dc36399f\x2d6964\x2d4c2f\x2d9460\x2dd33defdaabae-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dccjbm.mount: Deactivated successfully.
Nov 6 23:35:52.340130 systemd[1]: var-lib-kubelet-pods-af22054c\x2db470\x2d4728\x2d8365\x2d16fe6fec5721-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dgtsfh.mount: Deactivated successfully.
Nov 6 23:35:52.351843 containerd[1481]: time="2025-11-06T23:35:52.350383425Z" level=info msg="RemoveContainer for \"f81db3c648faa5788374c8e5a8697e7c248f81d797d5be71d86d729cbb29c167\""
Nov 6 23:35:52.357940 containerd[1481]: time="2025-11-06T23:35:52.356433432Z" level=info msg="RemoveContainer for \"f81db3c648faa5788374c8e5a8697e7c248f81d797d5be71d86d729cbb29c167\" returns successfully"
Nov 6 23:35:52.358557 kubelet[2581]: I1106 23:35:52.358235 2581 scope.go:117] "RemoveContainer" containerID="f11bc25cb8513203869f67a4347fed0ee61aaf33c7ea498725cbff113684f75f"
Nov 6 23:35:52.359856 containerd[1481]: time="2025-11-06T23:35:52.359572463Z" level=info msg="RemoveContainer for \"f11bc25cb8513203869f67a4347fed0ee61aaf33c7ea498725cbff113684f75f\""
Nov 6 23:35:52.362833 containerd[1481]: time="2025-11-06T23:35:52.362387205Z" level=info msg="RemoveContainer for \"f11bc25cb8513203869f67a4347fed0ee61aaf33c7ea498725cbff113684f75f\" returns successfully"
Nov 6 23:35:52.362969 kubelet[2581]: I1106 23:35:52.362608 2581 scope.go:117] "RemoveContainer" containerID="5f4dc0c8b88a456304d55842ee4b59753e9bce5fd25050f4acaa81c9940e815a"
Nov 6 23:35:52.363816 containerd[1481]: time="2025-11-06T23:35:52.363593658Z" level=info msg="RemoveContainer for \"5f4dc0c8b88a456304d55842ee4b59753e9bce5fd25050f4acaa81c9940e815a\""
Nov 6 23:35:52.368402 containerd[1481]: time="2025-11-06T23:35:52.368321568Z" level=info msg="RemoveContainer for \"5f4dc0c8b88a456304d55842ee4b59753e9bce5fd25050f4acaa81c9940e815a\" returns successfully"
Nov 6 23:35:52.369068 kubelet[2581]: I1106 23:35:52.368686 2581 scope.go:117] "RemoveContainer" containerID="bab1f82452550ead18fbe6e3cd0a6baea237477bd14fc6357fa201859d961b53"
Nov 6 23:35:52.371435 containerd[1481]: time="2025-11-06T23:35:52.371392098Z" level=info msg="RemoveContainer for \"bab1f82452550ead18fbe6e3cd0a6baea237477bd14fc6357fa201859d961b53\""
Nov 6 23:35:52.377174 containerd[1481]: time="2025-11-06T23:35:52.377064770Z" level=info msg="RemoveContainer for \"bab1f82452550ead18fbe6e3cd0a6baea237477bd14fc6357fa201859d961b53\" returns successfully"
Nov 6 23:35:52.377845 kubelet[2581]: I1106 23:35:52.377465 2581 scope.go:117] "RemoveContainer" containerID="e56ff1f93447042ddc902a3f9371ae4c2007c8718f8a8d3c139708c44d69aca1"
Nov 6 23:35:52.380492 containerd[1481]: time="2025-11-06T23:35:52.380152939Z" level=info msg="RemoveContainer for \"e56ff1f93447042ddc902a3f9371ae4c2007c8718f8a8d3c139708c44d69aca1\""
Nov 6 23:35:52.383196 containerd[1481]: time="2025-11-06T23:35:52.383162855Z" level=info msg="RemoveContainer for \"e56ff1f93447042ddc902a3f9371ae4c2007c8718f8a8d3c139708c44d69aca1\" returns successfully"
Nov 6 23:35:52.383570 kubelet[2581]: I1106 23:35:52.383548 2581 scope.go:117] "RemoveContainer" containerID="f81db3c648faa5788374c8e5a8697e7c248f81d797d5be71d86d729cbb29c167"
Nov 6 23:35:52.384982 containerd[1481]: time="2025-11-06T23:35:52.384694742Z" level=error msg="ContainerStatus for \"f81db3c648faa5788374c8e5a8697e7c248f81d797d5be71d86d729cbb29c167\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f81db3c648faa5788374c8e5a8697e7c248f81d797d5be71d86d729cbb29c167\": not found"
Nov 6 23:35:52.385377 kubelet[2581]: E1106 23:35:52.385267 2581 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f81db3c648faa5788374c8e5a8697e7c248f81d797d5be71d86d729cbb29c167\": not found" containerID="f81db3c648faa5788374c8e5a8697e7c248f81d797d5be71d86d729cbb29c167"
Nov 6 23:35:52.388316 kubelet[2581]: I1106 23:35:52.385308 2581 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f81db3c648faa5788374c8e5a8697e7c248f81d797d5be71d86d729cbb29c167"} err="failed to get container status \"f81db3c648faa5788374c8e5a8697e7c248f81d797d5be71d86d729cbb29c167\": rpc error: code = NotFound desc = an error occurred when try to find container \"f81db3c648faa5788374c8e5a8697e7c248f81d797d5be71d86d729cbb29c167\": not found"
Nov 6 23:35:52.388316 kubelet[2581]: I1106 23:35:52.388225 2581 scope.go:117] "RemoveContainer" containerID="f11bc25cb8513203869f67a4347fed0ee61aaf33c7ea498725cbff113684f75f"
Nov 6 23:35:52.389695 containerd[1481]: time="2025-11-06T23:35:52.388755355Z" level=error msg="ContainerStatus for \"f11bc25cb8513203869f67a4347fed0ee61aaf33c7ea498725cbff113684f75f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f11bc25cb8513203869f67a4347fed0ee61aaf33c7ea498725cbff113684f75f\": not found"
Nov 6 23:35:52.389782 kubelet[2581]: E1106 23:35:52.389489 2581 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f11bc25cb8513203869f67a4347fed0ee61aaf33c7ea498725cbff113684f75f\": not found" containerID="f11bc25cb8513203869f67a4347fed0ee61aaf33c7ea498725cbff113684f75f"
Nov 6 23:35:52.389782 kubelet[2581]: I1106 23:35:52.389527 2581 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f11bc25cb8513203869f67a4347fed0ee61aaf33c7ea498725cbff113684f75f"} err="failed to get container status \"f11bc25cb8513203869f67a4347fed0ee61aaf33c7ea498725cbff113684f75f\": rpc error: code = NotFound desc = an error occurred when try to find container \"f11bc25cb8513203869f67a4347fed0ee61aaf33c7ea498725cbff113684f75f\": not found"
Nov 6 23:35:52.389782 kubelet[2581]: I1106 23:35:52.389556 2581 scope.go:117] "RemoveContainer" containerID="5f4dc0c8b88a456304d55842ee4b59753e9bce5fd25050f4acaa81c9940e815a"
Nov 6 23:35:52.391378 containerd[1481]: time="2025-11-06T23:35:52.391342377Z" level=error msg="ContainerStatus for \"5f4dc0c8b88a456304d55842ee4b59753e9bce5fd25050f4acaa81c9940e815a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5f4dc0c8b88a456304d55842ee4b59753e9bce5fd25050f4acaa81c9940e815a\": not found"
Nov 6 23:35:52.391705 kubelet[2581]: E1106 23:35:52.391674 2581 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5f4dc0c8b88a456304d55842ee4b59753e9bce5fd25050f4acaa81c9940e815a\": not found" containerID="5f4dc0c8b88a456304d55842ee4b59753e9bce5fd25050f4acaa81c9940e815a"
Nov 6 23:35:52.391820 kubelet[2581]: I1106 23:35:52.391733 2581 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5f4dc0c8b88a456304d55842ee4b59753e9bce5fd25050f4acaa81c9940e815a"} err="failed to get container status \"5f4dc0c8b88a456304d55842ee4b59753e9bce5fd25050f4acaa81c9940e815a\": rpc error: code = NotFound desc = an error occurred when try to find container \"5f4dc0c8b88a456304d55842ee4b59753e9bce5fd25050f4acaa81c9940e815a\": not found"
Nov 6 23:35:52.391820 kubelet[2581]: I1106 23:35:52.391764 2581 scope.go:117] "RemoveContainer" containerID="bab1f82452550ead18fbe6e3cd0a6baea237477bd14fc6357fa201859d961b53"
Nov 6 23:35:52.392297 containerd[1481]: time="2025-11-06T23:35:52.392100027Z" level=error msg="ContainerStatus for \"bab1f82452550ead18fbe6e3cd0a6baea237477bd14fc6357fa201859d961b53\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bab1f82452550ead18fbe6e3cd0a6baea237477bd14fc6357fa201859d961b53\": not found"
Nov 6 23:35:52.392653 kubelet[2581]: E1106 23:35:52.392470 2581 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bab1f82452550ead18fbe6e3cd0a6baea237477bd14fc6357fa201859d961b53\": not found" containerID="bab1f82452550ead18fbe6e3cd0a6baea237477bd14fc6357fa201859d961b53"
Nov 6 23:35:52.392653 kubelet[2581]: I1106 23:35:52.392510 2581 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bab1f82452550ead18fbe6e3cd0a6baea237477bd14fc6357fa201859d961b53"} err="failed to get container status \"bab1f82452550ead18fbe6e3cd0a6baea237477bd14fc6357fa201859d961b53\": rpc error: code = NotFound desc = an error occurred when try to find container \"bab1f82452550ead18fbe6e3cd0a6baea237477bd14fc6357fa201859d961b53\": not found"
Nov 6 23:35:52.392653 kubelet[2581]: I1106 23:35:52.392530 2581 scope.go:117] "RemoveContainer" containerID="e56ff1f93447042ddc902a3f9371ae4c2007c8718f8a8d3c139708c44d69aca1"
Nov 6 23:35:52.393397 containerd[1481]: time="2025-11-06T23:35:52.392967052Z" level=error msg="ContainerStatus for \"e56ff1f93447042ddc902a3f9371ae4c2007c8718f8a8d3c139708c44d69aca1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e56ff1f93447042ddc902a3f9371ae4c2007c8718f8a8d3c139708c44d69aca1\": not found"
Nov 6 23:35:52.393466 kubelet[2581]: E1106 23:35:52.393124 2581 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e56ff1f93447042ddc902a3f9371ae4c2007c8718f8a8d3c139708c44d69aca1\": not found" containerID="e56ff1f93447042ddc902a3f9371ae4c2007c8718f8a8d3c139708c44d69aca1"
Nov 6 23:35:52.393466 kubelet[2581]: I1106 23:35:52.393149 2581 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e56ff1f93447042ddc902a3f9371ae4c2007c8718f8a8d3c139708c44d69aca1"} err="failed to get container status \"e56ff1f93447042ddc902a3f9371ae4c2007c8718f8a8d3c139708c44d69aca1\": rpc error: code = NotFound desc = an error occurred when try to find container \"e56ff1f93447042ddc902a3f9371ae4c2007c8718f8a8d3c139708c44d69aca1\": not found"
Nov 6 23:35:52.393466 kubelet[2581]: I1106 23:35:52.393165 2581 scope.go:117] "RemoveContainer" containerID="0552e62e685f7fde67164e29030155c650dff620f0145b69465ae1c0c85e1d5e"
Nov 6 23:35:52.394736 containerd[1481]: time="2025-11-06T23:35:52.394464376Z" level=info msg="RemoveContainer for \"0552e62e685f7fde67164e29030155c650dff620f0145b69465ae1c0c85e1d5e\""
Nov 6 23:35:52.396729 containerd[1481]: time="2025-11-06T23:35:52.396683527Z" level=info msg="RemoveContainer for \"0552e62e685f7fde67164e29030155c650dff620f0145b69465ae1c0c85e1d5e\" returns successfully"
Nov 6 23:35:52.397011 kubelet[2581]: I1106 23:35:52.396995 2581 scope.go:117] "RemoveContainer" containerID="0552e62e685f7fde67164e29030155c650dff620f0145b69465ae1c0c85e1d5e"
Nov 6 23:35:52.397535 containerd[1481]: time="2025-11-06T23:35:52.397492460Z" level=error msg="ContainerStatus for \"0552e62e685f7fde67164e29030155c650dff620f0145b69465ae1c0c85e1d5e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0552e62e685f7fde67164e29030155c650dff620f0145b69465ae1c0c85e1d5e\": not found"
Nov 6 23:35:52.397762 kubelet[2581]: E1106 23:35:52.397700 2581 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0552e62e685f7fde67164e29030155c650dff620f0145b69465ae1c0c85e1d5e\": not found" containerID="0552e62e685f7fde67164e29030155c650dff620f0145b69465ae1c0c85e1d5e"
Nov 6 23:35:52.397762 kubelet[2581]: I1106 23:35:52.397738 2581 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0552e62e685f7fde67164e29030155c650dff620f0145b69465ae1c0c85e1d5e"} err="failed to get container status \"0552e62e685f7fde67164e29030155c650dff620f0145b69465ae1c0c85e1d5e\": rpc error: code = NotFound desc = an error occurred when try to find container \"0552e62e685f7fde67164e29030155c650dff620f0145b69465ae1c0c85e1d5e\": not found"
Nov 6 23:35:53.275236 sshd[4220]: Connection closed by 147.75.109.163 port 39650
Nov 6 23:35:53.276227 sshd-session[4217]: pam_unix(sshd:session): session closed for user core
Nov 6 23:35:53.288508 systemd[1]: sshd@24-147.182.203.129:22-147.75.109.163:39650.service: Deactivated successfully.
Nov 6 23:35:53.291116 systemd[1]: session-25.scope: Deactivated successfully.
Nov 6 23:35:53.292247 systemd-logind[1460]: Session 25 logged out. Waiting for processes to exit.
Nov 6 23:35:53.300334 systemd[1]: Started sshd@25-147.182.203.129:22-147.75.109.163:38498.service - OpenSSH per-connection server daemon (147.75.109.163:38498).
Nov 6 23:35:53.304429 systemd-logind[1460]: Removed session 25.
Nov 6 23:35:53.380825 sshd[4385]: Accepted publickey for core from 147.75.109.163 port 38498 ssh2: RSA SHA256:Z2OWs1J4lzJbb37sVDKV9BLSPLNt86phBIrOXH84t9A
Nov 6 23:35:53.382591 sshd-session[4385]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 6 23:35:53.388416 systemd-logind[1460]: New session 26 of user core.
Nov 6 23:35:53.393063 systemd[1]: Started session-26.scope - Session 26 of User core.
Nov 6 23:35:54.002357 kubelet[2581]: I1106 23:35:54.002291 2581 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af22054c-b470-4728-8365-16fe6fec5721" path="/var/lib/kubelet/pods/af22054c-b470-4728-8365-16fe6fec5721/volumes"
Nov 6 23:35:54.003773 kubelet[2581]: I1106 23:35:54.003729 2581 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc36399f-6964-4c2f-9460-d33defdaabae" path="/var/lib/kubelet/pods/dc36399f-6964-4c2f-9460-d33defdaabae/volumes"
Nov 6 23:35:54.014926 sshd[4388]: Connection closed by 147.75.109.163 port 38498
Nov 6 23:35:54.014052 sshd-session[4385]: pam_unix(sshd:session): session closed for user core
Nov 6 23:35:54.029142 systemd[1]: sshd@25-147.182.203.129:22-147.75.109.163:38498.service: Deactivated successfully.
Nov 6 23:35:54.032925 systemd[1]: session-26.scope: Deactivated successfully.
Nov 6 23:35:54.034120 systemd-logind[1460]: Session 26 logged out. Waiting for processes to exit.
Nov 6 23:35:54.043257 systemd[1]: Started sshd@26-147.182.203.129:22-147.75.109.163:38504.service - OpenSSH per-connection server daemon (147.75.109.163:38504).
Nov 6 23:35:54.047663 kubelet[2581]: I1106 23:35:54.047611 2581 memory_manager.go:355] "RemoveStaleState removing state" podUID="dc36399f-6964-4c2f-9460-d33defdaabae" containerName="cilium-operator"
Nov 6 23:35:54.047663 kubelet[2581]: I1106 23:35:54.047647 2581 memory_manager.go:355] "RemoveStaleState removing state" podUID="af22054c-b470-4728-8365-16fe6fec5721" containerName="cilium-agent"
Nov 6 23:35:54.052615 systemd-logind[1460]: Removed session 26.
Nov 6 23:35:54.076048 systemd[1]: Created slice kubepods-burstable-podd93aca3f_d767_4085_9f3a_0504da2c019a.slice - libcontainer container kubepods-burstable-podd93aca3f_d767_4085_9f3a_0504da2c019a.slice.
Nov 6 23:35:54.119824 kubelet[2581]: I1106 23:35:54.119642 2581 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d93aca3f-d767-4085-9f3a-0504da2c019a-cilium-cgroup\") pod \"cilium-cg89j\" (UID: \"d93aca3f-d767-4085-9f3a-0504da2c019a\") " pod="kube-system/cilium-cg89j"
Nov 6 23:35:54.119824 kubelet[2581]: I1106 23:35:54.119686 2581 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d93aca3f-d767-4085-9f3a-0504da2c019a-etc-cni-netd\") pod \"cilium-cg89j\" (UID: \"d93aca3f-d767-4085-9f3a-0504da2c019a\") " pod="kube-system/cilium-cg89j"
Nov 6 23:35:54.119824 kubelet[2581]: I1106 23:35:54.119708 2581 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d93aca3f-d767-4085-9f3a-0504da2c019a-cilium-run\") pod \"cilium-cg89j\" (UID: \"d93aca3f-d767-4085-9f3a-0504da2c019a\") " pod="kube-system/cilium-cg89j"
Nov 6 23:35:54.119824 kubelet[2581]: I1106 23:35:54.119725 2581 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d93aca3f-d767-4085-9f3a-0504da2c019a-host-proc-sys-net\") pod \"cilium-cg89j\" (UID: \"d93aca3f-d767-4085-9f3a-0504da2c019a\") " pod="kube-system/cilium-cg89j"
Nov 6 23:35:54.119824 kubelet[2581]: I1106 23:35:54.119772 2581 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d93aca3f-d767-4085-9f3a-0504da2c019a-host-proc-sys-kernel\") pod \"cilium-cg89j\" (UID: \"d93aca3f-d767-4085-9f3a-0504da2c019a\") " pod="kube-system/cilium-cg89j"
Nov 6 23:35:54.120114 kubelet[2581]: I1106 23:35:54.119881 2581 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d93aca3f-d767-4085-9f3a-0504da2c019a-hubble-tls\") pod \"cilium-cg89j\" (UID: \"d93aca3f-d767-4085-9f3a-0504da2c019a\") " pod="kube-system/cilium-cg89j"
Nov 6 23:35:54.120114 kubelet[2581]: I1106 23:35:54.119925 2581 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbg5b\" (UniqueName: \"kubernetes.io/projected/d93aca3f-d767-4085-9f3a-0504da2c019a-kube-api-access-xbg5b\") pod \"cilium-cg89j\" (UID: \"d93aca3f-d767-4085-9f3a-0504da2c019a\") " pod="kube-system/cilium-cg89j"
Nov 6 23:35:54.120114 kubelet[2581]: I1106 23:35:54.119952 2581 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d93aca3f-d767-4085-9f3a-0504da2c019a-cilium-config-path\") pod \"cilium-cg89j\" (UID: \"d93aca3f-d767-4085-9f3a-0504da2c019a\") " pod="kube-system/cilium-cg89j"
Nov 6 23:35:54.120114 kubelet[2581]: I1106 23:35:54.119971 2581 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d93aca3f-d767-4085-9f3a-0504da2c019a-hostproc\") pod \"cilium-cg89j\" (UID: \"d93aca3f-d767-4085-9f3a-0504da2c019a\") " pod="kube-system/cilium-cg89j"
Nov 6 23:35:54.120114 kubelet[2581]: I1106 23:35:54.120005 2581 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d93aca3f-d767-4085-9f3a-0504da2c019a-cni-path\") pod \"cilium-cg89j\" (UID: \"d93aca3f-d767-4085-9f3a-0504da2c019a\") " pod="kube-system/cilium-cg89j"
Nov 6 23:35:54.120114 kubelet[2581]: I1106 23:35:54.120020 2581 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/d93aca3f-d767-4085-9f3a-0504da2c019a-cilium-ipsec-secrets\") pod \"cilium-cg89j\" (UID: \"d93aca3f-d767-4085-9f3a-0504da2c019a\") " pod="kube-system/cilium-cg89j"
Nov 6 23:35:54.120354 kubelet[2581]: I1106 23:35:54.120036 2581 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d93aca3f-d767-4085-9f3a-0504da2c019a-bpf-maps\") pod \"cilium-cg89j\" (UID: \"d93aca3f-d767-4085-9f3a-0504da2c019a\") " pod="kube-system/cilium-cg89j"
Nov 6 23:35:54.120354 kubelet[2581]: I1106 23:35:54.120067 2581 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d93aca3f-d767-4085-9f3a-0504da2c019a-lib-modules\") pod \"cilium-cg89j\" (UID: \"d93aca3f-d767-4085-9f3a-0504da2c019a\") " pod="kube-system/cilium-cg89j"
Nov 6 23:35:54.120354 kubelet[2581]: I1106 23:35:54.120085 2581 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d93aca3f-d767-4085-9f3a-0504da2c019a-xtables-lock\") pod \"cilium-cg89j\" (UID: \"d93aca3f-d767-4085-9f3a-0504da2c019a\") " pod="kube-system/cilium-cg89j"
Nov 6 23:35:54.120354 kubelet[2581]: I1106 23:35:54.120102 2581 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d93aca3f-d767-4085-9f3a-0504da2c019a-clustermesh-secrets\") pod \"cilium-cg89j\" (UID: \"d93aca3f-d767-4085-9f3a-0504da2c019a\") " pod="kube-system/cilium-cg89j"
Nov 6 23:35:54.143836 sshd[4397]: Accepted publickey for core from 147.75.109.163 port 38504 ssh2: RSA SHA256:Z2OWs1J4lzJbb37sVDKV9BLSPLNt86phBIrOXH84t9A
Nov 6 23:35:54.145139 sshd-session[4397]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 6 23:35:54.152400 systemd-logind[1460]: New session 27 of user core.
Nov 6 23:35:54.159105 systemd[1]: Started session-27.scope - Session 27 of User core.
Nov 6 23:35:54.217373 sshd[4400]: Connection closed by 147.75.109.163 port 38504
Nov 6 23:35:54.218143 sshd-session[4397]: pam_unix(sshd:session): session closed for user core
Nov 6 23:35:54.263462 systemd[1]: sshd@26-147.182.203.129:22-147.75.109.163:38504.service: Deactivated successfully.
Nov 6 23:35:54.267355 systemd[1]: session-27.scope: Deactivated successfully.
Nov 6 23:35:54.269439 systemd-logind[1460]: Session 27 logged out. Waiting for processes to exit.
Nov 6 23:35:54.277205 systemd[1]: Started sshd@27-147.182.203.129:22-147.75.109.163:38514.service - OpenSSH per-connection server daemon (147.75.109.163:38514).
Nov 6 23:35:54.278113 systemd-logind[1460]: Removed session 27.
Nov 6 23:35:54.333066 sshd[4410]: Accepted publickey for core from 147.75.109.163 port 38514 ssh2: RSA SHA256:Z2OWs1J4lzJbb37sVDKV9BLSPLNt86phBIrOXH84t9A
Nov 6 23:35:54.334667 sshd-session[4410]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 6 23:35:54.339948 systemd-logind[1460]: New session 28 of user core.
Nov 6 23:35:54.348101 systemd[1]: Started session-28.scope - Session 28 of User core.
Nov 6 23:35:54.385959 kubelet[2581]: E1106 23:35:54.385879 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 6 23:35:54.386678 containerd[1481]: time="2025-11-06T23:35:54.386618661Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cg89j,Uid:d93aca3f-d767-4085-9f3a-0504da2c019a,Namespace:kube-system,Attempt:0,}"
Nov 6 23:35:54.418228 containerd[1481]: time="2025-11-06T23:35:54.417416714Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 6 23:35:54.418228 containerd[1481]: time="2025-11-06T23:35:54.417920496Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 6 23:35:54.418228 containerd[1481]: time="2025-11-06T23:35:54.417943638Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 6 23:35:54.418228 containerd[1481]: time="2025-11-06T23:35:54.418075564Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 6 23:35:54.453046 systemd[1]: Started cri-containerd-4b07def7a4c249abb6108f4d7988992f52219b4d2830337e4320683ea7e7181a.scope - libcontainer container 4b07def7a4c249abb6108f4d7988992f52219b4d2830337e4320683ea7e7181a.
Nov 6 23:35:54.490306 containerd[1481]: time="2025-11-06T23:35:54.490265050Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cg89j,Uid:d93aca3f-d767-4085-9f3a-0504da2c019a,Namespace:kube-system,Attempt:0,} returns sandbox id \"4b07def7a4c249abb6108f4d7988992f52219b4d2830337e4320683ea7e7181a\""
Nov 6 23:35:54.493438 kubelet[2581]: E1106 23:35:54.492336 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 6 23:35:54.498871 containerd[1481]: time="2025-11-06T23:35:54.498099331Z" level=info msg="CreateContainer within sandbox \"4b07def7a4c249abb6108f4d7988992f52219b4d2830337e4320683ea7e7181a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Nov 6 23:35:54.512233 containerd[1481]: time="2025-11-06T23:35:54.512153048Z" level=info msg="CreateContainer within sandbox \"4b07def7a4c249abb6108f4d7988992f52219b4d2830337e4320683ea7e7181a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9966dc92ee7b7894557fa251261f364e977c33fed846f23fb5b9d01011141a04\""
Nov 6 23:35:54.513637 containerd[1481]: time="2025-11-06T23:35:54.513540619Z" level=info msg="StartContainer for \"9966dc92ee7b7894557fa251261f364e977c33fed846f23fb5b9d01011141a04\""
Nov 6 23:35:54.547129 systemd[1]: Started cri-containerd-9966dc92ee7b7894557fa251261f364e977c33fed846f23fb5b9d01011141a04.scope - libcontainer container 9966dc92ee7b7894557fa251261f364e977c33fed846f23fb5b9d01011141a04.
Nov 6 23:35:54.583874 containerd[1481]: time="2025-11-06T23:35:54.583639728Z" level=info msg="StartContainer for \"9966dc92ee7b7894557fa251261f364e977c33fed846f23fb5b9d01011141a04\" returns successfully"
Nov 6 23:35:54.605636 systemd[1]: cri-containerd-9966dc92ee7b7894557fa251261f364e977c33fed846f23fb5b9d01011141a04.scope: Deactivated successfully.
Nov 6 23:35:54.637418 containerd[1481]: time="2025-11-06T23:35:54.637252121Z" level=info msg="shim disconnected" id=9966dc92ee7b7894557fa251261f364e977c33fed846f23fb5b9d01011141a04 namespace=k8s.io
Nov 6 23:35:54.637418 containerd[1481]: time="2025-11-06T23:35:54.637405437Z" level=warning msg="cleaning up after shim disconnected" id=9966dc92ee7b7894557fa251261f364e977c33fed846f23fb5b9d01011141a04 namespace=k8s.io
Nov 6 23:35:54.637418 containerd[1481]: time="2025-11-06T23:35:54.637414657Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 6 23:35:55.103503 kubelet[2581]: E1106 23:35:55.103396 2581 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Nov 6 23:35:55.359667 kubelet[2581]: E1106 23:35:55.359505 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 6 23:35:55.366825 containerd[1481]: time="2025-11-06T23:35:55.365065927Z" level=info msg="CreateContainer within sandbox \"4b07def7a4c249abb6108f4d7988992f52219b4d2830337e4320683ea7e7181a\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Nov 6 23:35:55.399021 containerd[1481]: time="2025-11-06T23:35:55.398958196Z" level=info msg="CreateContainer within sandbox \"4b07def7a4c249abb6108f4d7988992f52219b4d2830337e4320683ea7e7181a\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"ec4d4c42737098b47c4ac0a707ab0a85f12f70a0bf6c03c3754392a1f35fed51\""
Nov 6 23:35:55.400855 containerd[1481]: time="2025-11-06T23:35:55.400164592Z" level=info msg="StartContainer for \"ec4d4c42737098b47c4ac0a707ab0a85f12f70a0bf6c03c3754392a1f35fed51\""
Nov 6 23:35:55.401333 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2042690793.mount: Deactivated successfully.
Nov 6 23:35:55.514112 systemd[1]: Started cri-containerd-ec4d4c42737098b47c4ac0a707ab0a85f12f70a0bf6c03c3754392a1f35fed51.scope - libcontainer container ec4d4c42737098b47c4ac0a707ab0a85f12f70a0bf6c03c3754392a1f35fed51.
Nov 6 23:35:55.565827 containerd[1481]: time="2025-11-06T23:35:55.565759618Z" level=info msg="StartContainer for \"ec4d4c42737098b47c4ac0a707ab0a85f12f70a0bf6c03c3754392a1f35fed51\" returns successfully"
Nov 6 23:35:55.577825 systemd[1]: cri-containerd-ec4d4c42737098b47c4ac0a707ab0a85f12f70a0bf6c03c3754392a1f35fed51.scope: Deactivated successfully.
Nov 6 23:35:55.617957 containerd[1481]: time="2025-11-06T23:35:55.617691551Z" level=info msg="shim disconnected" id=ec4d4c42737098b47c4ac0a707ab0a85f12f70a0bf6c03c3754392a1f35fed51 namespace=k8s.io
Nov 6 23:35:55.617957 containerd[1481]: time="2025-11-06T23:35:55.617768299Z" level=warning msg="cleaning up after shim disconnected" id=ec4d4c42737098b47c4ac0a707ab0a85f12f70a0bf6c03c3754392a1f35fed51 namespace=k8s.io
Nov 6 23:35:55.617957 containerd[1481]: time="2025-11-06T23:35:55.617777683Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 6 23:35:56.239207 systemd[1]: run-containerd-runc-k8s.io-ec4d4c42737098b47c4ac0a707ab0a85f12f70a0bf6c03c3754392a1f35fed51-runc.GREmyd.mount: Deactivated successfully.
Nov 6 23:35:56.239681 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ec4d4c42737098b47c4ac0a707ab0a85f12f70a0bf6c03c3754392a1f35fed51-rootfs.mount: Deactivated successfully.
Nov 6 23:35:56.364834 kubelet[2581]: E1106 23:35:56.364095 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 6 23:35:56.368317 containerd[1481]: time="2025-11-06T23:35:56.368243588Z" level=info msg="CreateContainer within sandbox \"4b07def7a4c249abb6108f4d7988992f52219b4d2830337e4320683ea7e7181a\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Nov 6 23:35:56.385219 containerd[1481]: time="2025-11-06T23:35:56.385166390Z" level=info msg="CreateContainer within sandbox \"4b07def7a4c249abb6108f4d7988992f52219b4d2830337e4320683ea7e7181a\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"4c829a2abc6a233c4fb3d3aab876c87b8ddc92c036b472e9336e562897fcdfe9\""
Nov 6 23:35:56.388946 containerd[1481]: time="2025-11-06T23:35:56.386978963Z" level=info msg="StartContainer for \"4c829a2abc6a233c4fb3d3aab876c87b8ddc92c036b472e9336e562897fcdfe9\""
Nov 6 23:35:56.445056 systemd[1]: Started cri-containerd-4c829a2abc6a233c4fb3d3aab876c87b8ddc92c036b472e9336e562897fcdfe9.scope - libcontainer container 4c829a2abc6a233c4fb3d3aab876c87b8ddc92c036b472e9336e562897fcdfe9.
Nov 6 23:35:56.493957 containerd[1481]: time="2025-11-06T23:35:56.493757070Z" level=info msg="StartContainer for \"4c829a2abc6a233c4fb3d3aab876c87b8ddc92c036b472e9336e562897fcdfe9\" returns successfully"
Nov 6 23:35:56.501638 systemd[1]: cri-containerd-4c829a2abc6a233c4fb3d3aab876c87b8ddc92c036b472e9336e562897fcdfe9.scope: Deactivated successfully.
Nov 6 23:35:56.540558 containerd[1481]: time="2025-11-06T23:35:56.540479034Z" level=info msg="shim disconnected" id=4c829a2abc6a233c4fb3d3aab876c87b8ddc92c036b472e9336e562897fcdfe9 namespace=k8s.io
Nov 6 23:35:56.540558 containerd[1481]: time="2025-11-06T23:35:56.540530891Z" level=warning msg="cleaning up after shim disconnected" id=4c829a2abc6a233c4fb3d3aab876c87b8ddc92c036b472e9336e562897fcdfe9 namespace=k8s.io
Nov 6 23:35:56.540558 containerd[1481]: time="2025-11-06T23:35:56.540538803Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 6 23:35:57.237949 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4c829a2abc6a233c4fb3d3aab876c87b8ddc92c036b472e9336e562897fcdfe9-rootfs.mount: Deactivated successfully.
Nov 6 23:35:57.368507 kubelet[2581]: E1106 23:35:57.368425 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 6 23:35:57.372881 containerd[1481]: time="2025-11-06T23:35:57.372695924Z" level=info msg="CreateContainer within sandbox \"4b07def7a4c249abb6108f4d7988992f52219b4d2830337e4320683ea7e7181a\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Nov 6 23:35:57.393650 containerd[1481]: time="2025-11-06T23:35:57.393595659Z" level=info msg="CreateContainer within sandbox \"4b07def7a4c249abb6108f4d7988992f52219b4d2830337e4320683ea7e7181a\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"8cc2662b9fe4c824d5671acfd62adcd41ca9a5d1ee4ae236a95e2245b35ab9aa\""
Nov 6 23:35:57.400510 containerd[1481]: time="2025-11-06T23:35:57.396340126Z" level=info msg="StartContainer for \"8cc2662b9fe4c824d5671acfd62adcd41ca9a5d1ee4ae236a95e2245b35ab9aa\""
Nov 6 23:35:57.444690 systemd[1]: Started cri-containerd-8cc2662b9fe4c824d5671acfd62adcd41ca9a5d1ee4ae236a95e2245b35ab9aa.scope - libcontainer container 8cc2662b9fe4c824d5671acfd62adcd41ca9a5d1ee4ae236a95e2245b35ab9aa.
Nov 6 23:35:57.480469 systemd[1]: cri-containerd-8cc2662b9fe4c824d5671acfd62adcd41ca9a5d1ee4ae236a95e2245b35ab9aa.scope: Deactivated successfully.
Nov 6 23:35:57.483088 containerd[1481]: time="2025-11-06T23:35:57.483026400Z" level=info msg="StartContainer for \"8cc2662b9fe4c824d5671acfd62adcd41ca9a5d1ee4ae236a95e2245b35ab9aa\" returns successfully"
Nov 6 23:35:57.511626 containerd[1481]: time="2025-11-06T23:35:57.511280517Z" level=info msg="shim disconnected" id=8cc2662b9fe4c824d5671acfd62adcd41ca9a5d1ee4ae236a95e2245b35ab9aa namespace=k8s.io
Nov 6 23:35:57.511626 containerd[1481]: time="2025-11-06T23:35:57.511404574Z" level=warning msg="cleaning up after shim disconnected" id=8cc2662b9fe4c824d5671acfd62adcd41ca9a5d1ee4ae236a95e2245b35ab9aa namespace=k8s.io
Nov 6 23:35:57.511626 containerd[1481]: time="2025-11-06T23:35:57.511417898Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 6 23:35:58.238452 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8cc2662b9fe4c824d5671acfd62adcd41ca9a5d1ee4ae236a95e2245b35ab9aa-rootfs.mount: Deactivated successfully.
Nov 6 23:35:58.374619 kubelet[2581]: E1106 23:35:58.374173 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 6 23:35:58.378700 containerd[1481]: time="2025-11-06T23:35:58.378263008Z" level=info msg="CreateContainer within sandbox \"4b07def7a4c249abb6108f4d7988992f52219b4d2830337e4320683ea7e7181a\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Nov 6 23:35:58.406458 containerd[1481]: time="2025-11-06T23:35:58.406073077Z" level=info msg="CreateContainer within sandbox \"4b07def7a4c249abb6108f4d7988992f52219b4d2830337e4320683ea7e7181a\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"325a892a4b7cf54c85d66324f236abad0056311fdf5856263fffd819b652fe01\""
Nov 6 23:35:58.408700 containerd[1481]: time="2025-11-06T23:35:58.408379259Z" level=info msg="StartContainer for \"325a892a4b7cf54c85d66324f236abad0056311fdf5856263fffd819b652fe01\""
Nov 6 23:35:58.453053 systemd[1]: Started cri-containerd-325a892a4b7cf54c85d66324f236abad0056311fdf5856263fffd819b652fe01.scope - libcontainer container 325a892a4b7cf54c85d66324f236abad0056311fdf5856263fffd819b652fe01.
Nov 6 23:35:58.489975 containerd[1481]: time="2025-11-06T23:35:58.489286389Z" level=info msg="StartContainer for \"325a892a4b7cf54c85d66324f236abad0056311fdf5856263fffd819b652fe01\" returns successfully"
Nov 6 23:35:58.939845 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Nov 6 23:35:59.238354 systemd[1]: run-containerd-runc-k8s.io-325a892a4b7cf54c85d66324f236abad0056311fdf5856263fffd819b652fe01-runc.lRLPhD.mount: Deactivated successfully.
Nov 6 23:35:59.381645 kubelet[2581]: E1106 23:35:59.380614 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 6 23:35:59.402752 kubelet[2581]: I1106 23:35:59.402690 2581 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-cg89j" podStartSLOduration=5.402666902 podStartE2EDuration="5.402666902s" podCreationTimestamp="2025-11-06 23:35:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 23:35:59.401957311 +0000 UTC m=+109.557466467" watchObservedRunningTime="2025-11-06 23:35:59.402666902 +0000 UTC m=+109.558176079"
Nov 6 23:36:00.389318 kubelet[2581]: E1106 23:36:00.389268 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 6 23:36:00.846591 systemd[1]: run-containerd-runc-k8s.io-325a892a4b7cf54c85d66324f236abad0056311fdf5856263fffd819b652fe01-runc.9xaHb3.mount: Deactivated successfully.
Nov 6 23:36:02.396822 kubelet[2581]: E1106 23:36:02.396750 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 6 23:36:02.440096 systemd-networkd[1383]: lxc_health: Link UP
Nov 6 23:36:02.445103 systemd-networkd[1383]: lxc_health: Gained carrier
Nov 6 23:36:03.314779 systemd[1]: run-containerd-runc-k8s.io-325a892a4b7cf54c85d66324f236abad0056311fdf5856263fffd819b652fe01-runc.tcHHlF.mount: Deactivated successfully.
Nov 6 23:36:03.396564 kubelet[2581]: E1106 23:36:03.396526 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 6 23:36:03.653093 systemd-networkd[1383]: lxc_health: Gained IPv6LL
Nov 6 23:36:04.398487 kubelet[2581]: E1106 23:36:04.398446 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 6 23:36:05.536464 systemd[1]: run-containerd-runc-k8s.io-325a892a4b7cf54c85d66324f236abad0056311fdf5856263fffd819b652fe01-runc.fATPNQ.mount: Deactivated successfully.
Nov 6 23:36:07.711104 systemd[1]: run-containerd-runc-k8s.io-325a892a4b7cf54c85d66324f236abad0056311fdf5856263fffd819b652fe01-runc.Z8vZ9e.mount: Deactivated successfully.
Nov 6 23:36:07.773719 sshd[4413]: Connection closed by 147.75.109.163 port 38514
Nov 6 23:36:07.775545 sshd-session[4410]: pam_unix(sshd:session): session closed for user core
Nov 6 23:36:07.787424 systemd[1]: sshd@27-147.182.203.129:22-147.75.109.163:38514.service: Deactivated successfully.
Nov 6 23:36:07.789988 systemd[1]: session-28.scope: Deactivated successfully.
Nov 6 23:36:07.792433 systemd-logind[1460]: Session 28 logged out. Waiting for processes to exit.
Nov 6 23:36:07.794194 systemd-logind[1460]: Removed session 28.
Nov 6 23:36:08.999559 kubelet[2581]: E1106 23:36:08.999458 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 6 23:36:10.007121 containerd[1481]: time="2025-11-06T23:36:10.007072054Z" level=info msg="StopPodSandbox for \"628149c0522cf7ee6b4b82e9f3e712f26e4de9134a60848388865bfd81710887\""
Nov 6 23:36:10.008958 containerd[1481]: time="2025-11-06T23:36:10.007903463Z" level=info msg="TearDown network for sandbox \"628149c0522cf7ee6b4b82e9f3e712f26e4de9134a60848388865bfd81710887\" successfully"
Nov 6 23:36:10.008958 containerd[1481]: time="2025-11-06T23:36:10.007936899Z" level=info msg="StopPodSandbox for \"628149c0522cf7ee6b4b82e9f3e712f26e4de9134a60848388865bfd81710887\" returns successfully"
Nov 6 23:36:10.010893 containerd[1481]: time="2025-11-06T23:36:10.010141480Z" level=info msg="RemovePodSandbox for \"628149c0522cf7ee6b4b82e9f3e712f26e4de9134a60848388865bfd81710887\""
Nov 6 23:36:10.010893 containerd[1481]: time="2025-11-06T23:36:10.010204642Z" level=info msg="Forcibly stopping sandbox \"628149c0522cf7ee6b4b82e9f3e712f26e4de9134a60848388865bfd81710887\""
Nov 6 23:36:10.010893 containerd[1481]: time="2025-11-06T23:36:10.010299194Z" level=info msg="TearDown network for sandbox \"628149c0522cf7ee6b4b82e9f3e712f26e4de9134a60848388865bfd81710887\" successfully"
Nov 6 23:36:10.020320 containerd[1481]: time="2025-11-06T23:36:10.020262073Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"628149c0522cf7ee6b4b82e9f3e712f26e4de9134a60848388865bfd81710887\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Nov 6 23:36:10.020569 containerd[1481]: time="2025-11-06T23:36:10.020548345Z" level=info msg="RemovePodSandbox \"628149c0522cf7ee6b4b82e9f3e712f26e4de9134a60848388865bfd81710887\" returns successfully"
Nov 6 23:36:10.021583 containerd[1481]: time="2025-11-06T23:36:10.021306997Z" level=info msg="StopPodSandbox for \"a9b6121669cf8c3e79958b080c61f91209340ddc749fa189e76f517269107c44\""
Nov 6 23:36:10.021583 containerd[1481]: time="2025-11-06T23:36:10.021424694Z" level=info msg="TearDown network for sandbox \"a9b6121669cf8c3e79958b080c61f91209340ddc749fa189e76f517269107c44\" successfully"
Nov 6 23:36:10.021583 containerd[1481]: time="2025-11-06T23:36:10.021479893Z" level=info msg="StopPodSandbox for \"a9b6121669cf8c3e79958b080c61f91209340ddc749fa189e76f517269107c44\" returns successfully"
Nov 6 23:36:10.021852 containerd[1481]: time="2025-11-06T23:36:10.021783542Z" level=info msg="RemovePodSandbox for \"a9b6121669cf8c3e79958b080c61f91209340ddc749fa189e76f517269107c44\""
Nov 6 23:36:10.021852 containerd[1481]: time="2025-11-06T23:36:10.021829291Z" level=info msg="Forcibly stopping sandbox \"a9b6121669cf8c3e79958b080c61f91209340ddc749fa189e76f517269107c44\""
Nov 6 23:36:10.021925 containerd[1481]: time="2025-11-06T23:36:10.021888405Z" level=info msg="TearDown network for sandbox \"a9b6121669cf8c3e79958b080c61f91209340ddc749fa189e76f517269107c44\" successfully"
Nov 6 23:36:10.024262 containerd[1481]: time="2025-11-06T23:36:10.024213251Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a9b6121669cf8c3e79958b080c61f91209340ddc749fa189e76f517269107c44\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Nov 6 23:36:10.025281 containerd[1481]: time="2025-11-06T23:36:10.024279730Z" level=info msg="RemovePodSandbox \"a9b6121669cf8c3e79958b080c61f91209340ddc749fa189e76f517269107c44\" returns successfully"