Nov 6 23:37:32.990100 kernel: Linux version 6.6.113-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Thu Nov 6 22:02:38 -00 2025 Nov 6 23:37:32.990146 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=1a4810aa24298684dd9efd264f1d9b812e4e16f32429f4615db9ff284dd4ac25 Nov 6 23:37:32.990170 kernel: BIOS-provided physical RAM map: Nov 6 23:37:32.990184 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Nov 6 23:37:32.990197 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Nov 6 23:37:32.990210 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Nov 6 23:37:32.990226 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable Nov 6 23:37:32.990240 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved Nov 6 23:37:32.990254 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Nov 6 23:37:32.990268 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Nov 6 23:37:32.990287 kernel: NX (Execute Disable) protection: active Nov 6 23:37:32.990301 kernel: APIC: Static calls initialized Nov 6 23:37:32.990321 kernel: SMBIOS 2.8 present. Nov 6 23:37:32.990335 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017 Nov 6 23:37:32.990352 kernel: Hypervisor detected: KVM Nov 6 23:37:32.990367 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Nov 6 23:37:32.990393 kernel: kvm-clock: using sched offset of 3163716839 cycles Nov 6 23:37:32.990410 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Nov 6 23:37:32.990426 kernel: tsc: Detected 2494.138 MHz processor Nov 6 23:37:32.990441 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Nov 6 23:37:32.990457 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Nov 6 23:37:32.990473 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000 Nov 6 23:37:32.990489 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Nov 6 23:37:32.990504 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Nov 6 23:37:32.990524 kernel: ACPI: Early table checksum verification disabled Nov 6 23:37:32.990540 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS ) Nov 6 23:37:32.990555 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 6 23:37:32.990570 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 6 23:37:32.990585 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 6 23:37:32.990600 kernel: ACPI: FACS 0x000000007FFE0000 000040 Nov 6 23:37:32.990616 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 6 23:37:32.990631 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 6 23:37:32.990646 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 6 23:37:32.990667 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 6 23:37:32.990694 kernel: ACPI: Reserving FACP table memory at [mem 
0x7ffe176a-0x7ffe17dd] Nov 6 23:37:32.990711 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769] Nov 6 23:37:32.990726 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f] Nov 6 23:37:32.990741 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d] Nov 6 23:37:32.990756 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895] Nov 6 23:37:32.990772 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d] Nov 6 23:37:32.990800 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985] Nov 6 23:37:32.990816 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Nov 6 23:37:32.990832 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Nov 6 23:37:32.990848 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Nov 6 23:37:32.990864 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Nov 6 23:37:32.990884 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00000000-0x7ffdafff] Nov 6 23:37:32.990901 kernel: NODE_DATA(0) allocated [mem 0x7ffd5000-0x7ffdafff] Nov 6 23:37:32.990922 kernel: Zone ranges: Nov 6 23:37:32.990938 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Nov 6 23:37:32.990954 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff] Nov 6 23:37:32.990970 kernel: Normal empty Nov 6 23:37:32.990987 kernel: Movable zone start for each node Nov 6 23:37:32.991003 kernel: Early memory node ranges Nov 6 23:37:32.991019 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Nov 6 23:37:32.991035 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff] Nov 6 23:37:32.991051 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff] Nov 6 23:37:32.991067 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Nov 6 23:37:32.991089 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Nov 6 23:37:32.991108 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges Nov 6 23:37:32.991124 kernel: ACPI: PM-Timer IO Port: 0x608 Nov 6 23:37:32.991141 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Nov 6 23:37:32.991157 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Nov 6 23:37:32.991173 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Nov 6 23:37:32.991189 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Nov 6 23:37:32.991206 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Nov 6 23:37:32.991222 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Nov 6 23:37:32.991244 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Nov 6 23:37:32.991260 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Nov 6 23:37:32.991276 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Nov 6 23:37:32.991291 kernel: TSC deadline timer available Nov 6 23:37:32.991307 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Nov 6 23:37:32.991323 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Nov 6 23:37:32.991339 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices Nov 6 23:37:32.991358 kernel: Booting paravirtualized kernel on KVM Nov 6 23:37:32.991374 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Nov 6 23:37:32.991396 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Nov 6 23:37:32.991412 kernel: percpu: Embedded 58 pages/cpu s196712 r8192 d32664 u1048576 Nov 6 23:37:32.991428 kernel: 
pcpu-alloc: s196712 r8192 d32664 u1048576 alloc=1*2097152 Nov 6 23:37:32.991444 kernel: pcpu-alloc: [0] 0 1 Nov 6 23:37:32.991460 kernel: kvm-guest: PV spinlocks disabled, no host support Nov 6 23:37:32.991478 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=1a4810aa24298684dd9efd264f1d9b812e4e16f32429f4615db9ff284dd4ac25 Nov 6 23:37:32.991495 kernel: random: crng init done Nov 6 23:37:32.991511 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Nov 6 23:37:32.991532 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Nov 6 23:37:32.991548 kernel: Fallback order for Node 0: 0 Nov 6 23:37:32.991565 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515803 Nov 6 23:37:32.991580 kernel: Policy zone: DMA32 Nov 6 23:37:32.991597 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Nov 6 23:37:32.991614 kernel: Memory: 1969156K/2096612K available (14336K kernel code, 2288K rwdata, 22872K rodata, 43520K init, 1560K bss, 127196K reserved, 0K cma-reserved) Nov 6 23:37:32.991630 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Nov 6 23:37:32.991645 kernel: Kernel/User page tables isolation: enabled Nov 6 23:37:32.991661 kernel: ftrace: allocating 37954 entries in 149 pages Nov 6 23:37:32.992654 kernel: ftrace: allocated 149 pages with 4 groups Nov 6 23:37:32.992713 kernel: Dynamic Preempt: voluntary Nov 6 23:37:32.992728 kernel: rcu: Preemptible hierarchical RCU implementation. Nov 6 23:37:32.992743 kernel: rcu: RCU event tracing is enabled. Nov 6 23:37:32.992757 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Nov 6 23:37:32.992771 kernel: Trampoline variant of Tasks RCU enabled. Nov 6 23:37:32.992786 kernel: Rude variant of Tasks RCU enabled. Nov 6 23:37:32.992799 kernel: Tracing variant of Tasks RCU enabled. Nov 6 23:37:32.992813 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Nov 6 23:37:32.992836 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Nov 6 23:37:32.992850 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Nov 6 23:37:32.992863 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Nov 6 23:37:32.992877 kernel: Console: colour VGA+ 80x25 Nov 6 23:37:32.992898 kernel: printk: console [tty0] enabled Nov 6 23:37:32.992912 kernel: printk: console [ttyS0] enabled Nov 6 23:37:32.992927 kernel: ACPI: Core revision 20230628 Nov 6 23:37:32.992941 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Nov 6 23:37:32.992957 kernel: APIC: Switch to symmetric I/O mode setup Nov 6 23:37:32.992976 kernel: x2apic enabled Nov 6 23:37:32.992990 kernel: APIC: Switched APIC routing to: physical x2apic Nov 6 23:37:32.993004 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Nov 6 23:37:32.993021 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f39838d43, max_idle_ns: 440795267131 ns Nov 6 23:37:32.993035 kernel: Calibrating delay loop (skipped) preset value.. 
4988.27 BogoMIPS (lpj=2494138) Nov 6 23:37:32.993050 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Nov 6 23:37:32.993065 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Nov 6 23:37:32.993082 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Nov 6 23:37:32.993118 kernel: Spectre V2 : Mitigation: Retpolines Nov 6 23:37:32.993135 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Nov 6 23:37:32.993152 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Nov 6 23:37:32.993170 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Nov 6 23:37:32.993191 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Nov 6 23:37:32.993209 kernel: MDS: Mitigation: Clear CPU buffers Nov 6 23:37:32.993226 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Nov 6 23:37:32.993243 kernel: active return thunk: its_return_thunk Nov 6 23:37:32.993265 kernel: ITS: Mitigation: Aligned branch/return thunks Nov 6 23:37:32.993288 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Nov 6 23:37:32.993305 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Nov 6 23:37:32.993322 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Nov 6 23:37:32.993339 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Nov 6 23:37:32.993356 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Nov 6 23:37:32.993373 kernel: Freeing SMP alternatives memory: 32K Nov 6 23:37:32.993390 kernel: pid_max: default: 32768 minimum: 301 Nov 6 23:37:32.993407 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Nov 6 23:37:32.993429 kernel: landlock: Up and running. Nov 6 23:37:32.993446 kernel: SELinux: Initializing. Nov 6 23:37:32.993463 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Nov 6 23:37:32.993480 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Nov 6 23:37:32.993497 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1) Nov 6 23:37:32.993514 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Nov 6 23:37:32.993531 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Nov 6 23:37:32.993548 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Nov 6 23:37:32.993565 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only. Nov 6 23:37:32.993586 kernel: signal: max sigframe size: 1776 Nov 6 23:37:32.993603 kernel: rcu: Hierarchical SRCU implementation. Nov 6 23:37:32.993620 kernel: rcu: Max phase no-delay instances is 400. Nov 6 23:37:32.993637 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Nov 6 23:37:32.993654 kernel: smp: Bringing up secondary CPUs ... Nov 6 23:37:32.993670 kernel: smpboot: x86: Booting SMP configuration: Nov 6 23:37:32.993710 kernel: .... 
node #0, CPUs: #1 Nov 6 23:37:32.993726 kernel: smp: Brought up 1 node, 2 CPUs Nov 6 23:37:32.993745 kernel: smpboot: Max logical packages: 1 Nov 6 23:37:32.993769 kernel: smpboot: Total of 2 processors activated (9976.55 BogoMIPS) Nov 6 23:37:32.993785 kernel: devtmpfs: initialized Nov 6 23:37:32.993808 kernel: x86/mm: Memory block size: 128MB Nov 6 23:37:32.993823 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Nov 6 23:37:32.993840 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Nov 6 23:37:32.993859 kernel: pinctrl core: initialized pinctrl subsystem Nov 6 23:37:32.993877 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Nov 6 23:37:32.993894 kernel: audit: initializing netlink subsys (disabled) Nov 6 23:37:32.993911 kernel: audit: type=2000 audit(1762472252.332:1): state=initialized audit_enabled=0 res=1 Nov 6 23:37:32.993934 kernel: thermal_sys: Registered thermal governor 'step_wise' Nov 6 23:37:32.993951 kernel: thermal_sys: Registered thermal governor 'user_space' Nov 6 23:37:32.993966 kernel: cpuidle: using governor menu Nov 6 23:37:32.993983 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Nov 6 23:37:32.993998 kernel: dca service started, version 1.12.1 Nov 6 23:37:32.994013 kernel: PCI: Using configuration type 1 for base access Nov 6 23:37:32.994029 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Nov 6 23:37:32.994044 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Nov 6 23:37:32.994059 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Nov 6 23:37:32.994080 kernel: ACPI: Added _OSI(Module Device) Nov 6 23:37:32.994095 kernel: ACPI: Added _OSI(Processor Device) Nov 6 23:37:32.994111 kernel: ACPI: Added _OSI(Processor Aggregator Device) Nov 6 23:37:32.994127 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Nov 6 23:37:32.994143 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Nov 6 23:37:32.994159 kernel: ACPI: Interpreter enabled Nov 6 23:37:32.994175 kernel: ACPI: PM: (supports S0 S5) Nov 6 23:37:32.994191 kernel: ACPI: Using IOAPIC for interrupt routing Nov 6 23:37:32.994208 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Nov 6 23:37:32.994224 kernel: PCI: Using E820 reservations for host bridge windows Nov 6 23:37:32.994247 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Nov 6 23:37:32.994264 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Nov 6 23:37:32.994620 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Nov 6 23:37:32.994841 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Nov 6 23:37:32.995023 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Nov 6 23:37:32.995044 kernel: acpiphp: Slot [3] registered Nov 6 23:37:32.995061 kernel: acpiphp: Slot [4] registered Nov 6 23:37:32.995086 kernel: acpiphp: Slot [5] registered Nov 6 23:37:32.995102 kernel: acpiphp: Slot [6] registered Nov 6 23:37:32.995119 kernel: acpiphp: Slot [7] registered Nov 6 23:37:32.995135 kernel: acpiphp: Slot [8] registered Nov 6 23:37:32.995151 kernel: acpiphp: Slot [9] registered Nov 6 23:37:32.995168 kernel: acpiphp: Slot [10] registered Nov 6 23:37:32.995185 kernel: acpiphp: Slot [11] registered Nov 6 23:37:32.995201 kernel: acpiphp: Slot [12] registered Nov 6 
23:37:32.995217 kernel: acpiphp: Slot [13] registered Nov 6 23:37:32.995239 kernel: acpiphp: Slot [14] registered Nov 6 23:37:32.995256 kernel: acpiphp: Slot [15] registered Nov 6 23:37:32.995272 kernel: acpiphp: Slot [16] registered Nov 6 23:37:32.995289 kernel: acpiphp: Slot [17] registered Nov 6 23:37:32.995305 kernel: acpiphp: Slot [18] registered Nov 6 23:37:32.995321 kernel: acpiphp: Slot [19] registered Nov 6 23:37:32.995338 kernel: acpiphp: Slot [20] registered Nov 6 23:37:32.995354 kernel: acpiphp: Slot [21] registered Nov 6 23:37:32.995370 kernel: acpiphp: Slot [22] registered Nov 6 23:37:32.995387 kernel: acpiphp: Slot [23] registered Nov 6 23:37:32.995408 kernel: acpiphp: Slot [24] registered Nov 6 23:37:32.995425 kernel: acpiphp: Slot [25] registered Nov 6 23:37:32.995441 kernel: acpiphp: Slot [26] registered Nov 6 23:37:32.995457 kernel: acpiphp: Slot [27] registered Nov 6 23:37:32.995474 kernel: acpiphp: Slot [28] registered Nov 6 23:37:32.995490 kernel: acpiphp: Slot [29] registered Nov 6 23:37:32.995506 kernel: acpiphp: Slot [30] registered Nov 6 23:37:32.995522 kernel: acpiphp: Slot [31] registered Nov 6 23:37:32.995538 kernel: PCI host bridge to bus 0000:00 Nov 6 23:37:32.995793 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Nov 6 23:37:32.995962 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Nov 6 23:37:32.996123 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Nov 6 23:37:32.996325 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Nov 6 23:37:32.996588 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window] Nov 6 23:37:32.997902 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Nov 6 23:37:32.998888 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Nov 6 23:37:32.999158 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Nov 6 23:37:32.999414 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 Nov 6 23:37:32.999634 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef] Nov 6 23:37:32.999917 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Nov 6 23:37:33.000130 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Nov 6 23:37:33.000347 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Nov 6 23:37:33.002954 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Nov 6 23:37:33.003217 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 Nov 6 23:37:33.003433 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f] Nov 6 23:37:33.003645 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Nov 6 23:37:33.005046 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI Nov 6 23:37:33.005275 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB Nov 6 23:37:33.005512 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 Nov 6 23:37:33.006810 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref] Nov 6 23:37:33.007110 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref] Nov 6 23:37:33.007319 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff] Nov 6 23:37:33.007501 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref] Nov 6 23:37:33.007681 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Nov 6 23:37:33.009060 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 Nov 6 23:37:33.009273 kernel: 
pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf] Nov 6 23:37:33.009468 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff] Nov 6 23:37:33.009662 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref] Nov 6 23:37:33.010935 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Nov 6 23:37:33.011140 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df] Nov 6 23:37:33.011337 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff] Nov 6 23:37:33.011526 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref] Nov 6 23:37:33.011817 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000 Nov 6 23:37:33.012018 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f] Nov 6 23:37:33.012216 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff] Nov 6 23:37:33.012426 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref] Nov 6 23:37:33.012669 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000 Nov 6 23:37:33.015000 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f] Nov 6 23:37:33.015242 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff] Nov 6 23:37:33.015459 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref] Nov 6 23:37:33.015751 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000 Nov 6 23:37:33.015959 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff] Nov 6 23:37:33.016165 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff] Nov 6 23:37:33.016355 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref] Nov 6 23:37:33.016591 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00 Nov 6 23:37:33.019902 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f] Nov 6 23:37:33.020144 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref] Nov 6 23:37:33.020168 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Nov 6 23:37:33.020186 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Nov 6 23:37:33.020203 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Nov 6 23:37:33.020220 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Nov 6 23:37:33.020237 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Nov 6 23:37:33.020254 kernel: iommu: Default domain type: Translated Nov 6 23:37:33.020279 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Nov 6 23:37:33.020297 kernel: PCI: Using ACPI for IRQ routing Nov 6 23:37:33.020315 kernel: PCI: pci_cache_line_size set to 64 bytes Nov 6 23:37:33.020334 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Nov 6 23:37:33.020352 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff] Nov 6 23:37:33.020606 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Nov 6 23:37:33.020851 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Nov 6 23:37:33.021071 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Nov 6 23:37:33.021095 kernel: vgaarb: loaded Nov 6 23:37:33.021124 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Nov 6 23:37:33.021143 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Nov 6 23:37:33.021163 kernel: clocksource: Switched to clocksource kvm-clock Nov 6 23:37:33.021183 kernel: VFS: Disk quotas dquot_6.6.0 Nov 6 23:37:33.021202 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Nov 6 23:37:33.021222 kernel: pnp: PnP ACPI init Nov 6 23:37:33.021243 kernel: pnp: PnP ACPI: found 4 devices Nov 6 
23:37:33.021263 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Nov 6 23:37:33.021283 kernel: NET: Registered PF_INET protocol family Nov 6 23:37:33.021309 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Nov 6 23:37:33.021329 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Nov 6 23:37:33.021349 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Nov 6 23:37:33.021368 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Nov 6 23:37:33.021388 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Nov 6 23:37:33.021408 kernel: TCP: Hash tables configured (established 16384 bind 16384) Nov 6 23:37:33.021427 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Nov 6 23:37:33.021447 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Nov 6 23:37:33.021466 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Nov 6 23:37:33.021492 kernel: NET: Registered PF_XDP protocol family Nov 6 23:37:33.024859 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Nov 6 23:37:33.025082 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Nov 6 23:37:33.025261 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Nov 6 23:37:33.025435 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Nov 6 23:37:33.025608 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window] Nov 6 23:37:33.025849 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Nov 6 23:37:33.026052 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Nov 6 23:37:33.026090 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Nov 6 23:37:33.027825 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7a0 took 31733 usecs Nov 6 23:37:33.027865 kernel: PCI: CLS 0 bytes, default 64 Nov 6 23:37:33.027884 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Nov 6 23:37:33.027902 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f39838d43, max_idle_ns: 440795267131 ns Nov 6 23:37:33.027920 kernel: Initialise system trusted keyrings Nov 6 23:37:33.027939 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Nov 6 23:37:33.027956 kernel: Key type asymmetric registered Nov 6 23:37:33.027983 kernel: Asymmetric key parser 'x509' registered Nov 6 23:37:33.028001 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Nov 6 23:37:33.028018 kernel: io scheduler mq-deadline registered Nov 6 23:37:33.028035 kernel: io scheduler kyber registered Nov 6 23:37:33.028053 kernel: io scheduler bfq registered Nov 6 23:37:33.028070 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Nov 6 23:37:33.028088 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10 Nov 6 23:37:33.028106 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Nov 6 23:37:33.028124 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Nov 6 23:37:33.028153 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Nov 6 23:37:33.028170 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Nov 6 23:37:33.028187 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Nov 6 23:37:33.028215 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Nov 6 23:37:33.028230 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Nov 6 23:37:33.028248 kernel: input: AT Translated Set 2 keyboard as 
/devices/platform/i8042/serio0/input/input0 Nov 6 23:37:33.028571 kernel: rtc_cmos 00:03: RTC can wake from S4 Nov 6 23:37:33.028796 kernel: rtc_cmos 00:03: registered as rtc0 Nov 6 23:37:33.028989 kernel: rtc_cmos 00:03: setting system clock to 2025-11-06T23:37:32 UTC (1762472252) Nov 6 23:37:33.029163 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Nov 6 23:37:33.029183 kernel: intel_pstate: CPU model not supported Nov 6 23:37:33.029201 kernel: NET: Registered PF_INET6 protocol family Nov 6 23:37:33.029220 kernel: Segment Routing with IPv6 Nov 6 23:37:33.029237 kernel: In-situ OAM (IOAM) with IPv6 Nov 6 23:37:33.029254 kernel: NET: Registered PF_PACKET protocol family Nov 6 23:37:33.029272 kernel: Key type dns_resolver registered Nov 6 23:37:33.029289 kernel: IPI shorthand broadcast: enabled Nov 6 23:37:33.029317 kernel: sched_clock: Marking stable (912003991, 146807811)->(1176571865, -117760063) Nov 6 23:37:33.029335 kernel: registered taskstats version 1 Nov 6 23:37:33.029352 kernel: Loading compiled-in X.509 certificates Nov 6 23:37:33.029370 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.113-flatcar: d06f6bc77ef9183fbb55ec1fc021fe2cce974996' Nov 6 23:37:33.029387 kernel: Key type .fscrypt registered Nov 6 23:37:33.029404 kernel: Key type fscrypt-provisioning registered Nov 6 23:37:33.029421 kernel: ima: No TPM chip found, activating TPM-bypass! Nov 6 23:37:33.029438 kernel: ima: Allocated hash algorithm: sha1 Nov 6 23:37:33.029456 kernel: ima: No architecture policies found Nov 6 23:37:33.029481 kernel: clk: Disabling unused clocks Nov 6 23:37:33.029499 kernel: Freeing unused kernel image (initmem) memory: 43520K Nov 6 23:37:33.029516 kernel: Write protecting the kernel read-only data: 38912k Nov 6 23:37:33.029534 kernel: Freeing unused kernel image (rodata/data gap) memory: 1704K Nov 6 23:37:33.029613 kernel: Run /init as init process Nov 6 23:37:33.029640 kernel: with arguments: Nov 6 23:37:33.029658 kernel: /init Nov 6 23:37:33.029676 kernel: with environment: Nov 6 23:37:33.033191 kernel: HOME=/ Nov 6 23:37:33.033234 kernel: TERM=linux Nov 6 23:37:33.033255 systemd[1]: Successfully made /usr/ read-only. Nov 6 23:37:33.033282 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Nov 6 23:37:33.033302 systemd[1]: Detected virtualization kvm. Nov 6 23:37:33.033321 systemd[1]: Detected architecture x86-64. Nov 6 23:37:33.033339 systemd[1]: Running in initrd. Nov 6 23:37:33.033358 systemd[1]: No hostname configured, using default hostname. Nov 6 23:37:33.033387 systemd[1]: Hostname set to . Nov 6 23:37:33.033406 systemd[1]: Initializing machine ID from VM UUID. Nov 6 23:37:33.033424 systemd[1]: Queued start job for default target initrd.target. Nov 6 23:37:33.033443 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 6 23:37:33.033459 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 6 23:37:33.033476 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Nov 6 23:37:33.033491 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
Nov 6 23:37:33.033505 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Nov 6 23:37:33.033530 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Nov 6 23:37:33.033548 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Nov 6 23:37:33.033563 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Nov 6 23:37:33.033580 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 6 23:37:33.033602 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 6 23:37:33.033623 systemd[1]: Reached target paths.target - Path Units. Nov 6 23:37:33.033644 systemd[1]: Reached target slices.target - Slice Units. Nov 6 23:37:33.033672 systemd[1]: Reached target swap.target - Swaps. Nov 6 23:37:33.033726 systemd[1]: Reached target timers.target - Timer Units. Nov 6 23:37:33.033745 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Nov 6 23:37:33.033763 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 6 23:37:33.033781 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 6 23:37:33.033800 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Nov 6 23:37:33.033827 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 6 23:37:33.033846 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 6 23:37:33.033866 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 6 23:37:33.033887 systemd[1]: Reached target sockets.target - Socket Units. Nov 6 23:37:33.033910 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Nov 6 23:37:33.033931 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 6 23:37:33.033953 systemd[1]: Finished network-cleanup.service - Network Cleanup. Nov 6 23:37:33.033975 systemd[1]: Starting systemd-fsck-usr.service... Nov 6 23:37:33.034006 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 6 23:37:33.034027 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 6 23:37:33.034048 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 6 23:37:33.034069 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Nov 6 23:37:33.034167 systemd-journald[184]: Collecting audit messages is disabled. Nov 6 23:37:33.034213 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 6 23:37:33.034230 systemd[1]: Finished systemd-fsck-usr.service. Nov 6 23:37:33.034247 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 6 23:37:33.034264 systemd-journald[184]: Journal started Nov 6 23:37:33.034306 systemd-journald[184]: Runtime Journal (/run/log/journal/87b0267e57ef494cbe1682a1f437b9d0) is 4.9M, max 39.3M, 34.4M free. Nov 6 23:37:33.001101 systemd-modules-load[185]: Inserted module 'overlay' Nov 6 23:37:33.038510 systemd[1]: Started systemd-journald.service - Journal Service. Nov 6 23:37:33.058721 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
Nov 6 23:37:33.060752 kernel: Bridge firewalling registered Nov 6 23:37:33.060004 systemd-modules-load[185]: Inserted module 'br_netfilter' Nov 6 23:37:33.065223 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 6 23:37:33.105420 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 6 23:37:33.113065 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 6 23:37:33.130151 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 6 23:37:33.132828 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 6 23:37:33.134906 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 6 23:37:33.138087 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 6 23:37:33.146359 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 6 23:37:33.165701 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 6 23:37:33.173991 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 6 23:37:33.178888 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 6 23:37:33.180778 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 6 23:37:33.196022 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Nov 6 23:37:33.232999 dracut-cmdline[223]: dracut-dracut-053 Nov 6 23:37:33.237405 dracut-cmdline[223]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=1a4810aa24298684dd9efd264f1d9b812e4e16f32429f4615db9ff284dd4ac25 Nov 6 23:37:33.240643 systemd-resolved[219]: Positive Trust Anchors: Nov 6 23:37:33.240662 systemd-resolved[219]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 6 23:37:33.241860 systemd-resolved[219]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 6 23:37:33.250084 systemd-resolved[219]: Defaulting to hostname 'linux'. Nov 6 23:37:33.254428 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 6 23:37:33.255363 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 6 23:37:33.334757 kernel: SCSI subsystem initialized Nov 6 23:37:33.345737 kernel: Loading iSCSI transport class v2.0-870. Nov 6 23:37:33.357746 kernel: iscsi: registered transport (tcp) Nov 6 23:37:33.382230 kernel: iscsi: registered transport (qla4xxx) Nov 6 23:37:33.382347 kernel: QLogic iSCSI HBA Driver Nov 6 23:37:33.437660 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. 
Nov 6 23:37:33.444995 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Nov 6 23:37:33.482301 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Nov 6 23:37:33.482421 kernel: device-mapper: uevent: version 1.0.3 Nov 6 23:37:33.484059 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Nov 6 23:37:33.529760 kernel: raid6: avx2x4 gen() 22557 MB/s Nov 6 23:37:33.546756 kernel: raid6: avx2x2 gen() 23359 MB/s Nov 6 23:37:33.563985 kernel: raid6: avx2x1 gen() 21174 MB/s Nov 6 23:37:33.564094 kernel: raid6: using algorithm avx2x2 gen() 23359 MB/s Nov 6 23:37:33.583756 kernel: raid6: .... xor() 20615 MB/s, rmw enabled Nov 6 23:37:33.583912 kernel: raid6: using avx2x2 recovery algorithm Nov 6 23:37:33.606734 kernel: xor: automatically using best checksumming function avx Nov 6 23:37:33.770744 kernel: Btrfs loaded, zoned=no, fsverity=no Nov 6 23:37:33.786543 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Nov 6 23:37:33.794019 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 6 23:37:33.819908 systemd-udevd[405]: Using default interface naming scheme 'v255'. Nov 6 23:37:33.826629 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 6 23:37:33.836931 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Nov 6 23:37:33.855127 dracut-pre-trigger[415]: rd.md=0: removing MD RAID activation Nov 6 23:37:33.895947 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Nov 6 23:37:33.902980 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 6 23:37:33.980215 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 6 23:37:33.991077 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Nov 6 23:37:34.023980 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Nov 6 23:37:34.027597 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Nov 6 23:37:34.029340 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 6 23:37:34.031464 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 6 23:37:34.040027 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Nov 6 23:37:34.080342 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Nov 6 23:37:34.097722 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues Nov 6 23:37:34.102793 kernel: scsi host0: Virtio SCSI HBA Nov 6 23:37:34.125923 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Nov 6 23:37:34.159725 kernel: cryptd: max_cpu_qlen set to 1000 Nov 6 23:37:34.175893 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Nov 6 23:37:34.175988 kernel: GPT:9289727 != 125829119 Nov 6 23:37:34.177567 kernel: GPT:Alternate GPT header not at the end of the disk. Nov 6 23:37:34.177644 kernel: GPT:9289727 != 125829119 Nov 6 23:37:34.179073 kernel: GPT: Use GNU Parted to correct GPT errors. Nov 6 23:37:34.179131 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 6 23:37:34.181232 kernel: AVX2 version of gcm_enc/dec engaged. 
Nov 6 23:37:34.182715 kernel: AES CTR mode by8 optimization enabled Nov 6 23:37:34.186741 kernel: ACPI: bus type USB registered Nov 6 23:37:34.188723 kernel: usbcore: registered new interface driver usbfs Nov 6 23:37:34.199725 kernel: usbcore: registered new interface driver hub Nov 6 23:37:34.204979 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues Nov 6 23:37:34.210889 kernel: virtio_blk virtio5: [vdb] 980 512-byte logical blocks (502 kB/490 KiB) Nov 6 23:37:34.216273 kernel: usbcore: registered new device driver usb Nov 6 23:37:34.246722 kernel: libata version 3.00 loaded. Nov 6 23:37:34.251061 kernel: ata_piix 0000:00:01.1: version 2.13 Nov 6 23:37:34.259082 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 6 23:37:34.263472 kernel: scsi host1: ata_piix Nov 6 23:37:34.259231 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 6 23:37:34.271735 kernel: scsi host2: ata_piix Nov 6 23:37:34.272065 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14 Nov 6 23:37:34.272091 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15 Nov 6 23:37:34.262241 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 6 23:37:34.270737 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 6 23:37:34.270956 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 6 23:37:34.272799 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Nov 6 23:37:34.283650 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 6 23:37:34.285791 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Nov 6 23:37:34.325178 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller Nov 6 23:37:34.329728 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1 Nov 6 23:37:34.336720 kernel: uhci_hcd 0000:00:01.2: detected 2 ports Nov 6 23:37:34.337109 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180 Nov 6 23:37:34.338722 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by (udev-worker) (456) Nov 6 23:37:34.341718 kernel: hub 1-0:1.0: USB hub found Nov 6 23:37:34.342123 kernel: hub 1-0:1.0: 2 ports detected Nov 6 23:37:34.348764 kernel: BTRFS: device fsid 7e63b391-7474-48b8-9614-cf161680d90d devid 1 transid 36 /dev/vda3 scanned by (udev-worker) (450) Nov 6 23:37:34.367614 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Nov 6 23:37:34.433606 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 6 23:37:34.449255 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Nov 6 23:37:34.461403 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Nov 6 23:37:34.462240 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Nov 6 23:37:34.477378 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Nov 6 23:37:34.495063 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Nov 6 23:37:34.499031 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 6 23:37:34.502973 disk-uuid[544]: Primary Header is updated. 
Nov 6 23:37:34.502973 disk-uuid[544]: Secondary Entries is updated. Nov 6 23:37:34.502973 disk-uuid[544]: Secondary Header is updated. Nov 6 23:37:34.509832 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 6 23:37:34.528850 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 6 23:37:34.532749 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 6 23:37:35.521768 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 6 23:37:35.522218 disk-uuid[545]: The operation has completed successfully. Nov 6 23:37:35.570288 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 6 23:37:35.570402 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Nov 6 23:37:35.607908 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Nov 6 23:37:35.614315 sh[564]: Success Nov 6 23:37:35.629708 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Nov 6 23:37:35.690098 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Nov 6 23:37:35.701940 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Nov 6 23:37:35.704289 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Nov 6 23:37:35.730010 kernel: BTRFS info (device dm-0): first mount of filesystem 7e63b391-7474-48b8-9614-cf161680d90d Nov 6 23:37:35.730078 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Nov 6 23:37:35.732969 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Nov 6 23:37:35.733035 kernel: BTRFS info (device dm-0): disabling log replay at mount time Nov 6 23:37:35.734309 kernel: BTRFS info (device dm-0): using free space tree Nov 6 23:37:35.746295 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Nov 6 23:37:35.748340 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Nov 6 23:37:35.761030 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Nov 6 23:37:35.765918 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Nov 6 23:37:35.787773 kernel: BTRFS info (device vda6): first mount of filesystem c2193637-3855-459d-ac6d-9b4591136350 Nov 6 23:37:35.787907 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 6 23:37:35.787925 kernel: BTRFS info (device vda6): using free space tree Nov 6 23:37:35.794771 kernel: BTRFS info (device vda6): auto enabling async discard Nov 6 23:37:35.802759 kernel: BTRFS info (device vda6): last unmount of filesystem c2193637-3855-459d-ac6d-9b4591136350 Nov 6 23:37:35.804819 systemd[1]: Finished ignition-setup.service - Ignition (setup). Nov 6 23:37:35.810246 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Nov 6 23:37:35.909726 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 6 23:37:35.925151 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 6 23:37:35.976960 systemd-networkd[745]: lo: Link UP Nov 6 23:37:35.976977 systemd-networkd[745]: lo: Gained carrier Nov 6 23:37:35.984745 systemd-networkd[745]: Enumeration completed Nov 6 23:37:35.985112 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Nov 6 23:37:35.986071 systemd-networkd[745]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. Nov 6 23:37:35.986079 systemd-networkd[745]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network. Nov 6 23:37:35.987480 systemd[1]: Reached target network.target - Network. Nov 6 23:37:35.988576 systemd-networkd[745]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 6 23:37:35.988583 systemd-networkd[745]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 6 23:37:35.990834 systemd-networkd[745]: eth0: Link UP Nov 6 23:37:35.995468 ignition[658]: Ignition 2.20.0 Nov 6 23:37:35.990842 systemd-networkd[745]: eth0: Gained carrier Nov 6 23:37:35.995476 ignition[658]: Stage: fetch-offline Nov 6 23:37:35.990860 systemd-networkd[745]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. Nov 6 23:37:35.995519 ignition[658]: no configs at "/usr/lib/ignition/base.d" Nov 6 23:37:35.994033 systemd-networkd[745]: eth1: Link UP Nov 6 23:37:35.995528 ignition[658]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Nov 6 23:37:35.994038 systemd-networkd[745]: eth1: Gained carrier Nov 6 23:37:35.995625 ignition[658]: parsed url from cmdline: "" Nov 6 23:37:35.994053 systemd-networkd[745]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 6 23:37:35.995629 ignition[658]: no config URL provided Nov 6 23:37:36.000320 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Nov 6 23:37:35.995635 ignition[658]: reading system config file "/usr/lib/ignition/user.ign" Nov 6 23:37:35.995644 ignition[658]: no config at "/usr/lib/ignition/user.ign" Nov 6 23:37:35.995651 ignition[658]: failed to fetch config: resource requires networking Nov 6 23:37:35.996082 ignition[658]: Ignition finished successfully Nov 6 23:37:36.009914 systemd-networkd[745]: eth1: DHCPv4 address 10.124.0.32/20 acquired from 169.254.169.253 Nov 6 23:37:36.010393 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Nov 6 23:37:36.013821 systemd-networkd[745]: eth0: DHCPv4 address 164.92.114.154/19, gateway 164.92.96.1 acquired from 169.254.169.253 Nov 6 23:37:36.037548 ignition[753]: Ignition 2.20.0 Nov 6 23:37:36.037565 ignition[753]: Stage: fetch Nov 6 23:37:36.037906 ignition[753]: no configs at "/usr/lib/ignition/base.d" Nov 6 23:37:36.037924 ignition[753]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Nov 6 23:37:36.038090 ignition[753]: parsed url from cmdline: "" Nov 6 23:37:36.038096 ignition[753]: no config URL provided Nov 6 23:37:36.038105 ignition[753]: reading system config file "/usr/lib/ignition/user.ign" Nov 6 23:37:36.038119 ignition[753]: no config at "/usr/lib/ignition/user.ign" Nov 6 23:37:36.038157 ignition[753]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1 Nov 6 23:37:36.051935 ignition[753]: GET result: OK Nov 6 23:37:36.053004 ignition[753]: parsing config with SHA512: 94db1cf1cbf8c544306767db2f7f343697ca9eb7f83b60d1d6d1e9a5d5a2f15b4b15f62edbdaf67a553c31c4b5b53489bface0780b59cbb90a16c33a61c97ee0 Nov 6 23:37:36.058606 unknown[753]: fetched base config from "system" Nov 6 23:37:36.058621 unknown[753]: fetched base config from "system" Nov 6 23:37:36.058627 unknown[753]: fetched user config from "digitalocean" Nov 6 23:37:36.059833 ignition[753]: fetch: fetch complete Nov 6 23:37:36.059840 ignition[753]: fetch: fetch passed Nov 6 23:37:36.059916 ignition[753]: Ignition finished successfully Nov 6 23:37:36.062617 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Nov 6 23:37:36.068079 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Nov 6 23:37:36.103450 ignition[760]: Ignition 2.20.0 Nov 6 23:37:36.104469 ignition[760]: Stage: kargs Nov 6 23:37:36.104876 ignition[760]: no configs at "/usr/lib/ignition/base.d" Nov 6 23:37:36.104895 ignition[760]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Nov 6 23:37:36.106185 ignition[760]: kargs: kargs passed Nov 6 23:37:36.106258 ignition[760]: Ignition finished successfully Nov 6 23:37:36.109283 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Nov 6 23:37:36.116080 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Nov 6 23:37:36.140656 ignition[766]: Ignition 2.20.0 Nov 6 23:37:36.140670 ignition[766]: Stage: disks Nov 6 23:37:36.140932 ignition[766]: no configs at "/usr/lib/ignition/base.d" Nov 6 23:37:36.140943 ignition[766]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Nov 6 23:37:36.144005 systemd[1]: Finished ignition-disks.service - Ignition (disks). Nov 6 23:37:36.142010 ignition[766]: disks: disks passed Nov 6 23:37:36.142083 ignition[766]: Ignition finished successfully Nov 6 23:37:36.151270 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Nov 6 23:37:36.152835 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 6 23:37:36.153901 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 6 23:37:36.154939 systemd[1]: Reached target sysinit.target - System Initialization. Nov 6 23:37:36.155926 systemd[1]: Reached target basic.target - Basic System. Nov 6 23:37:36.162062 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Nov 6 23:37:36.183305 systemd-fsck[775]: ROOT: clean, 14/553520 files, 52654/553472 blocks Nov 6 23:37:36.186556 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. 
Nov 6 23:37:36.193969 systemd[1]: Mounting sysroot.mount - /sysroot... Nov 6 23:37:36.302709 kernel: EXT4-fs (vda9): mounted filesystem 2abcf372-764b-46c0-a870-42c779c5f871 r/w with ordered data mode. Quota mode: none. Nov 6 23:37:36.303505 systemd[1]: Mounted sysroot.mount - /sysroot. Nov 6 23:37:36.305354 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Nov 6 23:37:36.311895 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 6 23:37:36.317094 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Nov 6 23:37:36.320057 systemd[1]: Starting flatcar-afterburn-network.service - Flatcar Afterburn network service... Nov 6 23:37:36.328731 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by mount (783) Nov 6 23:37:36.332737 kernel: BTRFS info (device vda6): first mount of filesystem c2193637-3855-459d-ac6d-9b4591136350 Nov 6 23:37:36.332905 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 6 23:37:36.332928 kernel: BTRFS info (device vda6): using free space tree Nov 6 23:37:36.338589 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Nov 6 23:37:36.342088 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 6 23:37:36.342161 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Nov 6 23:37:36.348012 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Nov 6 23:37:36.362265 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Nov 6 23:37:36.363188 kernel: BTRFS info (device vda6): auto enabling async discard Nov 6 23:37:36.372306 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 6 23:37:36.438752 coreos-metadata[791]: Nov 06 23:37:36.438 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Nov 6 23:37:36.454192 initrd-setup-root[814]: cut: /sysroot/etc/passwd: No such file or directory Nov 6 23:37:36.455246 coreos-metadata[791]: Nov 06 23:37:36.452 INFO Fetch successful Nov 6 23:37:36.457209 coreos-metadata[785]: Nov 06 23:37:36.454 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Nov 6 23:37:36.463308 initrd-setup-root[821]: cut: /sysroot/etc/group: No such file or directory Nov 6 23:37:36.467481 coreos-metadata[791]: Nov 06 23:37:36.463 INFO wrote hostname ci-4230.2.4-n-07c3be35b1 to /sysroot/etc/hostname Nov 6 23:37:36.466003 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Nov 6 23:37:36.471455 coreos-metadata[785]: Nov 06 23:37:36.469 INFO Fetch successful Nov 6 23:37:36.475072 initrd-setup-root[829]: cut: /sysroot/etc/shadow: No such file or directory Nov 6 23:37:36.479141 systemd[1]: flatcar-afterburn-network.service: Deactivated successfully. Nov 6 23:37:36.480480 systemd[1]: Finished flatcar-afterburn-network.service - Flatcar Afterburn network service. Nov 6 23:37:36.485611 initrd-setup-root[837]: cut: /sysroot/etc/gshadow: No such file or directory Nov 6 23:37:36.604575 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Nov 6 23:37:36.610940 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Nov 6 23:37:36.614936 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Nov 6 23:37:36.626714 kernel: BTRFS info (device vda6): last unmount of filesystem c2193637-3855-459d-ac6d-9b4591136350 Nov 6 23:37:36.654639 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Nov 6 23:37:36.668735 ignition[904]: INFO : Ignition 2.20.0 Nov 6 23:37:36.668735 ignition[904]: INFO : Stage: mount Nov 6 23:37:36.668735 ignition[904]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 6 23:37:36.668735 ignition[904]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Nov 6 23:37:36.672632 ignition[904]: INFO : mount: mount passed Nov 6 23:37:36.672632 ignition[904]: INFO : Ignition finished successfully Nov 6 23:37:36.674115 systemd[1]: Finished ignition-mount.service - Ignition (mount). Nov 6 23:37:36.680933 systemd[1]: Starting ignition-files.service - Ignition (files)... Nov 6 23:37:36.729210 systemd[1]: sysroot-oem.mount: Deactivated successfully. Nov 6 23:37:36.736083 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 6 23:37:36.750725 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (916) Nov 6 23:37:36.753052 kernel: BTRFS info (device vda6): first mount of filesystem c2193637-3855-459d-ac6d-9b4591136350 Nov 6 23:37:36.753130 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 6 23:37:36.755232 kernel: BTRFS info (device vda6): using free space tree Nov 6 23:37:36.767725 kernel: BTRFS info (device vda6): auto enabling async discard Nov 6 23:37:36.770367 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 6 23:37:36.795460 ignition[932]: INFO : Ignition 2.20.0 Nov 6 23:37:36.795460 ignition[932]: INFO : Stage: files Nov 6 23:37:36.796786 ignition[932]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 6 23:37:36.796786 ignition[932]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Nov 6 23:37:36.798007 ignition[932]: DEBUG : files: compiled without relabeling support, skipping Nov 6 23:37:36.798007 ignition[932]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 6 23:37:36.798007 ignition[932]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 6 23:37:36.801034 ignition[932]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 6 23:37:36.801985 ignition[932]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 6 23:37:36.803023 unknown[932]: wrote ssh authorized keys file for user: core Nov 6 23:37:36.803831 ignition[932]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 6 23:37:36.804998 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Nov 6 23:37:36.805749 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Nov 6 23:37:36.836298 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Nov 6 23:37:36.887335 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Nov 6 23:37:36.887335 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Nov 6 23:37:36.887335 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Nov 6 23:37:37.091860 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Nov 6 23:37:37.200128 ignition[932]: INFO : files: 
createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Nov 6 23:37:37.200128 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Nov 6 23:37:37.202303 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Nov 6 23:37:37.202303 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 6 23:37:37.202303 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 6 23:37:37.202303 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 6 23:37:37.202303 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 6 23:37:37.202303 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 6 23:37:37.202303 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 6 23:37:37.202303 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 6 23:37:37.202303 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 6 23:37:37.202303 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Nov 6 23:37:37.202303 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Nov 6 23:37:37.202303 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Nov 6 23:37:37.202303 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Nov 6 23:37:37.543849 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Nov 6 23:37:37.815463 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Nov 6 23:37:37.815463 ignition[932]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Nov 6 23:37:37.818568 ignition[932]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 6 23:37:37.818568 ignition[932]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 6 23:37:37.818568 ignition[932]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Nov 6 23:37:37.818568 ignition[932]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Nov 6 23:37:37.818568 ignition[932]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Nov 6 23:37:37.818568 ignition[932]: 
INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 6 23:37:37.818568 ignition[932]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 6 23:37:37.818568 ignition[932]: INFO : files: files passed Nov 6 23:37:37.818568 ignition[932]: INFO : Ignition finished successfully Nov 6 23:37:37.820190 systemd[1]: Finished ignition-files.service - Ignition (files). Nov 6 23:37:37.820195 systemd-networkd[745]: eth1: Gained IPv6LL Nov 6 23:37:37.829029 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 6 23:37:37.833025 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Nov 6 23:37:37.841232 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 6 23:37:37.841368 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Nov 6 23:37:37.852527 initrd-setup-root-after-ignition[962]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 6 23:37:37.852527 initrd-setup-root-after-ignition[962]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 6 23:37:37.855353 initrd-setup-root-after-ignition[966]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 6 23:37:37.857711 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 6 23:37:37.858531 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Nov 6 23:37:37.863908 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Nov 6 23:37:37.906326 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 6 23:37:37.906509 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Nov 6 23:37:37.908653 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Nov 6 23:37:37.909289 systemd[1]: Reached target initrd.target - Initrd Default Target. Nov 6 23:37:37.910485 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Nov 6 23:37:37.915947 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Nov 6 23:37:37.934728 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 6 23:37:37.941991 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Nov 6 23:37:37.966329 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Nov 6 23:37:37.967251 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 6 23:37:37.968352 systemd[1]: Stopped target timers.target - Timer Units. Nov 6 23:37:37.969389 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 6 23:37:37.969678 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 6 23:37:37.971395 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 6 23:37:37.972449 systemd[1]: Stopped target basic.target - Basic System. Nov 6 23:37:37.973890 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Nov 6 23:37:37.975485 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Nov 6 23:37:37.976789 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 6 23:37:37.977557 systemd[1]: Stopped target remote-fs.target - Remote File Systems. 
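
The files stage above reports writing several payloads under /sysroot and linking /etc/extensions/kubernetes.raw at the image under /opt/extensions. A small after-the-fact check of that layout, with paths copied verbatim from the log (this is a verification sketch, not something Ignition runs):

    # Post-hoc check of the layout the Ignition files stage reported writing.
    # Paths are taken verbatim from the log; adjust ROOT outside the initramfs.
    import os

    ROOT = "/sysroot"
    EXPECTED_FILES = [
        "opt/helm-v3.17.3-linux-amd64.tar.gz",
        "opt/bin/cilium.tar.gz",
        "home/core/install.sh",
        "etc/flatcar/update.conf",
        "opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw",
    ]
    LINK = "etc/extensions/kubernetes.raw"

    def check(root: str = ROOT) -> None:
        for rel in EXPECTED_FILES:
            path = os.path.join(root, rel)
            print("OK  " if os.path.isfile(path) else "MISS", path)
        link = os.path.join(root, LINK)
        if os.path.islink(link):
            print("link", link, "->", os.readlink(link))

    if __name__ == "__main__":
        check()
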
Nov 6 23:37:37.978610 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Nov 6 23:37:37.979753 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 6 23:37:37.980880 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 6 23:37:37.982031 systemd[1]: Stopped target swap.target - Swaps. Nov 6 23:37:37.982880 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 6 23:37:37.983186 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 6 23:37:37.984413 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 6 23:37:37.985697 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 6 23:37:37.986663 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 6 23:37:37.986908 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 6 23:37:37.987812 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 6 23:37:37.988097 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 6 23:37:37.989373 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 6 23:37:37.989630 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 6 23:37:37.990750 systemd[1]: ignition-files.service: Deactivated successfully. Nov 6 23:37:37.991010 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 6 23:37:37.991788 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Nov 6 23:37:37.992014 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Nov 6 23:37:37.999186 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Nov 6 23:37:37.999789 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 6 23:37:38.000087 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 6 23:37:38.013212 systemd-networkd[745]: eth0: Gained IPv6LL Nov 6 23:37:38.015017 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 6 23:37:38.018144 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 6 23:37:38.018409 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 6 23:37:38.019017 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 6 23:37:38.019119 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Nov 6 23:37:38.027305 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 6 23:37:38.027462 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Nov 6 23:37:38.040019 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 6 23:37:38.042411 ignition[986]: INFO : Ignition 2.20.0 Nov 6 23:37:38.042411 ignition[986]: INFO : Stage: umount Nov 6 23:37:38.042411 ignition[986]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 6 23:37:38.042411 ignition[986]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Nov 6 23:37:38.053465 ignition[986]: INFO : umount: umount passed Nov 6 23:37:38.053465 ignition[986]: INFO : Ignition finished successfully Nov 6 23:37:38.046339 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 6 23:37:38.046469 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 6 23:37:38.051612 systemd[1]: sysroot-boot.service: Deactivated successfully. 
Nov 6 23:37:38.051745 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 6 23:37:38.054625 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 6 23:37:38.055027 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 6 23:37:38.055762 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 6 23:37:38.055853 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 6 23:37:38.056649 systemd[1]: ignition-fetch.service: Deactivated successfully. Nov 6 23:37:38.056727 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Nov 6 23:37:38.057516 systemd[1]: Stopped target network.target - Network. Nov 6 23:37:38.058432 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 6 23:37:38.058495 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 6 23:37:38.059429 systemd[1]: Stopped target paths.target - Path Units. Nov 6 23:37:38.060328 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 6 23:37:38.063795 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 6 23:37:38.064400 systemd[1]: Stopped target slices.target - Slice Units. Nov 6 23:37:38.065213 systemd[1]: Stopped target sockets.target - Socket Units. Nov 6 23:37:38.066165 systemd[1]: iscsid.socket: Deactivated successfully. Nov 6 23:37:38.066235 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 6 23:37:38.067118 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 6 23:37:38.067164 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 6 23:37:38.067918 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 6 23:37:38.067985 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 6 23:37:38.068818 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 6 23:37:38.068875 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Nov 6 23:37:38.069575 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 6 23:37:38.069625 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 6 23:37:38.070646 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 6 23:37:38.071494 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 6 23:37:38.075084 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 6 23:37:38.075271 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 6 23:37:38.082475 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Nov 6 23:37:38.083037 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 6 23:37:38.083200 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 6 23:37:38.085475 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Nov 6 23:37:38.087305 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 6 23:37:38.087367 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 6 23:37:38.092854 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 6 23:37:38.093464 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 6 23:37:38.093581 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 6 23:37:38.094204 systemd[1]: systemd-sysctl.service: Deactivated successfully. 
Nov 6 23:37:38.094258 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 6 23:37:38.094885 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 6 23:37:38.094946 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 6 23:37:38.095804 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 6 23:37:38.095863 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 6 23:37:38.097414 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 6 23:37:38.101384 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Nov 6 23:37:38.101496 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Nov 6 23:37:38.116489 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 6 23:37:38.117385 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 6 23:37:38.118526 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 6 23:37:38.118629 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 6 23:37:38.120286 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 6 23:37:38.120416 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 6 23:37:38.121192 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 6 23:37:38.121241 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 6 23:37:38.122036 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 6 23:37:38.122113 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 6 23:37:38.123513 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 6 23:37:38.123573 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 6 23:37:38.124706 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 6 23:37:38.124766 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 6 23:37:38.132077 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 6 23:37:38.132624 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 6 23:37:38.132729 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 6 23:37:38.133745 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 6 23:37:38.133801 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 6 23:37:38.136948 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Nov 6 23:37:38.137041 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Nov 6 23:37:38.144062 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 6 23:37:38.144766 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 6 23:37:38.146136 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 6 23:37:38.150916 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 6 23:37:38.162610 systemd[1]: Switching root. Nov 6 23:37:38.213958 systemd-journald[184]: Received SIGTERM from PID 1 (systemd). 
Nov 6 23:37:38.214094 systemd-journald[184]: Journal stopped Nov 6 23:37:39.558369 kernel: SELinux: policy capability network_peer_controls=1 Nov 6 23:37:39.558478 kernel: SELinux: policy capability open_perms=1 Nov 6 23:37:39.558494 kernel: SELinux: policy capability extended_socket_class=1 Nov 6 23:37:39.558506 kernel: SELinux: policy capability always_check_network=0 Nov 6 23:37:39.558518 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 6 23:37:39.558532 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 6 23:37:39.558558 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 6 23:37:39.558570 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 6 23:37:39.558583 kernel: audit: type=1403 audit(1762472258.362:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 6 23:37:39.558602 systemd[1]: Successfully loaded SELinux policy in 41.482ms. Nov 6 23:37:39.558629 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 14.092ms. Nov 6 23:37:39.558645 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Nov 6 23:37:39.558664 systemd[1]: Detected virtualization kvm. Nov 6 23:37:39.558722 systemd[1]: Detected architecture x86-64. Nov 6 23:37:39.558755 systemd[1]: Detected first boot. Nov 6 23:37:39.558786 systemd[1]: Hostname set to <ci-4230.2.4-n-07c3be35b1>. Nov 6 23:37:39.558804 systemd[1]: Initializing machine ID from VM UUID. Nov 6 23:37:39.558824 zram_generator::config[1031]: No configuration found. Nov 6 23:37:39.558848 kernel: Guest personality initialized and is inactive Nov 6 23:37:39.558867 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Nov 6 23:37:39.558885 kernel: Initialized host personality Nov 6 23:37:39.558902 kernel: NET: Registered PF_VSOCK protocol family Nov 6 23:37:39.558922 systemd[1]: Populated /etc with preset unit settings. Nov 6 23:37:39.558949 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Nov 6 23:37:39.558964 systemd[1]: initrd-switch-root.service: Deactivated successfully. Nov 6 23:37:39.558983 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Nov 6 23:37:39.559006 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Nov 6 23:37:39.559022 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Nov 6 23:37:39.559035 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Nov 6 23:37:39.559048 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Nov 6 23:37:39.559065 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Nov 6 23:37:39.559084 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Nov 6 23:37:39.559098 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Nov 6 23:37:39.559110 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Nov 6 23:37:39.559124 systemd[1]: Created slice user.slice - User and Session Slice. Nov 6 23:37:39.559137 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
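
"Initializing machine ID from VM UUID" above means the first-boot machine ID is seeded from the UUID the hypervisor exposes over DMI. A read-only sketch that just surfaces both values for comparison; the derivation itself is internal to systemd and is not reproduced, and reading the DMI file usually requires root:

    # Read-only look at the two identifiers involved in "Initializing machine ID
    # from VM UUID": the resulting /etc/machine-id and the DMI product UUID.
    from pathlib import Path

    def read(path: str) -> str:
        try:
            return Path(path).read_text().strip()
        except OSError as err:
            return f"<unreadable: {err}>"

    if __name__ == "__main__":
        print("machine-id:", read("/etc/machine-id"))
        print("DMI UUID  :", read("/sys/class/dmi/id/product_uuid"))  # usually root-only
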
Nov 6 23:37:39.559150 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 6 23:37:39.559163 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Nov 6 23:37:39.559177 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Nov 6 23:37:39.559194 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Nov 6 23:37:39.559213 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 6 23:37:39.559227 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Nov 6 23:37:39.559240 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 6 23:37:39.559253 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Nov 6 23:37:39.559266 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Nov 6 23:37:39.559278 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Nov 6 23:37:39.559296 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Nov 6 23:37:39.559310 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 6 23:37:39.559324 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 6 23:37:39.559336 systemd[1]: Reached target slices.target - Slice Units. Nov 6 23:37:39.559349 systemd[1]: Reached target swap.target - Swaps. Nov 6 23:37:39.559363 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Nov 6 23:37:39.559375 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Nov 6 23:37:39.559388 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Nov 6 23:37:39.559400 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 6 23:37:39.559418 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 6 23:37:39.559431 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 6 23:37:39.559444 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Nov 6 23:37:39.559457 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Nov 6 23:37:39.559470 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Nov 6 23:37:39.559483 systemd[1]: Mounting media.mount - External Media Directory... Nov 6 23:37:39.559496 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 6 23:37:39.559508 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Nov 6 23:37:39.559521 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Nov 6 23:37:39.559539 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Nov 6 23:37:39.559552 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 6 23:37:39.559565 systemd[1]: Reached target machines.target - Containers. Nov 6 23:37:39.559578 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Nov 6 23:37:39.559591 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Nov 6 23:37:39.559604 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 6 23:37:39.559617 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Nov 6 23:37:39.559629 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 6 23:37:39.559642 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 6 23:37:39.559660 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 6 23:37:39.559673 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Nov 6 23:37:39.559698 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 6 23:37:39.559712 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 6 23:37:39.559725 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Nov 6 23:37:39.559738 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Nov 6 23:37:39.559750 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Nov 6 23:37:39.559763 systemd[1]: Stopped systemd-fsck-usr.service. Nov 6 23:37:39.559782 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 6 23:37:39.559795 kernel: fuse: init (API version 7.39) Nov 6 23:37:39.559808 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 6 23:37:39.559821 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 6 23:37:39.559834 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 6 23:37:39.559846 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Nov 6 23:37:39.559860 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Nov 6 23:37:39.559874 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 6 23:37:39.559887 kernel: ACPI: bus type drm_connector registered Nov 6 23:37:39.559904 systemd[1]: verity-setup.service: Deactivated successfully. Nov 6 23:37:39.559924 systemd[1]: Stopped verity-setup.service. Nov 6 23:37:39.559950 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 6 23:37:39.559969 kernel: loop: module loaded Nov 6 23:37:39.559988 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Nov 6 23:37:39.560008 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Nov 6 23:37:39.560047 systemd[1]: Mounted media.mount - External Media Directory. Nov 6 23:37:39.560068 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Nov 6 23:37:39.560088 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Nov 6 23:37:39.560106 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Nov 6 23:37:39.560137 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 6 23:37:39.560156 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 6 23:37:39.560175 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Nov 6 23:37:39.560194 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
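
The modprobe@*.service units above only ensure a handful of modules (configfs, dm_mod, drm, efi_pstore, fuse, loop) are available; the kernel lines "fuse: init ..." and "loop: module loaded" confirm two of the loads. A sketch of the equivalent userspace check; modules compiled into the kernel never appear in /proc/modules, so an absent entry is not necessarily a failure:

    # Check whether the modules the modprobe@ units target are visible as loaded
    # modules. Built-ins will not show up in /proc/modules.
    WANTED = {"configfs", "dm_mod", "drm", "efi_pstore", "fuse", "loop"}

    def loaded_modules(path: str = "/proc/modules") -> set[str]:
        with open(path) as f:
            return {line.split()[0] for line in f if line.strip()}

    if __name__ == "__main__":
        present = loaded_modules()
        for mod in sorted(WANTED):
            print(mod, "loaded" if mod in present else "not listed (possibly built-in)")
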
Nov 6 23:37:39.560213 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 6 23:37:39.560231 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 6 23:37:39.560250 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 6 23:37:39.560268 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 6 23:37:39.560287 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 6 23:37:39.560316 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 6 23:37:39.560335 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Nov 6 23:37:39.560367 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 6 23:37:39.560386 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 6 23:37:39.560406 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 6 23:37:39.560426 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Nov 6 23:37:39.560446 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 6 23:37:39.560516 systemd-journald[1112]: Collecting audit messages is disabled. Nov 6 23:37:39.560578 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Nov 6 23:37:39.560603 systemd-journald[1112]: Journal started Nov 6 23:37:39.560638 systemd-journald[1112]: Runtime Journal (/run/log/journal/87b0267e57ef494cbe1682a1f437b9d0) is 4.9M, max 39.3M, 34.4M free. Nov 6 23:37:39.140225 systemd[1]: Queued start job for default target multi-user.target. Nov 6 23:37:39.151617 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Nov 6 23:37:39.152141 systemd[1]: systemd-journald.service: Deactivated successfully. Nov 6 23:37:39.562717 systemd[1]: Started systemd-journald.service - Journal Service. Nov 6 23:37:39.564949 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Nov 6 23:37:39.580632 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 6 23:37:39.590343 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Nov 6 23:37:39.594911 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Nov 6 23:37:39.595451 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 6 23:37:39.595490 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 6 23:37:39.598870 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Nov 6 23:37:39.602835 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Nov 6 23:37:39.613057 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Nov 6 23:37:39.616041 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 6 23:37:39.619246 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Nov 6 23:37:39.629870 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Nov 6 23:37:39.630601 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 6 23:37:39.635861 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... 
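
systemd-journald above reports the runtime journal under /run/log/journal/87b0267e57ef494cbe1682a1f437b9d0 as "4.9M, max 39.3M, 34.4M free". A rough sketch that approximates only the current-size figure by summing journal files; the max/free numbers come from journald's own accounting of the backing tmpfs and are not recomputed here:

    # Sum the on-disk size of journal files under the runtime journal directory
    # reported above.
    from pathlib import Path

    def journal_size(root: Path = Path("/run/log/journal")) -> int:
        return sum(p.stat().st_size for p in root.rglob("*.journal") if p.is_file())

    if __name__ == "__main__":
        print(f"runtime journal is {journal_size() / 2**20:.1f}M")
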
Nov 6 23:37:39.636599 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 6 23:37:39.645020 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 6 23:37:39.653582 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Nov 6 23:37:39.656748 systemd-journald[1112]: Time spent on flushing to /var/log/journal/87b0267e57ef494cbe1682a1f437b9d0 is 119.301ms for 998 entries. Nov 6 23:37:39.656748 systemd-journald[1112]: System Journal (/var/log/journal/87b0267e57ef494cbe1682a1f437b9d0) is 8M, max 195.6M, 187.6M free. Nov 6 23:37:39.804619 systemd-journald[1112]: Received client request to flush runtime journal. Nov 6 23:37:39.805237 kernel: loop0: detected capacity change from 0 to 138176 Nov 6 23:37:39.805284 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 6 23:37:39.805299 kernel: loop1: detected capacity change from 0 to 8 Nov 6 23:37:39.663196 systemd[1]: Starting systemd-sysusers.service - Create System Users... Nov 6 23:37:39.668193 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Nov 6 23:37:39.668885 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Nov 6 23:37:39.669642 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Nov 6 23:37:39.702408 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Nov 6 23:37:39.703169 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Nov 6 23:37:39.714455 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Nov 6 23:37:39.775860 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 6 23:37:39.793908 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Nov 6 23:37:39.807330 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 6 23:37:39.810027 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Nov 6 23:37:39.831113 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Nov 6 23:37:39.838722 kernel: loop2: detected capacity change from 0 to 229808 Nov 6 23:37:39.837140 systemd[1]: Finished systemd-sysusers.service - Create System Users. Nov 6 23:37:39.847126 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 6 23:37:39.881926 udevadm[1175]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Nov 6 23:37:39.888849 kernel: loop3: detected capacity change from 0 to 147912 Nov 6 23:37:39.922383 systemd-tmpfiles[1177]: ACLs are not supported, ignoring. Nov 6 23:37:39.922403 systemd-tmpfiles[1177]: ACLs are not supported, ignoring. Nov 6 23:37:39.937766 kernel: loop4: detected capacity change from 0 to 138176 Nov 6 23:37:39.937437 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 6 23:37:39.962960 kernel: loop5: detected capacity change from 0 to 8 Nov 6 23:37:39.963097 kernel: loop6: detected capacity change from 0 to 229808 Nov 6 23:37:39.981822 kernel: loop7: detected capacity change from 0 to 147912 Nov 6 23:37:40.002096 (sd-merge)[1181]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'. 
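
The (sd-merge) lines show systemd-sysext picking up the 'containerd-flatcar', 'docker-flatcar', 'kubernetes' and 'oem-digitalocean' extension images for merging into /usr. A sketch that lists what such a scan would see, assuming the standard sysext search directories (/etc/extensions, /run/extensions, /var/lib/extensions); the overlay merge itself is not reproduced:

    # List extension images in the directories systemd-sysext searches by default.
    # On this host the scan found: containerd-flatcar, docker-flatcar, kubernetes,
    # oem-digitalocean.
    from pathlib import Path

    SEARCH_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

    def list_extensions() -> list[str]:
        found = []
        for d in SEARCH_DIRS:
            p = Path(d)
            if p.is_dir():
                found += [str(child) for child in sorted(p.iterdir())]
        return found

    if __name__ == "__main__":
        for entry in list_extensions() or ["<no extensions found>"]:
            print(entry)
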
Nov 6 23:37:40.003943 (sd-merge)[1181]: Merged extensions into '/usr'. Nov 6 23:37:40.011827 systemd[1]: Reload requested from client PID 1157 ('systemd-sysext') (unit systemd-sysext.service)... Nov 6 23:37:40.011846 systemd[1]: Reloading... Nov 6 23:37:40.248172 zram_generator::config[1210]: No configuration found. Nov 6 23:37:40.272284 ldconfig[1152]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 6 23:37:40.489868 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 6 23:37:40.604835 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 6 23:37:40.605400 systemd[1]: Reloading finished in 593 ms. Nov 6 23:37:40.622146 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Nov 6 23:37:40.626751 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Nov 6 23:37:40.646996 systemd[1]: Starting ensure-sysext.service... Nov 6 23:37:40.651945 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 6 23:37:40.671946 systemd[1]: Reload requested from client PID 1253 ('systemctl') (unit ensure-sysext.service)... Nov 6 23:37:40.671973 systemd[1]: Reloading... Nov 6 23:37:40.741605 systemd-tmpfiles[1254]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 6 23:37:40.741996 systemd-tmpfiles[1254]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Nov 6 23:37:40.744611 systemd-tmpfiles[1254]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 6 23:37:40.746939 systemd-tmpfiles[1254]: ACLs are not supported, ignoring. Nov 6 23:37:40.747157 systemd-tmpfiles[1254]: ACLs are not supported, ignoring. Nov 6 23:37:40.756348 systemd-tmpfiles[1254]: Detected autofs mount point /boot during canonicalization of boot. Nov 6 23:37:40.757098 systemd-tmpfiles[1254]: Skipping /boot Nov 6 23:37:40.782138 systemd-tmpfiles[1254]: Detected autofs mount point /boot during canonicalization of boot. Nov 6 23:37:40.782901 systemd-tmpfiles[1254]: Skipping /boot Nov 6 23:37:40.862742 zram_generator::config[1294]: No configuration found. Nov 6 23:37:41.026012 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 6 23:37:41.133116 systemd[1]: Reloading finished in 460 ms. Nov 6 23:37:41.151972 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Nov 6 23:37:41.165293 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 6 23:37:41.181335 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 6 23:37:41.185155 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Nov 6 23:37:41.196130 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Nov 6 23:37:41.208211 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 6 23:37:41.221302 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 6 23:37:41.227256 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... 
Nov 6 23:37:41.236098 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 6 23:37:41.236405 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 6 23:37:41.244170 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 6 23:37:41.254063 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 6 23:37:41.259619 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 6 23:37:41.260382 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 6 23:37:41.260568 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 6 23:37:41.270187 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Nov 6 23:37:41.270727 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 6 23:37:41.277176 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 6 23:37:41.277381 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 6 23:37:41.277571 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 6 23:37:41.277657 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 6 23:37:41.278837 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 6 23:37:41.284187 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 6 23:37:41.284481 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 6 23:37:41.299106 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 6 23:37:41.299982 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 6 23:37:41.300159 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 6 23:37:41.300341 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 6 23:37:41.307568 systemd[1]: Finished ensure-sysext.service. Nov 6 23:37:41.326025 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Nov 6 23:37:41.329215 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Nov 6 23:37:41.331698 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Nov 6 23:37:41.331994 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 6 23:37:41.345377 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 6 23:37:41.346614 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 6 23:37:41.351225 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 6 23:37:41.351457 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 6 23:37:41.357681 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Nov 6 23:37:41.358650 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 6 23:37:41.360144 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 6 23:37:41.366744 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 6 23:37:41.376308 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Nov 6 23:37:41.379372 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 6 23:37:41.379702 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 6 23:37:41.396002 systemd[1]: Starting systemd-update-done.service - Update is Completed... Nov 6 23:37:41.405898 systemd-udevd[1337]: Using default interface naming scheme 'v255'. Nov 6 23:37:41.435186 systemd[1]: Finished systemd-update-done.service - Update is Completed. Nov 6 23:37:41.441378 systemd[1]: Started systemd-userdbd.service - User Database Manager. Nov 6 23:37:41.451485 augenrules[1372]: No rules Nov 6 23:37:41.452348 systemd[1]: audit-rules.service: Deactivated successfully. Nov 6 23:37:41.452673 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 6 23:37:41.468453 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 6 23:37:41.478937 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 6 23:37:41.621251 systemd-resolved[1331]: Positive Trust Anchors: Nov 6 23:37:41.621925 systemd-resolved[1331]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 6 23:37:41.622093 systemd-resolved[1331]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 6 23:37:41.632252 systemd-resolved[1331]: Using system hostname 'ci-4230.2.4-n-07c3be35b1'. Nov 6 23:37:41.636780 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 6 23:37:41.637511 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 6 23:37:41.664802 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Nov 6 23:37:41.665621 systemd[1]: Reached target time-set.target - System Time Set. 
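
The negative trust anchors listed by systemd-resolved above are domain suffixes (private-range reverse zones, "local", "internal", and so on) for which DNSSEC validation is not insisted upon. A toy suffix check over a few of those entries; resolved's real matching and per-link handling are more involved:

    # Toy suffix check against a subset of the negative trust anchors listed above.
    NEGATIVE_ANCHORS = [
        "home.arpa", "10.in-addr.arpa", "168.192.in-addr.arpa", "d.f.ip6.arpa",
        "ipv4only.arpa", "corp", "home", "internal", "intranet", "lan", "local",
        "private", "test",
    ]

    def under_negative_anchor(name: str) -> bool:
        n = name.rstrip(".").lower()
        return any(n == a or n.endswith("." + a) for a in NEGATIVE_ANCHORS)

    if __name__ == "__main__":
        for n in ("printer.lan", "db.internal", "example.com"):
            print(n, "->", under_negative_anchor(n))
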
Nov 6 23:37:41.673558 systemd-networkd[1382]: lo: Link UP Nov 6 23:37:41.673572 systemd-networkd[1382]: lo: Gained carrier Nov 6 23:37:41.674588 systemd-networkd[1382]: Enumeration completed Nov 6 23:37:41.674763 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 6 23:37:41.675557 systemd-timesyncd[1348]: No network connectivity, watching for changes. Nov 6 23:37:41.676332 systemd[1]: Reached target network.target - Network. Nov 6 23:37:41.691259 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Nov 6 23:37:41.706128 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Nov 6 23:37:41.709996 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Nov 6 23:37:41.762079 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Nov 6 23:37:41.792378 systemd-networkd[1382]: eth1: Configuring with /run/systemd/network/10-ae:3b:08:f1:f1:42.network. Nov 6 23:37:41.793998 systemd-networkd[1382]: eth1: Link UP Nov 6 23:37:41.794008 systemd-networkd[1382]: eth1: Gained carrier Nov 6 23:37:41.801585 systemd-timesyncd[1348]: Network configuration changed, trying to establish connection. Nov 6 23:37:41.821361 systemd[1]: Condition check resulted in dev-disk-by\x2dlabel-config\x2d2.device - /dev/disk/by-label/config-2 being skipped. Nov 6 23:37:41.834739 systemd[1]: Mounting media-configdrive.mount - /media/configdrive... Nov 6 23:37:41.835577 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 6 23:37:41.836421 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 6 23:37:41.845061 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 6 23:37:41.852488 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 6 23:37:41.857913 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 6 23:37:41.860103 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 6 23:37:41.860165 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 6 23:37:41.860215 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 6 23:37:41.860249 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 6 23:37:41.862370 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 6 23:37:41.863669 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 6 23:37:41.880495 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 6 23:37:41.882926 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 6 23:37:41.904124 systemd-networkd[1382]: eth0: Configuring with /run/systemd/network/10-76:33:19:08:13:68.network. 
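
The eth1 and eth0 links are configured from generated unit files named after the interface MAC, e.g. /run/systemd/network/10-ae:3b:08:f1:f1:42.network. The sketch below emits a minimal unit of that shape; [Match] MACAddress= and [Network] DHCP= are standard systemd.network keys, but the real contents of the generated files are not shown in the log, so the body is only a plausible guess:

    # Emit a minimal MAC-matched systemd.network unit named like the generated
    # files referenced above. The option set is a guess, not the actual content
    # of /run/systemd/network/10-<mac>.network on this host.
    def network_unit(mac: str, dhcp: str = "ipv4") -> tuple[str, str]:
        name = f"10-{mac}.network"
        body = (
            "[Match]\n"
            f"MACAddress={mac}\n"
            "\n"
            "[Network]\n"
            f"DHCP={dhcp}\n"
        )
        return name, body

    if __name__ == "__main__":
        fname, body = network_unit("ae:3b:08:f1:f1:42")
        print(f"# /run/systemd/network/{fname}")
        print(body)
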
Nov 6 23:37:41.908114 systemd-networkd[1382]: eth0: Link UP Nov 6 23:37:41.908124 systemd-networkd[1382]: eth0: Gained carrier Nov 6 23:37:41.917731 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1381) Nov 6 23:37:41.917842 kernel: ISO 9660 Extensions: RRIP_1991A Nov 6 23:37:41.923051 systemd[1]: Mounted media-configdrive.mount - /media/configdrive. Nov 6 23:37:41.940050 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 6 23:37:41.941661 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 6 23:37:41.961408 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 6 23:37:41.961512 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 6 23:37:41.970806 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Nov 6 23:37:41.973730 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Nov 6 23:37:41.995772 kernel: ACPI: button: Power Button [PWRF] Nov 6 23:37:42.015722 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Nov 6 23:37:42.057719 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Nov 6 23:37:42.066814 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Nov 6 23:37:42.081046 kernel: Console: switching to colour dummy device 80x25 Nov 6 23:37:42.083317 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Nov 6 23:37:42.083426 kernel: [drm] features: -context_init Nov 6 23:37:42.095764 kernel: [drm] number of scanouts: 1 Nov 6 23:37:42.095884 kernel: [drm] number of cap sets: 0 Nov 6 23:37:42.105191 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 6 23:37:42.121728 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0 Nov 6 23:37:42.123544 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Nov 6 23:37:42.135236 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Nov 6 23:37:42.148901 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Nov 6 23:37:42.149016 kernel: Console: switching to colour frame buffer device 128x48 Nov 6 23:37:42.155212 kernel: mousedev: PS/2 mouse device common for all mice Nov 6 23:37:42.171957 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device Nov 6 23:37:42.179367 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 6 23:37:42.180081 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 6 23:37:42.187365 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Nov 6 23:37:42.209325 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 6 23:37:42.211128 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Nov 6 23:37:42.220595 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 6 23:37:42.220929 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 6 23:37:42.229129 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Nov 6 23:37:42.313310 kernel: EDAC MC: Ver: 3.0.0 Nov 6 23:37:42.337206 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 6 23:37:42.345590 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Nov 6 23:37:42.358013 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Nov 6 23:37:42.372993 lvm[1442]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 6 23:37:42.407488 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Nov 6 23:37:42.409318 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 6 23:37:42.409466 systemd[1]: Reached target sysinit.target - System Initialization. Nov 6 23:37:42.409652 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Nov 6 23:37:42.409787 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Nov 6 23:37:42.410093 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 6 23:37:42.410294 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Nov 6 23:37:42.410390 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Nov 6 23:37:42.410457 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 6 23:37:42.410487 systemd[1]: Reached target paths.target - Path Units. Nov 6 23:37:42.410542 systemd[1]: Reached target timers.target - Timer Units. Nov 6 23:37:42.412896 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 6 23:37:42.416832 systemd[1]: Starting docker.socket - Docker Socket for the API... Nov 6 23:37:42.422478 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Nov 6 23:37:42.425775 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Nov 6 23:37:42.426652 systemd[1]: Reached target ssh-access.target - SSH Access Available. Nov 6 23:37:42.442063 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Nov 6 23:37:42.444808 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Nov 6 23:37:42.458152 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Nov 6 23:37:42.461597 systemd[1]: Listening on docker.socket - Docker Socket for the API. Nov 6 23:37:42.464302 systemd[1]: Reached target sockets.target - Socket Units. Nov 6 23:37:42.465874 systemd[1]: Reached target basic.target - Basic System. Nov 6 23:37:42.466461 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Nov 6 23:37:42.466490 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Nov 6 23:37:42.469917 lvm[1446]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 6 23:37:42.474050 systemd[1]: Starting containerd.service - containerd container runtime... Nov 6 23:37:42.485190 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Nov 6 23:37:42.491007 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Nov 6 23:37:42.496912 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... 
Nov 6 23:37:42.508051 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Nov 6 23:37:42.509931 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Nov 6 23:37:42.516969 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Nov 6 23:37:42.520958 jq[1450]: false Nov 6 23:37:42.525058 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Nov 6 23:37:42.531446 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Nov 6 23:37:42.541982 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Nov 6 23:37:42.544020 extend-filesystems[1451]: Found loop4 Nov 6 23:37:42.546531 extend-filesystems[1451]: Found loop5 Nov 6 23:37:42.546531 extend-filesystems[1451]: Found loop6 Nov 6 23:37:42.546531 extend-filesystems[1451]: Found loop7 Nov 6 23:37:42.546531 extend-filesystems[1451]: Found vda Nov 6 23:37:42.546531 extend-filesystems[1451]: Found vda1 Nov 6 23:37:42.546531 extend-filesystems[1451]: Found vda2 Nov 6 23:37:42.546531 extend-filesystems[1451]: Found vda3 Nov 6 23:37:42.546531 extend-filesystems[1451]: Found usr Nov 6 23:37:42.546531 extend-filesystems[1451]: Found vda4 Nov 6 23:37:42.546531 extend-filesystems[1451]: Found vda6 Nov 6 23:37:42.546531 extend-filesystems[1451]: Found vda7 Nov 6 23:37:42.546531 extend-filesystems[1451]: Found vda9 Nov 6 23:37:42.546531 extend-filesystems[1451]: Checking size of /dev/vda9 Nov 6 23:37:42.563103 systemd[1]: Starting systemd-logind.service - User Login Management... Nov 6 23:37:42.570007 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 6 23:37:42.573157 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 6 23:37:42.575663 systemd[1]: Starting update-engine.service - Update Engine... Nov 6 23:37:42.596129 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Nov 6 23:37:42.600131 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Nov 6 23:37:42.614741 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 6 23:37:42.615866 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Nov 6 23:37:42.623613 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 6 23:37:42.623998 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Nov 6 23:37:42.637476 jq[1462]: true Nov 6 23:37:42.662305 extend-filesystems[1451]: Resized partition /dev/vda9 Nov 6 23:37:42.684980 extend-filesystems[1481]: resize2fs 1.47.1 (20-May-2024) Nov 6 23:37:42.695818 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks Nov 6 23:37:42.695905 coreos-metadata[1448]: Nov 06 23:37:42.692 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Nov 6 23:37:42.693972 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Nov 6 23:37:42.693663 dbus-daemon[1449]: [system] SELinux support is enabled Nov 6 23:37:42.704435 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 6 23:37:42.704479 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Nov 6 23:37:42.705654 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 6 23:37:42.706870 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean). Nov 6 23:37:42.706918 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 6 23:37:42.712023 coreos-metadata[1448]: Nov 06 23:37:42.709 INFO Fetch successful Nov 6 23:37:42.721236 tar[1465]: linux-amd64/LICENSE Nov 6 23:37:42.721236 tar[1465]: linux-amd64/helm Nov 6 23:37:42.725919 jq[1469]: true Nov 6 23:37:42.736201 (ntainerd)[1474]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Nov 6 23:37:42.741883 systemd[1]: motdgen.service: Deactivated successfully. Nov 6 23:37:42.742117 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Nov 6 23:37:42.752599 update_engine[1461]: I20251106 23:37:42.737596 1461 main.cc:92] Flatcar Update Engine starting Nov 6 23:37:42.762627 systemd[1]: Started update-engine.service - Update Engine. Nov 6 23:37:42.775272 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1383) Nov 6 23:37:42.774708 systemd[1]: Started locksmithd.service - Cluster reboot manager. Nov 6 23:37:42.778581 update_engine[1461]: I20251106 23:37:42.777922 1461 update_check_scheduler.cc:74] Next update check in 11m52s Nov 6 23:37:42.797154 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Nov 6 23:37:42.803235 systemd-logind[1460]: New seat seat0. Nov 6 23:37:42.810109 systemd-logind[1460]: Watching system buttons on /dev/input/event1 (Power Button) Nov 6 23:37:42.810140 systemd-logind[1460]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Nov 6 23:37:42.811803 extend-filesystems[1481]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Nov 6 23:37:42.811803 extend-filesystems[1481]: old_desc_blocks = 1, new_desc_blocks = 8 Nov 6 23:37:42.811803 extend-filesystems[1481]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Nov 6 23:37:42.824562 extend-filesystems[1451]: Resized filesystem in /dev/vda9 Nov 6 23:37:42.824562 extend-filesystems[1451]: Found vdb Nov 6 23:37:42.812303 systemd[1]: Started systemd-logind.service - User Login Management. Nov 6 23:37:42.820614 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 6 23:37:42.820905 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Nov 6 23:37:42.872905 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Nov 6 23:37:42.874104 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
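The resize2fs figures above put the root filesystem growth in perspective: 553472 and 15121403 blocks at 4 KiB per block correspond to roughly 2.1 GiB before and 57.7 GiB after the on-line resize of /dev/vda9. A quick check of that arithmetic in plain Python (values copied from the log; the snippet is illustrative only, not part of the boot flow):

# Block counts reported by resize2fs for /dev/vda9, at 4 KiB per block.
old_blocks, new_blocks, block_size = 553_472, 15_121_403, 4096
for label, blocks in (("before", old_blocks), ("after", new_blocks)):
    print(f"{label}: {blocks * block_size / 2**30:.1f} GiB")
# before: 2.1 GiB
# after: 57.7 GiB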
Nov 6 23:37:43.006743 bash[1513]: Updated "/home/core/.ssh/authorized_keys" Nov 6 23:37:43.010905 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 6 23:37:43.031167 systemd[1]: Starting sshkeys.service... Nov 6 23:37:43.069658 systemd-networkd[1382]: eth0: Gained IPv6LL Nov 6 23:37:43.075775 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 6 23:37:43.082880 systemd[1]: Reached target network-online.target - Network is Online. Nov 6 23:37:43.097772 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 23:37:43.106148 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 6 23:37:43.135838 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Nov 6 23:37:43.149227 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Nov 6 23:37:43.271422 locksmithd[1493]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 6 23:37:43.292818 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 6 23:37:43.311940 coreos-metadata[1529]: Nov 06 23:37:43.310 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Nov 6 23:37:43.325724 coreos-metadata[1529]: Nov 06 23:37:43.324 INFO Fetch successful Nov 6 23:37:43.338203 sshd_keygen[1489]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 6 23:37:43.343898 unknown[1529]: wrote ssh authorized keys file for user: core Nov 6 23:37:43.405540 containerd[1474]: time="2025-11-06T23:37:43.404602030Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Nov 6 23:37:43.415496 update-ssh-keys[1544]: Updated "/home/core/.ssh/authorized_keys" Nov 6 23:37:43.421005 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Nov 6 23:37:43.427161 systemd[1]: Finished sshkeys.service. Nov 6 23:37:43.453183 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 6 23:37:43.468250 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 6 23:37:43.503976 containerd[1474]: time="2025-11-06T23:37:43.503850100Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Nov 6 23:37:43.510782 systemd[1]: issuegen.service: Deactivated successfully. Nov 6 23:37:43.511194 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 6 23:37:43.520097 containerd[1474]: time="2025-11-06T23:37:43.518343337Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.113-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Nov 6 23:37:43.520097 containerd[1474]: time="2025-11-06T23:37:43.518405209Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Nov 6 23:37:43.520097 containerd[1474]: time="2025-11-06T23:37:43.518432352Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Nov 6 23:37:43.520097 containerd[1474]: time="2025-11-06T23:37:43.518651861Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." 
type=io.containerd.warning.v1 Nov 6 23:37:43.520097 containerd[1474]: time="2025-11-06T23:37:43.518674067Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Nov 6 23:37:43.524605 containerd[1474]: time="2025-11-06T23:37:43.520590077Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Nov 6 23:37:43.524605 containerd[1474]: time="2025-11-06T23:37:43.520632536Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Nov 6 23:37:43.524605 containerd[1474]: time="2025-11-06T23:37:43.521037584Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 6 23:37:43.524605 containerd[1474]: time="2025-11-06T23:37:43.521072188Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Nov 6 23:37:43.524605 containerd[1474]: time="2025-11-06T23:37:43.521092490Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Nov 6 23:37:43.524605 containerd[1474]: time="2025-11-06T23:37:43.521107535Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Nov 6 23:37:43.524605 containerd[1474]: time="2025-11-06T23:37:43.521254364Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Nov 6 23:37:43.524605 containerd[1474]: time="2025-11-06T23:37:43.521582316Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Nov 6 23:37:43.522274 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 6 23:37:43.526159 containerd[1474]: time="2025-11-06T23:37:43.526076553Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 6 23:37:43.527893 containerd[1474]: time="2025-11-06T23:37:43.527829237Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Nov 6 23:37:43.528660 containerd[1474]: time="2025-11-06T23:37:43.528303458Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Nov 6 23:37:43.528660 containerd[1474]: time="2025-11-06T23:37:43.528428740Z" level=info msg="metadata content store policy set" policy=shared Nov 6 23:37:43.536668 containerd[1474]: time="2025-11-06T23:37:43.533754806Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Nov 6 23:37:43.536668 containerd[1474]: time="2025-11-06T23:37:43.533858393Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Nov 6 23:37:43.536668 containerd[1474]: time="2025-11-06T23:37:43.533889544Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." 
type=io.containerd.lease.v1 Nov 6 23:37:43.536668 containerd[1474]: time="2025-11-06T23:37:43.533915220Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Nov 6 23:37:43.536668 containerd[1474]: time="2025-11-06T23:37:43.533937041Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Nov 6 23:37:43.536668 containerd[1474]: time="2025-11-06T23:37:43.534212961Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Nov 6 23:37:43.536668 containerd[1474]: time="2025-11-06T23:37:43.534566494Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Nov 6 23:37:43.536668 containerd[1474]: time="2025-11-06T23:37:43.534801379Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Nov 6 23:37:43.536668 containerd[1474]: time="2025-11-06T23:37:43.534857744Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Nov 6 23:37:43.536668 containerd[1474]: time="2025-11-06T23:37:43.534890409Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Nov 6 23:37:43.536668 containerd[1474]: time="2025-11-06T23:37:43.534912356Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Nov 6 23:37:43.536668 containerd[1474]: time="2025-11-06T23:37:43.534933327Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Nov 6 23:37:43.536668 containerd[1474]: time="2025-11-06T23:37:43.534953700Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Nov 6 23:37:43.536668 containerd[1474]: time="2025-11-06T23:37:43.534976615Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Nov 6 23:37:43.538470 containerd[1474]: time="2025-11-06T23:37:43.534998402Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Nov 6 23:37:43.538470 containerd[1474]: time="2025-11-06T23:37:43.535018975Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Nov 6 23:37:43.538470 containerd[1474]: time="2025-11-06T23:37:43.535043172Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Nov 6 23:37:43.538470 containerd[1474]: time="2025-11-06T23:37:43.535062490Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Nov 6 23:37:43.538470 containerd[1474]: time="2025-11-06T23:37:43.535090962Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Nov 6 23:37:43.538470 containerd[1474]: time="2025-11-06T23:37:43.535113862Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Nov 6 23:37:43.538470 containerd[1474]: time="2025-11-06T23:37:43.535142998Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Nov 6 23:37:43.538470 containerd[1474]: time="2025-11-06T23:37:43.535165123Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 Nov 6 23:37:43.538470 containerd[1474]: time="2025-11-06T23:37:43.535183258Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Nov 6 23:37:43.538470 containerd[1474]: time="2025-11-06T23:37:43.535202365Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Nov 6 23:37:43.538470 containerd[1474]: time="2025-11-06T23:37:43.535220349Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Nov 6 23:37:43.538470 containerd[1474]: time="2025-11-06T23:37:43.535239212Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Nov 6 23:37:43.538470 containerd[1474]: time="2025-11-06T23:37:43.535260441Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Nov 6 23:37:43.538470 containerd[1474]: time="2025-11-06T23:37:43.535285498Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Nov 6 23:37:43.539459 containerd[1474]: time="2025-11-06T23:37:43.535303679Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Nov 6 23:37:43.539459 containerd[1474]: time="2025-11-06T23:37:43.535322084Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Nov 6 23:37:43.539459 containerd[1474]: time="2025-11-06T23:37:43.535341684Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Nov 6 23:37:43.539459 containerd[1474]: time="2025-11-06T23:37:43.535364185Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Nov 6 23:37:43.539459 containerd[1474]: time="2025-11-06T23:37:43.535402632Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Nov 6 23:37:43.539459 containerd[1474]: time="2025-11-06T23:37:43.535421855Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Nov 6 23:37:43.539459 containerd[1474]: time="2025-11-06T23:37:43.535439911Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Nov 6 23:37:43.539459 containerd[1474]: time="2025-11-06T23:37:43.538928784Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Nov 6 23:37:43.539459 containerd[1474]: time="2025-11-06T23:37:43.539009905Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Nov 6 23:37:43.539459 containerd[1474]: time="2025-11-06T23:37:43.539034856Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Nov 6 23:37:43.539459 containerd[1474]: time="2025-11-06T23:37:43.539050553Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Nov 6 23:37:43.539459 containerd[1474]: time="2025-11-06T23:37:43.539076381Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Nov 6 23:37:43.539459 containerd[1474]: time="2025-11-06T23:37:43.539130649Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." 
type=io.containerd.nri.v1 Nov 6 23:37:43.539459 containerd[1474]: time="2025-11-06T23:37:43.539272836Z" level=info msg="NRI interface is disabled by configuration." Nov 6 23:37:43.543034 containerd[1474]: time="2025-11-06T23:37:43.539292239Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Nov 6 23:37:43.543151 containerd[1474]: time="2025-11-06T23:37:43.540599640Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Nov 6 23:37:43.543151 containerd[1474]: time="2025-11-06T23:37:43.540888065Z" level=info msg="Connect containerd service" Nov 6 23:37:43.543151 containerd[1474]: time="2025-11-06T23:37:43.541838210Z" level=info msg="using legacy CRI server" Nov 6 23:37:43.543151 containerd[1474]: time="2025-11-06T23:37:43.541862486Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 6 23:37:43.543151 containerd[1474]: time="2025-11-06T23:37:43.542017905Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Nov 6 23:37:43.546334 containerd[1474]: 
time="2025-11-06T23:37:43.545915761Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 6 23:37:43.546594 containerd[1474]: time="2025-11-06T23:37:43.546495217Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 6 23:37:43.547713 containerd[1474]: time="2025-11-06T23:37:43.546567145Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 6 23:37:43.547713 containerd[1474]: time="2025-11-06T23:37:43.546791900Z" level=info msg="Start subscribing containerd event" Nov 6 23:37:43.547839 containerd[1474]: time="2025-11-06T23:37:43.547744942Z" level=info msg="Start recovering state" Nov 6 23:37:43.547864 containerd[1474]: time="2025-11-06T23:37:43.547842286Z" level=info msg="Start event monitor" Nov 6 23:37:43.547864 containerd[1474]: time="2025-11-06T23:37:43.547860066Z" level=info msg="Start snapshots syncer" Nov 6 23:37:43.547911 containerd[1474]: time="2025-11-06T23:37:43.547873133Z" level=info msg="Start cni network conf syncer for default" Nov 6 23:37:43.547911 containerd[1474]: time="2025-11-06T23:37:43.547883809Z" level=info msg="Start streaming server" Nov 6 23:37:43.553312 containerd[1474]: time="2025-11-06T23:37:43.547996396Z" level=info msg="containerd successfully booted in 0.147107s" Nov 6 23:37:43.548144 systemd[1]: Started containerd.service - containerd container runtime. Nov 6 23:37:43.579781 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 6 23:37:43.581840 systemd-networkd[1382]: eth1: Gained IPv6LL Nov 6 23:37:43.597551 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 6 23:37:43.612867 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Nov 6 23:37:43.616388 systemd[1]: Reached target getty.target - Login Prompts. Nov 6 23:37:43.930995 tar[1465]: linux-amd64/README.md Nov 6 23:37:43.959518 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 6 23:37:44.525025 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 23:37:44.526423 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 6 23:37:44.531119 systemd[1]: Startup finished in 1.057s (kernel) + 5.650s (initrd) + 6.209s (userspace) = 12.917s. Nov 6 23:37:44.535574 (kubelet)[1571]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 6 23:37:45.234459 kubelet[1571]: E1106 23:37:45.234288 1571 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 6 23:37:45.237942 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 6 23:37:45.238195 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 6 23:37:45.238872 systemd[1]: kubelet.service: Consumed 1.387s CPU time, 267.1M memory peak. Nov 6 23:37:46.123610 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 6 23:37:46.136170 systemd[1]: Started sshd@0-164.92.114.154:22-147.75.109.163:48938.service - OpenSSH per-connection server daemon (147.75.109.163:48938). 
Nov 6 23:37:46.217729 sshd[1583]: Accepted publickey for core from 147.75.109.163 port 48938 ssh2: RSA SHA256:Z2OWs1J4lzJbb37sVDKV9BLSPLNt86phBIrOXH84t9A Nov 6 23:37:46.220519 sshd-session[1583]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:37:46.234518 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 6 23:37:46.243124 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 6 23:37:46.246029 systemd-logind[1460]: New session 1 of user core. Nov 6 23:37:46.259796 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 6 23:37:46.266097 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 6 23:37:46.279073 (systemd)[1587]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 6 23:37:46.282325 systemd-logind[1460]: New session c1 of user core. Nov 6 23:37:46.460415 systemd[1587]: Queued start job for default target default.target. Nov 6 23:37:46.467037 systemd[1587]: Created slice app.slice - User Application Slice. Nov 6 23:37:46.467080 systemd[1587]: Reached target paths.target - Paths. Nov 6 23:37:46.467129 systemd[1587]: Reached target timers.target - Timers. Nov 6 23:37:46.468961 systemd[1587]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 6 23:37:46.483221 systemd[1587]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 6 23:37:46.483358 systemd[1587]: Reached target sockets.target - Sockets. Nov 6 23:37:46.483412 systemd[1587]: Reached target basic.target - Basic System. Nov 6 23:37:46.483453 systemd[1587]: Reached target default.target - Main User Target. Nov 6 23:37:46.483485 systemd[1587]: Startup finished in 192ms. Nov 6 23:37:46.483781 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 6 23:37:46.497042 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 6 23:37:46.578255 systemd[1]: Started sshd@1-164.92.114.154:22-147.75.109.163:48942.service - OpenSSH per-connection server daemon (147.75.109.163:48942). Nov 6 23:37:46.639224 sshd[1598]: Accepted publickey for core from 147.75.109.163 port 48942 ssh2: RSA SHA256:Z2OWs1J4lzJbb37sVDKV9BLSPLNt86phBIrOXH84t9A Nov 6 23:37:46.641032 sshd-session[1598]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:37:46.647185 systemd-logind[1460]: New session 2 of user core. Nov 6 23:37:46.657997 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 6 23:37:46.721458 sshd[1600]: Connection closed by 147.75.109.163 port 48942 Nov 6 23:37:46.721962 sshd-session[1598]: pam_unix(sshd:session): session closed for user core Nov 6 23:37:46.743314 systemd[1]: sshd@1-164.92.114.154:22-147.75.109.163:48942.service: Deactivated successfully. Nov 6 23:37:46.746359 systemd[1]: session-2.scope: Deactivated successfully. Nov 6 23:37:46.750031 systemd-logind[1460]: Session 2 logged out. Waiting for processes to exit. Nov 6 23:37:46.756572 systemd[1]: Started sshd@2-164.92.114.154:22-147.75.109.163:48952.service - OpenSSH per-connection server daemon (147.75.109.163:48952). Nov 6 23:37:46.759424 systemd-logind[1460]: Removed session 2. 
Nov 6 23:37:46.827481 sshd[1605]: Accepted publickey for core from 147.75.109.163 port 48952 ssh2: RSA SHA256:Z2OWs1J4lzJbb37sVDKV9BLSPLNt86phBIrOXH84t9A Nov 6 23:37:46.829520 sshd-session[1605]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:37:46.837008 systemd-logind[1460]: New session 3 of user core. Nov 6 23:37:46.846085 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 6 23:37:46.909732 sshd[1608]: Connection closed by 147.75.109.163 port 48952 Nov 6 23:37:46.909523 sshd-session[1605]: pam_unix(sshd:session): session closed for user core Nov 6 23:37:46.922209 systemd[1]: sshd@2-164.92.114.154:22-147.75.109.163:48952.service: Deactivated successfully. Nov 6 23:37:46.925380 systemd[1]: session-3.scope: Deactivated successfully. Nov 6 23:37:46.927867 systemd-logind[1460]: Session 3 logged out. Waiting for processes to exit. Nov 6 23:37:46.931213 systemd[1]: Started sshd@3-164.92.114.154:22-147.75.109.163:48954.service - OpenSSH per-connection server daemon (147.75.109.163:48954). Nov 6 23:37:46.935080 systemd-logind[1460]: Removed session 3. Nov 6 23:37:46.999746 sshd[1613]: Accepted publickey for core from 147.75.109.163 port 48954 ssh2: RSA SHA256:Z2OWs1J4lzJbb37sVDKV9BLSPLNt86phBIrOXH84t9A Nov 6 23:37:47.001759 sshd-session[1613]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:37:47.014973 systemd-logind[1460]: New session 4 of user core. Nov 6 23:37:47.020950 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 6 23:37:47.084282 sshd[1616]: Connection closed by 147.75.109.163 port 48954 Nov 6 23:37:47.085134 sshd-session[1613]: pam_unix(sshd:session): session closed for user core Nov 6 23:37:47.096553 systemd[1]: sshd@3-164.92.114.154:22-147.75.109.163:48954.service: Deactivated successfully. Nov 6 23:37:47.099177 systemd[1]: session-4.scope: Deactivated successfully. Nov 6 23:37:47.103148 systemd-logind[1460]: Session 4 logged out. Waiting for processes to exit. Nov 6 23:37:47.108431 systemd[1]: Started sshd@4-164.92.114.154:22-147.75.109.163:48966.service - OpenSSH per-connection server daemon (147.75.109.163:48966). Nov 6 23:37:47.110101 systemd-logind[1460]: Removed session 4. Nov 6 23:37:47.166224 sshd[1621]: Accepted publickey for core from 147.75.109.163 port 48966 ssh2: RSA SHA256:Z2OWs1J4lzJbb37sVDKV9BLSPLNt86phBIrOXH84t9A Nov 6 23:37:47.168169 sshd-session[1621]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:37:47.175512 systemd-logind[1460]: New session 5 of user core. Nov 6 23:37:47.185060 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 6 23:37:47.256754 sudo[1625]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 6 23:37:47.257119 sudo[1625]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 6 23:37:47.273849 sudo[1625]: pam_unix(sudo:session): session closed for user root Nov 6 23:37:47.278718 sshd[1624]: Connection closed by 147.75.109.163 port 48966 Nov 6 23:37:47.278263 sshd-session[1621]: pam_unix(sshd:session): session closed for user core Nov 6 23:37:47.291916 systemd[1]: sshd@4-164.92.114.154:22-147.75.109.163:48966.service: Deactivated successfully. Nov 6 23:37:47.294179 systemd[1]: session-5.scope: Deactivated successfully. Nov 6 23:37:47.296912 systemd-logind[1460]: Session 5 logged out. Waiting for processes to exit. 
Nov 6 23:37:47.302170 systemd[1]: Started sshd@5-164.92.114.154:22-147.75.109.163:48974.service - OpenSSH per-connection server daemon (147.75.109.163:48974). Nov 6 23:37:47.304165 systemd-logind[1460]: Removed session 5. Nov 6 23:37:47.367174 sshd[1630]: Accepted publickey for core from 147.75.109.163 port 48974 ssh2: RSA SHA256:Z2OWs1J4lzJbb37sVDKV9BLSPLNt86phBIrOXH84t9A Nov 6 23:37:47.369853 sshd-session[1630]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:37:47.378752 systemd-logind[1460]: New session 6 of user core. Nov 6 23:37:47.388043 systemd[1]: Started session-6.scope - Session 6 of User core. Nov 6 23:37:47.452714 sudo[1635]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 6 23:37:47.453841 sudo[1635]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 6 23:37:47.459487 sudo[1635]: pam_unix(sudo:session): session closed for user root Nov 6 23:37:47.467602 sudo[1634]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Nov 6 23:37:47.468068 sudo[1634]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 6 23:37:47.485242 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 6 23:37:47.538116 augenrules[1657]: No rules Nov 6 23:37:47.539426 systemd[1]: audit-rules.service: Deactivated successfully. Nov 6 23:37:47.539768 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 6 23:37:47.541655 sudo[1634]: pam_unix(sudo:session): session closed for user root Nov 6 23:37:47.546151 sshd[1633]: Connection closed by 147.75.109.163 port 48974 Nov 6 23:37:47.546930 sshd-session[1630]: pam_unix(sshd:session): session closed for user core Nov 6 23:37:47.559855 systemd[1]: sshd@5-164.92.114.154:22-147.75.109.163:48974.service: Deactivated successfully. Nov 6 23:37:47.562737 systemd[1]: session-6.scope: Deactivated successfully. Nov 6 23:37:47.566946 systemd-logind[1460]: Session 6 logged out. Waiting for processes to exit. Nov 6 23:37:47.572745 systemd[1]: Started sshd@6-164.92.114.154:22-147.75.109.163:48978.service - OpenSSH per-connection server daemon (147.75.109.163:48978). Nov 6 23:37:47.575191 systemd-logind[1460]: Removed session 6. Nov 6 23:37:47.625075 sshd[1665]: Accepted publickey for core from 147.75.109.163 port 48978 ssh2: RSA SHA256:Z2OWs1J4lzJbb37sVDKV9BLSPLNt86phBIrOXH84t9A Nov 6 23:37:47.626984 sshd-session[1665]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:37:47.635421 systemd-logind[1460]: New session 7 of user core. Nov 6 23:37:47.641090 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 6 23:37:47.702324 sudo[1669]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 6 23:37:47.703183 sudo[1669]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 6 23:37:48.197113 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 6 23:37:48.201704 (dockerd)[1687]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 6 23:37:49.159663 systemd-resolved[1331]: Clock change detected. Flushing caches. Nov 6 23:37:49.160117 systemd-timesyncd[1348]: Contacted time server 23.141.40.124:123 (3.flatcar.pool.ntp.org). 
Nov 6 23:37:49.160198 systemd-timesyncd[1348]: Initial clock synchronization to Thu 2025-11-06 23:37:49.158999 UTC. Nov 6 23:37:49.603873 dockerd[1687]: time="2025-11-06T23:37:49.603816503Z" level=info msg="Starting up" Nov 6 23:37:49.713983 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport4284950649-merged.mount: Deactivated successfully. Nov 6 23:37:49.739449 dockerd[1687]: time="2025-11-06T23:37:49.739396787Z" level=info msg="Loading containers: start." Nov 6 23:37:49.926261 kernel: Initializing XFRM netlink socket Nov 6 23:37:50.044115 systemd-networkd[1382]: docker0: Link UP Nov 6 23:37:50.079642 dockerd[1687]: time="2025-11-06T23:37:50.079580388Z" level=info msg="Loading containers: done." Nov 6 23:37:50.105365 dockerd[1687]: time="2025-11-06T23:37:50.105291559Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 6 23:37:50.105610 dockerd[1687]: time="2025-11-06T23:37:50.105441978Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Nov 6 23:37:50.105610 dockerd[1687]: time="2025-11-06T23:37:50.105584626Z" level=info msg="Daemon has completed initialization" Nov 6 23:37:50.146326 dockerd[1687]: time="2025-11-06T23:37:50.145299761Z" level=info msg="API listen on /run/docker.sock" Nov 6 23:37:50.146865 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 6 23:37:51.061346 containerd[1474]: time="2025-11-06T23:37:51.061286528Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\"" Nov 6 23:37:51.773472 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3439086910.mount: Deactivated successfully. 
Nov 6 23:37:53.057933 containerd[1474]: time="2025-11-06T23:37:53.057693440Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:37:53.059937 containerd[1474]: time="2025-11-06T23:37:53.059705730Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.5: active requests=0, bytes read=30114893" Nov 6 23:37:53.060912 containerd[1474]: time="2025-11-06T23:37:53.060788278Z" level=info msg="ImageCreate event name:\"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:37:53.067944 containerd[1474]: time="2025-11-06T23:37:53.067814095Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:37:53.069262 containerd[1474]: time="2025-11-06T23:37:53.068986917Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.5\" with image id \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\", size \"30111492\" in 2.007651174s" Nov 6 23:37:53.069262 containerd[1474]: time="2025-11-06T23:37:53.069042582Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\"" Nov 6 23:37:53.070254 containerd[1474]: time="2025-11-06T23:37:53.070207406Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\"" Nov 6 23:37:54.821286 containerd[1474]: time="2025-11-06T23:37:54.821149782Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:37:54.823126 containerd[1474]: time="2025-11-06T23:37:54.822952226Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.5: active requests=0, bytes read=26020844" Nov 6 23:37:54.824263 containerd[1474]: time="2025-11-06T23:37:54.823704606Z" level=info msg="ImageCreate event name:\"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:37:54.828441 containerd[1474]: time="2025-11-06T23:37:54.828380018Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:37:54.830642 containerd[1474]: time="2025-11-06T23:37:54.830577189Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.5\" with image id \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\", size \"27681301\" in 1.760312441s" Nov 6 23:37:54.830784 containerd[1474]: time="2025-11-06T23:37:54.830643765Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\"" Nov 6 23:37:54.831962 containerd[1474]: 
time="2025-11-06T23:37:54.831849898Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\"" Nov 6 23:37:54.996661 systemd-resolved[1331]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. Nov 6 23:37:56.103701 containerd[1474]: time="2025-11-06T23:37:56.103630929Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:37:56.105054 containerd[1474]: time="2025-11-06T23:37:56.105001609Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.5: active requests=0, bytes read=20155568" Nov 6 23:37:56.105479 containerd[1474]: time="2025-11-06T23:37:56.105450781Z" level=info msg="ImageCreate event name:\"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:37:56.109418 containerd[1474]: time="2025-11-06T23:37:56.109355691Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:37:56.112508 containerd[1474]: time="2025-11-06T23:37:56.112453549Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.5\" with image id \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\", size \"21816043\" in 1.280329141s" Nov 6 23:37:56.112508 containerd[1474]: time="2025-11-06T23:37:56.112501208Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\"" Nov 6 23:37:56.116047 containerd[1474]: time="2025-11-06T23:37:56.115688794Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\"" Nov 6 23:37:56.261218 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 6 23:37:56.268537 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 23:37:56.437318 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 23:37:56.443300 (kubelet)[1957]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 6 23:37:56.510709 kubelet[1957]: E1106 23:37:56.510583 1957 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 6 23:37:56.515135 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 6 23:37:56.515367 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 6 23:37:56.516826 systemd[1]: kubelet.service: Consumed 200ms CPU time, 111.6M memory peak. Nov 6 23:37:57.368139 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1017783214.mount: Deactivated successfully. 
Nov 6 23:37:57.944959 containerd[1474]: time="2025-11-06T23:37:57.944896024Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:37:57.946021 containerd[1474]: time="2025-11-06T23:37:57.945767267Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.5: active requests=0, bytes read=31929469" Nov 6 23:37:57.946639 containerd[1474]: time="2025-11-06T23:37:57.946303224Z" level=info msg="ImageCreate event name:\"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:37:57.948330 containerd[1474]: time="2025-11-06T23:37:57.948297586Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:37:57.949130 containerd[1474]: time="2025-11-06T23:37:57.949098039Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.5\" with image id \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\", repo tag \"registry.k8s.io/kube-proxy:v1.33.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\", size \"31928488\" in 1.833358504s" Nov 6 23:37:57.949249 containerd[1474]: time="2025-11-06T23:37:57.949216443Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\"" Nov 6 23:37:57.950068 containerd[1474]: time="2025-11-06T23:37:57.950042285Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Nov 6 23:37:58.051594 systemd-resolved[1331]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2. Nov 6 23:37:58.473146 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2830904604.mount: Deactivated successfully. 
Nov 6 23:37:59.302969 containerd[1474]: time="2025-11-06T23:37:59.302889403Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:37:59.304650 containerd[1474]: time="2025-11-06T23:37:59.304586836Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Nov 6 23:37:59.305267 containerd[1474]: time="2025-11-06T23:37:59.305218639Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:37:59.309296 containerd[1474]: time="2025-11-06T23:37:59.309210198Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:37:59.310261 containerd[1474]: time="2025-11-06T23:37:59.310015575Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.359844191s" Nov 6 23:37:59.310261 containerd[1474]: time="2025-11-06T23:37:59.310057220Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Nov 6 23:37:59.310805 containerd[1474]: time="2025-11-06T23:37:59.310667159Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Nov 6 23:37:59.724831 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3218007114.mount: Deactivated successfully. 
Nov 6 23:37:59.729796 containerd[1474]: time="2025-11-06T23:37:59.729746289Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:37:59.731353 containerd[1474]: time="2025-11-06T23:37:59.731289860Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Nov 6 23:37:59.732357 containerd[1474]: time="2025-11-06T23:37:59.732317270Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:37:59.734996 containerd[1474]: time="2025-11-06T23:37:59.734946487Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:37:59.736494 containerd[1474]: time="2025-11-06T23:37:59.736450854Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 425.743499ms" Nov 6 23:37:59.736494 containerd[1474]: time="2025-11-06T23:37:59.736485413Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Nov 6 23:37:59.737015 containerd[1474]: time="2025-11-06T23:37:59.736988549Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Nov 6 23:38:00.268665 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount931082133.mount: Deactivated successfully. Nov 6 23:38:02.114537 containerd[1474]: time="2025-11-06T23:38:02.112980681Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:38:02.114537 containerd[1474]: time="2025-11-06T23:38:02.114462235Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58378433" Nov 6 23:38:02.115319 containerd[1474]: time="2025-11-06T23:38:02.115286428Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:38:02.119402 containerd[1474]: time="2025-11-06T23:38:02.119335070Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:38:02.121314 containerd[1474]: time="2025-11-06T23:38:02.121247268Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 2.384200369s" Nov 6 23:38:02.121314 containerd[1474]: time="2025-11-06T23:38:02.121309511Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Nov 6 23:38:06.761451 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
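For scale, the containerd pull timings reported above work out to roughly 15-25 MB/s of effective transfer per image (reported size divided by reported wall time, so registry latency and unpacking are included). A rough tally in plain Python, with the sizes and durations copied from the log:

# (image, reported size in bytes, reported pull time in seconds), as logged above.
pulls = [
    ("kube-apiserver:v1.33.5",          30_111_492, 2.007651174),
    ("kube-controller-manager:v1.33.5", 27_681_301, 1.760312441),
    ("kube-scheduler:v1.33.5",          21_816_043, 1.280329141),
    ("kube-proxy:v1.33.5",              31_928_488, 1.833358504),
    ("coredns:v1.12.0",                 20_939_036, 1.359844191),
    ("etcd:3.5.21-0",                   58_938_593, 2.384200369),
]
for name, size, secs in pulls:
    print(f"{name}: ~{size / secs / 1e6:.1f} MB/s")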
Nov 6 23:38:06.770416 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 23:38:06.946617 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 23:38:06.948896 (kubelet)[2111]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 6 23:38:07.009269 kubelet[2111]: E1106 23:38:07.008588 2111 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 6 23:38:07.010427 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 6 23:38:07.010649 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 6 23:38:07.011180 systemd[1]: kubelet.service: Consumed 166ms CPU time, 108.1M memory peak. Nov 6 23:38:08.151474 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 23:38:08.151813 systemd[1]: kubelet.service: Consumed 166ms CPU time, 108.1M memory peak. Nov 6 23:38:08.159605 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 23:38:08.201880 systemd[1]: Reload requested from client PID 2126 ('systemctl') (unit session-7.scope)... Nov 6 23:38:08.201917 systemd[1]: Reloading... Nov 6 23:38:08.345337 zram_generator::config[2167]: No configuration found. Nov 6 23:38:08.494836 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 6 23:38:08.625189 systemd[1]: Reloading finished in 422 ms. Nov 6 23:38:08.683978 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 23:38:08.696529 (kubelet)[2216]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 6 23:38:08.698569 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 23:38:08.699001 systemd[1]: kubelet.service: Deactivated successfully. Nov 6 23:38:08.699348 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 23:38:08.699417 systemd[1]: kubelet.service: Consumed 115ms CPU time, 98.1M memory peak. Nov 6 23:38:08.706791 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 23:38:08.863569 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 23:38:08.865157 (kubelet)[2228]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 6 23:38:08.926595 kubelet[2228]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 6 23:38:08.927057 kubelet[2228]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 6 23:38:08.927104 kubelet[2228]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 6 23:38:08.927291 kubelet[2228]: I1106 23:38:08.927256 2228 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 6 23:38:09.030568 kubelet[2228]: I1106 23:38:09.030509 2228 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Nov 6 23:38:09.030568 kubelet[2228]: I1106 23:38:09.030557 2228 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 6 23:38:09.031031 kubelet[2228]: I1106 23:38:09.031001 2228 server.go:956] "Client rotation is on, will bootstrap in background" Nov 6 23:38:09.062888 kubelet[2228]: I1106 23:38:09.062243 2228 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 6 23:38:09.065026 kubelet[2228]: E1106 23:38:09.064975 2228 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://164.92.114.154:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 164.92.114.154:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Nov 6 23:38:09.080510 kubelet[2228]: E1106 23:38:09.080444 2228 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 6 23:38:09.080510 kubelet[2228]: I1106 23:38:09.080502 2228 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Nov 6 23:38:09.094871 kubelet[2228]: I1106 23:38:09.094811 2228 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 6 23:38:09.095354 kubelet[2228]: I1106 23:38:09.095144 2228 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 6 23:38:09.098309 kubelet[2228]: I1106 23:38:09.095186 2228 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230.2.4-n-07c3be35b1","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 6 23:38:09.098309 kubelet[2228]: I1106 23:38:09.098305 2228 topology_manager.go:138] "Creating topology manager with none policy" Nov 6 23:38:09.098309 kubelet[2228]: I1106 23:38:09.098323 2228 container_manager_linux.go:303] "Creating device plugin manager" Nov 6 23:38:09.098848 kubelet[2228]: I1106 23:38:09.098517 2228 state_mem.go:36] "Initialized new in-memory state store" Nov 6 23:38:09.132946 kubelet[2228]: I1106 23:38:09.131025 2228 kubelet.go:480] "Attempting to sync node with API server" Nov 6 23:38:09.132946 kubelet[2228]: I1106 23:38:09.131088 2228 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 6 23:38:09.132946 kubelet[2228]: I1106 23:38:09.131121 2228 kubelet.go:386] "Adding apiserver pod source" Nov 6 23:38:09.132946 kubelet[2228]: I1106 23:38:09.131143 2228 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 6 23:38:09.143007 kubelet[2228]: I1106 23:38:09.142361 2228 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Nov 6 23:38:09.143007 kubelet[2228]: I1106 23:38:09.142915 2228 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 6 23:38:09.144730 kubelet[2228]: W1106 23:38:09.144500 2228 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
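The Node Config dump above is dense but mostly defaults: CgroupDriver=systemd on CgroupVersion=2, the CPU/memory/topology managers set to none/None, and HardEvictionThresholds at the stock kubelet values. Below, the thresholds are rewritten in KubeletConfiguration form (a sketch derived directly from the numbers in the dump), followed by a standard check for the cgroup v2 mount the dump refers to.

# Equivalent config-file spelling of the HardEvictionThresholds logged above (sketch):
cat <<'EOF'
evictionHard:
  memory.available: "100Mi"
  nodefs.available: "10%"
  nodefs.inodesFree: "5%"
  imagefs.available: "15%"
  imagefs.inodesFree: "5%"
EOF
# Confirm the unified hierarchy that CgroupVersion:2 refers to:
stat -fc %T /sys/fs/cgroup   # prints "cgroup2fs" on a cgroup v2 host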
Nov 6 23:38:09.144730 kubelet[2228]: E1106 23:38:09.144664 2228 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://164.92.114.154:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.2.4-n-07c3be35b1&limit=500&resourceVersion=0\": dial tcp 164.92.114.154:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 6 23:38:09.144730 kubelet[2228]: E1106 23:38:09.144501 2228 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://164.92.114.154:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 164.92.114.154:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 6 23:38:09.150991 kubelet[2228]: I1106 23:38:09.150400 2228 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 6 23:38:09.150991 kubelet[2228]: I1106 23:38:09.150484 2228 server.go:1289] "Started kubelet" Nov 6 23:38:09.153458 kubelet[2228]: I1106 23:38:09.153416 2228 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 6 23:38:09.156074 kubelet[2228]: E1106 23:38:09.154541 2228 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://164.92.114.154:6443/api/v1/namespaces/default/events\": dial tcp 164.92.114.154:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4230.2.4-n-07c3be35b1.18758f358147ed75 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230.2.4-n-07c3be35b1,UID:ci-4230.2.4-n-07c3be35b1,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230.2.4-n-07c3be35b1,},FirstTimestamp:2025-11-06 23:38:09.150430581 +0000 UTC m=+0.279724673,LastTimestamp:2025-11-06 23:38:09.150430581 +0000 UTC m=+0.279724673,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230.2.4-n-07c3be35b1,}" Nov 6 23:38:09.156074 kubelet[2228]: I1106 23:38:09.156010 2228 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 6 23:38:09.159494 kubelet[2228]: I1106 23:38:09.158808 2228 server.go:317] "Adding debug handlers to kubelet server" Nov 6 23:38:09.166720 kubelet[2228]: I1106 23:38:09.164929 2228 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Nov 6 23:38:09.167211 kubelet[2228]: I1106 23:38:09.167180 2228 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 6 23:38:09.168596 kubelet[2228]: E1106 23:38:09.168563 2228 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.4-n-07c3be35b1\" not found" Nov 6 23:38:09.169483 kubelet[2228]: I1106 23:38:09.168954 2228 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 6 23:38:09.170421 kubelet[2228]: I1106 23:38:09.170386 2228 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 6 23:38:09.170508 kubelet[2228]: I1106 23:38:09.169206 2228 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 6 23:38:09.183923 kubelet[2228]: I1106 23:38:09.183807 2228 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 6 23:38:09.185257 kubelet[2228]: I1106 23:38:09.184219 2228 reconciler.go:26] "Reconciler: start to sync state" Nov 6 23:38:09.185257 kubelet[2228]: E1106 23:38:09.184949 2228 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://164.92.114.154:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.2.4-n-07c3be35b1?timeout=10s\": dial tcp 164.92.114.154:6443: connect: connection refused" interval="200ms" Nov 6 23:38:09.187740 kubelet[2228]: I1106 23:38:09.187700 2228 factory.go:223] Registration of the systemd container factory successfully Nov 6 23:38:09.188063 kubelet[2228]: I1106 23:38:09.188042 2228 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 6 23:38:09.189639 kubelet[2228]: E1106 23:38:09.189611 2228 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://164.92.114.154:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 164.92.114.154:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 6 23:38:09.192488 kubelet[2228]: E1106 23:38:09.192205 2228 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 6 23:38:09.193323 kubelet[2228]: I1106 23:38:09.193070 2228 factory.go:223] Registration of the containerd container factory successfully Nov 6 23:38:09.212946 kubelet[2228]: I1106 23:38:09.212876 2228 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 6 23:38:09.212946 kubelet[2228]: I1106 23:38:09.212900 2228 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 6 23:38:09.212946 kubelet[2228]: I1106 23:38:09.212941 2228 state_mem.go:36] "Initialized new in-memory state store" Nov 6 23:38:09.213586 kubelet[2228]: I1106 23:38:09.213268 2228 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Nov 6 23:38:09.213941 kubelet[2228]: I1106 23:38:09.213695 2228 status_manager.go:230] "Starting to sync pod status with apiserver" Nov 6 23:38:09.213941 kubelet[2228]: I1106 23:38:09.213798 2228 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Nov 6 23:38:09.213941 kubelet[2228]: I1106 23:38:09.213812 2228 kubelet.go:2436] "Starting kubelet main sync loop" Nov 6 23:38:09.213941 kubelet[2228]: E1106 23:38:09.213892 2228 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 6 23:38:09.215588 kubelet[2228]: I1106 23:38:09.214779 2228 policy_none.go:49] "None policy: Start" Nov 6 23:38:09.215588 kubelet[2228]: I1106 23:38:09.214801 2228 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 6 23:38:09.215588 kubelet[2228]: I1106 23:38:09.214812 2228 state_mem.go:35] "Initializing new in-memory state store" Nov 6 23:38:09.215785 kubelet[2228]: E1106 23:38:09.215607 2228 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://164.92.114.154:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 164.92.114.154:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 6 23:38:09.222810 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Nov 6 23:38:09.231384 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Nov 6 23:38:09.235668 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Nov 6 23:38:09.247752 kubelet[2228]: E1106 23:38:09.247129 2228 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 6 23:38:09.247752 kubelet[2228]: I1106 23:38:09.247467 2228 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 6 23:38:09.247752 kubelet[2228]: I1106 23:38:09.247499 2228 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 6 23:38:09.248020 kubelet[2228]: I1106 23:38:09.247829 2228 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 6 23:38:09.251388 kubelet[2228]: E1106 23:38:09.250825 2228 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 6 23:38:09.251388 kubelet[2228]: E1106 23:38:09.250876 2228 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4230.2.4-n-07c3be35b1\" not found" Nov 6 23:38:09.325150 systemd[1]: Created slice kubepods-burstable-pod8911f68c6912a2243235728fea084557.slice - libcontainer container kubepods-burstable-pod8911f68c6912a2243235728fea084557.slice. Nov 6 23:38:09.338434 kubelet[2228]: E1106 23:38:09.338386 2228 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.4-n-07c3be35b1\" not found" node="ci-4230.2.4-n-07c3be35b1" Nov 6 23:38:09.342475 systemd[1]: Created slice kubepods-burstable-pod1b83adc0069caf191de558684c9952ff.slice - libcontainer container kubepods-burstable-pod1b83adc0069caf191de558684c9952ff.slice. 
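The kubepods.slice, kubepods-burstable.slice and kubepods-besteffort.slice units created above, plus the per-pod kubepods-burstable-pod<uid>.slice entries, form the kubelet's cgroup tree under the systemd driver; each static pod gets a slice keyed by its UID. The hierarchy can be browsed with ordinary systemd tooling; nothing below is specific to this droplet.

systemctl status kubepods.slice --no-pager
systemd-cgls --no-pager /kubepods.slice   # shows the burstable/besteffort sub-slices and the per-pod slices beneath them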
Nov 6 23:38:09.345575 kubelet[2228]: E1106 23:38:09.345547 2228 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.4-n-07c3be35b1\" not found" node="ci-4230.2.4-n-07c3be35b1" Nov 6 23:38:09.348072 systemd[1]: Created slice kubepods-burstable-pod74d374abf0133fc3f6922e24c9593e3a.slice - libcontainer container kubepods-burstable-pod74d374abf0133fc3f6922e24c9593e3a.slice. Nov 6 23:38:09.349836 kubelet[2228]: I1106 23:38:09.349108 2228 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.4-n-07c3be35b1" Nov 6 23:38:09.349836 kubelet[2228]: E1106 23:38:09.349787 2228 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://164.92.114.154:6443/api/v1/nodes\": dial tcp 164.92.114.154:6443: connect: connection refused" node="ci-4230.2.4-n-07c3be35b1" Nov 6 23:38:09.351078 kubelet[2228]: E1106 23:38:09.351057 2228 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.4-n-07c3be35b1\" not found" node="ci-4230.2.4-n-07c3be35b1" Nov 6 23:38:09.387256 kubelet[2228]: I1106 23:38:09.385706 2228 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8911f68c6912a2243235728fea084557-kubeconfig\") pod \"kube-scheduler-ci-4230.2.4-n-07c3be35b1\" (UID: \"8911f68c6912a2243235728fea084557\") " pod="kube-system/kube-scheduler-ci-4230.2.4-n-07c3be35b1" Nov 6 23:38:09.387587 kubelet[2228]: E1106 23:38:09.385962 2228 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://164.92.114.154:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.2.4-n-07c3be35b1?timeout=10s\": dial tcp 164.92.114.154:6443: connect: connection refused" interval="400ms" Nov 6 23:38:09.387587 kubelet[2228]: I1106 23:38:09.387538 2228 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1b83adc0069caf191de558684c9952ff-ca-certs\") pod \"kube-apiserver-ci-4230.2.4-n-07c3be35b1\" (UID: \"1b83adc0069caf191de558684c9952ff\") " pod="kube-system/kube-apiserver-ci-4230.2.4-n-07c3be35b1" Nov 6 23:38:09.387757 kubelet[2228]: I1106 23:38:09.387615 2228 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/74d374abf0133fc3f6922e24c9593e3a-ca-certs\") pod \"kube-controller-manager-ci-4230.2.4-n-07c3be35b1\" (UID: \"74d374abf0133fc3f6922e24c9593e3a\") " pod="kube-system/kube-controller-manager-ci-4230.2.4-n-07c3be35b1" Nov 6 23:38:09.387757 kubelet[2228]: I1106 23:38:09.387663 2228 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/74d374abf0133fc3f6922e24c9593e3a-flexvolume-dir\") pod \"kube-controller-manager-ci-4230.2.4-n-07c3be35b1\" (UID: \"74d374abf0133fc3f6922e24c9593e3a\") " pod="kube-system/kube-controller-manager-ci-4230.2.4-n-07c3be35b1" Nov 6 23:38:09.387757 kubelet[2228]: I1106 23:38:09.387687 2228 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/74d374abf0133fc3f6922e24c9593e3a-kubeconfig\") pod \"kube-controller-manager-ci-4230.2.4-n-07c3be35b1\" (UID: \"74d374abf0133fc3f6922e24c9593e3a\") " 
pod="kube-system/kube-controller-manager-ci-4230.2.4-n-07c3be35b1" Nov 6 23:38:09.387757 kubelet[2228]: I1106 23:38:09.387732 2228 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/74d374abf0133fc3f6922e24c9593e3a-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230.2.4-n-07c3be35b1\" (UID: \"74d374abf0133fc3f6922e24c9593e3a\") " pod="kube-system/kube-controller-manager-ci-4230.2.4-n-07c3be35b1" Nov 6 23:38:09.387757 kubelet[2228]: I1106 23:38:09.387755 2228 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1b83adc0069caf191de558684c9952ff-k8s-certs\") pod \"kube-apiserver-ci-4230.2.4-n-07c3be35b1\" (UID: \"1b83adc0069caf191de558684c9952ff\") " pod="kube-system/kube-apiserver-ci-4230.2.4-n-07c3be35b1" Nov 6 23:38:09.388060 kubelet[2228]: I1106 23:38:09.387780 2228 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1b83adc0069caf191de558684c9952ff-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230.2.4-n-07c3be35b1\" (UID: \"1b83adc0069caf191de558684c9952ff\") " pod="kube-system/kube-apiserver-ci-4230.2.4-n-07c3be35b1" Nov 6 23:38:09.388060 kubelet[2228]: I1106 23:38:09.387803 2228 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/74d374abf0133fc3f6922e24c9593e3a-k8s-certs\") pod \"kube-controller-manager-ci-4230.2.4-n-07c3be35b1\" (UID: \"74d374abf0133fc3f6922e24c9593e3a\") " pod="kube-system/kube-controller-manager-ci-4230.2.4-n-07c3be35b1" Nov 6 23:38:09.551138 kubelet[2228]: I1106 23:38:09.551099 2228 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.4-n-07c3be35b1" Nov 6 23:38:09.551569 kubelet[2228]: E1106 23:38:09.551526 2228 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://164.92.114.154:6443/api/v1/nodes\": dial tcp 164.92.114.154:6443: connect: connection refused" node="ci-4230.2.4-n-07c3be35b1" Nov 6 23:38:09.640253 kubelet[2228]: E1106 23:38:09.640064 2228 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 23:38:09.642364 containerd[1474]: time="2025-11-06T23:38:09.642203450Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230.2.4-n-07c3be35b1,Uid:8911f68c6912a2243235728fea084557,Namespace:kube-system,Attempt:0,}" Nov 6 23:38:09.644757 systemd-resolved[1331]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.2. 
Nov 6 23:38:09.647240 kubelet[2228]: E1106 23:38:09.646822 2228 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 23:38:09.647457 containerd[1474]: time="2025-11-06T23:38:09.647405306Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230.2.4-n-07c3be35b1,Uid:1b83adc0069caf191de558684c9952ff,Namespace:kube-system,Attempt:0,}" Nov 6 23:38:09.652006 kubelet[2228]: E1106 23:38:09.651964 2228 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 23:38:09.653111 containerd[1474]: time="2025-11-06T23:38:09.652787415Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230.2.4-n-07c3be35b1,Uid:74d374abf0133fc3f6922e24c9593e3a,Namespace:kube-system,Attempt:0,}" Nov 6 23:38:09.788961 kubelet[2228]: E1106 23:38:09.788909 2228 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://164.92.114.154:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.2.4-n-07c3be35b1?timeout=10s\": dial tcp 164.92.114.154:6443: connect: connection refused" interval="800ms" Nov 6 23:38:09.953653 kubelet[2228]: I1106 23:38:09.953252 2228 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.4-n-07c3be35b1" Nov 6 23:38:09.954465 kubelet[2228]: E1106 23:38:09.954426 2228 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://164.92.114.154:6443/api/v1/nodes\": dial tcp 164.92.114.154:6443: connect: connection refused" node="ci-4230.2.4-n-07c3be35b1" Nov 6 23:38:10.006177 kubelet[2228]: E1106 23:38:10.006103 2228 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://164.92.114.154:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 164.92.114.154:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 6 23:38:10.045655 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3768544309.mount: Deactivated successfully. 
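The dns.go "Nameserver limits exceeded" warnings mean the resolv.conf the kubelet hands to pods lists more nameserver entries than the resolver limit of three, so the extras are dropped; note that the applied line even repeats 67.207.67.3. Standard ways to inspect what the node actually resolves with are shown below, plus a sketch of the kind of resolv.conf that produces the warning; the fourth entry is a made-up example, not something read from this droplet.

resolvectl status --no-pager | grep -A3 'DNS Servers'
cat /etc/resolv.conf
# A resolv.conf like this would trigger the warning (sketch):
cat <<'EOF'
nameserver 67.207.67.3
nameserver 67.207.67.2
nameserver 67.207.67.3
nameserver 203.0.113.53   # anything past the first three is omitted, as logged above
EOF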
Nov 6 23:38:10.050321 containerd[1474]: time="2025-11-06T23:38:10.049855778Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 6 23:38:10.051094 containerd[1474]: time="2025-11-06T23:38:10.051009398Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Nov 6 23:38:10.051729 containerd[1474]: time="2025-11-06T23:38:10.051650223Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 6 23:38:10.053784 containerd[1474]: time="2025-11-06T23:38:10.053650577Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 6 23:38:10.054252 containerd[1474]: time="2025-11-06T23:38:10.054023143Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 6 23:38:10.057776 containerd[1474]: time="2025-11-06T23:38:10.057673845Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 6 23:38:10.060251 containerd[1474]: time="2025-11-06T23:38:10.058716097Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 6 23:38:10.060251 containerd[1474]: time="2025-11-06T23:38:10.059309528Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 6 23:38:10.061567 containerd[1474]: time="2025-11-06T23:38:10.061522861Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 418.672798ms" Nov 6 23:38:10.063607 containerd[1474]: time="2025-11-06T23:38:10.063559989Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 410.665487ms" Nov 6 23:38:10.068948 containerd[1474]: time="2025-11-06T23:38:10.068886784Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 421.350007ms" Nov 6 23:38:10.171604 kubelet[2228]: E1106 23:38:10.170651 2228 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://164.92.114.154:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 164.92.114.154:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 6 23:38:10.224517 kubelet[2228]: E1106 23:38:10.223598 2228 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://164.92.114.154:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 164.92.114.154:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 6 23:38:10.247329 containerd[1474]: time="2025-11-06T23:38:10.246015245Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 6 23:38:10.247578 containerd[1474]: time="2025-11-06T23:38:10.247269519Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 6 23:38:10.247578 containerd[1474]: time="2025-11-06T23:38:10.247297824Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 6 23:38:10.247578 containerd[1474]: time="2025-11-06T23:38:10.247399029Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 6 23:38:10.254663 containerd[1474]: time="2025-11-06T23:38:10.254501651Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 6 23:38:10.256541 containerd[1474]: time="2025-11-06T23:38:10.256293918Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 6 23:38:10.256541 containerd[1474]: time="2025-11-06T23:38:10.256350240Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 6 23:38:10.256541 containerd[1474]: time="2025-11-06T23:38:10.256454412Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 6 23:38:10.260811 containerd[1474]: time="2025-11-06T23:38:10.260687976Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 6 23:38:10.261129 containerd[1474]: time="2025-11-06T23:38:10.261090463Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 6 23:38:10.261265 containerd[1474]: time="2025-11-06T23:38:10.261218310Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 6 23:38:10.261578 containerd[1474]: time="2025-11-06T23:38:10.261534746Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 6 23:38:10.287672 systemd[1]: Started cri-containerd-14bc6a1af1a8300476b6a8ff1f93b5902f52fa5fdce09ec4157f0c743fdbabca.scope - libcontainer container 14bc6a1af1a8300476b6a8ff1f93b5902f52fa5fdce09ec4157f0c743fdbabca. Nov 6 23:38:10.295245 systemd[1]: Started cri-containerd-2638bde63685d49f4aa3fe4a5184b433f8b3137b845826f45a4106ba5ad94d3f.scope - libcontainer container 2638bde63685d49f4aa3fe4a5184b433f8b3137b845826f45a4106ba5ad94d3f. 
Nov 6 23:38:10.318749 systemd[1]: Started cri-containerd-5e3ee68428daa5b3c9de3e929384d6ee8b3a70b962b40dabe607c59f4ac40e91.scope - libcontainer container 5e3ee68428daa5b3c9de3e929384d6ee8b3a70b962b40dabe607c59f4ac40e91. Nov 6 23:38:10.414190 containerd[1474]: time="2025-11-06T23:38:10.414057414Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230.2.4-n-07c3be35b1,Uid:1b83adc0069caf191de558684c9952ff,Namespace:kube-system,Attempt:0,} returns sandbox id \"2638bde63685d49f4aa3fe4a5184b433f8b3137b845826f45a4106ba5ad94d3f\"" Nov 6 23:38:10.418066 kubelet[2228]: E1106 23:38:10.417747 2228 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 23:38:10.421666 containerd[1474]: time="2025-11-06T23:38:10.421618983Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230.2.4-n-07c3be35b1,Uid:74d374abf0133fc3f6922e24c9593e3a,Namespace:kube-system,Attempt:0,} returns sandbox id \"5e3ee68428daa5b3c9de3e929384d6ee8b3a70b962b40dabe607c59f4ac40e91\"" Nov 6 23:38:10.424920 containerd[1474]: time="2025-11-06T23:38:10.424783260Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230.2.4-n-07c3be35b1,Uid:8911f68c6912a2243235728fea084557,Namespace:kube-system,Attempt:0,} returns sandbox id \"14bc6a1af1a8300476b6a8ff1f93b5902f52fa5fdce09ec4157f0c743fdbabca\"" Nov 6 23:38:10.427687 kubelet[2228]: E1106 23:38:10.427643 2228 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 23:38:10.427836 kubelet[2228]: E1106 23:38:10.427413 2228 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 23:38:10.430141 containerd[1474]: time="2025-11-06T23:38:10.429977553Z" level=info msg="CreateContainer within sandbox \"2638bde63685d49f4aa3fe4a5184b433f8b3137b845826f45a4106ba5ad94d3f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 6 23:38:10.437973 containerd[1474]: time="2025-11-06T23:38:10.437692022Z" level=info msg="CreateContainer within sandbox \"14bc6a1af1a8300476b6a8ff1f93b5902f52fa5fdce09ec4157f0c743fdbabca\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 6 23:38:10.446914 containerd[1474]: time="2025-11-06T23:38:10.446751116Z" level=info msg="CreateContainer within sandbox \"5e3ee68428daa5b3c9de3e929384d6ee8b3a70b962b40dabe607c59f4ac40e91\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 6 23:38:10.455497 containerd[1474]: time="2025-11-06T23:38:10.455429689Z" level=info msg="CreateContainer within sandbox \"2638bde63685d49f4aa3fe4a5184b433f8b3137b845826f45a4106ba5ad94d3f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"d60ad177113c449ccc65173cc536252c19b69769f4bc59eec2e7f1e56a1cb8b3\"" Nov 6 23:38:10.459435 containerd[1474]: time="2025-11-06T23:38:10.459369398Z" level=info msg="StartContainer for \"d60ad177113c449ccc65173cc536252c19b69769f4bc59eec2e7f1e56a1cb8b3\"" Nov 6 23:38:10.466554 containerd[1474]: time="2025-11-06T23:38:10.466491017Z" level=info msg="CreateContainer within sandbox \"14bc6a1af1a8300476b6a8ff1f93b5902f52fa5fdce09ec4157f0c743fdbabca\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns 
container id \"da0fe005859e98cf13e5ebd771ec9208e6ba1ee3223bb6df4d18822e259e00dd\"" Nov 6 23:38:10.467951 containerd[1474]: time="2025-11-06T23:38:10.467610999Z" level=info msg="CreateContainer within sandbox \"5e3ee68428daa5b3c9de3e929384d6ee8b3a70b962b40dabe607c59f4ac40e91\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"9fe4ccb021290cf9c752f2418f9259f225e975b749d5aeb16ffd4a523a63b835\"" Nov 6 23:38:10.469262 containerd[1474]: time="2025-11-06T23:38:10.468269360Z" level=info msg="StartContainer for \"9fe4ccb021290cf9c752f2418f9259f225e975b749d5aeb16ffd4a523a63b835\"" Nov 6 23:38:10.469494 containerd[1474]: time="2025-11-06T23:38:10.469471274Z" level=info msg="StartContainer for \"da0fe005859e98cf13e5ebd771ec9208e6ba1ee3223bb6df4d18822e259e00dd\"" Nov 6 23:38:10.505011 systemd[1]: Started cri-containerd-d60ad177113c449ccc65173cc536252c19b69769f4bc59eec2e7f1e56a1cb8b3.scope - libcontainer container d60ad177113c449ccc65173cc536252c19b69769f4bc59eec2e7f1e56a1cb8b3. Nov 6 23:38:10.528805 systemd[1]: Started cri-containerd-9fe4ccb021290cf9c752f2418f9259f225e975b749d5aeb16ffd4a523a63b835.scope - libcontainer container 9fe4ccb021290cf9c752f2418f9259f225e975b749d5aeb16ffd4a523a63b835. Nov 6 23:38:10.537470 systemd[1]: Started cri-containerd-da0fe005859e98cf13e5ebd771ec9208e6ba1ee3223bb6df4d18822e259e00dd.scope - libcontainer container da0fe005859e98cf13e5ebd771ec9208e6ba1ee3223bb6df4d18822e259e00dd. Nov 6 23:38:10.590407 kubelet[2228]: E1106 23:38:10.589811 2228 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://164.92.114.154:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.2.4-n-07c3be35b1?timeout=10s\": dial tcp 164.92.114.154:6443: connect: connection refused" interval="1.6s" Nov 6 23:38:10.613797 containerd[1474]: time="2025-11-06T23:38:10.613732019Z" level=info msg="StartContainer for \"d60ad177113c449ccc65173cc536252c19b69769f4bc59eec2e7f1e56a1cb8b3\" returns successfully" Nov 6 23:38:10.619796 containerd[1474]: time="2025-11-06T23:38:10.619749900Z" level=info msg="StartContainer for \"9fe4ccb021290cf9c752f2418f9259f225e975b749d5aeb16ffd4a523a63b835\" returns successfully" Nov 6 23:38:10.630479 containerd[1474]: time="2025-11-06T23:38:10.629924773Z" level=info msg="StartContainer for \"da0fe005859e98cf13e5ebd771ec9208e6ba1ee3223bb6df4d18822e259e00dd\" returns successfully" Nov 6 23:38:10.740047 kubelet[2228]: E1106 23:38:10.739901 2228 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://164.92.114.154:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.2.4-n-07c3be35b1&limit=500&resourceVersion=0\": dial tcp 164.92.114.154:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 6 23:38:10.756962 kubelet[2228]: I1106 23:38:10.756562 2228 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.4-n-07c3be35b1" Nov 6 23:38:10.757623 kubelet[2228]: E1106 23:38:10.757517 2228 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://164.92.114.154:6443/api/v1/nodes\": dial tcp 164.92.114.154:6443: connect: connection refused" node="ci-4230.2.4-n-07c3be35b1" Nov 6 23:38:11.228811 kubelet[2228]: E1106 23:38:11.228769 2228 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.4-n-07c3be35b1\" not found" node="ci-4230.2.4-n-07c3be35b1" Nov 6 23:38:11.230275 kubelet[2228]: E1106 
23:38:11.228920 2228 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 23:38:11.230965 kubelet[2228]: E1106 23:38:11.230755 2228 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.4-n-07c3be35b1\" not found" node="ci-4230.2.4-n-07c3be35b1" Nov 6 23:38:11.230965 kubelet[2228]: E1106 23:38:11.230901 2228 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 23:38:11.233794 kubelet[2228]: E1106 23:38:11.233602 2228 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.4-n-07c3be35b1\" not found" node="ci-4230.2.4-n-07c3be35b1" Nov 6 23:38:11.233794 kubelet[2228]: E1106 23:38:11.233728 2228 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 23:38:12.236239 kubelet[2228]: E1106 23:38:12.236193 2228 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.4-n-07c3be35b1\" not found" node="ci-4230.2.4-n-07c3be35b1" Nov 6 23:38:12.237781 kubelet[2228]: E1106 23:38:12.237441 2228 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 23:38:12.237781 kubelet[2228]: E1106 23:38:12.236385 2228 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.4-n-07c3be35b1\" not found" node="ci-4230.2.4-n-07c3be35b1" Nov 6 23:38:12.237781 kubelet[2228]: E1106 23:38:12.237572 2228 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 23:38:12.359853 kubelet[2228]: I1106 23:38:12.359731 2228 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.4-n-07c3be35b1" Nov 6 23:38:13.037479 kubelet[2228]: E1106 23:38:13.037421 2228 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4230.2.4-n-07c3be35b1\" not found" node="ci-4230.2.4-n-07c3be35b1" Nov 6 23:38:13.157464 kubelet[2228]: I1106 23:38:13.157406 2228 kubelet_node_status.go:78] "Successfully registered node" node="ci-4230.2.4-n-07c3be35b1" Nov 6 23:38:13.157464 kubelet[2228]: E1106 23:38:13.157463 2228 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4230.2.4-n-07c3be35b1\": node \"ci-4230.2.4-n-07c3be35b1\" not found" Nov 6 23:38:13.172021 kubelet[2228]: E1106 23:38:13.171967 2228 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.4-n-07c3be35b1\" not found" Nov 6 23:38:13.272569 kubelet[2228]: E1106 23:38:13.272511 2228 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.4-n-07c3be35b1\" not found" Nov 6 23:38:13.372970 kubelet[2228]: E1106 23:38:13.372790 2228 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.4-n-07c3be35b1\" not found" Nov 6 23:38:13.470727 kubelet[2228]: E1106 
23:38:13.470674 2228 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.4-n-07c3be35b1\" not found" node="ci-4230.2.4-n-07c3be35b1" Nov 6 23:38:13.470907 kubelet[2228]: E1106 23:38:13.470868 2228 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 23:38:13.473259 kubelet[2228]: E1106 23:38:13.473195 2228 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.4-n-07c3be35b1\" not found" Nov 6 23:38:13.572012 kubelet[2228]: I1106 23:38:13.571969 2228 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230.2.4-n-07c3be35b1" Nov 6 23:38:13.581509 kubelet[2228]: E1106 23:38:13.580991 2228 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4230.2.4-n-07c3be35b1\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4230.2.4-n-07c3be35b1" Nov 6 23:38:13.581509 kubelet[2228]: I1106 23:38:13.581024 2228 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230.2.4-n-07c3be35b1" Nov 6 23:38:13.584094 kubelet[2228]: E1106 23:38:13.584054 2228 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4230.2.4-n-07c3be35b1\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4230.2.4-n-07c3be35b1" Nov 6 23:38:13.584094 kubelet[2228]: I1106 23:38:13.584088 2228 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4230.2.4-n-07c3be35b1" Nov 6 23:38:13.586856 kubelet[2228]: E1106 23:38:13.586800 2228 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4230.2.4-n-07c3be35b1\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4230.2.4-n-07c3be35b1" Nov 6 23:38:14.141205 kubelet[2228]: I1106 23:38:14.141153 2228 apiserver.go:52] "Watching apiserver" Nov 6 23:38:14.185481 kubelet[2228]: I1106 23:38:14.185358 2228 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 6 23:38:15.350846 systemd[1]: Reload requested from client PID 2507 ('systemctl') (unit session-7.scope)... Nov 6 23:38:15.351254 systemd[1]: Reloading... Nov 6 23:38:15.463285 zram_generator::config[2547]: No configuration found. Nov 6 23:38:15.578076 kubelet[2228]: I1106 23:38:15.578033 2228 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4230.2.4-n-07c3be35b1" Nov 6 23:38:15.588247 kubelet[2228]: I1106 23:38:15.586767 2228 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 6 23:38:15.588736 kubelet[2228]: E1106 23:38:15.588481 2228 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 23:38:15.643893 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 6 23:38:15.790215 systemd[1]: Reloading finished in 438 ms. 
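The "no PriorityClass with name system-node-critical" rejections above are transient: the two system priority classes are built into the API server and are created shortly after it starts, at which point the kubelet's mirror-pod retries succeed (as they do after the reload below). Once the control plane answers, they can be listed; the kubeconfig path is the usual kubeadm admin file, and the values in the comments are the built-in priorities rather than output captured from this cluster.

kubectl --kubeconfig /etc/kubernetes/admin.conf get priorityclasses
# system-cluster-critical   value 2000000000
# system-node-critical      value 2000001000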
Nov 6 23:38:15.825757 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 23:38:15.842952 systemd[1]: kubelet.service: Deactivated successfully. Nov 6 23:38:15.843791 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 23:38:15.844159 systemd[1]: kubelet.service: Consumed 728ms CPU time, 125.4M memory peak. Nov 6 23:38:15.853701 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 23:38:16.030885 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 23:38:16.046075 (kubelet)[2602]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 6 23:38:16.158272 kubelet[2602]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 6 23:38:16.158272 kubelet[2602]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 6 23:38:16.158272 kubelet[2602]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 6 23:38:16.158824 kubelet[2602]: I1106 23:38:16.158332 2602 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 6 23:38:16.166214 kubelet[2602]: I1106 23:38:16.166157 2602 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Nov 6 23:38:16.166214 kubelet[2602]: I1106 23:38:16.166207 2602 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 6 23:38:16.166746 kubelet[2602]: I1106 23:38:16.166697 2602 server.go:956] "Client rotation is on, will bootstrap in background" Nov 6 23:38:16.171010 kubelet[2602]: I1106 23:38:16.170556 2602 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Nov 6 23:38:16.176887 kubelet[2602]: I1106 23:38:16.176828 2602 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 6 23:38:16.181679 kubelet[2602]: E1106 23:38:16.181622 2602 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 6 23:38:16.181954 kubelet[2602]: I1106 23:38:16.181936 2602 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Nov 6 23:38:16.190143 kubelet[2602]: I1106 23:38:16.189960 2602 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 6 23:38:16.192272 kubelet[2602]: I1106 23:38:16.191822 2602 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 6 23:38:16.192272 kubelet[2602]: I1106 23:38:16.191912 2602 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230.2.4-n-07c3be35b1","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 6 23:38:16.192272 kubelet[2602]: I1106 23:38:16.192166 2602 topology_manager.go:138] "Creating topology manager with none policy" Nov 6 23:38:16.192272 kubelet[2602]: I1106 23:38:16.192182 2602 container_manager_linux.go:303] "Creating device plugin manager" Nov 6 23:38:16.192577 kubelet[2602]: I1106 23:38:16.192361 2602 state_mem.go:36] "Initialized new in-memory state store" Nov 6 23:38:16.192648 kubelet[2602]: I1106 23:38:16.192615 2602 kubelet.go:480] "Attempting to sync node with API server" Nov 6 23:38:16.192648 kubelet[2602]: I1106 23:38:16.192639 2602 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 6 23:38:16.192729 kubelet[2602]: I1106 23:38:16.192674 2602 kubelet.go:386] "Adding apiserver pod source" Nov 6 23:38:16.192729 kubelet[2602]: I1106 23:38:16.192697 2602 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 6 23:38:16.199913 kubelet[2602]: I1106 23:38:16.199681 2602 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Nov 6 23:38:16.206813 kubelet[2602]: I1106 23:38:16.206741 2602 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 6 23:38:16.216072 kubelet[2602]: I1106 23:38:16.215019 2602 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 6 23:38:16.216072 kubelet[2602]: I1106 23:38:16.215116 2602 server.go:1289] "Started kubelet" Nov 6 23:38:16.222009 kubelet[2602]: I1106 23:38:16.220740 2602 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 6 23:38:16.226405 kubelet[2602]: I1106 23:38:16.226354 
2602 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 6 23:38:16.227605 kubelet[2602]: I1106 23:38:16.227478 2602 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 6 23:38:16.228753 kubelet[2602]: I1106 23:38:16.228720 2602 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 6 23:38:16.228977 kubelet[2602]: I1106 23:38:16.228911 2602 reconciler.go:26] "Reconciler: start to sync state" Nov 6 23:38:16.229326 kubelet[2602]: I1106 23:38:16.229198 2602 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 6 23:38:16.229940 kubelet[2602]: I1106 23:38:16.229748 2602 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 6 23:38:16.234857 kubelet[2602]: I1106 23:38:16.234803 2602 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 6 23:38:16.242197 kubelet[2602]: I1106 23:38:16.242041 2602 server.go:317] "Adding debug handlers to kubelet server" Nov 6 23:38:16.247013 kubelet[2602]: I1106 23:38:16.246984 2602 factory.go:223] Registration of the systemd container factory successfully Nov 6 23:38:16.248597 kubelet[2602]: I1106 23:38:16.247568 2602 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 6 23:38:16.249423 kubelet[2602]: E1106 23:38:16.249361 2602 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 6 23:38:16.254417 kubelet[2602]: I1106 23:38:16.254364 2602 factory.go:223] Registration of the containerd container factory successfully Nov 6 23:38:16.255200 kubelet[2602]: I1106 23:38:16.255134 2602 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Nov 6 23:38:16.258322 kubelet[2602]: I1106 23:38:16.258275 2602 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Nov 6 23:38:16.258322 kubelet[2602]: I1106 23:38:16.258319 2602 status_manager.go:230] "Starting to sync pod status with apiserver" Nov 6 23:38:16.258906 kubelet[2602]: I1106 23:38:16.258355 2602 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
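Unlike the first start, this kubelet (PID 2602) finds an existing bootstrap result: "Client rotation is on" plus "Loading cert/key pair from a file" point at the rotated client certificate under /var/lib/kubelet/pki, and the serving pair kubelet.crt/kubelet.key is picked up by the dynamic serving controller. The identity and expiry of the client certificate can be read with plain openssl; the paths are the kubelet defaults named in the log.

sudo openssl x509 -in /var/lib/kubelet/pki/kubelet-client-current.pem -noout -subject -enddate
# expected subject: O = system:nodes, CN = system:node:ci-4230.2.4-n-07c3be35b1
ls -l /var/lib/kubelet/pki/   # kubelet-client-current.pem is a symlink to the dated cert/key pair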
Nov 6 23:38:16.258906 kubelet[2602]: I1106 23:38:16.258368 2602 kubelet.go:2436] "Starting kubelet main sync loop" Nov 6 23:38:16.258906 kubelet[2602]: E1106 23:38:16.258440 2602 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 6 23:38:16.348160 kubelet[2602]: I1106 23:38:16.346785 2602 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 6 23:38:16.348160 kubelet[2602]: I1106 23:38:16.346807 2602 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 6 23:38:16.348160 kubelet[2602]: I1106 23:38:16.346830 2602 state_mem.go:36] "Initialized new in-memory state store" Nov 6 23:38:16.348160 kubelet[2602]: I1106 23:38:16.346999 2602 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 6 23:38:16.348160 kubelet[2602]: I1106 23:38:16.347010 2602 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 6 23:38:16.348160 kubelet[2602]: I1106 23:38:16.347047 2602 policy_none.go:49] "None policy: Start" Nov 6 23:38:16.348160 kubelet[2602]: I1106 23:38:16.347058 2602 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 6 23:38:16.348160 kubelet[2602]: I1106 23:38:16.347069 2602 state_mem.go:35] "Initializing new in-memory state store" Nov 6 23:38:16.348160 kubelet[2602]: I1106 23:38:16.347170 2602 state_mem.go:75] "Updated machine memory state" Nov 6 23:38:16.358279 kubelet[2602]: E1106 23:38:16.355629 2602 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 6 23:38:16.358279 kubelet[2602]: I1106 23:38:16.356551 2602 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 6 23:38:16.358279 kubelet[2602]: I1106 23:38:16.356591 2602 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 6 23:38:16.358279 kubelet[2602]: I1106 23:38:16.357020 2602 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 6 23:38:16.358795 kubelet[2602]: E1106 23:38:16.358771 2602 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 6 23:38:16.362578 kubelet[2602]: I1106 23:38:16.362543 2602 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230.2.4-n-07c3be35b1" Nov 6 23:38:16.370604 kubelet[2602]: I1106 23:38:16.370551 2602 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4230.2.4-n-07c3be35b1" Nov 6 23:38:16.374792 kubelet[2602]: I1106 23:38:16.374712 2602 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230.2.4-n-07c3be35b1" Nov 6 23:38:16.379705 kubelet[2602]: I1106 23:38:16.379327 2602 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 6 23:38:16.387260 kubelet[2602]: I1106 23:38:16.385397 2602 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 6 23:38:16.387260 kubelet[2602]: I1106 23:38:16.385448 2602 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 6 23:38:16.387260 kubelet[2602]: E1106 23:38:16.385673 2602 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4230.2.4-n-07c3be35b1\" already exists" pod="kube-system/kube-controller-manager-ci-4230.2.4-n-07c3be35b1" Nov 6 23:38:16.396070 sudo[2638]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Nov 6 23:38:16.397431 sudo[2638]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Nov 6 23:38:16.433458 kubelet[2602]: I1106 23:38:16.432870 2602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1b83adc0069caf191de558684c9952ff-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230.2.4-n-07c3be35b1\" (UID: \"1b83adc0069caf191de558684c9952ff\") " pod="kube-system/kube-apiserver-ci-4230.2.4-n-07c3be35b1" Nov 6 23:38:16.433458 kubelet[2602]: I1106 23:38:16.432951 2602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/74d374abf0133fc3f6922e24c9593e3a-flexvolume-dir\") pod \"kube-controller-manager-ci-4230.2.4-n-07c3be35b1\" (UID: \"74d374abf0133fc3f6922e24c9593e3a\") " pod="kube-system/kube-controller-manager-ci-4230.2.4-n-07c3be35b1" Nov 6 23:38:16.433458 kubelet[2602]: I1106 23:38:16.433017 2602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/74d374abf0133fc3f6922e24c9593e3a-kubeconfig\") pod \"kube-controller-manager-ci-4230.2.4-n-07c3be35b1\" (UID: \"74d374abf0133fc3f6922e24c9593e3a\") " pod="kube-system/kube-controller-manager-ci-4230.2.4-n-07c3be35b1" Nov 6 23:38:16.433458 kubelet[2602]: I1106 23:38:16.433048 2602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1b83adc0069caf191de558684c9952ff-k8s-certs\") pod \"kube-apiserver-ci-4230.2.4-n-07c3be35b1\" (UID: \"1b83adc0069caf191de558684c9952ff\") " pod="kube-system/kube-apiserver-ci-4230.2.4-n-07c3be35b1" Nov 6 23:38:16.433458 kubelet[2602]: 
I1106 23:38:16.433075 2602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/74d374abf0133fc3f6922e24c9593e3a-ca-certs\") pod \"kube-controller-manager-ci-4230.2.4-n-07c3be35b1\" (UID: \"74d374abf0133fc3f6922e24c9593e3a\") " pod="kube-system/kube-controller-manager-ci-4230.2.4-n-07c3be35b1" Nov 6 23:38:16.433988 kubelet[2602]: I1106 23:38:16.433103 2602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/74d374abf0133fc3f6922e24c9593e3a-k8s-certs\") pod \"kube-controller-manager-ci-4230.2.4-n-07c3be35b1\" (UID: \"74d374abf0133fc3f6922e24c9593e3a\") " pod="kube-system/kube-controller-manager-ci-4230.2.4-n-07c3be35b1" Nov 6 23:38:16.433988 kubelet[2602]: I1106 23:38:16.433148 2602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/74d374abf0133fc3f6922e24c9593e3a-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230.2.4-n-07c3be35b1\" (UID: \"74d374abf0133fc3f6922e24c9593e3a\") " pod="kube-system/kube-controller-manager-ci-4230.2.4-n-07c3be35b1" Nov 6 23:38:16.433988 kubelet[2602]: I1106 23:38:16.433178 2602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8911f68c6912a2243235728fea084557-kubeconfig\") pod \"kube-scheduler-ci-4230.2.4-n-07c3be35b1\" (UID: \"8911f68c6912a2243235728fea084557\") " pod="kube-system/kube-scheduler-ci-4230.2.4-n-07c3be35b1" Nov 6 23:38:16.433988 kubelet[2602]: I1106 23:38:16.433276 2602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1b83adc0069caf191de558684c9952ff-ca-certs\") pod \"kube-apiserver-ci-4230.2.4-n-07c3be35b1\" (UID: \"1b83adc0069caf191de558684c9952ff\") " pod="kube-system/kube-apiserver-ci-4230.2.4-n-07c3be35b1" Nov 6 23:38:16.468004 kubelet[2602]: I1106 23:38:16.467779 2602 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.4-n-07c3be35b1" Nov 6 23:38:16.485138 kubelet[2602]: I1106 23:38:16.485094 2602 kubelet_node_status.go:124] "Node was previously registered" node="ci-4230.2.4-n-07c3be35b1" Nov 6 23:38:16.485378 kubelet[2602]: I1106 23:38:16.485199 2602 kubelet_node_status.go:78] "Successfully registered node" node="ci-4230.2.4-n-07c3be35b1" Nov 6 23:38:16.682298 kubelet[2602]: E1106 23:38:16.681040 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 23:38:16.686281 kubelet[2602]: E1106 23:38:16.686198 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 23:38:16.688321 kubelet[2602]: E1106 23:38:16.687048 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 23:38:17.047552 sudo[2638]: pam_unix(sudo:session): session closed for user root Nov 6 23:38:17.197908 kubelet[2602]: I1106 23:38:17.197546 2602 apiserver.go:52] "Watching apiserver" Nov 6 23:38:17.229571 kubelet[2602]: I1106 23:38:17.229509 
2602 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 6 23:38:17.297305 kubelet[2602]: E1106 23:38:17.295389 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 23:38:17.297305 kubelet[2602]: E1106 23:38:17.295878 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 23:38:17.297305 kubelet[2602]: E1106 23:38:17.296276 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 23:38:17.335380 kubelet[2602]: I1106 23:38:17.334402 2602 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4230.2.4-n-07c3be35b1" podStartSLOduration=1.334298887 podStartE2EDuration="1.334298887s" podCreationTimestamp="2025-11-06 23:38:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 23:38:17.334066458 +0000 UTC m=+1.278380367" watchObservedRunningTime="2025-11-06 23:38:17.334298887 +0000 UTC m=+1.278612793" Nov 6 23:38:17.366948 kubelet[2602]: I1106 23:38:17.366869 2602 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4230.2.4-n-07c3be35b1" podStartSLOduration=1.366845007 podStartE2EDuration="1.366845007s" podCreationTimestamp="2025-11-06 23:38:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 23:38:17.353598099 +0000 UTC m=+1.297912006" watchObservedRunningTime="2025-11-06 23:38:17.366845007 +0000 UTC m=+1.311158913" Nov 6 23:38:17.384099 kubelet[2602]: I1106 23:38:17.384017 2602 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4230.2.4-n-07c3be35b1" podStartSLOduration=2.38399657 podStartE2EDuration="2.38399657s" podCreationTimestamp="2025-11-06 23:38:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 23:38:17.369575332 +0000 UTC m=+1.313889239" watchObservedRunningTime="2025-11-06 23:38:17.38399657 +0000 UTC m=+1.328310479" Nov 6 23:38:18.297890 kubelet[2602]: E1106 23:38:18.297479 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 23:38:18.299219 kubelet[2602]: E1106 23:38:18.299110 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 23:38:18.825984 sudo[1669]: pam_unix(sudo:session): session closed for user root Nov 6 23:38:18.832270 sshd[1668]: Connection closed by 147.75.109.163 port 48978 Nov 6 23:38:18.832989 sshd-session[1665]: pam_unix(sshd:session): session closed for user core Nov 6 23:38:18.837199 systemd-logind[1460]: Session 7 logged out. Waiting for processes to exit. Nov 6 23:38:18.838596 systemd[1]: sshd@6-164.92.114.154:22-147.75.109.163:48978.service: Deactivated successfully. 
Nov 6 23:38:18.843626 systemd[1]: session-7.scope: Deactivated successfully. Nov 6 23:38:18.844646 systemd[1]: session-7.scope: Consumed 8.419s CPU time, 217.5M memory peak. Nov 6 23:38:18.848627 systemd-logind[1460]: Removed session 7. Nov 6 23:38:20.128766 kubelet[2602]: I1106 23:38:20.128362 2602 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 6 23:38:20.129823 containerd[1474]: time="2025-11-06T23:38:20.129671221Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Nov 6 23:38:20.130414 kubelet[2602]: I1106 23:38:20.130132 2602 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 6 23:38:21.043756 kubelet[2602]: E1106 23:38:21.042787 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 23:38:21.134937 systemd[1]: Created slice kubepods-besteffort-pod3ca1892f_5548_4c2d_b510_4f7e4beeca11.slice - libcontainer container kubepods-besteffort-pod3ca1892f_5548_4c2d_b510_4f7e4beeca11.slice. Nov 6 23:38:21.154418 systemd[1]: Created slice kubepods-burstable-podfc763845_118d_466b_9e2e_8414a02a094e.slice - libcontainer container kubepods-burstable-podfc763845_118d_466b_9e2e_8414a02a094e.slice. Nov 6 23:38:21.167559 kubelet[2602]: I1106 23:38:21.167515 2602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3ca1892f-5548-4c2d-b510-4f7e4beeca11-xtables-lock\") pod \"kube-proxy-hmjzf\" (UID: \"3ca1892f-5548-4c2d-b510-4f7e4beeca11\") " pod="kube-system/kube-proxy-hmjzf" Nov 6 23:38:21.169722 kubelet[2602]: I1106 23:38:21.168163 2602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/fc763845-118d-466b-9e2e-8414a02a094e-cilium-run\") pod \"cilium-kktl4\" (UID: \"fc763845-118d-466b-9e2e-8414a02a094e\") " pod="kube-system/cilium-kktl4" Nov 6 23:38:21.169722 kubelet[2602]: I1106 23:38:21.168207 2602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/fc763845-118d-466b-9e2e-8414a02a094e-hostproc\") pod \"cilium-kktl4\" (UID: \"fc763845-118d-466b-9e2e-8414a02a094e\") " pod="kube-system/cilium-kktl4" Nov 6 23:38:21.169722 kubelet[2602]: I1106 23:38:21.168245 2602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/fc763845-118d-466b-9e2e-8414a02a094e-cni-path\") pod \"cilium-kktl4\" (UID: \"fc763845-118d-466b-9e2e-8414a02a094e\") " pod="kube-system/cilium-kktl4" Nov 6 23:38:21.169722 kubelet[2602]: I1106 23:38:21.168267 2602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fc763845-118d-466b-9e2e-8414a02a094e-xtables-lock\") pod \"cilium-kktl4\" (UID: \"fc763845-118d-466b-9e2e-8414a02a094e\") " pod="kube-system/cilium-kktl4" Nov 6 23:38:21.169722 kubelet[2602]: I1106 23:38:21.168287 2602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/fc763845-118d-466b-9e2e-8414a02a094e-clustermesh-secrets\") pod \"cilium-kktl4\" (UID: 
\"fc763845-118d-466b-9e2e-8414a02a094e\") " pod="kube-system/cilium-kktl4" Nov 6 23:38:21.169722 kubelet[2602]: I1106 23:38:21.168302 2602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/fc763845-118d-466b-9e2e-8414a02a094e-hubble-tls\") pod \"cilium-kktl4\" (UID: \"fc763845-118d-466b-9e2e-8414a02a094e\") " pod="kube-system/cilium-kktl4" Nov 6 23:38:21.170078 kubelet[2602]: I1106 23:38:21.168318 2602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfwvm\" (UniqueName: \"kubernetes.io/projected/fc763845-118d-466b-9e2e-8414a02a094e-kube-api-access-gfwvm\") pod \"cilium-kktl4\" (UID: \"fc763845-118d-466b-9e2e-8414a02a094e\") " pod="kube-system/cilium-kktl4" Nov 6 23:38:21.170078 kubelet[2602]: I1106 23:38:21.168336 2602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3ca1892f-5548-4c2d-b510-4f7e4beeca11-kube-proxy\") pod \"kube-proxy-hmjzf\" (UID: \"3ca1892f-5548-4c2d-b510-4f7e4beeca11\") " pod="kube-system/kube-proxy-hmjzf" Nov 6 23:38:21.170078 kubelet[2602]: I1106 23:38:21.168374 2602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3ca1892f-5548-4c2d-b510-4f7e4beeca11-lib-modules\") pod \"kube-proxy-hmjzf\" (UID: \"3ca1892f-5548-4c2d-b510-4f7e4beeca11\") " pod="kube-system/kube-proxy-hmjzf" Nov 6 23:38:21.170078 kubelet[2602]: I1106 23:38:21.168401 2602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fc763845-118d-466b-9e2e-8414a02a094e-etc-cni-netd\") pod \"cilium-kktl4\" (UID: \"fc763845-118d-466b-9e2e-8414a02a094e\") " pod="kube-system/cilium-kktl4" Nov 6 23:38:21.170078 kubelet[2602]: I1106 23:38:21.168417 2602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/fc763845-118d-466b-9e2e-8414a02a094e-host-proc-sys-net\") pod \"cilium-kktl4\" (UID: \"fc763845-118d-466b-9e2e-8414a02a094e\") " pod="kube-system/cilium-kktl4" Nov 6 23:38:21.170212 kubelet[2602]: I1106 23:38:21.168440 2602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/fc763845-118d-466b-9e2e-8414a02a094e-host-proc-sys-kernel\") pod \"cilium-kktl4\" (UID: \"fc763845-118d-466b-9e2e-8414a02a094e\") " pod="kube-system/cilium-kktl4" Nov 6 23:38:21.170212 kubelet[2602]: I1106 23:38:21.168464 2602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nt9j\" (UniqueName: \"kubernetes.io/projected/3ca1892f-5548-4c2d-b510-4f7e4beeca11-kube-api-access-5nt9j\") pod \"kube-proxy-hmjzf\" (UID: \"3ca1892f-5548-4c2d-b510-4f7e4beeca11\") " pod="kube-system/kube-proxy-hmjzf" Nov 6 23:38:21.170212 kubelet[2602]: I1106 23:38:21.168484 2602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/fc763845-118d-466b-9e2e-8414a02a094e-bpf-maps\") pod \"cilium-kktl4\" (UID: \"fc763845-118d-466b-9e2e-8414a02a094e\") " pod="kube-system/cilium-kktl4" Nov 6 23:38:21.170212 kubelet[2602]: I1106 23:38:21.168506 2602 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/fc763845-118d-466b-9e2e-8414a02a094e-cilium-cgroup\") pod \"cilium-kktl4\" (UID: \"fc763845-118d-466b-9e2e-8414a02a094e\") " pod="kube-system/cilium-kktl4" Nov 6 23:38:21.170212 kubelet[2602]: I1106 23:38:21.168537 2602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fc763845-118d-466b-9e2e-8414a02a094e-lib-modules\") pod \"cilium-kktl4\" (UID: \"fc763845-118d-466b-9e2e-8414a02a094e\") " pod="kube-system/cilium-kktl4" Nov 6 23:38:21.171363 kubelet[2602]: I1106 23:38:21.168563 2602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fc763845-118d-466b-9e2e-8414a02a094e-cilium-config-path\") pod \"cilium-kktl4\" (UID: \"fc763845-118d-466b-9e2e-8414a02a094e\") " pod="kube-system/cilium-kktl4" Nov 6 23:38:21.274622 systemd[1]: Created slice kubepods-besteffort-podec8dd16a_f84c_4512_8367_2001ee2ca9e1.slice - libcontainer container kubepods-besteffort-podec8dd16a_f84c_4512_8367_2001ee2ca9e1.slice. Nov 6 23:38:21.306153 kubelet[2602]: E1106 23:38:21.305441 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 23:38:21.370117 kubelet[2602]: I1106 23:38:21.370007 2602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ec8dd16a-f84c-4512-8367-2001ee2ca9e1-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-d5trf\" (UID: \"ec8dd16a-f84c-4512-8367-2001ee2ca9e1\") " pod="kube-system/cilium-operator-6c4d7847fc-d5trf" Nov 6 23:38:21.370117 kubelet[2602]: I1106 23:38:21.370049 2602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xm2b7\" (UniqueName: \"kubernetes.io/projected/ec8dd16a-f84c-4512-8367-2001ee2ca9e1-kube-api-access-xm2b7\") pod \"cilium-operator-6c4d7847fc-d5trf\" (UID: \"ec8dd16a-f84c-4512-8367-2001ee2ca9e1\") " pod="kube-system/cilium-operator-6c4d7847fc-d5trf" Nov 6 23:38:21.444469 kubelet[2602]: E1106 23:38:21.444379 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 23:38:21.445578 containerd[1474]: time="2025-11-06T23:38:21.445456977Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hmjzf,Uid:3ca1892f-5548-4c2d-b510-4f7e4beeca11,Namespace:kube-system,Attempt:0,}" Nov 6 23:38:21.460168 kubelet[2602]: E1106 23:38:21.459490 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 23:38:21.461588 containerd[1474]: time="2025-11-06T23:38:21.460235439Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kktl4,Uid:fc763845-118d-466b-9e2e-8414a02a094e,Namespace:kube-system,Attempt:0,}" Nov 6 23:38:21.498129 containerd[1474]: time="2025-11-06T23:38:21.497661516Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 6 23:38:21.498129 containerd[1474]: time="2025-11-06T23:38:21.497723387Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 6 23:38:21.498129 containerd[1474]: time="2025-11-06T23:38:21.497737990Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 6 23:38:21.498129 containerd[1474]: time="2025-11-06T23:38:21.497840078Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 6 23:38:21.531587 systemd[1]: Started cri-containerd-99e0f234a9bb01f9e79fa2feef9f49fac350c97a6b5a5fe878691b8d11de2e5a.scope - libcontainer container 99e0f234a9bb01f9e79fa2feef9f49fac350c97a6b5a5fe878691b8d11de2e5a. Nov 6 23:38:21.537261 containerd[1474]: time="2025-11-06T23:38:21.535255198Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 6 23:38:21.537261 containerd[1474]: time="2025-11-06T23:38:21.535315248Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 6 23:38:21.537261 containerd[1474]: time="2025-11-06T23:38:21.535326702Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 6 23:38:21.537261 containerd[1474]: time="2025-11-06T23:38:21.535438195Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 6 23:38:21.565088 systemd[1]: Started cri-containerd-4dd27c27ac0bfe4dcd4c6fa2e22782c857644ccb07b7ee8f909c89b0b1e3755c.scope - libcontainer container 4dd27c27ac0bfe4dcd4c6fa2e22782c857644ccb07b7ee8f909c89b0b1e3755c. 
Nov 6 23:38:21.589437 containerd[1474]: time="2025-11-06T23:38:21.588809433Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hmjzf,Uid:3ca1892f-5548-4c2d-b510-4f7e4beeca11,Namespace:kube-system,Attempt:0,} returns sandbox id \"99e0f234a9bb01f9e79fa2feef9f49fac350c97a6b5a5fe878691b8d11de2e5a\"" Nov 6 23:38:21.590678 kubelet[2602]: E1106 23:38:21.590464 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 23:38:21.599983 containerd[1474]: time="2025-11-06T23:38:21.599734250Z" level=info msg="CreateContainer within sandbox \"99e0f234a9bb01f9e79fa2feef9f49fac350c97a6b5a5fe878691b8d11de2e5a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 6 23:38:21.603283 kubelet[2602]: E1106 23:38:21.603097 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 23:38:21.604281 containerd[1474]: time="2025-11-06T23:38:21.603776449Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-d5trf,Uid:ec8dd16a-f84c-4512-8367-2001ee2ca9e1,Namespace:kube-system,Attempt:0,}" Nov 6 23:38:21.624202 containerd[1474]: time="2025-11-06T23:38:21.624152162Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kktl4,Uid:fc763845-118d-466b-9e2e-8414a02a094e,Namespace:kube-system,Attempt:0,} returns sandbox id \"4dd27c27ac0bfe4dcd4c6fa2e22782c857644ccb07b7ee8f909c89b0b1e3755c\"" Nov 6 23:38:21.626657 kubelet[2602]: E1106 23:38:21.626298 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 23:38:21.631861 containerd[1474]: time="2025-11-06T23:38:21.631766916Z" level=info msg="CreateContainer within sandbox \"99e0f234a9bb01f9e79fa2feef9f49fac350c97a6b5a5fe878691b8d11de2e5a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"8b2e4ca540694fdcaaf5c0aa3c190586fa64c4bb72451bf8b6b0ed3c004d0d46\"" Nov 6 23:38:21.634542 containerd[1474]: time="2025-11-06T23:38:21.634461160Z" level=info msg="StartContainer for \"8b2e4ca540694fdcaaf5c0aa3c190586fa64c4bb72451bf8b6b0ed3c004d0d46\"" Nov 6 23:38:21.638993 containerd[1474]: time="2025-11-06T23:38:21.637260661Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Nov 6 23:38:21.642289 kubelet[2602]: E1106 23:38:21.642031 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 23:38:21.687937 containerd[1474]: time="2025-11-06T23:38:21.685964324Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 6 23:38:21.687937 containerd[1474]: time="2025-11-06T23:38:21.686071119Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 6 23:38:21.687937 containerd[1474]: time="2025-11-06T23:38:21.686096001Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 6 23:38:21.687937 containerd[1474]: time="2025-11-06T23:38:21.686578071Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 6 23:38:21.724083 systemd[1]: Started cri-containerd-8b2e4ca540694fdcaaf5c0aa3c190586fa64c4bb72451bf8b6b0ed3c004d0d46.scope - libcontainer container 8b2e4ca540694fdcaaf5c0aa3c190586fa64c4bb72451bf8b6b0ed3c004d0d46. Nov 6 23:38:21.734667 systemd[1]: Started cri-containerd-d0143510f844ebc6ffd41a760dfd897bb4c56512dcbda4970ca4cd16cd7762ec.scope - libcontainer container d0143510f844ebc6ffd41a760dfd897bb4c56512dcbda4970ca4cd16cd7762ec. Nov 6 23:38:21.776853 containerd[1474]: time="2025-11-06T23:38:21.776714240Z" level=info msg="StartContainer for \"8b2e4ca540694fdcaaf5c0aa3c190586fa64c4bb72451bf8b6b0ed3c004d0d46\" returns successfully" Nov 6 23:38:21.812931 containerd[1474]: time="2025-11-06T23:38:21.812714805Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-d5trf,Uid:ec8dd16a-f84c-4512-8367-2001ee2ca9e1,Namespace:kube-system,Attempt:0,} returns sandbox id \"d0143510f844ebc6ffd41a760dfd897bb4c56512dcbda4970ca4cd16cd7762ec\"" Nov 6 23:38:21.817351 kubelet[2602]: E1106 23:38:21.815115 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 23:38:22.317134 kubelet[2602]: E1106 23:38:22.317098 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 23:38:22.320548 kubelet[2602]: E1106 23:38:22.318200 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 23:38:22.320798 kubelet[2602]: E1106 23:38:22.320391 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 23:38:22.356607 kubelet[2602]: I1106 23:38:22.356245 2602 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-hmjzf" podStartSLOduration=1.356202238 podStartE2EDuration="1.356202238s" podCreationTimestamp="2025-11-06 23:38:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 23:38:22.340273803 +0000 UTC m=+6.284587707" watchObservedRunningTime="2025-11-06 23:38:22.356202238 +0000 UTC m=+6.300516146" Nov 6 23:38:23.323873 kubelet[2602]: E1106 23:38:23.323822 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 23:38:24.100292 kubelet[2602]: E1106 23:38:24.099891 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 23:38:24.326466 kubelet[2602]: E1106 23:38:24.326073 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 23:38:26.166918 
systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1302597222.mount: Deactivated successfully. Nov 6 23:38:28.475026 containerd[1474]: time="2025-11-06T23:38:28.473967109Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:38:28.477738 containerd[1474]: time="2025-11-06T23:38:28.476386256Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Nov 6 23:38:28.479105 containerd[1474]: time="2025-11-06T23:38:28.479045836Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:38:28.481275 containerd[1474]: time="2025-11-06T23:38:28.481081236Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 6.843725354s" Nov 6 23:38:28.481275 containerd[1474]: time="2025-11-06T23:38:28.481131829Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Nov 6 23:38:28.483696 containerd[1474]: time="2025-11-06T23:38:28.483663558Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Nov 6 23:38:28.488416 containerd[1474]: time="2025-11-06T23:38:28.487941810Z" level=info msg="CreateContainer within sandbox \"4dd27c27ac0bfe4dcd4c6fa2e22782c857644ccb07b7ee8f909c89b0b1e3755c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Nov 6 23:38:28.523537 update_engine[1461]: I20251106 23:38:28.523414 1461 update_attempter.cc:509] Updating boot flags... Nov 6 23:38:28.562805 containerd[1474]: time="2025-11-06T23:38:28.562736962Z" level=info msg="CreateContainer within sandbox \"4dd27c27ac0bfe4dcd4c6fa2e22782c857644ccb07b7ee8f909c89b0b1e3755c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"835d3258dacd5a26ac87f89c04fda942738116d114c493b4af3291c2ce7121c6\"" Nov 6 23:38:28.566207 containerd[1474]: time="2025-11-06T23:38:28.564880827Z" level=info msg="StartContainer for \"835d3258dacd5a26ac87f89c04fda942738116d114c493b4af3291c2ce7121c6\"" Nov 6 23:38:28.616359 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (3006) Nov 6 23:38:28.840339 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (3008) Nov 6 23:38:28.876258 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (3008) Nov 6 23:38:28.884291 systemd[1]: Started cri-containerd-835d3258dacd5a26ac87f89c04fda942738116d114c493b4af3291c2ce7121c6.scope - libcontainer container 835d3258dacd5a26ac87f89c04fda942738116d114c493b4af3291c2ce7121c6. Nov 6 23:38:29.024792 systemd[1]: cri-containerd-835d3258dacd5a26ac87f89c04fda942738116d114c493b4af3291c2ce7121c6.scope: Deactivated successfully. 
Nov 6 23:38:29.025084 systemd[1]: cri-containerd-835d3258dacd5a26ac87f89c04fda942738116d114c493b4af3291c2ce7121c6.scope: Consumed 29ms CPU time, 6.4M memory peak, 4K read from disk, 2.6M written to disk. Nov 6 23:38:29.030064 containerd[1474]: time="2025-11-06T23:38:29.029308515Z" level=info msg="StartContainer for \"835d3258dacd5a26ac87f89c04fda942738116d114c493b4af3291c2ce7121c6\" returns successfully" Nov 6 23:38:29.191487 containerd[1474]: time="2025-11-06T23:38:29.178459925Z" level=info msg="shim disconnected" id=835d3258dacd5a26ac87f89c04fda942738116d114c493b4af3291c2ce7121c6 namespace=k8s.io Nov 6 23:38:29.191487 containerd[1474]: time="2025-11-06T23:38:29.191385817Z" level=warning msg="cleaning up after shim disconnected" id=835d3258dacd5a26ac87f89c04fda942738116d114c493b4af3291c2ce7121c6 namespace=k8s.io Nov 6 23:38:29.191487 containerd[1474]: time="2025-11-06T23:38:29.191406982Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 6 23:38:29.209591 containerd[1474]: time="2025-11-06T23:38:29.209526159Z" level=warning msg="cleanup warnings time=\"2025-11-06T23:38:29Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Nov 6 23:38:29.350830 kubelet[2602]: E1106 23:38:29.350134 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 23:38:29.356674 containerd[1474]: time="2025-11-06T23:38:29.356619510Z" level=info msg="CreateContainer within sandbox \"4dd27c27ac0bfe4dcd4c6fa2e22782c857644ccb07b7ee8f909c89b0b1e3755c\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Nov 6 23:38:29.376817 containerd[1474]: time="2025-11-06T23:38:29.376610421Z" level=info msg="CreateContainer within sandbox \"4dd27c27ac0bfe4dcd4c6fa2e22782c857644ccb07b7ee8f909c89b0b1e3755c\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"79aaf97a8c559ac20e3213c0de2b909b0f2cf13c5148294ba7a886d9fd4e107d\"" Nov 6 23:38:29.377825 containerd[1474]: time="2025-11-06T23:38:29.377792389Z" level=info msg="StartContainer for \"79aaf97a8c559ac20e3213c0de2b909b0f2cf13c5148294ba7a886d9fd4e107d\"" Nov 6 23:38:29.418560 systemd[1]: Started cri-containerd-79aaf97a8c559ac20e3213c0de2b909b0f2cf13c5148294ba7a886d9fd4e107d.scope - libcontainer container 79aaf97a8c559ac20e3213c0de2b909b0f2cf13c5148294ba7a886d9fd4e107d. Nov 6 23:38:29.503860 containerd[1474]: time="2025-11-06T23:38:29.503628845Z" level=info msg="StartContainer for \"79aaf97a8c559ac20e3213c0de2b909b0f2cf13c5148294ba7a886d9fd4e107d\" returns successfully" Nov 6 23:38:29.529911 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 6 23:38:29.530614 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 6 23:38:29.531365 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Nov 6 23:38:29.540523 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 6 23:38:29.540803 systemd[1]: cri-containerd-79aaf97a8c559ac20e3213c0de2b909b0f2cf13c5148294ba7a886d9fd4e107d.scope: Deactivated successfully. Nov 6 23:38:29.554154 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-835d3258dacd5a26ac87f89c04fda942738116d114c493b4af3291c2ce7121c6-rootfs.mount: Deactivated successfully. Nov 6 23:38:29.554809 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. 
Nov 6 23:38:29.613735 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount740836516.mount: Deactivated successfully. Nov 6 23:38:29.616208 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 6 23:38:29.627830 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-79aaf97a8c559ac20e3213c0de2b909b0f2cf13c5148294ba7a886d9fd4e107d-rootfs.mount: Deactivated successfully. Nov 6 23:38:29.631600 containerd[1474]: time="2025-11-06T23:38:29.631340301Z" level=info msg="shim disconnected" id=79aaf97a8c559ac20e3213c0de2b909b0f2cf13c5148294ba7a886d9fd4e107d namespace=k8s.io Nov 6 23:38:29.631600 containerd[1474]: time="2025-11-06T23:38:29.631403597Z" level=warning msg="cleaning up after shim disconnected" id=79aaf97a8c559ac20e3213c0de2b909b0f2cf13c5148294ba7a886d9fd4e107d namespace=k8s.io Nov 6 23:38:29.631600 containerd[1474]: time="2025-11-06T23:38:29.631431818Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 6 23:38:30.193934 containerd[1474]: time="2025-11-06T23:38:30.193826695Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:38:30.194953 containerd[1474]: time="2025-11-06T23:38:30.194876561Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Nov 6 23:38:30.218099 containerd[1474]: time="2025-11-06T23:38:30.218000986Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:38:30.220539 containerd[1474]: time="2025-11-06T23:38:30.220374360Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 1.735949914s" Nov 6 23:38:30.220539 containerd[1474]: time="2025-11-06T23:38:30.220421722Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Nov 6 23:38:30.225011 containerd[1474]: time="2025-11-06T23:38:30.224941931Z" level=info msg="CreateContainer within sandbox \"d0143510f844ebc6ffd41a760dfd897bb4c56512dcbda4970ca4cd16cd7762ec\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Nov 6 23:38:30.248511 containerd[1474]: time="2025-11-06T23:38:30.248453082Z" level=info msg="CreateContainer within sandbox \"d0143510f844ebc6ffd41a760dfd897bb4c56512dcbda4970ca4cd16cd7762ec\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"eaafaabca44711771648c79018d397aa0e83cad2733c829e28e3cd0a923ce0dd\"" Nov 6 23:38:30.250129 containerd[1474]: time="2025-11-06T23:38:30.250089117Z" level=info msg="StartContainer for \"eaafaabca44711771648c79018d397aa0e83cad2733c829e28e3cd0a923ce0dd\"" Nov 6 23:38:30.290948 systemd[1]: Started cri-containerd-eaafaabca44711771648c79018d397aa0e83cad2733c829e28e3cd0a923ce0dd.scope - libcontainer container 
eaafaabca44711771648c79018d397aa0e83cad2733c829e28e3cd0a923ce0dd. Nov 6 23:38:30.345432 containerd[1474]: time="2025-11-06T23:38:30.345385194Z" level=info msg="StartContainer for \"eaafaabca44711771648c79018d397aa0e83cad2733c829e28e3cd0a923ce0dd\" returns successfully" Nov 6 23:38:30.360369 kubelet[2602]: E1106 23:38:30.358751 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 23:38:30.373503 kubelet[2602]: E1106 23:38:30.373461 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 23:38:30.378637 containerd[1474]: time="2025-11-06T23:38:30.378584311Z" level=info msg="CreateContainer within sandbox \"4dd27c27ac0bfe4dcd4c6fa2e22782c857644ccb07b7ee8f909c89b0b1e3755c\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Nov 6 23:38:30.419156 containerd[1474]: time="2025-11-06T23:38:30.419099708Z" level=info msg="CreateContainer within sandbox \"4dd27c27ac0bfe4dcd4c6fa2e22782c857644ccb07b7ee8f909c89b0b1e3755c\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"6737868c5a0fd1a7d3a32cf812e6c3bb773e581e16ce394cb0bc8b057de6a0dc\"" Nov 6 23:38:30.420188 containerd[1474]: time="2025-11-06T23:38:30.420146120Z" level=info msg="StartContainer for \"6737868c5a0fd1a7d3a32cf812e6c3bb773e581e16ce394cb0bc8b057de6a0dc\"" Nov 6 23:38:30.438507 kubelet[2602]: I1106 23:38:30.434090 2602 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-d5trf" podStartSLOduration=1.032605998 podStartE2EDuration="9.434066591s" podCreationTimestamp="2025-11-06 23:38:21 +0000 UTC" firstStartedPulling="2025-11-06 23:38:21.820105696 +0000 UTC m=+5.764419595" lastFinishedPulling="2025-11-06 23:38:30.221566299 +0000 UTC m=+14.165880188" observedRunningTime="2025-11-06 23:38:30.390727477 +0000 UTC m=+14.335041385" watchObservedRunningTime="2025-11-06 23:38:30.434066591 +0000 UTC m=+14.378380502" Nov 6 23:38:30.478804 systemd[1]: Started cri-containerd-6737868c5a0fd1a7d3a32cf812e6c3bb773e581e16ce394cb0bc8b057de6a0dc.scope - libcontainer container 6737868c5a0fd1a7d3a32cf812e6c3bb773e581e16ce394cb0bc8b057de6a0dc. Nov 6 23:38:30.528955 containerd[1474]: time="2025-11-06T23:38:30.528894953Z" level=info msg="StartContainer for \"6737868c5a0fd1a7d3a32cf812e6c3bb773e581e16ce394cb0bc8b057de6a0dc\" returns successfully" Nov 6 23:38:30.535350 systemd[1]: cri-containerd-6737868c5a0fd1a7d3a32cf812e6c3bb773e581e16ce394cb0bc8b057de6a0dc.scope: Deactivated successfully. Nov 6 23:38:30.589448 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6737868c5a0fd1a7d3a32cf812e6c3bb773e581e16ce394cb0bc8b057de6a0dc-rootfs.mount: Deactivated successfully. 
Nov 6 23:38:30.590608 containerd[1474]: time="2025-11-06T23:38:30.589597647Z" level=info msg="shim disconnected" id=6737868c5a0fd1a7d3a32cf812e6c3bb773e581e16ce394cb0bc8b057de6a0dc namespace=k8s.io Nov 6 23:38:30.590608 containerd[1474]: time="2025-11-06T23:38:30.589664569Z" level=warning msg="cleaning up after shim disconnected" id=6737868c5a0fd1a7d3a32cf812e6c3bb773e581e16ce394cb0bc8b057de6a0dc namespace=k8s.io Nov 6 23:38:30.590608 containerd[1474]: time="2025-11-06T23:38:30.589673696Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 6 23:38:31.377920 kubelet[2602]: E1106 23:38:31.376845 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 23:38:31.377920 kubelet[2602]: E1106 23:38:31.376975 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 23:38:31.387548 containerd[1474]: time="2025-11-06T23:38:31.387473939Z" level=info msg="CreateContainer within sandbox \"4dd27c27ac0bfe4dcd4c6fa2e22782c857644ccb07b7ee8f909c89b0b1e3755c\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Nov 6 23:38:31.422430 containerd[1474]: time="2025-11-06T23:38:31.414065684Z" level=info msg="CreateContainer within sandbox \"4dd27c27ac0bfe4dcd4c6fa2e22782c857644ccb07b7ee8f909c89b0b1e3755c\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"abb3e8592d9e5533d3470c20d3c235e05a4332468b1e6038445e1c849fddbb35\"" Nov 6 23:38:31.422430 containerd[1474]: time="2025-11-06T23:38:31.417520496Z" level=info msg="StartContainer for \"abb3e8592d9e5533d3470c20d3c235e05a4332468b1e6038445e1c849fddbb35\"" Nov 6 23:38:31.483510 systemd[1]: Started cri-containerd-abb3e8592d9e5533d3470c20d3c235e05a4332468b1e6038445e1c849fddbb35.scope - libcontainer container abb3e8592d9e5533d3470c20d3c235e05a4332468b1e6038445e1c849fddbb35. Nov 6 23:38:31.522280 systemd[1]: cri-containerd-abb3e8592d9e5533d3470c20d3c235e05a4332468b1e6038445e1c849fddbb35.scope: Deactivated successfully. Nov 6 23:38:31.523005 containerd[1474]: time="2025-11-06T23:38:31.522875500Z" level=info msg="StartContainer for \"abb3e8592d9e5533d3470c20d3c235e05a4332468b1e6038445e1c849fddbb35\" returns successfully" Nov 6 23:38:31.558969 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-abb3e8592d9e5533d3470c20d3c235e05a4332468b1e6038445e1c849fddbb35-rootfs.mount: Deactivated successfully. 
Nov 6 23:38:31.564264 containerd[1474]: time="2025-11-06T23:38:31.563027069Z" level=info msg="shim disconnected" id=abb3e8592d9e5533d3470c20d3c235e05a4332468b1e6038445e1c849fddbb35 namespace=k8s.io Nov 6 23:38:31.564264 containerd[1474]: time="2025-11-06T23:38:31.563086844Z" level=warning msg="cleaning up after shim disconnected" id=abb3e8592d9e5533d3470c20d3c235e05a4332468b1e6038445e1c849fddbb35 namespace=k8s.io Nov 6 23:38:31.564264 containerd[1474]: time="2025-11-06T23:38:31.563094586Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 6 23:38:32.382736 kubelet[2602]: E1106 23:38:32.382698 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 23:38:32.395599 containerd[1474]: time="2025-11-06T23:38:32.395530873Z" level=info msg="CreateContainer within sandbox \"4dd27c27ac0bfe4dcd4c6fa2e22782c857644ccb07b7ee8f909c89b0b1e3755c\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Nov 6 23:38:32.417010 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2705125620.mount: Deactivated successfully. Nov 6 23:38:32.421659 containerd[1474]: time="2025-11-06T23:38:32.421582838Z" level=info msg="CreateContainer within sandbox \"4dd27c27ac0bfe4dcd4c6fa2e22782c857644ccb07b7ee8f909c89b0b1e3755c\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"df618b658c57ab3dd90e777f47d7da0b74cae2ca7d202ab3bf476f2d11e3c233\"" Nov 6 23:38:32.425512 containerd[1474]: time="2025-11-06T23:38:32.423664140Z" level=info msg="StartContainer for \"df618b658c57ab3dd90e777f47d7da0b74cae2ca7d202ab3bf476f2d11e3c233\"" Nov 6 23:38:32.465518 systemd[1]: Started cri-containerd-df618b658c57ab3dd90e777f47d7da0b74cae2ca7d202ab3bf476f2d11e3c233.scope - libcontainer container df618b658c57ab3dd90e777f47d7da0b74cae2ca7d202ab3bf476f2d11e3c233. Nov 6 23:38:32.503321 containerd[1474]: time="2025-11-06T23:38:32.503174680Z" level=info msg="StartContainer for \"df618b658c57ab3dd90e777f47d7da0b74cae2ca7d202ab3bf476f2d11e3c233\" returns successfully" Nov 6 23:38:32.693605 kubelet[2602]: I1106 23:38:32.691860 2602 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Nov 6 23:38:32.758053 systemd[1]: Created slice kubepods-burstable-pod8623b039_7a6f_48ae_a3af_3e4f63d72593.slice - libcontainer container kubepods-burstable-pod8623b039_7a6f_48ae_a3af_3e4f63d72593.slice. Nov 6 23:38:32.780279 systemd[1]: Created slice kubepods-burstable-pod0487f1a6_b567_41c3_a685_874b69b7567e.slice - libcontainer container kubepods-burstable-pod0487f1a6_b567_41c3_a685_874b69b7567e.slice. 
Nov 6 23:38:32.853424 kubelet[2602]: I1106 23:38:32.853366 2602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8blph\" (UniqueName: \"kubernetes.io/projected/8623b039-7a6f-48ae-a3af-3e4f63d72593-kube-api-access-8blph\") pod \"coredns-674b8bbfcf-7gkr9\" (UID: \"8623b039-7a6f-48ae-a3af-3e4f63d72593\") " pod="kube-system/coredns-674b8bbfcf-7gkr9" Nov 6 23:38:32.853424 kubelet[2602]: I1106 23:38:32.853418 2602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0487f1a6-b567-41c3-a685-874b69b7567e-config-volume\") pod \"coredns-674b8bbfcf-fz9z7\" (UID: \"0487f1a6-b567-41c3-a685-874b69b7567e\") " pod="kube-system/coredns-674b8bbfcf-fz9z7" Nov 6 23:38:32.853624 kubelet[2602]: I1106 23:38:32.853451 2602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87m6z\" (UniqueName: \"kubernetes.io/projected/0487f1a6-b567-41c3-a685-874b69b7567e-kube-api-access-87m6z\") pod \"coredns-674b8bbfcf-fz9z7\" (UID: \"0487f1a6-b567-41c3-a685-874b69b7567e\") " pod="kube-system/coredns-674b8bbfcf-fz9z7" Nov 6 23:38:32.853624 kubelet[2602]: I1106 23:38:32.853471 2602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8623b039-7a6f-48ae-a3af-3e4f63d72593-config-volume\") pod \"coredns-674b8bbfcf-7gkr9\" (UID: \"8623b039-7a6f-48ae-a3af-3e4f63d72593\") " pod="kube-system/coredns-674b8bbfcf-7gkr9" Nov 6 23:38:33.073912 kubelet[2602]: E1106 23:38:33.073740 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 23:38:33.076801 containerd[1474]: time="2025-11-06T23:38:33.076722922Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-7gkr9,Uid:8623b039-7a6f-48ae-a3af-3e4f63d72593,Namespace:kube-system,Attempt:0,}" Nov 6 23:38:33.090297 kubelet[2602]: E1106 23:38:33.087026 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 23:38:33.090437 containerd[1474]: time="2025-11-06T23:38:33.087854939Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fz9z7,Uid:0487f1a6-b567-41c3-a685-874b69b7567e,Namespace:kube-system,Attempt:0,}" Nov 6 23:38:33.391658 kubelet[2602]: E1106 23:38:33.391452 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 23:38:33.413756 kubelet[2602]: I1106 23:38:33.413671 2602 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-kktl4" podStartSLOduration=5.562134101 podStartE2EDuration="12.413629713s" podCreationTimestamp="2025-11-06 23:38:21 +0000 UTC" firstStartedPulling="2025-11-06 23:38:21.631211765 +0000 UTC m=+5.575525666" lastFinishedPulling="2025-11-06 23:38:28.482707392 +0000 UTC m=+12.427021278" observedRunningTime="2025-11-06 23:38:33.412359712 +0000 UTC m=+17.356673619" watchObservedRunningTime="2025-11-06 23:38:33.413629713 +0000 UTC m=+17.357943621" Nov 6 23:38:34.392092 kubelet[2602]: E1106 23:38:34.392042 2602 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 23:38:35.123473 systemd-networkd[1382]: cilium_host: Link UP Nov 6 23:38:35.125090 systemd-networkd[1382]: cilium_net: Link UP Nov 6 23:38:35.127217 systemd-networkd[1382]: cilium_net: Gained carrier Nov 6 23:38:35.127630 systemd-networkd[1382]: cilium_host: Gained carrier Nov 6 23:38:35.128276 systemd-networkd[1382]: cilium_net: Gained IPv6LL Nov 6 23:38:35.128595 systemd-networkd[1382]: cilium_host: Gained IPv6LL Nov 6 23:38:35.274621 systemd-networkd[1382]: cilium_vxlan: Link UP Nov 6 23:38:35.274631 systemd-networkd[1382]: cilium_vxlan: Gained carrier Nov 6 23:38:35.394373 kubelet[2602]: E1106 23:38:35.394196 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 23:38:35.708653 kernel: NET: Registered PF_ALG protocol family Nov 6 23:38:36.810043 systemd-networkd[1382]: lxc_health: Link UP Nov 6 23:38:36.825446 systemd-networkd[1382]: lxc_health: Gained carrier Nov 6 23:38:37.225260 kernel: eth0: renamed from tmp3691f Nov 6 23:38:37.233427 systemd-networkd[1382]: lxca396cdfb5569: Link UP Nov 6 23:38:37.242998 systemd-networkd[1382]: lxc9b57ee680e79: Link UP Nov 6 23:38:37.245260 kernel: eth0: renamed from tmpcea61 Nov 6 23:38:37.246669 systemd-networkd[1382]: lxca396cdfb5569: Gained carrier Nov 6 23:38:37.251407 systemd-networkd[1382]: lxc9b57ee680e79: Gained carrier Nov 6 23:38:37.289296 systemd-networkd[1382]: cilium_vxlan: Gained IPv6LL Nov 6 23:38:37.462662 kubelet[2602]: E1106 23:38:37.462608 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 23:38:38.243472 systemd-networkd[1382]: lxc_health: Gained IPv6LL Nov 6 23:38:38.627493 systemd-networkd[1382]: lxc9b57ee680e79: Gained IPv6LL Nov 6 23:38:38.691427 systemd-networkd[1382]: lxca396cdfb5569: Gained IPv6LL Nov 6 23:38:42.155840 containerd[1474]: time="2025-11-06T23:38:42.155677849Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 6 23:38:42.155840 containerd[1474]: time="2025-11-06T23:38:42.155761507Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 6 23:38:42.155840 containerd[1474]: time="2025-11-06T23:38:42.155792627Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 6 23:38:42.159167 containerd[1474]: time="2025-11-06T23:38:42.158859564Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 6 23:38:42.162652 containerd[1474]: time="2025-11-06T23:38:42.162379339Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 6 23:38:42.163248 containerd[1474]: time="2025-11-06T23:38:42.163143438Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 6 23:38:42.163680 containerd[1474]: time="2025-11-06T23:38:42.163217255Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 6 23:38:42.179404 containerd[1474]: time="2025-11-06T23:38:42.174364113Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 6 23:38:42.241012 systemd[1]: run-containerd-runc-k8s.io-3691fbbd2cf8eb3b486a8b3111a6bb59b945226730ecf33829fe4a7bdb7b4df0-runc.2QfCum.mount: Deactivated successfully. Nov 6 23:38:42.254494 systemd[1]: Started cri-containerd-3691fbbd2cf8eb3b486a8b3111a6bb59b945226730ecf33829fe4a7bdb7b4df0.scope - libcontainer container 3691fbbd2cf8eb3b486a8b3111a6bb59b945226730ecf33829fe4a7bdb7b4df0. Nov 6 23:38:42.257951 systemd[1]: Started cri-containerd-cea61c1abb3f182e7929104b1e8c8427b2e428222be02d03331d150a47d27609.scope - libcontainer container cea61c1abb3f182e7929104b1e8c8427b2e428222be02d03331d150a47d27609. Nov 6 23:38:42.369342 containerd[1474]: time="2025-11-06T23:38:42.369185117Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fz9z7,Uid:0487f1a6-b567-41c3-a685-874b69b7567e,Namespace:kube-system,Attempt:0,} returns sandbox id \"3691fbbd2cf8eb3b486a8b3111a6bb59b945226730ecf33829fe4a7bdb7b4df0\"" Nov 6 23:38:42.373541 kubelet[2602]: E1106 23:38:42.373497 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 23:38:42.381014 containerd[1474]: time="2025-11-06T23:38:42.380864572Z" level=info msg="CreateContainer within sandbox \"3691fbbd2cf8eb3b486a8b3111a6bb59b945226730ecf33829fe4a7bdb7b4df0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 6 23:38:42.383981 containerd[1474]: time="2025-11-06T23:38:42.383936860Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-7gkr9,Uid:8623b039-7a6f-48ae-a3af-3e4f63d72593,Namespace:kube-system,Attempt:0,} returns sandbox id \"cea61c1abb3f182e7929104b1e8c8427b2e428222be02d03331d150a47d27609\"" Nov 6 23:38:42.385969 kubelet[2602]: E1106 23:38:42.385927 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 23:38:42.390893 containerd[1474]: time="2025-11-06T23:38:42.390621126Z" level=info msg="CreateContainer within sandbox \"cea61c1abb3f182e7929104b1e8c8427b2e428222be02d03331d150a47d27609\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 6 23:38:42.410284 containerd[1474]: time="2025-11-06T23:38:42.409718886Z" level=info msg="CreateContainer within sandbox \"cea61c1abb3f182e7929104b1e8c8427b2e428222be02d03331d150a47d27609\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a893e3b9c10c63fd317c244ee8c5b97422d2a453db4c46804cb701ddbc9f948e\"" Nov 6 23:38:42.414057 containerd[1474]: time="2025-11-06T23:38:42.411164833Z" level=info msg="StartContainer for \"a893e3b9c10c63fd317c244ee8c5b97422d2a453db4c46804cb701ddbc9f948e\"" Nov 6 23:38:42.420048 containerd[1474]: time="2025-11-06T23:38:42.419987976Z" level=info msg="CreateContainer within sandbox \"3691fbbd2cf8eb3b486a8b3111a6bb59b945226730ecf33829fe4a7bdb7b4df0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8678ca0211a4200fb0e8089e71a7d48b9ac4185bdc1d03ea356f911273a5d721\"" Nov 6 23:38:42.424262 containerd[1474]: time="2025-11-06T23:38:42.423434527Z" level=info msg="StartContainer for 
\"8678ca0211a4200fb0e8089e71a7d48b9ac4185bdc1d03ea356f911273a5d721\"" Nov 6 23:38:42.466033 systemd[1]: Started cri-containerd-a893e3b9c10c63fd317c244ee8c5b97422d2a453db4c46804cb701ddbc9f948e.scope - libcontainer container a893e3b9c10c63fd317c244ee8c5b97422d2a453db4c46804cb701ddbc9f948e. Nov 6 23:38:42.478549 systemd[1]: Started cri-containerd-8678ca0211a4200fb0e8089e71a7d48b9ac4185bdc1d03ea356f911273a5d721.scope - libcontainer container 8678ca0211a4200fb0e8089e71a7d48b9ac4185bdc1d03ea356f911273a5d721. Nov 6 23:38:42.534485 containerd[1474]: time="2025-11-06T23:38:42.534264165Z" level=info msg="StartContainer for \"a893e3b9c10c63fd317c244ee8c5b97422d2a453db4c46804cb701ddbc9f948e\" returns successfully" Nov 6 23:38:42.534485 containerd[1474]: time="2025-11-06T23:38:42.534364175Z" level=info msg="StartContainer for \"8678ca0211a4200fb0e8089e71a7d48b9ac4185bdc1d03ea356f911273a5d721\" returns successfully" Nov 6 23:38:43.432169 kubelet[2602]: E1106 23:38:43.431706 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 23:38:43.437128 kubelet[2602]: E1106 23:38:43.437053 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 23:38:43.459112 kubelet[2602]: I1106 23:38:43.458608 2602 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-7gkr9" podStartSLOduration=22.458571214 podStartE2EDuration="22.458571214s" podCreationTimestamp="2025-11-06 23:38:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 23:38:43.454915513 +0000 UTC m=+27.399229420" watchObservedRunningTime="2025-11-06 23:38:43.458571214 +0000 UTC m=+27.402885125" Nov 6 23:38:44.439772 kubelet[2602]: E1106 23:38:44.439480 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 23:38:44.439772 kubelet[2602]: E1106 23:38:44.439655 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 23:38:45.441720 kubelet[2602]: E1106 23:38:45.441514 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 23:38:45.441720 kubelet[2602]: E1106 23:38:45.441596 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 23:38:48.375329 kubelet[2602]: I1106 23:38:48.374476 2602 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 6 23:38:48.375329 kubelet[2602]: E1106 23:38:48.375063 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 23:38:48.397236 kubelet[2602]: I1106 23:38:48.397150 2602 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-fz9z7" 
podStartSLOduration=27.397132107 podStartE2EDuration="27.397132107s" podCreationTimestamp="2025-11-06 23:38:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 23:38:43.538469429 +0000 UTC m=+27.482783336" watchObservedRunningTime="2025-11-06 23:38:48.397132107 +0000 UTC m=+32.341446015" Nov 6 23:38:48.452161 kubelet[2602]: E1106 23:38:48.452120 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 23:39:02.925793 systemd[1]: Started sshd@7-164.92.114.154:22-147.75.109.163:60290.service - OpenSSH per-connection server daemon (147.75.109.163:60290). Nov 6 23:39:03.049097 sshd[4007]: Accepted publickey for core from 147.75.109.163 port 60290 ssh2: RSA SHA256:Z2OWs1J4lzJbb37sVDKV9BLSPLNt86phBIrOXH84t9A Nov 6 23:39:03.051938 sshd-session[4007]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:39:03.068342 systemd-logind[1460]: New session 8 of user core. Nov 6 23:39:03.077619 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 6 23:39:03.746992 sshd[4009]: Connection closed by 147.75.109.163 port 60290 Nov 6 23:39:03.748469 sshd-session[4007]: pam_unix(sshd:session): session closed for user core Nov 6 23:39:03.753002 systemd-logind[1460]: Session 8 logged out. Waiting for processes to exit. Nov 6 23:39:03.753551 systemd[1]: sshd@7-164.92.114.154:22-147.75.109.163:60290.service: Deactivated successfully. Nov 6 23:39:03.757098 systemd[1]: session-8.scope: Deactivated successfully. Nov 6 23:39:03.760004 systemd-logind[1460]: Removed session 8. Nov 6 23:39:08.773737 systemd[1]: Started sshd@8-164.92.114.154:22-147.75.109.163:60296.service - OpenSSH per-connection server daemon (147.75.109.163:60296). Nov 6 23:39:08.829499 sshd[4024]: Accepted publickey for core from 147.75.109.163 port 60296 ssh2: RSA SHA256:Z2OWs1J4lzJbb37sVDKV9BLSPLNt86phBIrOXH84t9A Nov 6 23:39:08.831624 sshd-session[4024]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:39:08.839437 systemd-logind[1460]: New session 9 of user core. Nov 6 23:39:08.845552 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 6 23:39:09.028149 sshd[4026]: Connection closed by 147.75.109.163 port 60296 Nov 6 23:39:09.029182 sshd-session[4024]: pam_unix(sshd:session): session closed for user core Nov 6 23:39:09.035918 systemd[1]: sshd@8-164.92.114.154:22-147.75.109.163:60296.service: Deactivated successfully. Nov 6 23:39:09.039975 systemd[1]: session-9.scope: Deactivated successfully. Nov 6 23:39:09.041608 systemd-logind[1460]: Session 9 logged out. Waiting for processes to exit. Nov 6 23:39:09.043850 systemd-logind[1460]: Removed session 9. Nov 6 23:39:14.048598 systemd[1]: Started sshd@9-164.92.114.154:22-147.75.109.163:58350.service - OpenSSH per-connection server daemon (147.75.109.163:58350). Nov 6 23:39:14.104853 sshd[4041]: Accepted publickey for core from 147.75.109.163 port 58350 ssh2: RSA SHA256:Z2OWs1J4lzJbb37sVDKV9BLSPLNt86phBIrOXH84t9A Nov 6 23:39:14.106659 sshd-session[4041]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:39:14.113254 systemd-logind[1460]: New session 10 of user core. Nov 6 23:39:14.127643 systemd[1]: Started session-10.scope - Session 10 of User core. 
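The repeated kubelet dns.go warnings above come from the per-pod resolv.conf that kubelet generates: it applies at most three nameserver entries (the glibc resolver limit) and warns when entries from the node's resolv.conf are dropped; the applied line here even repeats 67.207.67.3. A minimal sketch of that cap, assuming a hypothetical node resolv.conf (only the applied line is visible in the log) and not reproducing kubelet's actual code:

```go
// Minimal sketch of the nameserver cap behind kubelet's
// "Nameserver limits exceeded" warning. This is NOT kubelet's code;
// the host entries below are hypothetical. Kubelet applies at most
// three nameservers per pod resolv.conf (the glibc resolver limit)
// and logs a warning when entries are dropped.
package main

import "fmt"

const maxNameservers = 3 // glibc MAXNS; kubelet warns beyond this

// applyNameserverLimit returns the nameservers that would be applied
// and whether any were omitted.
func applyNameserverLimit(ns []string) ([]string, bool) {
	if len(ns) <= maxNameservers {
		return ns, false
	}
	return ns[:maxNameservers], true
}

func main() {
	// Hypothetical host /etc/resolv.conf nameservers; the log only shows
	// the applied line: 67.207.67.3 67.207.67.2 67.207.67.3.
	host := []string{"67.207.67.3", "67.207.67.2", "67.207.67.3", "8.8.8.8"}
	applied, omitted := applyNameserverLimit(host)
	fmt.Printf("applied nameserver line: %v (entries omitted: %v)\n", applied, omitted)
}
```

In practice this message usually points at duplicate or extra nameserver lines in the node's /etc/resolv.conf rather than at the coredns pods that happen to trigger it.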
Nov 6 23:39:14.278611 sshd[4043]: Connection closed by 147.75.109.163 port 58350 Nov 6 23:39:14.278980 sshd-session[4041]: pam_unix(sshd:session): session closed for user core Nov 6 23:39:14.284455 systemd[1]: sshd@9-164.92.114.154:22-147.75.109.163:58350.service: Deactivated successfully. Nov 6 23:39:14.288662 systemd[1]: session-10.scope: Deactivated successfully. Nov 6 23:39:14.291671 systemd-logind[1460]: Session 10 logged out. Waiting for processes to exit. Nov 6 23:39:14.292902 systemd-logind[1460]: Removed session 10. Nov 6 23:39:19.305280 systemd[1]: Started sshd@10-164.92.114.154:22-147.75.109.163:58358.service - OpenSSH per-connection server daemon (147.75.109.163:58358). Nov 6 23:39:19.378572 sshd[4058]: Accepted publickey for core from 147.75.109.163 port 58358 ssh2: RSA SHA256:Z2OWs1J4lzJbb37sVDKV9BLSPLNt86phBIrOXH84t9A Nov 6 23:39:19.381513 sshd-session[4058]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:39:19.390429 systemd-logind[1460]: New session 11 of user core. Nov 6 23:39:19.395558 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 6 23:39:19.569578 sshd[4060]: Connection closed by 147.75.109.163 port 58358 Nov 6 23:39:19.569334 sshd-session[4058]: pam_unix(sshd:session): session closed for user core Nov 6 23:39:19.586977 systemd[1]: sshd@10-164.92.114.154:22-147.75.109.163:58358.service: Deactivated successfully. Nov 6 23:39:19.590187 systemd[1]: session-11.scope: Deactivated successfully. Nov 6 23:39:19.593244 systemd-logind[1460]: Session 11 logged out. Waiting for processes to exit. Nov 6 23:39:19.598849 systemd[1]: Started sshd@11-164.92.114.154:22-147.75.109.163:58366.service - OpenSSH per-connection server daemon (147.75.109.163:58366). Nov 6 23:39:19.601304 systemd-logind[1460]: Removed session 11. Nov 6 23:39:19.671650 sshd[4072]: Accepted publickey for core from 147.75.109.163 port 58366 ssh2: RSA SHA256:Z2OWs1J4lzJbb37sVDKV9BLSPLNt86phBIrOXH84t9A Nov 6 23:39:19.675716 sshd-session[4072]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:39:19.683412 systemd-logind[1460]: New session 12 of user core. Nov 6 23:39:19.691582 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 6 23:39:19.979291 sshd[4075]: Connection closed by 147.75.109.163 port 58366 Nov 6 23:39:19.979784 sshd-session[4072]: pam_unix(sshd:session): session closed for user core Nov 6 23:39:20.004778 systemd[1]: sshd@11-164.92.114.154:22-147.75.109.163:58366.service: Deactivated successfully. Nov 6 23:39:20.011197 systemd[1]: session-12.scope: Deactivated successfully. Nov 6 23:39:20.013134 systemd-logind[1460]: Session 12 logged out. Waiting for processes to exit. Nov 6 23:39:20.027425 systemd[1]: Started sshd@12-164.92.114.154:22-147.75.109.163:58378.service - OpenSSH per-connection server daemon (147.75.109.163:58378). Nov 6 23:39:20.033504 systemd-logind[1460]: Removed session 12. Nov 6 23:39:20.125796 sshd[4083]: Accepted publickey for core from 147.75.109.163 port 58378 ssh2: RSA SHA256:Z2OWs1J4lzJbb37sVDKV9BLSPLNt86phBIrOXH84t9A Nov 6 23:39:20.128008 sshd-session[4083]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:39:20.135097 systemd-logind[1460]: New session 13 of user core. Nov 6 23:39:20.141652 systemd[1]: Started session-13.scope - Session 13 of User core. 
Nov 6 23:39:20.295756 sshd[4086]: Connection closed by 147.75.109.163 port 58378 Nov 6 23:39:20.296997 sshd-session[4083]: pam_unix(sshd:session): session closed for user core Nov 6 23:39:20.306157 systemd[1]: sshd@12-164.92.114.154:22-147.75.109.163:58378.service: Deactivated successfully. Nov 6 23:39:20.313175 systemd[1]: session-13.scope: Deactivated successfully. Nov 6 23:39:20.316077 systemd-logind[1460]: Session 13 logged out. Waiting for processes to exit. Nov 6 23:39:20.318897 systemd-logind[1460]: Removed session 13. Nov 6 23:39:23.259914 kubelet[2602]: E1106 23:39:23.259776 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 23:39:25.259594 kubelet[2602]: E1106 23:39:25.259527 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 23:39:25.319653 systemd[1]: Started sshd@13-164.92.114.154:22-147.75.109.163:57580.service - OpenSSH per-connection server daemon (147.75.109.163:57580). Nov 6 23:39:25.378316 sshd[4101]: Accepted publickey for core from 147.75.109.163 port 57580 ssh2: RSA SHA256:Z2OWs1J4lzJbb37sVDKV9BLSPLNt86phBIrOXH84t9A Nov 6 23:39:25.380490 sshd-session[4101]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:39:25.387337 systemd-logind[1460]: New session 14 of user core. Nov 6 23:39:25.392528 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 6 23:39:25.541343 sshd[4103]: Connection closed by 147.75.109.163 port 57580 Nov 6 23:39:25.540140 sshd-session[4101]: pam_unix(sshd:session): session closed for user core Nov 6 23:39:25.544699 systemd-logind[1460]: Session 14 logged out. Waiting for processes to exit. Nov 6 23:39:25.545218 systemd[1]: sshd@13-164.92.114.154:22-147.75.109.163:57580.service: Deactivated successfully. Nov 6 23:39:25.548476 systemd[1]: session-14.scope: Deactivated successfully. Nov 6 23:39:25.551015 systemd-logind[1460]: Removed session 14. Nov 6 23:39:30.553308 systemd[1]: Started sshd@14-164.92.114.154:22-147.75.109.163:41914.service - OpenSSH per-connection server daemon (147.75.109.163:41914). Nov 6 23:39:30.618709 sshd[4115]: Accepted publickey for core from 147.75.109.163 port 41914 ssh2: RSA SHA256:Z2OWs1J4lzJbb37sVDKV9BLSPLNt86phBIrOXH84t9A Nov 6 23:39:30.620483 sshd-session[4115]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:39:30.625898 systemd-logind[1460]: New session 15 of user core. Nov 6 23:39:30.632515 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 6 23:39:30.764839 sshd[4117]: Connection closed by 147.75.109.163 port 41914 Nov 6 23:39:30.764718 sshd-session[4115]: pam_unix(sshd:session): session closed for user core Nov 6 23:39:30.769975 systemd[1]: sshd@14-164.92.114.154:22-147.75.109.163:41914.service: Deactivated successfully. Nov 6 23:39:30.773296 systemd[1]: session-15.scope: Deactivated successfully. Nov 6 23:39:30.775560 systemd-logind[1460]: Session 15 logged out. Waiting for processes to exit. Nov 6 23:39:30.776765 systemd-logind[1460]: Removed session 15. 
Nov 6 23:39:34.260335 kubelet[2602]: E1106 23:39:34.259801 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 23:39:35.791701 systemd[1]: Started sshd@15-164.92.114.154:22-147.75.109.163:41916.service - OpenSSH per-connection server daemon (147.75.109.163:41916). Nov 6 23:39:35.850161 sshd[4128]: Accepted publickey for core from 147.75.109.163 port 41916 ssh2: RSA SHA256:Z2OWs1J4lzJbb37sVDKV9BLSPLNt86phBIrOXH84t9A Nov 6 23:39:35.852257 sshd-session[4128]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:39:35.859504 systemd-logind[1460]: New session 16 of user core. Nov 6 23:39:35.865563 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 6 23:39:36.009744 sshd[4130]: Connection closed by 147.75.109.163 port 41916 Nov 6 23:39:36.010568 sshd-session[4128]: pam_unix(sshd:session): session closed for user core Nov 6 23:39:36.025462 systemd[1]: sshd@15-164.92.114.154:22-147.75.109.163:41916.service: Deactivated successfully. Nov 6 23:39:36.028768 systemd[1]: session-16.scope: Deactivated successfully. Nov 6 23:39:36.031975 systemd-logind[1460]: Session 16 logged out. Waiting for processes to exit. Nov 6 23:39:36.040728 systemd[1]: Started sshd@16-164.92.114.154:22-147.75.109.163:41932.service - OpenSSH per-connection server daemon (147.75.109.163:41932). Nov 6 23:39:36.043433 systemd-logind[1460]: Removed session 16. Nov 6 23:39:36.114268 sshd[4141]: Accepted publickey for core from 147.75.109.163 port 41932 ssh2: RSA SHA256:Z2OWs1J4lzJbb37sVDKV9BLSPLNt86phBIrOXH84t9A Nov 6 23:39:36.116053 sshd-session[4141]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:39:36.123989 systemd-logind[1460]: New session 17 of user core. Nov 6 23:39:36.133644 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 6 23:39:36.477098 sshd[4144]: Connection closed by 147.75.109.163 port 41932 Nov 6 23:39:36.479768 sshd-session[4141]: pam_unix(sshd:session): session closed for user core Nov 6 23:39:36.488869 systemd[1]: sshd@16-164.92.114.154:22-147.75.109.163:41932.service: Deactivated successfully. Nov 6 23:39:36.492148 systemd[1]: session-17.scope: Deactivated successfully. Nov 6 23:39:36.495257 systemd-logind[1460]: Session 17 logged out. Waiting for processes to exit. Nov 6 23:39:36.503824 systemd[1]: Started sshd@17-164.92.114.154:22-147.75.109.163:41942.service - OpenSSH per-connection server daemon (147.75.109.163:41942). Nov 6 23:39:36.506882 systemd-logind[1460]: Removed session 17. Nov 6 23:39:36.584983 sshd[4153]: Accepted publickey for core from 147.75.109.163 port 41942 ssh2: RSA SHA256:Z2OWs1J4lzJbb37sVDKV9BLSPLNt86phBIrOXH84t9A Nov 6 23:39:36.587189 sshd-session[4153]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:39:36.594598 systemd-logind[1460]: New session 18 of user core. Nov 6 23:39:36.603604 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 6 23:39:37.415977 sshd[4156]: Connection closed by 147.75.109.163 port 41942 Nov 6 23:39:37.416883 sshd-session[4153]: pam_unix(sshd:session): session closed for user core Nov 6 23:39:37.432239 systemd[1]: sshd@17-164.92.114.154:22-147.75.109.163:41942.service: Deactivated successfully. Nov 6 23:39:37.434900 systemd[1]: session-18.scope: Deactivated successfully. Nov 6 23:39:37.437900 systemd-logind[1460]: Session 18 logged out. 
Waiting for processes to exit. Nov 6 23:39:37.446884 systemd[1]: Started sshd@18-164.92.114.154:22-147.75.109.163:41944.service - OpenSSH per-connection server daemon (147.75.109.163:41944). Nov 6 23:39:37.454002 systemd-logind[1460]: Removed session 18. Nov 6 23:39:37.522243 sshd[4171]: Accepted publickey for core from 147.75.109.163 port 41944 ssh2: RSA SHA256:Z2OWs1J4lzJbb37sVDKV9BLSPLNt86phBIrOXH84t9A Nov 6 23:39:37.524290 sshd-session[4171]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:39:37.530185 systemd-logind[1460]: New session 19 of user core. Nov 6 23:39:37.537579 systemd[1]: Started session-19.scope - Session 19 of User core. Nov 6 23:39:37.867799 sshd[4175]: Connection closed by 147.75.109.163 port 41944 Nov 6 23:39:37.868765 sshd-session[4171]: pam_unix(sshd:session): session closed for user core Nov 6 23:39:37.888798 systemd[1]: sshd@18-164.92.114.154:22-147.75.109.163:41944.service: Deactivated successfully. Nov 6 23:39:37.893918 systemd[1]: session-19.scope: Deactivated successfully. Nov 6 23:39:37.900166 systemd-logind[1460]: Session 19 logged out. Waiting for processes to exit. Nov 6 23:39:37.910300 systemd[1]: Started sshd@19-164.92.114.154:22-147.75.109.163:41958.service - OpenSSH per-connection server daemon (147.75.109.163:41958). Nov 6 23:39:37.912420 systemd-logind[1460]: Removed session 19. Nov 6 23:39:37.978428 sshd[4184]: Accepted publickey for core from 147.75.109.163 port 41958 ssh2: RSA SHA256:Z2OWs1J4lzJbb37sVDKV9BLSPLNt86phBIrOXH84t9A Nov 6 23:39:37.980119 sshd-session[4184]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:39:37.987039 systemd-logind[1460]: New session 20 of user core. Nov 6 23:39:37.993535 systemd[1]: Started session-20.scope - Session 20 of User core. Nov 6 23:39:38.153941 sshd[4187]: Connection closed by 147.75.109.163 port 41958 Nov 6 23:39:38.155605 sshd-session[4184]: pam_unix(sshd:session): session closed for user core Nov 6 23:39:38.161955 systemd-logind[1460]: Session 20 logged out. Waiting for processes to exit. Nov 6 23:39:38.163051 systemd[1]: sshd@19-164.92.114.154:22-147.75.109.163:41958.service: Deactivated successfully. Nov 6 23:39:38.165987 systemd[1]: session-20.scope: Deactivated successfully. Nov 6 23:39:38.167264 systemd-logind[1460]: Removed session 20. Nov 6 23:39:43.178688 systemd[1]: Started sshd@20-164.92.114.154:22-147.75.109.163:58272.service - OpenSSH per-connection server daemon (147.75.109.163:58272). Nov 6 23:39:43.247258 sshd[4201]: Accepted publickey for core from 147.75.109.163 port 58272 ssh2: RSA SHA256:Z2OWs1J4lzJbb37sVDKV9BLSPLNt86phBIrOXH84t9A Nov 6 23:39:43.249118 sshd-session[4201]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:39:43.257392 systemd-logind[1460]: New session 21 of user core. Nov 6 23:39:43.267593 systemd[1]: Started session-21.scope - Session 21 of User core. Nov 6 23:39:43.406882 sshd[4203]: Connection closed by 147.75.109.163 port 58272 Nov 6 23:39:43.407672 sshd-session[4201]: pam_unix(sshd:session): session closed for user core Nov 6 23:39:43.411531 systemd-logind[1460]: Session 21 logged out. Waiting for processes to exit. Nov 6 23:39:43.412038 systemd[1]: sshd@20-164.92.114.154:22-147.75.109.163:58272.service: Deactivated successfully. Nov 6 23:39:43.414351 systemd[1]: session-21.scope: Deactivated successfully. Nov 6 23:39:43.418041 systemd-logind[1460]: Removed session 21. 
Nov 6 23:39:45.259208 kubelet[2602]: E1106 23:39:45.259113 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 23:39:48.426711 systemd[1]: Started sshd@21-164.92.114.154:22-147.75.109.163:58278.service - OpenSSH per-connection server daemon (147.75.109.163:58278). Nov 6 23:39:48.498761 sshd[4215]: Accepted publickey for core from 147.75.109.163 port 58278 ssh2: RSA SHA256:Z2OWs1J4lzJbb37sVDKV9BLSPLNt86phBIrOXH84t9A Nov 6 23:39:48.500775 sshd-session[4215]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:39:48.506857 systemd-logind[1460]: New session 22 of user core. Nov 6 23:39:48.510481 systemd[1]: Started session-22.scope - Session 22 of User core. Nov 6 23:39:48.648997 sshd[4217]: Connection closed by 147.75.109.163 port 58278 Nov 6 23:39:48.651494 sshd-session[4215]: pam_unix(sshd:session): session closed for user core Nov 6 23:39:48.659888 systemd[1]: sshd@21-164.92.114.154:22-147.75.109.163:58278.service: Deactivated successfully. Nov 6 23:39:48.664053 systemd[1]: session-22.scope: Deactivated successfully. Nov 6 23:39:48.667332 systemd-logind[1460]: Session 22 logged out. Waiting for processes to exit. Nov 6 23:39:48.669382 systemd-logind[1460]: Removed session 22. Nov 6 23:39:49.262637 kubelet[2602]: E1106 23:39:49.262568 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 23:39:53.679422 systemd[1]: Started sshd@22-164.92.114.154:22-147.75.109.163:33104.service - OpenSSH per-connection server daemon (147.75.109.163:33104). Nov 6 23:39:53.743144 sshd[4230]: Accepted publickey for core from 147.75.109.163 port 33104 ssh2: RSA SHA256:Z2OWs1J4lzJbb37sVDKV9BLSPLNt86phBIrOXH84t9A Nov 6 23:39:53.745651 sshd-session[4230]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:39:53.754433 systemd-logind[1460]: New session 23 of user core. Nov 6 23:39:53.764541 systemd[1]: Started session-23.scope - Session 23 of User core. Nov 6 23:39:53.947314 sshd[4232]: Connection closed by 147.75.109.163 port 33104 Nov 6 23:39:53.948455 sshd-session[4230]: pam_unix(sshd:session): session closed for user core Nov 6 23:39:53.962884 systemd[1]: sshd@22-164.92.114.154:22-147.75.109.163:33104.service: Deactivated successfully. Nov 6 23:39:53.966698 systemd[1]: session-23.scope: Deactivated successfully. Nov 6 23:39:53.970524 systemd-logind[1460]: Session 23 logged out. Waiting for processes to exit. Nov 6 23:39:53.979748 systemd[1]: Started sshd@23-164.92.114.154:22-147.75.109.163:33108.service - OpenSSH per-connection server daemon (147.75.109.163:33108). Nov 6 23:39:53.981867 systemd-logind[1460]: Removed session 23. Nov 6 23:39:54.052153 sshd[4244]: Accepted publickey for core from 147.75.109.163 port 33108 ssh2: RSA SHA256:Z2OWs1J4lzJbb37sVDKV9BLSPLNt86phBIrOXH84t9A Nov 6 23:39:54.053987 sshd-session[4244]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:39:54.064303 systemd-logind[1460]: New session 24 of user core. Nov 6 23:39:54.076648 systemd[1]: Started session-24.scope - Session 24 of User core. 
Nov 6 23:39:55.496459 containerd[1474]: time="2025-11-06T23:39:55.496395129Z" level=info msg="StopContainer for \"eaafaabca44711771648c79018d397aa0e83cad2733c829e28e3cd0a923ce0dd\" with timeout 30 (s)" Nov 6 23:39:55.507580 containerd[1474]: time="2025-11-06T23:39:55.507490021Z" level=info msg="Stop container \"eaafaabca44711771648c79018d397aa0e83cad2733c829e28e3cd0a923ce0dd\" with signal terminated" Nov 6 23:39:55.530484 systemd[1]: run-containerd-runc-k8s.io-df618b658c57ab3dd90e777f47d7da0b74cae2ca7d202ab3bf476f2d11e3c233-runc.baw9d2.mount: Deactivated successfully. Nov 6 23:39:55.555083 systemd[1]: cri-containerd-eaafaabca44711771648c79018d397aa0e83cad2733c829e28e3cd0a923ce0dd.scope: Deactivated successfully. Nov 6 23:39:55.559523 containerd[1474]: time="2025-11-06T23:39:55.559427915Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 6 23:39:55.576555 containerd[1474]: time="2025-11-06T23:39:55.576026040Z" level=info msg="StopContainer for \"df618b658c57ab3dd90e777f47d7da0b74cae2ca7d202ab3bf476f2d11e3c233\" with timeout 2 (s)" Nov 6 23:39:55.576555 containerd[1474]: time="2025-11-06T23:39:55.576454654Z" level=info msg="Stop container \"df618b658c57ab3dd90e777f47d7da0b74cae2ca7d202ab3bf476f2d11e3c233\" with signal terminated" Nov 6 23:39:55.584210 systemd-networkd[1382]: lxc_health: Link DOWN Nov 6 23:39:55.584538 systemd-networkd[1382]: lxc_health: Lost carrier Nov 6 23:39:55.611166 systemd[1]: cri-containerd-df618b658c57ab3dd90e777f47d7da0b74cae2ca7d202ab3bf476f2d11e3c233.scope: Deactivated successfully. Nov 6 23:39:55.613627 systemd[1]: cri-containerd-df618b658c57ab3dd90e777f47d7da0b74cae2ca7d202ab3bf476f2d11e3c233.scope: Consumed 9.014s CPU time, 193M memory peak, 72M read from disk, 13.3M written to disk. Nov 6 23:39:55.628437 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-eaafaabca44711771648c79018d397aa0e83cad2733c829e28e3cd0a923ce0dd-rootfs.mount: Deactivated successfully. Nov 6 23:39:55.634319 containerd[1474]: time="2025-11-06T23:39:55.634205584Z" level=info msg="shim disconnected" id=eaafaabca44711771648c79018d397aa0e83cad2733c829e28e3cd0a923ce0dd namespace=k8s.io Nov 6 23:39:55.634319 containerd[1474]: time="2025-11-06T23:39:55.634313434Z" level=warning msg="cleaning up after shim disconnected" id=eaafaabca44711771648c79018d397aa0e83cad2733c829e28e3cd0a923ce0dd namespace=k8s.io Nov 6 23:39:55.634319 containerd[1474]: time="2025-11-06T23:39:55.634329245Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 6 23:39:55.659602 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-df618b658c57ab3dd90e777f47d7da0b74cae2ca7d202ab3bf476f2d11e3c233-rootfs.mount: Deactivated successfully. 
Nov 6 23:39:55.665446 containerd[1474]: time="2025-11-06T23:39:55.665353498Z" level=info msg="shim disconnected" id=df618b658c57ab3dd90e777f47d7da0b74cae2ca7d202ab3bf476f2d11e3c233 namespace=k8s.io Nov 6 23:39:55.666047 containerd[1474]: time="2025-11-06T23:39:55.665530574Z" level=warning msg="cleaning up after shim disconnected" id=df618b658c57ab3dd90e777f47d7da0b74cae2ca7d202ab3bf476f2d11e3c233 namespace=k8s.io Nov 6 23:39:55.666047 containerd[1474]: time="2025-11-06T23:39:55.665546992Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 6 23:39:55.676562 containerd[1474]: time="2025-11-06T23:39:55.676501290Z" level=info msg="StopContainer for \"eaafaabca44711771648c79018d397aa0e83cad2733c829e28e3cd0a923ce0dd\" returns successfully" Nov 6 23:39:55.710751 containerd[1474]: time="2025-11-06T23:39:55.710510810Z" level=info msg="StopPodSandbox for \"d0143510f844ebc6ffd41a760dfd897bb4c56512dcbda4970ca4cd16cd7762ec\"" Nov 6 23:39:55.714789 containerd[1474]: time="2025-11-06T23:39:55.714523035Z" level=warning msg="cleanup warnings time=\"2025-11-06T23:39:55Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Nov 6 23:39:55.719172 containerd[1474]: time="2025-11-06T23:39:55.713507454Z" level=info msg="Container to stop \"eaafaabca44711771648c79018d397aa0e83cad2733c829e28e3cd0a923ce0dd\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 6 23:39:55.722341 containerd[1474]: time="2025-11-06T23:39:55.720288865Z" level=info msg="StopContainer for \"df618b658c57ab3dd90e777f47d7da0b74cae2ca7d202ab3bf476f2d11e3c233\" returns successfully" Nov 6 23:39:55.725391 containerd[1474]: time="2025-11-06T23:39:55.723001300Z" level=info msg="StopPodSandbox for \"4dd27c27ac0bfe4dcd4c6fa2e22782c857644ccb07b7ee8f909c89b0b1e3755c\"" Nov 6 23:39:55.725769 containerd[1474]: time="2025-11-06T23:39:55.725678311Z" level=info msg="Container to stop \"abb3e8592d9e5533d3470c20d3c235e05a4332468b1e6038445e1c849fddbb35\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 6 23:39:55.726258 containerd[1474]: time="2025-11-06T23:39:55.725945074Z" level=info msg="Container to stop \"835d3258dacd5a26ac87f89c04fda942738116d114c493b4af3291c2ce7121c6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 6 23:39:55.726636 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d0143510f844ebc6ffd41a760dfd897bb4c56512dcbda4970ca4cd16cd7762ec-shm.mount: Deactivated successfully. Nov 6 23:39:55.727460 containerd[1474]: time="2025-11-06T23:39:55.726960530Z" level=info msg="Container to stop \"6737868c5a0fd1a7d3a32cf812e6c3bb773e581e16ce394cb0bc8b057de6a0dc\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 6 23:39:55.729338 containerd[1474]: time="2025-11-06T23:39:55.728696329Z" level=info msg="Container to stop \"df618b658c57ab3dd90e777f47d7da0b74cae2ca7d202ab3bf476f2d11e3c233\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 6 23:39:55.729338 containerd[1474]: time="2025-11-06T23:39:55.728864276Z" level=info msg="Container to stop \"79aaf97a8c559ac20e3213c0de2b909b0f2cf13c5148294ba7a886d9fd4e107d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 6 23:39:55.740634 systemd[1]: cri-containerd-d0143510f844ebc6ffd41a760dfd897bb4c56512dcbda4970ca4cd16cd7762ec.scope: Deactivated successfully. 
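The StopContainer entries above follow the usual CRI graceful-stop pattern: the runtime delivers the stop signal (here "terminated", i.e. SIGTERM), waits up to the per-call timeout (30 s for one container, 2 s for the cilium agent), and force-kills the process if it has not exited, after which the cri-containerd-*.scope units are deactivated. A rough illustration of that pattern, assuming a plain child process rather than containerd's actual runtime code:

```go
// Rough sketch of the graceful-stop pattern visible in the
// StopContainer log entries above (SIGTERM, wait up to a timeout,
// then SIGKILL). This is an illustration, not containerd's code.
package main

import (
	"fmt"
	"os/exec"
	"syscall"
	"time"
)

// stopWithTimeout asks a process to exit and force-kills it if it is
// still running after the grace period.
func stopWithTimeout(cmd *exec.Cmd, grace time.Duration) error {
	if err := cmd.Process.Signal(syscall.SIGTERM); err != nil {
		return err
	}
	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()
	select {
	case err := <-done:
		return err // exited within the grace period
	case <-time.After(grace):
		_ = cmd.Process.Kill() // escalate to SIGKILL
		return <-done
	}
}

func main() {
	cmd := exec.Command("sleep", "60")
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	// 2s mirrors the "with timeout 2 (s)" entry for the cilium agent container.
	fmt.Println("stop result:", stopWithTimeout(cmd, 2*time.Second))
}
```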
Nov 6 23:39:55.754010 systemd[1]: cri-containerd-4dd27c27ac0bfe4dcd4c6fa2e22782c857644ccb07b7ee8f909c89b0b1e3755c.scope: Deactivated successfully. Nov 6 23:39:55.792269 containerd[1474]: time="2025-11-06T23:39:55.791907413Z" level=info msg="shim disconnected" id=d0143510f844ebc6ffd41a760dfd897bb4c56512dcbda4970ca4cd16cd7762ec namespace=k8s.io Nov 6 23:39:55.792269 containerd[1474]: time="2025-11-06T23:39:55.791995572Z" level=warning msg="cleaning up after shim disconnected" id=d0143510f844ebc6ffd41a760dfd897bb4c56512dcbda4970ca4cd16cd7762ec namespace=k8s.io Nov 6 23:39:55.792269 containerd[1474]: time="2025-11-06T23:39:55.792008361Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 6 23:39:55.793911 containerd[1474]: time="2025-11-06T23:39:55.793738783Z" level=info msg="shim disconnected" id=4dd27c27ac0bfe4dcd4c6fa2e22782c857644ccb07b7ee8f909c89b0b1e3755c namespace=k8s.io Nov 6 23:39:55.793911 containerd[1474]: time="2025-11-06T23:39:55.793809342Z" level=warning msg="cleaning up after shim disconnected" id=4dd27c27ac0bfe4dcd4c6fa2e22782c857644ccb07b7ee8f909c89b0b1e3755c namespace=k8s.io Nov 6 23:39:55.793911 containerd[1474]: time="2025-11-06T23:39:55.793820774Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 6 23:39:55.821086 containerd[1474]: time="2025-11-06T23:39:55.820893921Z" level=info msg="TearDown network for sandbox \"d0143510f844ebc6ffd41a760dfd897bb4c56512dcbda4970ca4cd16cd7762ec\" successfully" Nov 6 23:39:55.821086 containerd[1474]: time="2025-11-06T23:39:55.820944701Z" level=info msg="StopPodSandbox for \"d0143510f844ebc6ffd41a760dfd897bb4c56512dcbda4970ca4cd16cd7762ec\" returns successfully" Nov 6 23:39:55.829007 containerd[1474]: time="2025-11-06T23:39:55.828496438Z" level=info msg="TearDown network for sandbox \"4dd27c27ac0bfe4dcd4c6fa2e22782c857644ccb07b7ee8f909c89b0b1e3755c\" successfully" Nov 6 23:39:55.829007 containerd[1474]: time="2025-11-06T23:39:55.828536360Z" level=info msg="StopPodSandbox for \"4dd27c27ac0bfe4dcd4c6fa2e22782c857644ccb07b7ee8f909c89b0b1e3755c\" returns successfully" Nov 6 23:39:55.976742 kubelet[2602]: I1106 23:39:55.976279 2602 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fc763845-118d-466b-9e2e-8414a02a094e-xtables-lock\") pod \"fc763845-118d-466b-9e2e-8414a02a094e\" (UID: \"fc763845-118d-466b-9e2e-8414a02a094e\") " Nov 6 23:39:55.976742 kubelet[2602]: I1106 23:39:55.976341 2602 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/fc763845-118d-466b-9e2e-8414a02a094e-cni-path\") pod \"fc763845-118d-466b-9e2e-8414a02a094e\" (UID: \"fc763845-118d-466b-9e2e-8414a02a094e\") " Nov 6 23:39:55.976742 kubelet[2602]: I1106 23:39:55.976369 2602 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/fc763845-118d-466b-9e2e-8414a02a094e-clustermesh-secrets\") pod \"fc763845-118d-466b-9e2e-8414a02a094e\" (UID: \"fc763845-118d-466b-9e2e-8414a02a094e\") " Nov 6 23:39:55.976742 kubelet[2602]: I1106 23:39:55.976386 2602 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fc763845-118d-466b-9e2e-8414a02a094e-etc-cni-netd\") pod \"fc763845-118d-466b-9e2e-8414a02a094e\" (UID: \"fc763845-118d-466b-9e2e-8414a02a094e\") " Nov 6 23:39:55.976742 kubelet[2602]: I1106 23:39:55.976402 2602 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/fc763845-118d-466b-9e2e-8414a02a094e-cilium-cgroup\") pod \"fc763845-118d-466b-9e2e-8414a02a094e\" (UID: \"fc763845-118d-466b-9e2e-8414a02a094e\") " Nov 6 23:39:55.976742 kubelet[2602]: I1106 23:39:55.976424 2602 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ec8dd16a-f84c-4512-8367-2001ee2ca9e1-cilium-config-path\") pod \"ec8dd16a-f84c-4512-8367-2001ee2ca9e1\" (UID: \"ec8dd16a-f84c-4512-8367-2001ee2ca9e1\") " Nov 6 23:39:55.977430 kubelet[2602]: I1106 23:39:55.976442 2602 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/fc763845-118d-466b-9e2e-8414a02a094e-bpf-maps\") pod \"fc763845-118d-466b-9e2e-8414a02a094e\" (UID: \"fc763845-118d-466b-9e2e-8414a02a094e\") " Nov 6 23:39:55.977430 kubelet[2602]: I1106 23:39:55.976459 2602 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xm2b7\" (UniqueName: \"kubernetes.io/projected/ec8dd16a-f84c-4512-8367-2001ee2ca9e1-kube-api-access-xm2b7\") pod \"ec8dd16a-f84c-4512-8367-2001ee2ca9e1\" (UID: \"ec8dd16a-f84c-4512-8367-2001ee2ca9e1\") " Nov 6 23:39:55.977430 kubelet[2602]: I1106 23:39:55.976480 2602 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/fc763845-118d-466b-9e2e-8414a02a094e-cilium-run\") pod \"fc763845-118d-466b-9e2e-8414a02a094e\" (UID: \"fc763845-118d-466b-9e2e-8414a02a094e\") " Nov 6 23:39:55.977430 kubelet[2602]: I1106 23:39:55.976497 2602 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gfwvm\" (UniqueName: \"kubernetes.io/projected/fc763845-118d-466b-9e2e-8414a02a094e-kube-api-access-gfwvm\") pod \"fc763845-118d-466b-9e2e-8414a02a094e\" (UID: \"fc763845-118d-466b-9e2e-8414a02a094e\") " Nov 6 23:39:55.977430 kubelet[2602]: I1106 23:39:55.976512 2602 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/fc763845-118d-466b-9e2e-8414a02a094e-host-proc-sys-net\") pod \"fc763845-118d-466b-9e2e-8414a02a094e\" (UID: \"fc763845-118d-466b-9e2e-8414a02a094e\") " Nov 6 23:39:55.977430 kubelet[2602]: I1106 23:39:55.976528 2602 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/fc763845-118d-466b-9e2e-8414a02a094e-hubble-tls\") pod \"fc763845-118d-466b-9e2e-8414a02a094e\" (UID: \"fc763845-118d-466b-9e2e-8414a02a094e\") " Nov 6 23:39:55.977583 kubelet[2602]: I1106 23:39:55.976575 2602 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fc763845-118d-466b-9e2e-8414a02a094e-cilium-config-path\") pod \"fc763845-118d-466b-9e2e-8414a02a094e\" (UID: \"fc763845-118d-466b-9e2e-8414a02a094e\") " Nov 6 23:39:55.977583 kubelet[2602]: I1106 23:39:55.976590 2602 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/fc763845-118d-466b-9e2e-8414a02a094e-hostproc\") pod \"fc763845-118d-466b-9e2e-8414a02a094e\" (UID: \"fc763845-118d-466b-9e2e-8414a02a094e\") " Nov 6 23:39:55.977583 kubelet[2602]: I1106 23:39:55.976606 2602 reconciler_common.go:162] "operationExecutor.UnmountVolume started for 
volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/fc763845-118d-466b-9e2e-8414a02a094e-host-proc-sys-kernel\") pod \"fc763845-118d-466b-9e2e-8414a02a094e\" (UID: \"fc763845-118d-466b-9e2e-8414a02a094e\") " Nov 6 23:39:55.977583 kubelet[2602]: I1106 23:39:55.976623 2602 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fc763845-118d-466b-9e2e-8414a02a094e-lib-modules\") pod \"fc763845-118d-466b-9e2e-8414a02a094e\" (UID: \"fc763845-118d-466b-9e2e-8414a02a094e\") " Nov 6 23:39:55.981970 kubelet[2602]: I1106 23:39:55.980671 2602 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fc763845-118d-466b-9e2e-8414a02a094e-cni-path" (OuterVolumeSpecName: "cni-path") pod "fc763845-118d-466b-9e2e-8414a02a094e" (UID: "fc763845-118d-466b-9e2e-8414a02a094e"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 6 23:39:55.985099 kubelet[2602]: I1106 23:39:55.979670 2602 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fc763845-118d-466b-9e2e-8414a02a094e-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "fc763845-118d-466b-9e2e-8414a02a094e" (UID: "fc763845-118d-466b-9e2e-8414a02a094e"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 6 23:39:55.985099 kubelet[2602]: I1106 23:39:55.985032 2602 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fc763845-118d-466b-9e2e-8414a02a094e-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "fc763845-118d-466b-9e2e-8414a02a094e" (UID: "fc763845-118d-466b-9e2e-8414a02a094e"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 6 23:39:55.985099 kubelet[2602]: I1106 23:39:55.985083 2602 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fc763845-118d-466b-9e2e-8414a02a094e-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "fc763845-118d-466b-9e2e-8414a02a094e" (UID: "fc763845-118d-466b-9e2e-8414a02a094e"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 6 23:39:55.985099 kubelet[2602]: I1106 23:39:55.985102 2602 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fc763845-118d-466b-9e2e-8414a02a094e-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "fc763845-118d-466b-9e2e-8414a02a094e" (UID: "fc763845-118d-466b-9e2e-8414a02a094e"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 6 23:39:55.985099 kubelet[2602]: I1106 23:39:55.985117 2602 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fc763845-118d-466b-9e2e-8414a02a094e-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "fc763845-118d-466b-9e2e-8414a02a094e" (UID: "fc763845-118d-466b-9e2e-8414a02a094e"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 6 23:39:55.994312 kubelet[2602]: I1106 23:39:55.993433 2602 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec8dd16a-f84c-4512-8367-2001ee2ca9e1-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ec8dd16a-f84c-4512-8367-2001ee2ca9e1" (UID: "ec8dd16a-f84c-4512-8367-2001ee2ca9e1"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 6 23:39:55.994312 kubelet[2602]: I1106 23:39:55.993533 2602 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fc763845-118d-466b-9e2e-8414a02a094e-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "fc763845-118d-466b-9e2e-8414a02a094e" (UID: "fc763845-118d-466b-9e2e-8414a02a094e"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 6 23:39:55.996693 kubelet[2602]: I1106 23:39:55.996438 2602 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc763845-118d-466b-9e2e-8414a02a094e-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "fc763845-118d-466b-9e2e-8414a02a094e" (UID: "fc763845-118d-466b-9e2e-8414a02a094e"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 6 23:39:55.999817 kubelet[2602]: I1106 23:39:55.999770 2602 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec8dd16a-f84c-4512-8367-2001ee2ca9e1-kube-api-access-xm2b7" (OuterVolumeSpecName: "kube-api-access-xm2b7") pod "ec8dd16a-f84c-4512-8367-2001ee2ca9e1" (UID: "ec8dd16a-f84c-4512-8367-2001ee2ca9e1"). InnerVolumeSpecName "kube-api-access-xm2b7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 6 23:39:56.002275 kubelet[2602]: I1106 23:39:56.002214 2602 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc763845-118d-466b-9e2e-8414a02a094e-kube-api-access-gfwvm" (OuterVolumeSpecName: "kube-api-access-gfwvm") pod "fc763845-118d-466b-9e2e-8414a02a094e" (UID: "fc763845-118d-466b-9e2e-8414a02a094e"). InnerVolumeSpecName "kube-api-access-gfwvm". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 6 23:39:56.002948 kubelet[2602]: I1106 23:39:56.002479 2602 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fc763845-118d-466b-9e2e-8414a02a094e-hostproc" (OuterVolumeSpecName: "hostproc") pod "fc763845-118d-466b-9e2e-8414a02a094e" (UID: "fc763845-118d-466b-9e2e-8414a02a094e"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 6 23:39:56.003061 kubelet[2602]: I1106 23:39:56.002591 2602 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc763845-118d-466b-9e2e-8414a02a094e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "fc763845-118d-466b-9e2e-8414a02a094e" (UID: "fc763845-118d-466b-9e2e-8414a02a094e"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 6 23:39:56.006454 kubelet[2602]: I1106 23:39:56.002623 2602 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fc763845-118d-466b-9e2e-8414a02a094e-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "fc763845-118d-466b-9e2e-8414a02a094e" (UID: "fc763845-118d-466b-9e2e-8414a02a094e"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 6 23:39:56.007629 kubelet[2602]: I1106 23:39:56.006687 2602 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fc763845-118d-466b-9e2e-8414a02a094e-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "fc763845-118d-466b-9e2e-8414a02a094e" (UID: "fc763845-118d-466b-9e2e-8414a02a094e"). 
InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 6 23:39:56.008284 kubelet[2602]: I1106 23:39:56.008252 2602 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc763845-118d-466b-9e2e-8414a02a094e-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "fc763845-118d-466b-9e2e-8414a02a094e" (UID: "fc763845-118d-466b-9e2e-8414a02a094e"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 6 23:39:56.077703 kubelet[2602]: I1106 23:39:56.077641 2602 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/fc763845-118d-466b-9e2e-8414a02a094e-hubble-tls\") on node \"ci-4230.2.4-n-07c3be35b1\" DevicePath \"\"" Nov 6 23:39:56.077703 kubelet[2602]: I1106 23:39:56.077691 2602 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fc763845-118d-466b-9e2e-8414a02a094e-cilium-config-path\") on node \"ci-4230.2.4-n-07c3be35b1\" DevicePath \"\"" Nov 6 23:39:56.077703 kubelet[2602]: I1106 23:39:56.077702 2602 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/fc763845-118d-466b-9e2e-8414a02a094e-hostproc\") on node \"ci-4230.2.4-n-07c3be35b1\" DevicePath \"\"" Nov 6 23:39:56.077703 kubelet[2602]: I1106 23:39:56.077715 2602 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/fc763845-118d-466b-9e2e-8414a02a094e-host-proc-sys-kernel\") on node \"ci-4230.2.4-n-07c3be35b1\" DevicePath \"\"" Nov 6 23:39:56.077995 kubelet[2602]: I1106 23:39:56.077727 2602 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fc763845-118d-466b-9e2e-8414a02a094e-lib-modules\") on node \"ci-4230.2.4-n-07c3be35b1\" DevicePath \"\"" Nov 6 23:39:56.077995 kubelet[2602]: I1106 23:39:56.077736 2602 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fc763845-118d-466b-9e2e-8414a02a094e-xtables-lock\") on node \"ci-4230.2.4-n-07c3be35b1\" DevicePath \"\"" Nov 6 23:39:56.077995 kubelet[2602]: I1106 23:39:56.077744 2602 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/fc763845-118d-466b-9e2e-8414a02a094e-cni-path\") on node \"ci-4230.2.4-n-07c3be35b1\" DevicePath \"\"" Nov 6 23:39:56.077995 kubelet[2602]: I1106 23:39:56.077752 2602 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/fc763845-118d-466b-9e2e-8414a02a094e-clustermesh-secrets\") on node \"ci-4230.2.4-n-07c3be35b1\" DevicePath \"\"" Nov 6 23:39:56.077995 kubelet[2602]: I1106 23:39:56.077760 2602 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fc763845-118d-466b-9e2e-8414a02a094e-etc-cni-netd\") on node \"ci-4230.2.4-n-07c3be35b1\" DevicePath \"\"" Nov 6 23:39:56.077995 kubelet[2602]: I1106 23:39:56.077768 2602 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/fc763845-118d-466b-9e2e-8414a02a094e-cilium-cgroup\") on node \"ci-4230.2.4-n-07c3be35b1\" DevicePath \"\"" Nov 6 23:39:56.077995 kubelet[2602]: I1106 23:39:56.077777 2602 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/ec8dd16a-f84c-4512-8367-2001ee2ca9e1-cilium-config-path\") on node \"ci-4230.2.4-n-07c3be35b1\" DevicePath \"\"" Nov 6 23:39:56.077995 kubelet[2602]: I1106 23:39:56.077788 2602 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/fc763845-118d-466b-9e2e-8414a02a094e-bpf-maps\") on node \"ci-4230.2.4-n-07c3be35b1\" DevicePath \"\"" Nov 6 23:39:56.078292 kubelet[2602]: I1106 23:39:56.077797 2602 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xm2b7\" (UniqueName: \"kubernetes.io/projected/ec8dd16a-f84c-4512-8367-2001ee2ca9e1-kube-api-access-xm2b7\") on node \"ci-4230.2.4-n-07c3be35b1\" DevicePath \"\"" Nov 6 23:39:56.078292 kubelet[2602]: I1106 23:39:56.077805 2602 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/fc763845-118d-466b-9e2e-8414a02a094e-cilium-run\") on node \"ci-4230.2.4-n-07c3be35b1\" DevicePath \"\"" Nov 6 23:39:56.078292 kubelet[2602]: I1106 23:39:56.077814 2602 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gfwvm\" (UniqueName: \"kubernetes.io/projected/fc763845-118d-466b-9e2e-8414a02a094e-kube-api-access-gfwvm\") on node \"ci-4230.2.4-n-07c3be35b1\" DevicePath \"\"" Nov 6 23:39:56.078292 kubelet[2602]: I1106 23:39:56.077822 2602 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/fc763845-118d-466b-9e2e-8414a02a094e-host-proc-sys-net\") on node \"ci-4230.2.4-n-07c3be35b1\" DevicePath \"\"" Nov 6 23:39:56.261110 kubelet[2602]: E1106 23:39:56.259931 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 23:39:56.269502 systemd[1]: Removed slice kubepods-burstable-podfc763845_118d_466b_9e2e_8414a02a094e.slice - libcontainer container kubepods-burstable-podfc763845_118d_466b_9e2e_8414a02a094e.slice. Nov 6 23:39:56.269701 systemd[1]: kubepods-burstable-podfc763845_118d_466b_9e2e_8414a02a094e.slice: Consumed 9.126s CPU time, 193.3M memory peak, 72M read from disk, 15.9M written to disk. Nov 6 23:39:56.271557 systemd[1]: Removed slice kubepods-besteffort-podec8dd16a_f84c_4512_8367_2001ee2ca9e1.slice - libcontainer container kubepods-besteffort-podec8dd16a_f84c_4512_8367_2001ee2ca9e1.slice. Nov 6 23:39:56.428039 kubelet[2602]: E1106 23:39:56.422070 2602 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Nov 6 23:39:56.516117 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d0143510f844ebc6ffd41a760dfd897bb4c56512dcbda4970ca4cd16cd7762ec-rootfs.mount: Deactivated successfully. Nov 6 23:39:56.516397 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4dd27c27ac0bfe4dcd4c6fa2e22782c857644ccb07b7ee8f909c89b0b1e3755c-rootfs.mount: Deactivated successfully. Nov 6 23:39:56.516544 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4dd27c27ac0bfe4dcd4c6fa2e22782c857644ccb07b7ee8f909c89b0b1e3755c-shm.mount: Deactivated successfully. Nov 6 23:39:56.516649 systemd[1]: var-lib-kubelet-pods-ec8dd16a\x2df84c\x2d4512\x2d8367\x2d2001ee2ca9e1-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxm2b7.mount: Deactivated successfully. 
Nov 6 23:39:56.516745 systemd[1]: var-lib-kubelet-pods-fc763845\x2d118d\x2d466b\x2d9e2e\x2d8414a02a094e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dgfwvm.mount: Deactivated successfully. Nov 6 23:39:56.516828 systemd[1]: var-lib-kubelet-pods-fc763845\x2d118d\x2d466b\x2d9e2e\x2d8414a02a094e-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Nov 6 23:39:56.516925 systemd[1]: var-lib-kubelet-pods-fc763845\x2d118d\x2d466b\x2d9e2e\x2d8414a02a094e-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Nov 6 23:39:56.633710 kubelet[2602]: I1106 23:39:56.633570 2602 scope.go:117] "RemoveContainer" containerID="eaafaabca44711771648c79018d397aa0e83cad2733c829e28e3cd0a923ce0dd" Nov 6 23:39:56.661191 containerd[1474]: time="2025-11-06T23:39:56.661135692Z" level=info msg="RemoveContainer for \"eaafaabca44711771648c79018d397aa0e83cad2733c829e28e3cd0a923ce0dd\"" Nov 6 23:39:56.685697 containerd[1474]: time="2025-11-06T23:39:56.685619682Z" level=info msg="RemoveContainer for \"eaafaabca44711771648c79018d397aa0e83cad2733c829e28e3cd0a923ce0dd\" returns successfully" Nov 6 23:39:56.688275 kubelet[2602]: I1106 23:39:56.687702 2602 scope.go:117] "RemoveContainer" containerID="eaafaabca44711771648c79018d397aa0e83cad2733c829e28e3cd0a923ce0dd" Nov 6 23:39:56.688496 containerd[1474]: time="2025-11-06T23:39:56.688188004Z" level=error msg="ContainerStatus for \"eaafaabca44711771648c79018d397aa0e83cad2733c829e28e3cd0a923ce0dd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"eaafaabca44711771648c79018d397aa0e83cad2733c829e28e3cd0a923ce0dd\": not found" Nov 6 23:39:56.696328 kubelet[2602]: E1106 23:39:56.695202 2602 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"eaafaabca44711771648c79018d397aa0e83cad2733c829e28e3cd0a923ce0dd\": not found" containerID="eaafaabca44711771648c79018d397aa0e83cad2733c829e28e3cd0a923ce0dd" Nov 6 23:39:56.696328 kubelet[2602]: I1106 23:39:56.695339 2602 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"eaafaabca44711771648c79018d397aa0e83cad2733c829e28e3cd0a923ce0dd"} err="failed to get container status \"eaafaabca44711771648c79018d397aa0e83cad2733c829e28e3cd0a923ce0dd\": rpc error: code = NotFound desc = an error occurred when try to find container \"eaafaabca44711771648c79018d397aa0e83cad2733c829e28e3cd0a923ce0dd\": not found" Nov 6 23:39:56.696328 kubelet[2602]: I1106 23:39:56.695431 2602 scope.go:117] "RemoveContainer" containerID="df618b658c57ab3dd90e777f47d7da0b74cae2ca7d202ab3bf476f2d11e3c233" Nov 6 23:39:56.700234 containerd[1474]: time="2025-11-06T23:39:56.700172390Z" level=info msg="RemoveContainer for \"df618b658c57ab3dd90e777f47d7da0b74cae2ca7d202ab3bf476f2d11e3c233\"" Nov 6 23:39:56.707170 containerd[1474]: time="2025-11-06T23:39:56.707111858Z" level=info msg="RemoveContainer for \"df618b658c57ab3dd90e777f47d7da0b74cae2ca7d202ab3bf476f2d11e3c233\" returns successfully" Nov 6 23:39:56.707699 kubelet[2602]: I1106 23:39:56.707552 2602 scope.go:117] "RemoveContainer" containerID="abb3e8592d9e5533d3470c20d3c235e05a4332468b1e6038445e1c849fddbb35" Nov 6 23:39:56.710712 containerd[1474]: time="2025-11-06T23:39:56.710657640Z" level=info msg="RemoveContainer for \"abb3e8592d9e5533d3470c20d3c235e05a4332468b1e6038445e1c849fddbb35\"" Nov 6 23:39:56.722262 containerd[1474]: time="2025-11-06T23:39:56.721886764Z" level=info 
msg="RemoveContainer for \"abb3e8592d9e5533d3470c20d3c235e05a4332468b1e6038445e1c849fddbb35\" returns successfully" Nov 6 23:39:56.729010 kubelet[2602]: I1106 23:39:56.727213 2602 scope.go:117] "RemoveContainer" containerID="6737868c5a0fd1a7d3a32cf812e6c3bb773e581e16ce394cb0bc8b057de6a0dc" Nov 6 23:39:56.730533 containerd[1474]: time="2025-11-06T23:39:56.730484420Z" level=info msg="RemoveContainer for \"6737868c5a0fd1a7d3a32cf812e6c3bb773e581e16ce394cb0bc8b057de6a0dc\"" Nov 6 23:39:56.734456 containerd[1474]: time="2025-11-06T23:39:56.734396883Z" level=info msg="RemoveContainer for \"6737868c5a0fd1a7d3a32cf812e6c3bb773e581e16ce394cb0bc8b057de6a0dc\" returns successfully" Nov 6 23:39:56.735262 kubelet[2602]: I1106 23:39:56.735193 2602 scope.go:117] "RemoveContainer" containerID="79aaf97a8c559ac20e3213c0de2b909b0f2cf13c5148294ba7a886d9fd4e107d" Nov 6 23:39:56.736451 containerd[1474]: time="2025-11-06T23:39:56.736414473Z" level=info msg="RemoveContainer for \"79aaf97a8c559ac20e3213c0de2b909b0f2cf13c5148294ba7a886d9fd4e107d\"" Nov 6 23:39:56.739442 containerd[1474]: time="2025-11-06T23:39:56.739331176Z" level=info msg="RemoveContainer for \"79aaf97a8c559ac20e3213c0de2b909b0f2cf13c5148294ba7a886d9fd4e107d\" returns successfully" Nov 6 23:39:56.739910 kubelet[2602]: I1106 23:39:56.739649 2602 scope.go:117] "RemoveContainer" containerID="835d3258dacd5a26ac87f89c04fda942738116d114c493b4af3291c2ce7121c6" Nov 6 23:39:56.741796 containerd[1474]: time="2025-11-06T23:39:56.741661856Z" level=info msg="RemoveContainer for \"835d3258dacd5a26ac87f89c04fda942738116d114c493b4af3291c2ce7121c6\"" Nov 6 23:39:56.744745 containerd[1474]: time="2025-11-06T23:39:56.744667749Z" level=info msg="RemoveContainer for \"835d3258dacd5a26ac87f89c04fda942738116d114c493b4af3291c2ce7121c6\" returns successfully" Nov 6 23:39:56.745071 kubelet[2602]: I1106 23:39:56.745029 2602 scope.go:117] "RemoveContainer" containerID="df618b658c57ab3dd90e777f47d7da0b74cae2ca7d202ab3bf476f2d11e3c233" Nov 6 23:39:56.745451 containerd[1474]: time="2025-11-06T23:39:56.745361581Z" level=error msg="ContainerStatus for \"df618b658c57ab3dd90e777f47d7da0b74cae2ca7d202ab3bf476f2d11e3c233\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"df618b658c57ab3dd90e777f47d7da0b74cae2ca7d202ab3bf476f2d11e3c233\": not found" Nov 6 23:39:56.745684 kubelet[2602]: E1106 23:39:56.745649 2602 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"df618b658c57ab3dd90e777f47d7da0b74cae2ca7d202ab3bf476f2d11e3c233\": not found" containerID="df618b658c57ab3dd90e777f47d7da0b74cae2ca7d202ab3bf476f2d11e3c233" Nov 6 23:39:56.745860 kubelet[2602]: I1106 23:39:56.745806 2602 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"df618b658c57ab3dd90e777f47d7da0b74cae2ca7d202ab3bf476f2d11e3c233"} err="failed to get container status \"df618b658c57ab3dd90e777f47d7da0b74cae2ca7d202ab3bf476f2d11e3c233\": rpc error: code = NotFound desc = an error occurred when try to find container \"df618b658c57ab3dd90e777f47d7da0b74cae2ca7d202ab3bf476f2d11e3c233\": not found" Nov 6 23:39:56.746089 kubelet[2602]: I1106 23:39:56.745957 2602 scope.go:117] "RemoveContainer" containerID="abb3e8592d9e5533d3470c20d3c235e05a4332468b1e6038445e1c849fddbb35" Nov 6 23:39:56.746479 containerd[1474]: time="2025-11-06T23:39:56.746431767Z" level=error msg="ContainerStatus for 
\"abb3e8592d9e5533d3470c20d3c235e05a4332468b1e6038445e1c849fddbb35\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"abb3e8592d9e5533d3470c20d3c235e05a4332468b1e6038445e1c849fddbb35\": not found" Nov 6 23:39:56.746721 kubelet[2602]: E1106 23:39:56.746681 2602 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"abb3e8592d9e5533d3470c20d3c235e05a4332468b1e6038445e1c849fddbb35\": not found" containerID="abb3e8592d9e5533d3470c20d3c235e05a4332468b1e6038445e1c849fddbb35" Nov 6 23:39:56.746784 kubelet[2602]: I1106 23:39:56.746729 2602 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"abb3e8592d9e5533d3470c20d3c235e05a4332468b1e6038445e1c849fddbb35"} err="failed to get container status \"abb3e8592d9e5533d3470c20d3c235e05a4332468b1e6038445e1c849fddbb35\": rpc error: code = NotFound desc = an error occurred when try to find container \"abb3e8592d9e5533d3470c20d3c235e05a4332468b1e6038445e1c849fddbb35\": not found" Nov 6 23:39:56.746784 kubelet[2602]: I1106 23:39:56.746759 2602 scope.go:117] "RemoveContainer" containerID="6737868c5a0fd1a7d3a32cf812e6c3bb773e581e16ce394cb0bc8b057de6a0dc" Nov 6 23:39:56.747073 containerd[1474]: time="2025-11-06T23:39:56.746987777Z" level=error msg="ContainerStatus for \"6737868c5a0fd1a7d3a32cf812e6c3bb773e581e16ce394cb0bc8b057de6a0dc\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6737868c5a0fd1a7d3a32cf812e6c3bb773e581e16ce394cb0bc8b057de6a0dc\": not found" Nov 6 23:39:56.747364 kubelet[2602]: E1106 23:39:56.747213 2602 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6737868c5a0fd1a7d3a32cf812e6c3bb773e581e16ce394cb0bc8b057de6a0dc\": not found" containerID="6737868c5a0fd1a7d3a32cf812e6c3bb773e581e16ce394cb0bc8b057de6a0dc" Nov 6 23:39:56.747364 kubelet[2602]: I1106 23:39:56.747266 2602 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6737868c5a0fd1a7d3a32cf812e6c3bb773e581e16ce394cb0bc8b057de6a0dc"} err="failed to get container status \"6737868c5a0fd1a7d3a32cf812e6c3bb773e581e16ce394cb0bc8b057de6a0dc\": rpc error: code = NotFound desc = an error occurred when try to find container \"6737868c5a0fd1a7d3a32cf812e6c3bb773e581e16ce394cb0bc8b057de6a0dc\": not found" Nov 6 23:39:56.747364 kubelet[2602]: I1106 23:39:56.747285 2602 scope.go:117] "RemoveContainer" containerID="79aaf97a8c559ac20e3213c0de2b909b0f2cf13c5148294ba7a886d9fd4e107d" Nov 6 23:39:56.747636 containerd[1474]: time="2025-11-06T23:39:56.747524461Z" level=error msg="ContainerStatus for \"79aaf97a8c559ac20e3213c0de2b909b0f2cf13c5148294ba7a886d9fd4e107d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"79aaf97a8c559ac20e3213c0de2b909b0f2cf13c5148294ba7a886d9fd4e107d\": not found" Nov 6 23:39:56.747817 kubelet[2602]: E1106 23:39:56.747790 2602 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"79aaf97a8c559ac20e3213c0de2b909b0f2cf13c5148294ba7a886d9fd4e107d\": not found" containerID="79aaf97a8c559ac20e3213c0de2b909b0f2cf13c5148294ba7a886d9fd4e107d" Nov 6 23:39:56.747879 kubelet[2602]: I1106 23:39:56.747830 2602 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"79aaf97a8c559ac20e3213c0de2b909b0f2cf13c5148294ba7a886d9fd4e107d"} err="failed to get container status \"79aaf97a8c559ac20e3213c0de2b909b0f2cf13c5148294ba7a886d9fd4e107d\": rpc error: code = NotFound desc = an error occurred when try to find container \"79aaf97a8c559ac20e3213c0de2b909b0f2cf13c5148294ba7a886d9fd4e107d\": not found" Nov 6 23:39:56.747879 kubelet[2602]: I1106 23:39:56.747853 2602 scope.go:117] "RemoveContainer" containerID="835d3258dacd5a26ac87f89c04fda942738116d114c493b4af3291c2ce7121c6" Nov 6 23:39:56.748176 containerd[1474]: time="2025-11-06T23:39:56.748065286Z" level=error msg="ContainerStatus for \"835d3258dacd5a26ac87f89c04fda942738116d114c493b4af3291c2ce7121c6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"835d3258dacd5a26ac87f89c04fda942738116d114c493b4af3291c2ce7121c6\": not found" Nov 6 23:39:56.748436 kubelet[2602]: E1106 23:39:56.748313 2602 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"835d3258dacd5a26ac87f89c04fda942738116d114c493b4af3291c2ce7121c6\": not found" containerID="835d3258dacd5a26ac87f89c04fda942738116d114c493b4af3291c2ce7121c6" Nov 6 23:39:56.748436 kubelet[2602]: I1106 23:39:56.748344 2602 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"835d3258dacd5a26ac87f89c04fda942738116d114c493b4af3291c2ce7121c6"} err="failed to get container status \"835d3258dacd5a26ac87f89c04fda942738116d114c493b4af3291c2ce7121c6\": rpc error: code = NotFound desc = an error occurred when try to find container \"835d3258dacd5a26ac87f89c04fda942738116d114c493b4af3291c2ce7121c6\": not found" Nov 6 23:39:57.401577 sshd[4247]: Connection closed by 147.75.109.163 port 33108 Nov 6 23:39:57.402769 sshd-session[4244]: pam_unix(sshd:session): session closed for user core Nov 6 23:39:57.417978 systemd[1]: sshd@23-164.92.114.154:22-147.75.109.163:33108.service: Deactivated successfully. Nov 6 23:39:57.420797 systemd[1]: session-24.scope: Deactivated successfully. Nov 6 23:39:57.423173 systemd-logind[1460]: Session 24 logged out. Waiting for processes to exit. Nov 6 23:39:57.429636 systemd[1]: Started sshd@24-164.92.114.154:22-147.75.109.163:33118.service - OpenSSH per-connection server daemon (147.75.109.163:33118). Nov 6 23:39:57.432125 systemd-logind[1460]: Removed session 24. Nov 6 23:39:57.501534 sshd[4404]: Accepted publickey for core from 147.75.109.163 port 33118 ssh2: RSA SHA256:Z2OWs1J4lzJbb37sVDKV9BLSPLNt86phBIrOXH84t9A Nov 6 23:39:57.503152 sshd-session[4404]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:39:57.509604 systemd-logind[1460]: New session 25 of user core. Nov 6 23:39:57.511455 systemd[1]: Started session-25.scope - Session 25 of User core. Nov 6 23:39:58.230560 sshd[4407]: Connection closed by 147.75.109.163 port 33118 Nov 6 23:39:58.232542 sshd-session[4404]: pam_unix(sshd:session): session closed for user core Nov 6 23:39:58.253335 systemd[1]: sshd@24-164.92.114.154:22-147.75.109.163:33118.service: Deactivated successfully. Nov 6 23:39:58.259154 systemd[1]: session-25.scope: Deactivated successfully. Nov 6 23:39:58.261792 systemd-logind[1460]: Session 25 logged out. Waiting for processes to exit. Nov 6 23:39:58.271777 systemd[1]: Started sshd@25-164.92.114.154:22-147.75.109.163:33130.service - OpenSSH per-connection server daemon (147.75.109.163:33130). 
Nov 6 23:39:58.275576 kubelet[2602]: I1106 23:39:58.271218 2602 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec8dd16a-f84c-4512-8367-2001ee2ca9e1" path="/var/lib/kubelet/pods/ec8dd16a-f84c-4512-8367-2001ee2ca9e1/volumes" Nov 6 23:39:58.277186 systemd-logind[1460]: Removed session 25. Nov 6 23:39:58.279404 kubelet[2602]: I1106 23:39:58.278817 2602 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc763845-118d-466b-9e2e-8414a02a094e" path="/var/lib/kubelet/pods/fc763845-118d-466b-9e2e-8414a02a094e/volumes" Nov 6 23:39:58.335378 systemd[1]: Created slice kubepods-burstable-pod5314cac6_eeb5_44b6_b2b1_b0726c36eeb0.slice - libcontainer container kubepods-burstable-pod5314cac6_eeb5_44b6_b2b1_b0726c36eeb0.slice. Nov 6 23:39:58.378932 sshd[4416]: Accepted publickey for core from 147.75.109.163 port 33130 ssh2: RSA SHA256:Z2OWs1J4lzJbb37sVDKV9BLSPLNt86phBIrOXH84t9A Nov 6 23:39:58.381990 sshd-session[4416]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:39:58.389732 systemd-logind[1460]: New session 26 of user core. Nov 6 23:39:58.393678 systemd[1]: Started session-26.scope - Session 26 of User core. Nov 6 23:39:58.413661 kubelet[2602]: I1106 23:39:58.413462 2602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5314cac6-eeb5-44b6-b2b1-b0726c36eeb0-cilium-config-path\") pod \"cilium-687kv\" (UID: \"5314cac6-eeb5-44b6-b2b1-b0726c36eeb0\") " pod="kube-system/cilium-687kv" Nov 6 23:39:58.413661 kubelet[2602]: I1106 23:39:58.413553 2602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/5314cac6-eeb5-44b6-b2b1-b0726c36eeb0-cilium-ipsec-secrets\") pod \"cilium-687kv\" (UID: \"5314cac6-eeb5-44b6-b2b1-b0726c36eeb0\") " pod="kube-system/cilium-687kv" Nov 6 23:39:58.413661 kubelet[2602]: I1106 23:39:58.413575 2602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5314cac6-eeb5-44b6-b2b1-b0726c36eeb0-xtables-lock\") pod \"cilium-687kv\" (UID: \"5314cac6-eeb5-44b6-b2b1-b0726c36eeb0\") " pod="kube-system/cilium-687kv" Nov 6 23:39:58.413661 kubelet[2602]: I1106 23:39:58.413591 2602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkhtz\" (UniqueName: \"kubernetes.io/projected/5314cac6-eeb5-44b6-b2b1-b0726c36eeb0-kube-api-access-tkhtz\") pod \"cilium-687kv\" (UID: \"5314cac6-eeb5-44b6-b2b1-b0726c36eeb0\") " pod="kube-system/cilium-687kv" Nov 6 23:39:58.413661 kubelet[2602]: I1106 23:39:58.413612 2602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5314cac6-eeb5-44b6-b2b1-b0726c36eeb0-hostproc\") pod \"cilium-687kv\" (UID: \"5314cac6-eeb5-44b6-b2b1-b0726c36eeb0\") " pod="kube-system/cilium-687kv" Nov 6 23:39:58.413661 kubelet[2602]: I1106 23:39:58.413628 2602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5314cac6-eeb5-44b6-b2b1-b0726c36eeb0-cni-path\") pod \"cilium-687kv\" (UID: \"5314cac6-eeb5-44b6-b2b1-b0726c36eeb0\") " pod="kube-system/cilium-687kv" Nov 6 23:39:58.413956 kubelet[2602]: I1106 23:39:58.413676 2602 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5314cac6-eeb5-44b6-b2b1-b0726c36eeb0-lib-modules\") pod \"cilium-687kv\" (UID: \"5314cac6-eeb5-44b6-b2b1-b0726c36eeb0\") " pod="kube-system/cilium-687kv" Nov 6 23:39:58.413956 kubelet[2602]: I1106 23:39:58.413735 2602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5314cac6-eeb5-44b6-b2b1-b0726c36eeb0-clustermesh-secrets\") pod \"cilium-687kv\" (UID: \"5314cac6-eeb5-44b6-b2b1-b0726c36eeb0\") " pod="kube-system/cilium-687kv" Nov 6 23:39:58.413956 kubelet[2602]: I1106 23:39:58.413756 2602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5314cac6-eeb5-44b6-b2b1-b0726c36eeb0-cilium-run\") pod \"cilium-687kv\" (UID: \"5314cac6-eeb5-44b6-b2b1-b0726c36eeb0\") " pod="kube-system/cilium-687kv" Nov 6 23:39:58.413956 kubelet[2602]: I1106 23:39:58.413779 2602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5314cac6-eeb5-44b6-b2b1-b0726c36eeb0-bpf-maps\") pod \"cilium-687kv\" (UID: \"5314cac6-eeb5-44b6-b2b1-b0726c36eeb0\") " pod="kube-system/cilium-687kv" Nov 6 23:39:58.413956 kubelet[2602]: I1106 23:39:58.413795 2602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5314cac6-eeb5-44b6-b2b1-b0726c36eeb0-etc-cni-netd\") pod \"cilium-687kv\" (UID: \"5314cac6-eeb5-44b6-b2b1-b0726c36eeb0\") " pod="kube-system/cilium-687kv" Nov 6 23:39:58.413956 kubelet[2602]: I1106 23:39:58.413810 2602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5314cac6-eeb5-44b6-b2b1-b0726c36eeb0-host-proc-sys-net\") pod \"cilium-687kv\" (UID: \"5314cac6-eeb5-44b6-b2b1-b0726c36eeb0\") " pod="kube-system/cilium-687kv" Nov 6 23:39:58.414121 kubelet[2602]: I1106 23:39:58.413854 2602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5314cac6-eeb5-44b6-b2b1-b0726c36eeb0-hubble-tls\") pod \"cilium-687kv\" (UID: \"5314cac6-eeb5-44b6-b2b1-b0726c36eeb0\") " pod="kube-system/cilium-687kv" Nov 6 23:39:58.414121 kubelet[2602]: I1106 23:39:58.413910 2602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5314cac6-eeb5-44b6-b2b1-b0726c36eeb0-host-proc-sys-kernel\") pod \"cilium-687kv\" (UID: \"5314cac6-eeb5-44b6-b2b1-b0726c36eeb0\") " pod="kube-system/cilium-687kv" Nov 6 23:39:58.414121 kubelet[2602]: I1106 23:39:58.413944 2602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5314cac6-eeb5-44b6-b2b1-b0726c36eeb0-cilium-cgroup\") pod \"cilium-687kv\" (UID: \"5314cac6-eeb5-44b6-b2b1-b0726c36eeb0\") " pod="kube-system/cilium-687kv" Nov 6 23:39:58.453918 sshd[4419]: Connection closed by 147.75.109.163 port 33130 Nov 6 23:39:58.454635 sshd-session[4416]: pam_unix(sshd:session): session closed for user core Nov 6 23:39:58.464940 systemd[1]: sshd@25-164.92.114.154:22-147.75.109.163:33130.service: Deactivated successfully. 
Nov 6 23:39:58.467594 systemd[1]: session-26.scope: Deactivated successfully. Nov 6 23:39:58.469559 systemd-logind[1460]: Session 26 logged out. Waiting for processes to exit. Nov 6 23:39:58.477755 systemd[1]: Started sshd@26-164.92.114.154:22-147.75.109.163:33132.service - OpenSSH per-connection server daemon (147.75.109.163:33132). Nov 6 23:39:58.480452 systemd-logind[1460]: Removed session 26. Nov 6 23:39:58.532873 sshd[4425]: Accepted publickey for core from 147.75.109.163 port 33132 ssh2: RSA SHA256:Z2OWs1J4lzJbb37sVDKV9BLSPLNt86phBIrOXH84t9A Nov 6 23:39:58.536674 sshd-session[4425]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:39:58.555461 systemd-logind[1460]: New session 27 of user core. Nov 6 23:39:58.562555 systemd[1]: Started session-27.scope - Session 27 of User core. Nov 6 23:39:58.644850 kubelet[2602]: E1106 23:39:58.644793 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 23:39:58.645558 containerd[1474]: time="2025-11-06T23:39:58.645497310Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-687kv,Uid:5314cac6-eeb5-44b6-b2b1-b0726c36eeb0,Namespace:kube-system,Attempt:0,}" Nov 6 23:39:58.689377 containerd[1474]: time="2025-11-06T23:39:58.688217408Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 6 23:39:58.689377 containerd[1474]: time="2025-11-06T23:39:58.688351365Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 6 23:39:58.689377 containerd[1474]: time="2025-11-06T23:39:58.688365253Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 6 23:39:58.689377 containerd[1474]: time="2025-11-06T23:39:58.688516103Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 6 23:39:58.717533 systemd[1]: Started cri-containerd-8b3a331d5b74841e26756e07be738b8bdaecfa010e20ea95d5371aa5116c97c5.scope - libcontainer container 8b3a331d5b74841e26756e07be738b8bdaecfa010e20ea95d5371aa5116c97c5. 
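The repeated dns.go:153 "Nameserver limits exceeded" errors above mean the node's resolv.conf lists more nameservers than kubelet will pass through to pods, so only the leading entries are applied (three here: 67.207.67.3 67.207.67.2 67.207.67.3). A rough Python sketch of that clamping, assuming a three-nameserver cap; apart from the three applied addresses taken from the log, the resolv.conf content below is hypothetical.

# Sketch of the nameserver clamping behind the repeated dns.go:153 error
# above: with more nameservers than the cap, only the first ones are applied
# and the rest are dropped. The fourth entry below is hypothetical.
MAX_NAMESERVERS = 3

resolv_conf = """\
nameserver 67.207.67.3
nameserver 67.207.67.2
nameserver 67.207.67.3
nameserver 10.0.0.53
"""

nameservers = [line.split()[1] for line in resolv_conf.splitlines()
               if line.startswith("nameserver")]
if len(nameservers) > MAX_NAMESERVERS:
    print("Nameserver limits exceeded; applied line:",
          " ".join(nameservers[:MAX_NAMESERVERS]))
# -> applied line: 67.207.67.3 67.207.67.2 67.207.67.3 (matching the log)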
Nov 6 23:39:58.769080 containerd[1474]: time="2025-11-06T23:39:58.769020036Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-687kv,Uid:5314cac6-eeb5-44b6-b2b1-b0726c36eeb0,Namespace:kube-system,Attempt:0,} returns sandbox id \"8b3a331d5b74841e26756e07be738b8bdaecfa010e20ea95d5371aa5116c97c5\"" Nov 6 23:39:58.770381 kubelet[2602]: E1106 23:39:58.770325 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 23:39:58.779970 kubelet[2602]: I1106 23:39:58.779871 2602 setters.go:618] "Node became not ready" node="ci-4230.2.4-n-07c3be35b1" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-06T23:39:58Z","lastTransitionTime":"2025-11-06T23:39:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Nov 6 23:39:58.786359 containerd[1474]: time="2025-11-06T23:39:58.786212053Z" level=info msg="CreateContainer within sandbox \"8b3a331d5b74841e26756e07be738b8bdaecfa010e20ea95d5371aa5116c97c5\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Nov 6 23:39:58.796509 containerd[1474]: time="2025-11-06T23:39:58.795478362Z" level=info msg="CreateContainer within sandbox \"8b3a331d5b74841e26756e07be738b8bdaecfa010e20ea95d5371aa5116c97c5\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"64fd728295b4f758a7609205b67e4a3e965f486319e82911346fa1275a747696\"" Nov 6 23:39:58.797983 containerd[1474]: time="2025-11-06T23:39:58.797936488Z" level=info msg="StartContainer for \"64fd728295b4f758a7609205b67e4a3e965f486319e82911346fa1275a747696\"" Nov 6 23:39:58.838505 systemd[1]: Started cri-containerd-64fd728295b4f758a7609205b67e4a3e965f486319e82911346fa1275a747696.scope - libcontainer container 64fd728295b4f758a7609205b67e4a3e965f486319e82911346fa1275a747696. Nov 6 23:39:58.876908 containerd[1474]: time="2025-11-06T23:39:58.876614752Z" level=info msg="StartContainer for \"64fd728295b4f758a7609205b67e4a3e965f486319e82911346fa1275a747696\" returns successfully" Nov 6 23:39:58.894904 systemd[1]: cri-containerd-64fd728295b4f758a7609205b67e4a3e965f486319e82911346fa1275a747696.scope: Deactivated successfully. Nov 6 23:39:58.895358 systemd[1]: cri-containerd-64fd728295b4f758a7609205b67e4a3e965f486319e82911346fa1275a747696.scope: Consumed 28ms CPU time, 9.4M memory peak, 2.9M read from disk. 
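The entries just above record the CRI-level ordering for bringing the pod up: RunPodSandbox returns a sandbox ID, CreateContainer is issued within that sandbox, StartContainer follows, and the short-lived mount-cgroup step is then deactivated with per-scope CPU, memory, and I/O accounting. The sketch below mirrors that ordering against a hypothetical runtime stub; FakeRuntime is not the real CRI gRPC API.

# Sketch of the sandbox -> container -> start ordering visible above.
# FakeRuntime is a stand-in; the real flow goes over the CRI API
# (RunPodSandbox, CreateContainer, StartContainer) to containerd.
import uuid

class FakeRuntime:
    def run_pod_sandbox(self, name: str, namespace: str) -> str:
        sandbox_id = uuid.uuid4().hex
        print(f'RunPodSandbox for {namespace}/{name} returns sandbox id "{sandbox_id}"')
        return sandbox_id

    def create_container(self, sandbox_id: str, name: str) -> str:
        container_id = uuid.uuid4().hex
        print(f'CreateContainer within sandbox "{sandbox_id}" for {name} '
              f'returns container id "{container_id}"')
        return container_id

    def start_container(self, container_id: str) -> None:
        print(f'StartContainer for "{container_id}" returns successfully')

runtime = FakeRuntime()
sandbox = runtime.run_pod_sandbox("cilium-687kv", "kube-system")
cid = runtime.create_container(sandbox, "mount-cgroup")
runtime.start_container(cid)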
Nov 6 23:39:58.929116 containerd[1474]: time="2025-11-06T23:39:58.929015360Z" level=info msg="shim disconnected" id=64fd728295b4f758a7609205b67e4a3e965f486319e82911346fa1275a747696 namespace=k8s.io Nov 6 23:39:58.929116 containerd[1474]: time="2025-11-06T23:39:58.929108778Z" level=warning msg="cleaning up after shim disconnected" id=64fd728295b4f758a7609205b67e4a3e965f486319e82911346fa1275a747696 namespace=k8s.io Nov 6 23:39:58.929462 containerd[1474]: time="2025-11-06T23:39:58.929119105Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 6 23:39:59.260071 kubelet[2602]: E1106 23:39:59.259666 2602 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-7gkr9" podUID="8623b039-7a6f-48ae-a3af-3e4f63d72593" Nov 6 23:39:59.653691 kubelet[2602]: E1106 23:39:59.653559 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 23:39:59.661036 containerd[1474]: time="2025-11-06T23:39:59.660796778Z" level=info msg="CreateContainer within sandbox \"8b3a331d5b74841e26756e07be738b8bdaecfa010e20ea95d5371aa5116c97c5\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Nov 6 23:39:59.681307 containerd[1474]: time="2025-11-06T23:39:59.680957589Z" level=info msg="CreateContainer within sandbox \"8b3a331d5b74841e26756e07be738b8bdaecfa010e20ea95d5371aa5116c97c5\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"08207a91acff1252d14e2e637b535596ade9b5ca8a6b31e14e4f3b4bc178bdd6\"" Nov 6 23:39:59.684244 containerd[1474]: time="2025-11-06T23:39:59.683128241Z" level=info msg="StartContainer for \"08207a91acff1252d14e2e637b535596ade9b5ca8a6b31e14e4f3b4bc178bdd6\"" Nov 6 23:39:59.729768 systemd[1]: Started cri-containerd-08207a91acff1252d14e2e637b535596ade9b5ca8a6b31e14e4f3b4bc178bdd6.scope - libcontainer container 08207a91acff1252d14e2e637b535596ade9b5ca8a6b31e14e4f3b4bc178bdd6. Nov 6 23:39:59.760851 containerd[1474]: time="2025-11-06T23:39:59.760536764Z" level=info msg="StartContainer for \"08207a91acff1252d14e2e637b535596ade9b5ca8a6b31e14e4f3b4bc178bdd6\" returns successfully" Nov 6 23:39:59.772587 systemd[1]: cri-containerd-08207a91acff1252d14e2e637b535596ade9b5ca8a6b31e14e4f3b4bc178bdd6.scope: Deactivated successfully. Nov 6 23:39:59.772912 systemd[1]: cri-containerd-08207a91acff1252d14e2e637b535596ade9b5ca8a6b31e14e4f3b4bc178bdd6.scope: Consumed 22ms CPU time, 7.3M memory peak, 2.1M read from disk. Nov 6 23:39:59.802012 containerd[1474]: time="2025-11-06T23:39:59.801749249Z" level=info msg="shim disconnected" id=08207a91acff1252d14e2e637b535596ade9b5ca8a6b31e14e4f3b4bc178bdd6 namespace=k8s.io Nov 6 23:39:59.802012 containerd[1474]: time="2025-11-06T23:39:59.801805841Z" level=warning msg="cleaning up after shim disconnected" id=08207a91acff1252d14e2e637b535596ade9b5ca8a6b31e14e4f3b4bc178bdd6 namespace=k8s.io Nov 6 23:39:59.802012 containerd[1474]: time="2025-11-06T23:39:59.801814141Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 6 23:40:00.521536 systemd[1]: run-containerd-runc-k8s.io-08207a91acff1252d14e2e637b535596ade9b5ca8a6b31e14e4f3b4bc178bdd6-runc.GZXfRp.mount: Deactivated successfully. 
Nov 6 23:40:00.521724 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-08207a91acff1252d14e2e637b535596ade9b5ca8a6b31e14e4f3b4bc178bdd6-rootfs.mount: Deactivated successfully. Nov 6 23:40:00.659621 kubelet[2602]: E1106 23:40:00.659359 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 23:40:00.668673 containerd[1474]: time="2025-11-06T23:40:00.668239695Z" level=info msg="CreateContainer within sandbox \"8b3a331d5b74841e26756e07be738b8bdaecfa010e20ea95d5371aa5116c97c5\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Nov 6 23:40:00.700693 containerd[1474]: time="2025-11-06T23:40:00.700552567Z" level=info msg="CreateContainer within sandbox \"8b3a331d5b74841e26756e07be738b8bdaecfa010e20ea95d5371aa5116c97c5\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"1bb8af5719484a4f269f29e90efe4caa50c482222edb04c2cedd51f12526d11c\"" Nov 6 23:40:00.701203 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount858550266.mount: Deactivated successfully. Nov 6 23:40:00.703204 containerd[1474]: time="2025-11-06T23:40:00.701471012Z" level=info msg="StartContainer for \"1bb8af5719484a4f269f29e90efe4caa50c482222edb04c2cedd51f12526d11c\"" Nov 6 23:40:00.756434 systemd[1]: Started cri-containerd-1bb8af5719484a4f269f29e90efe4caa50c482222edb04c2cedd51f12526d11c.scope - libcontainer container 1bb8af5719484a4f269f29e90efe4caa50c482222edb04c2cedd51f12526d11c. Nov 6 23:40:00.833034 containerd[1474]: time="2025-11-06T23:40:00.832901387Z" level=info msg="StartContainer for \"1bb8af5719484a4f269f29e90efe4caa50c482222edb04c2cedd51f12526d11c\" returns successfully" Nov 6 23:40:00.846488 systemd[1]: cri-containerd-1bb8af5719484a4f269f29e90efe4caa50c482222edb04c2cedd51f12526d11c.scope: Deactivated successfully. 
Nov 6 23:40:00.886538 containerd[1474]: time="2025-11-06T23:40:00.886438007Z" level=info msg="shim disconnected" id=1bb8af5719484a4f269f29e90efe4caa50c482222edb04c2cedd51f12526d11c namespace=k8s.io Nov 6 23:40:00.886538 containerd[1474]: time="2025-11-06T23:40:00.886504003Z" level=warning msg="cleaning up after shim disconnected" id=1bb8af5719484a4f269f29e90efe4caa50c482222edb04c2cedd51f12526d11c namespace=k8s.io Nov 6 23:40:00.886538 containerd[1474]: time="2025-11-06T23:40:00.886513190Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 6 23:40:00.907891 containerd[1474]: time="2025-11-06T23:40:00.907819539Z" level=warning msg="cleanup warnings time=\"2025-11-06T23:40:00Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Nov 6 23:40:01.259888 kubelet[2602]: E1106 23:40:01.259777 2602 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-7gkr9" podUID="8623b039-7a6f-48ae-a3af-3e4f63d72593" Nov 6 23:40:01.430410 kubelet[2602]: E1106 23:40:01.430254 2602 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Nov 6 23:40:01.523072 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1bb8af5719484a4f269f29e90efe4caa50c482222edb04c2cedd51f12526d11c-rootfs.mount: Deactivated successfully. Nov 6 23:40:01.666692 kubelet[2602]: E1106 23:40:01.666630 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 23:40:01.685261 containerd[1474]: time="2025-11-06T23:40:01.684938378Z" level=info msg="CreateContainer within sandbox \"8b3a331d5b74841e26756e07be738b8bdaecfa010e20ea95d5371aa5116c97c5\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Nov 6 23:40:01.706185 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1493714302.mount: Deactivated successfully. Nov 6 23:40:01.715722 containerd[1474]: time="2025-11-06T23:40:01.715642384Z" level=info msg="CreateContainer within sandbox \"8b3a331d5b74841e26756e07be738b8bdaecfa010e20ea95d5371aa5116c97c5\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"712a048c87cd2f56d97ec2a1fd1217e956302f864dc9b2131bc0210a98c82b9e\"" Nov 6 23:40:01.717731 containerd[1474]: time="2025-11-06T23:40:01.717678148Z" level=info msg="StartContainer for \"712a048c87cd2f56d97ec2a1fd1217e956302f864dc9b2131bc0210a98c82b9e\"" Nov 6 23:40:01.791803 systemd[1]: Started cri-containerd-712a048c87cd2f56d97ec2a1fd1217e956302f864dc9b2131bc0210a98c82b9e.scope - libcontainer container 712a048c87cd2f56d97ec2a1fd1217e956302f864dc9b2131bc0210a98c82b9e. Nov 6 23:40:01.844911 systemd[1]: cri-containerd-712a048c87cd2f56d97ec2a1fd1217e956302f864dc9b2131bc0210a98c82b9e.scope: Deactivated successfully. 
Nov 6 23:40:01.852360 containerd[1474]: time="2025-11-06T23:40:01.852135227Z" level=info msg="StartContainer for \"712a048c87cd2f56d97ec2a1fd1217e956302f864dc9b2131bc0210a98c82b9e\" returns successfully" Nov 6 23:40:01.913337 containerd[1474]: time="2025-11-06T23:40:01.913072170Z" level=info msg="shim disconnected" id=712a048c87cd2f56d97ec2a1fd1217e956302f864dc9b2131bc0210a98c82b9e namespace=k8s.io Nov 6 23:40:01.913337 containerd[1474]: time="2025-11-06T23:40:01.913325784Z" level=warning msg="cleaning up after shim disconnected" id=712a048c87cd2f56d97ec2a1fd1217e956302f864dc9b2131bc0210a98c82b9e namespace=k8s.io Nov 6 23:40:01.913337 containerd[1474]: time="2025-11-06T23:40:01.913345541Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 6 23:40:02.523337 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-712a048c87cd2f56d97ec2a1fd1217e956302f864dc9b2131bc0210a98c82b9e-rootfs.mount: Deactivated successfully. Nov 6 23:40:02.672768 kubelet[2602]: E1106 23:40:02.671743 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 23:40:02.679620 containerd[1474]: time="2025-11-06T23:40:02.679118059Z" level=info msg="CreateContainer within sandbox \"8b3a331d5b74841e26756e07be738b8bdaecfa010e20ea95d5371aa5116c97c5\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Nov 6 23:40:02.704618 containerd[1474]: time="2025-11-06T23:40:02.704550664Z" level=info msg="CreateContainer within sandbox \"8b3a331d5b74841e26756e07be738b8bdaecfa010e20ea95d5371aa5116c97c5\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"f3361bdc74b1aa82e997029a941be9093483e623939a349df145f5d097ac978f\"" Nov 6 23:40:02.708299 containerd[1474]: time="2025-11-06T23:40:02.707704726Z" level=info msg="StartContainer for \"f3361bdc74b1aa82e997029a941be9093483e623939a349df145f5d097ac978f\"" Nov 6 23:40:02.765798 systemd[1]: Started cri-containerd-f3361bdc74b1aa82e997029a941be9093483e623939a349df145f5d097ac978f.scope - libcontainer container f3361bdc74b1aa82e997029a941be9093483e623939a349df145f5d097ac978f. Nov 6 23:40:02.829377 containerd[1474]: time="2025-11-06T23:40:02.828505248Z" level=info msg="StartContainer for \"f3361bdc74b1aa82e997029a941be9093483e623939a349df145f5d097ac978f\" returns successfully" Nov 6 23:40:03.259842 kubelet[2602]: E1106 23:40:03.258887 2602 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-7gkr9" podUID="8623b039-7a6f-48ae-a3af-3e4f63d72593" Nov 6 23:40:03.519308 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Nov 6 23:40:03.524469 systemd[1]: run-containerd-runc-k8s.io-f3361bdc74b1aa82e997029a941be9093483e623939a349df145f5d097ac978f-runc.yo3Bsm.mount: Deactivated successfully. 
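Taken together, the entries from 23:39:58 to 23:40:02 walk through the Cilium pod's init containers in order: mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, and clean-cilium-state each start, exit, and have their shim cleaned up before the next begins, after which the long-running cilium-agent container is started. The sketch below captures that run-to-completion ordering; only the container names are taken from the log, and the runner is an illustrative stub, not kubelet's implementation.

# Run-to-completion ordering of the Cilium init steps recorded above.
# Only the container names come from the log; the runner is a stub.
INIT_STEPS = [
    "mount-cgroup",
    "apply-sysctl-overwrites",
    "mount-bpf-fs",
    "clean-cilium-state",
]
MAIN_CONTAINER = "cilium-agent"

def run_step(name: str) -> bool:
    # Stand-in for StartContainer + wait-for-exit; always "succeeds" here.
    print(f'StartContainer for "{name}" returns successfully')
    return True

for step in INIT_STEPS:
    if not run_step(step):
        raise SystemExit(f"init container {step} failed; pod stays Pending")

print(f'all init containers finished; starting "{MAIN_CONTAINER}"')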
Nov 6 23:40:03.690268 kubelet[2602]: E1106 23:40:03.688602 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 23:40:03.755125 kubelet[2602]: I1106 23:40:03.754854 2602 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-687kv" podStartSLOduration=5.754825532 podStartE2EDuration="5.754825532s" podCreationTimestamp="2025-11-06 23:39:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 23:40:03.752052164 +0000 UTC m=+107.696366074" watchObservedRunningTime="2025-11-06 23:40:03.754825532 +0000 UTC m=+107.699139441" Nov 6 23:40:04.690568 kubelet[2602]: E1106 23:40:04.690463 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 23:40:05.258939 kubelet[2602]: E1106 23:40:05.258862 2602 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-7gkr9" podUID="8623b039-7a6f-48ae-a3af-3e4f63d72593" Nov 6 23:40:07.260590 kubelet[2602]: E1106 23:40:07.260490 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 23:40:07.677185 systemd-networkd[1382]: lxc_health: Link UP Nov 6 23:40:07.700569 systemd-networkd[1382]: lxc_health: Gained carrier Nov 6 23:40:08.657161 kubelet[2602]: E1106 23:40:08.657089 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 23:40:08.705049 kubelet[2602]: E1106 23:40:08.702903 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 23:40:09.699491 systemd-networkd[1382]: lxc_health: Gained IPv6LL Nov 6 23:40:09.708928 kubelet[2602]: E1106 23:40:09.708882 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 23:40:12.074726 systemd[1]: run-containerd-runc-k8s.io-f3361bdc74b1aa82e997029a941be9093483e623939a349df145f5d097ac978f-runc.1xeNQn.mount: Deactivated successfully. Nov 6 23:40:14.312125 sshd[4432]: Connection closed by 147.75.109.163 port 33132 Nov 6 23:40:14.311888 sshd-session[4425]: pam_unix(sshd:session): session closed for user core Nov 6 23:40:14.325175 systemd[1]: sshd@26-164.92.114.154:22-147.75.109.163:33132.service: Deactivated successfully. Nov 6 23:40:14.329850 systemd[1]: session-27.scope: Deactivated successfully. Nov 6 23:40:14.332048 systemd-logind[1460]: Session 27 logged out. Waiting for processes to exit. Nov 6 23:40:14.333865 systemd-logind[1460]: Removed session 27. 
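The pod_startup_latency_tracker entry above reports podStartSLOduration=5.754825532s for cilium-687kv. With the pull timestamps zeroed out (no image pulls were needed), that figure lines up with the gap between podCreationTimestamp (23:39:58) and the watch-observed running time (23:40:03.754825532); the short check below redoes that arithmetic, rounded to microseconds.

# Arithmetic check of podStartSLOduration from the tracker entry above:
# the reported 5.754825532s matches watchObservedRunningTime minus
# podCreationTimestamp, with nothing subtracted for image pulls.
from datetime import datetime, timezone

created = datetime(2025, 11, 6, 23, 39, 58, tzinfo=timezone.utc)
watch_observed = datetime(2025, 11, 6, 23, 40, 3, 754826, tzinfo=timezone.utc)

slo_duration = (watch_observed - created).total_seconds()
print(f"podStartSLOduration approx. {slo_duration:.6f}s")   # approx. 5.754826s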
Nov 6 23:40:16.256596 containerd[1474]: time="2025-11-06T23:40:16.256523268Z" level=info msg="StopPodSandbox for \"d0143510f844ebc6ffd41a760dfd897bb4c56512dcbda4970ca4cd16cd7762ec\"" Nov 6 23:40:16.257858 containerd[1474]: time="2025-11-06T23:40:16.256676745Z" level=info msg="TearDown network for sandbox \"d0143510f844ebc6ffd41a760dfd897bb4c56512dcbda4970ca4cd16cd7762ec\" successfully" Nov 6 23:40:16.257858 containerd[1474]: time="2025-11-06T23:40:16.256694943Z" level=info msg="StopPodSandbox for \"d0143510f844ebc6ffd41a760dfd897bb4c56512dcbda4970ca4cd16cd7762ec\" returns successfully" Nov 6 23:40:16.257858 containerd[1474]: time="2025-11-06T23:40:16.257336965Z" level=info msg="RemovePodSandbox for \"d0143510f844ebc6ffd41a760dfd897bb4c56512dcbda4970ca4cd16cd7762ec\"" Nov 6 23:40:16.257858 containerd[1474]: time="2025-11-06T23:40:16.257393902Z" level=info msg="Forcibly stopping sandbox \"d0143510f844ebc6ffd41a760dfd897bb4c56512dcbda4970ca4cd16cd7762ec\"" Nov 6 23:40:16.257858 containerd[1474]: time="2025-11-06T23:40:16.257477887Z" level=info msg="TearDown network for sandbox \"d0143510f844ebc6ffd41a760dfd897bb4c56512dcbda4970ca4cd16cd7762ec\" successfully" Nov 6 23:40:16.264267 containerd[1474]: time="2025-11-06T23:40:16.264178278Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d0143510f844ebc6ffd41a760dfd897bb4c56512dcbda4970ca4cd16cd7762ec\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 6 23:40:16.264508 containerd[1474]: time="2025-11-06T23:40:16.264346210Z" level=info msg="RemovePodSandbox \"d0143510f844ebc6ffd41a760dfd897bb4c56512dcbda4970ca4cd16cd7762ec\" returns successfully" Nov 6 23:40:16.265191 containerd[1474]: time="2025-11-06T23:40:16.265122418Z" level=info msg="StopPodSandbox for \"4dd27c27ac0bfe4dcd4c6fa2e22782c857644ccb07b7ee8f909c89b0b1e3755c\"" Nov 6 23:40:16.265436 containerd[1474]: time="2025-11-06T23:40:16.265324374Z" level=info msg="TearDown network for sandbox \"4dd27c27ac0bfe4dcd4c6fa2e22782c857644ccb07b7ee8f909c89b0b1e3755c\" successfully" Nov 6 23:40:16.265436 containerd[1474]: time="2025-11-06T23:40:16.265341968Z" level=info msg="StopPodSandbox for \"4dd27c27ac0bfe4dcd4c6fa2e22782c857644ccb07b7ee8f909c89b0b1e3755c\" returns successfully" Nov 6 23:40:16.265797 containerd[1474]: time="2025-11-06T23:40:16.265771536Z" level=info msg="RemovePodSandbox for \"4dd27c27ac0bfe4dcd4c6fa2e22782c857644ccb07b7ee8f909c89b0b1e3755c\"" Nov 6 23:40:16.265874 containerd[1474]: time="2025-11-06T23:40:16.265804674Z" level=info msg="Forcibly stopping sandbox \"4dd27c27ac0bfe4dcd4c6fa2e22782c857644ccb07b7ee8f909c89b0b1e3755c\"" Nov 6 23:40:16.266055 containerd[1474]: time="2025-11-06T23:40:16.265876128Z" level=info msg="TearDown network for sandbox \"4dd27c27ac0bfe4dcd4c6fa2e22782c857644ccb07b7ee8f909c89b0b1e3755c\" successfully" Nov 6 23:40:16.269982 containerd[1474]: time="2025-11-06T23:40:16.269918145Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4dd27c27ac0bfe4dcd4c6fa2e22782c857644ccb07b7ee8f909c89b0b1e3755c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 6 23:40:16.269982 containerd[1474]: time="2025-11-06T23:40:16.270008093Z" level=info msg="RemovePodSandbox \"4dd27c27ac0bfe4dcd4c6fa2e22782c857644ccb07b7ee8f909c89b0b1e3755c\" returns successfully"
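The closing block is kubelet's periodic sandbox garbage collection: each retired sandbox is stopped, its network torn down, and the sandbox removed, and when the forcible pass can no longer fetch the sandbox status from containerd, the "not found" warning is tolerated and removal still returns successfully. A compact sketch of that tolerant teardown loop follows; SandboxClient is a stand-in, not the real CRI API, and the IDs are shortened from the log.

# Sketch of the sandbox GC pass recorded above: StopPodSandbox, network
# teardown, then RemovePodSandbox. SandboxClient is a stand-in; the IDs
# are shortened from the log.
RETIRED_SANDBOXES = ["d0143510f844", "4dd27c27ac0b"]

class SandboxClient:
    def stop(self, sid: str) -> None:
        print(f'StopPodSandbox for "{sid}" returns successfully')

    def teardown_network(self, sid: str) -> None:
        print(f'TearDown network for sandbox "{sid}" successfully')

    def remove(self, sid: str) -> None:
        # Even if the status lookup fails with "not found" (as in the
        # warning above), the event is still emitted and removal proceeds.
        print(f'RemovePodSandbox "{sid}" returns successfully')

client = SandboxClient()
for sid in RETIRED_SANDBOXES:
    client.stop(sid)
    client.teardown_network(sid)
    client.remove(sid)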