Nov 5 15:49:39.097355 kernel: Linux version 6.12.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Wed Nov 5 13:45:21 -00 2025 Nov 5 15:49:39.097410 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=c2a05564bcb92d35bbb2f0ae32fe5ddfa8424368122998dedda8bd375a237cb4 Nov 5 15:49:39.097430 kernel: BIOS-provided physical RAM map: Nov 5 15:49:39.097437 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Nov 5 15:49:39.097444 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Nov 5 15:49:39.097452 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Nov 5 15:49:39.097460 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable Nov 5 15:49:39.097471 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved Nov 5 15:49:39.097479 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Nov 5 15:49:39.097493 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Nov 5 15:49:39.097501 kernel: NX (Execute Disable) protection: active Nov 5 15:49:39.097508 kernel: APIC: Static calls initialized Nov 5 15:49:39.097516 kernel: SMBIOS 2.8 present. Nov 5 15:49:39.097524 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017 Nov 5 15:49:39.097534 kernel: DMI: Memory slots populated: 1/1 Nov 5 15:49:39.097566 kernel: Hypervisor detected: KVM Nov 5 15:49:39.097583 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000 Nov 5 15:49:39.097593 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Nov 5 15:49:39.097602 kernel: kvm-clock: using sched offset of 3804685660 cycles Nov 5 15:49:39.097612 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Nov 5 15:49:39.097621 kernel: tsc: Detected 2494.134 MHz processor Nov 5 15:49:39.097631 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Nov 5 15:49:39.097641 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Nov 5 15:49:39.097657 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000 Nov 5 15:49:39.097667 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Nov 5 15:49:39.097676 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Nov 5 15:49:39.097685 kernel: ACPI: Early table checksum verification disabled Nov 5 15:49:39.097693 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS ) Nov 5 15:49:39.097702 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 5 15:49:39.097712 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 5 15:49:39.097727 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 5 15:49:39.097736 kernel: ACPI: FACS 0x000000007FFE0000 000040 Nov 5 15:49:39.097745 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 5 15:49:39.097758 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 5 15:49:39.097770 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 5 15:49:39.097783 kernel: ACPI: WAET 
0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 5 15:49:39.097795 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd] Nov 5 15:49:39.097816 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769] Nov 5 15:49:39.097825 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f] Nov 5 15:49:39.097834 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d] Nov 5 15:49:39.097851 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895] Nov 5 15:49:39.097860 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d] Nov 5 15:49:39.097876 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985] Nov 5 15:49:39.097885 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Nov 5 15:49:39.097895 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Nov 5 15:49:39.097905 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00001000-0x7ffdafff] Nov 5 15:49:39.097915 kernel: NODE_DATA(0) allocated [mem 0x7ffd3dc0-0x7ffdafff] Nov 5 15:49:39.097924 kernel: Zone ranges: Nov 5 15:49:39.097953 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Nov 5 15:49:39.097962 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff] Nov 5 15:49:39.097972 kernel: Normal empty Nov 5 15:49:39.097987 kernel: Device empty Nov 5 15:49:39.097996 kernel: Movable zone start for each node Nov 5 15:49:39.098005 kernel: Early memory node ranges Nov 5 15:49:39.098015 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Nov 5 15:49:39.098023 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff] Nov 5 15:49:39.098040 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff] Nov 5 15:49:39.098049 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Nov 5 15:49:39.098058 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Nov 5 15:49:39.098068 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges Nov 5 15:49:39.098077 kernel: ACPI: PM-Timer IO Port: 0x608 Nov 5 15:49:39.098091 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Nov 5 15:49:39.098108 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Nov 5 15:49:39.098135 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Nov 5 15:49:39.098149 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Nov 5 15:49:39.098161 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Nov 5 15:49:39.098177 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Nov 5 15:49:39.098189 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Nov 5 15:49:39.098202 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Nov 5 15:49:39.098215 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Nov 5 15:49:39.098236 kernel: TSC deadline timer available Nov 5 15:49:39.098250 kernel: CPU topo: Max. logical packages: 1 Nov 5 15:49:39.098263 kernel: CPU topo: Max. logical dies: 1 Nov 5 15:49:39.098277 kernel: CPU topo: Max. dies per package: 1 Nov 5 15:49:39.098288 kernel: CPU topo: Max. threads per core: 1 Nov 5 15:49:39.098297 kernel: CPU topo: Num. cores per package: 2 Nov 5 15:49:39.098307 kernel: CPU topo: Num. 
threads per package: 2 Nov 5 15:49:39.098315 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs Nov 5 15:49:39.098334 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Nov 5 15:49:39.098344 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices Nov 5 15:49:39.098353 kernel: Booting paravirtualized kernel on KVM Nov 5 15:49:39.098363 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Nov 5 15:49:39.098372 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Nov 5 15:49:39.098382 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576 Nov 5 15:49:39.098391 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152 Nov 5 15:49:39.098407 kernel: pcpu-alloc: [0] 0 1 Nov 5 15:49:39.098417 kernel: kvm-guest: PV spinlocks disabled, no host support Nov 5 15:49:39.098428 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=c2a05564bcb92d35bbb2f0ae32fe5ddfa8424368122998dedda8bd375a237cb4 Nov 5 15:49:39.098438 kernel: random: crng init done Nov 5 15:49:39.098447 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Nov 5 15:49:39.098456 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Nov 5 15:49:39.098465 kernel: Fallback order for Node 0: 0 Nov 5 15:49:39.098482 kernel: Built 1 zonelists, mobility grouping on. Total pages: 524153 Nov 5 15:49:39.098492 kernel: Policy zone: DMA32 Nov 5 15:49:39.098507 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Nov 5 15:49:39.098521 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Nov 5 15:49:39.098534 kernel: Kernel/User page tables isolation: enabled Nov 5 15:49:39.098546 kernel: ftrace: allocating 40092 entries in 157 pages Nov 5 15:49:39.098560 kernel: ftrace: allocated 157 pages with 5 groups Nov 5 15:49:39.098582 kernel: Dynamic Preempt: voluntary Nov 5 15:49:39.098595 kernel: rcu: Preemptible hierarchical RCU implementation. Nov 5 15:49:39.098612 kernel: rcu: RCU event tracing is enabled. Nov 5 15:49:39.098627 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Nov 5 15:49:39.098638 kernel: Trampoline variant of Tasks RCU enabled. Nov 5 15:49:39.098647 kernel: Rude variant of Tasks RCU enabled. Nov 5 15:49:39.098657 kernel: Tracing variant of Tasks RCU enabled. Nov 5 15:49:39.098674 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Nov 5 15:49:39.098683 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Nov 5 15:49:39.098693 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Nov 5 15:49:39.098706 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Nov 5 15:49:39.098716 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Nov 5 15:49:39.098725 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Nov 5 15:49:39.098735 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Nov 5 15:49:39.105084 kernel: Console: colour VGA+ 80x25 Nov 5 15:49:39.105112 kernel: printk: legacy console [tty0] enabled Nov 5 15:49:39.105123 kernel: printk: legacy console [ttyS0] enabled Nov 5 15:49:39.105133 kernel: ACPI: Core revision 20240827 Nov 5 15:49:39.105143 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Nov 5 15:49:39.105194 kernel: APIC: Switch to symmetric I/O mode setup Nov 5 15:49:39.105210 kernel: x2apic enabled Nov 5 15:49:39.105220 kernel: APIC: Switched APIC routing to: physical x2apic Nov 5 15:49:39.105230 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Nov 5 15:49:39.105240 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f3946f721, max_idle_ns: 440795294991 ns Nov 5 15:49:39.105261 kernel: Calibrating delay loop (skipped) preset value.. 4988.26 BogoMIPS (lpj=2494134) Nov 5 15:49:39.105271 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Nov 5 15:49:39.105281 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Nov 5 15:49:39.109000 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Nov 5 15:49:39.109025 kernel: Spectre V2 : Mitigation: Retpolines Nov 5 15:49:39.109036 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Nov 5 15:49:39.109047 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Nov 5 15:49:39.109057 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Nov 5 15:49:39.109067 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Nov 5 15:49:39.109077 kernel: MDS: Mitigation: Clear CPU buffers Nov 5 15:49:39.109105 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Nov 5 15:49:39.109115 kernel: active return thunk: its_return_thunk Nov 5 15:49:39.109125 kernel: ITS: Mitigation: Aligned branch/return thunks Nov 5 15:49:39.109135 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Nov 5 15:49:39.109146 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Nov 5 15:49:39.109155 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Nov 5 15:49:39.109165 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Nov 5 15:49:39.109182 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Nov 5 15:49:39.109203 kernel: Freeing SMP alternatives memory: 32K Nov 5 15:49:39.109227 kernel: pid_max: default: 32768 minimum: 301 Nov 5 15:49:39.109242 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Nov 5 15:49:39.109257 kernel: landlock: Up and running. Nov 5 15:49:39.109272 kernel: SELinux: Initializing. Nov 5 15:49:39.109284 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Nov 5 15:49:39.109295 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Nov 5 15:49:39.109316 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1) Nov 5 15:49:39.109326 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only. Nov 5 15:49:39.109337 kernel: signal: max sigframe size: 1776 Nov 5 15:49:39.109347 kernel: rcu: Hierarchical SRCU implementation. Nov 5 15:49:39.109383 kernel: rcu: Max phase no-delay instances is 400. 
Nov 5 15:49:39.109398 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Nov 5 15:49:39.109411 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Nov 5 15:49:39.109437 kernel: smp: Bringing up secondary CPUs ... Nov 5 15:49:39.109457 kernel: smpboot: x86: Booting SMP configuration: Nov 5 15:49:39.109472 kernel: .... node #0, CPUs: #1 Nov 5 15:49:39.109487 kernel: smp: Brought up 1 node, 2 CPUs Nov 5 15:49:39.109504 kernel: smpboot: Total of 2 processors activated (9976.53 BogoMIPS) Nov 5 15:49:39.109525 kernel: Memory: 1989436K/2096612K available (14336K kernel code, 2443K rwdata, 26064K rodata, 15964K init, 2080K bss, 102612K reserved, 0K cma-reserved) Nov 5 15:49:39.109539 kernel: devtmpfs: initialized Nov 5 15:49:39.109579 kernel: x86/mm: Memory block size: 128MB Nov 5 15:49:39.109595 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Nov 5 15:49:39.109610 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Nov 5 15:49:39.109623 kernel: pinctrl core: initialized pinctrl subsystem Nov 5 15:49:39.109637 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Nov 5 15:49:39.109652 kernel: audit: initializing netlink subsys (disabled) Nov 5 15:49:39.109667 kernel: audit: type=2000 audit(1762357776.523:1): state=initialized audit_enabled=0 res=1 Nov 5 15:49:39.109693 kernel: thermal_sys: Registered thermal governor 'step_wise' Nov 5 15:49:39.109706 kernel: thermal_sys: Registered thermal governor 'user_space' Nov 5 15:49:39.109716 kernel: cpuidle: using governor menu Nov 5 15:49:39.109726 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Nov 5 15:49:39.109736 kernel: dca service started, version 1.12.1 Nov 5 15:49:39.109746 kernel: PCI: Using configuration type 1 for base access Nov 5 15:49:39.109756 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Nov 5 15:49:39.109774 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Nov 5 15:49:39.109785 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Nov 5 15:49:39.109795 kernel: ACPI: Added _OSI(Module Device) Nov 5 15:49:39.109805 kernel: ACPI: Added _OSI(Processor Device) Nov 5 15:49:39.109814 kernel: ACPI: Added _OSI(Processor Aggregator Device) Nov 5 15:49:39.109824 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Nov 5 15:49:39.109834 kernel: ACPI: Interpreter enabled Nov 5 15:49:39.109850 kernel: ACPI: PM: (supports S0 S5) Nov 5 15:49:39.109860 kernel: ACPI: Using IOAPIC for interrupt routing Nov 5 15:49:39.109869 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Nov 5 15:49:39.109881 kernel: PCI: Using E820 reservations for host bridge windows Nov 5 15:49:39.109895 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Nov 5 15:49:39.109912 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Nov 5 15:49:39.110366 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Nov 5 15:49:39.110548 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Nov 5 15:49:39.110743 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Nov 5 15:49:39.110765 kernel: acpiphp: Slot [3] registered Nov 5 15:49:39.110780 kernel: acpiphp: Slot [4] registered Nov 5 15:49:39.110796 kernel: acpiphp: Slot [5] registered Nov 5 15:49:39.110810 kernel: acpiphp: Slot [6] registered Nov 5 15:49:39.110842 kernel: acpiphp: Slot [7] registered Nov 5 15:49:39.110857 kernel: acpiphp: Slot [8] registered Nov 5 15:49:39.110872 kernel: acpiphp: Slot [9] registered Nov 5 15:49:39.110888 kernel: acpiphp: Slot [10] registered Nov 5 15:49:39.110905 kernel: acpiphp: Slot [11] registered Nov 5 15:49:39.110921 kernel: acpiphp: Slot [12] registered Nov 5 15:49:39.110961 kernel: acpiphp: Slot [13] registered Nov 5 15:49:39.110976 kernel: acpiphp: Slot [14] registered Nov 5 15:49:39.111004 kernel: acpiphp: Slot [15] registered Nov 5 15:49:39.111019 kernel: acpiphp: Slot [16] registered Nov 5 15:49:39.111033 kernel: acpiphp: Slot [17] registered Nov 5 15:49:39.111048 kernel: acpiphp: Slot [18] registered Nov 5 15:49:39.111062 kernel: acpiphp: Slot [19] registered Nov 5 15:49:39.111077 kernel: acpiphp: Slot [20] registered Nov 5 15:49:39.111094 kernel: acpiphp: Slot [21] registered Nov 5 15:49:39.111121 kernel: acpiphp: Slot [22] registered Nov 5 15:49:39.111138 kernel: acpiphp: Slot [23] registered Nov 5 15:49:39.111154 kernel: acpiphp: Slot [24] registered Nov 5 15:49:39.111168 kernel: acpiphp: Slot [25] registered Nov 5 15:49:39.111183 kernel: acpiphp: Slot [26] registered Nov 5 15:49:39.111197 kernel: acpiphp: Slot [27] registered Nov 5 15:49:39.111213 kernel: acpiphp: Slot [28] registered Nov 5 15:49:39.111236 kernel: acpiphp: Slot [29] registered Nov 5 15:49:39.111251 kernel: acpiphp: Slot [30] registered Nov 5 15:49:39.111266 kernel: acpiphp: Slot [31] registered Nov 5 15:49:39.111281 kernel: PCI host bridge to bus 0000:00 Nov 5 15:49:39.111534 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Nov 5 15:49:39.111673 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Nov 5 15:49:39.111794 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Nov 5 15:49:39.111998 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] 
Nov 5 15:49:39.112130 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window] Nov 5 15:49:39.112254 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Nov 5 15:49:39.112438 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint Nov 5 15:49:39.112581 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint Nov 5 15:49:39.112754 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint Nov 5 15:49:39.112885 kernel: pci 0000:00:01.1: BAR 4 [io 0xc1e0-0xc1ef] Nov 5 15:49:39.113042 kernel: pci 0000:00:01.1: BAR 0 [io 0x01f0-0x01f7]: legacy IDE quirk Nov 5 15:49:39.113197 kernel: pci 0000:00:01.1: BAR 1 [io 0x03f6]: legacy IDE quirk Nov 5 15:49:39.113333 kernel: pci 0000:00:01.1: BAR 2 [io 0x0170-0x0177]: legacy IDE quirk Nov 5 15:49:39.113459 kernel: pci 0000:00:01.1: BAR 3 [io 0x0376]: legacy IDE quirk Nov 5 15:49:39.113652 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint Nov 5 15:49:39.113788 kernel: pci 0000:00:01.2: BAR 4 [io 0xc180-0xc19f] Nov 5 15:49:39.113925 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint Nov 5 15:49:39.114086 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI Nov 5 15:49:39.114215 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB Nov 5 15:49:39.114370 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint Nov 5 15:49:39.114504 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref] Nov 5 15:49:39.114634 kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref] Nov 5 15:49:39.114764 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfebf0000-0xfebf0fff] Nov 5 15:49:39.114893 kernel: pci 0000:00:02.0: ROM [mem 0xfebe0000-0xfebeffff pref] Nov 5 15:49:39.115036 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Nov 5 15:49:39.115192 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Nov 5 15:49:39.115349 kernel: pci 0000:00:03.0: BAR 0 [io 0xc1a0-0xc1bf] Nov 5 15:49:39.115484 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebf1000-0xfebf1fff] Nov 5 15:49:39.115617 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref] Nov 5 15:49:39.115763 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Nov 5 15:49:39.115914 kernel: pci 0000:00:04.0: BAR 0 [io 0xc1c0-0xc1df] Nov 5 15:49:39.116072 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebf2000-0xfebf2fff] Nov 5 15:49:39.116211 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref] Nov 5 15:49:39.116376 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000 conventional PCI endpoint Nov 5 15:49:39.116566 kernel: pci 0000:00:05.0: BAR 0 [io 0xc100-0xc13f] Nov 5 15:49:39.116704 kernel: pci 0000:00:05.0: BAR 1 [mem 0xfebf3000-0xfebf3fff] Nov 5 15:49:39.116912 kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref] Nov 5 15:49:39.117124 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint Nov 5 15:49:39.117265 kernel: pci 0000:00:06.0: BAR 0 [io 0xc000-0xc07f] Nov 5 15:49:39.117444 kernel: pci 0000:00:06.0: BAR 1 [mem 0xfebf4000-0xfebf4fff] Nov 5 15:49:39.117617 kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref] Nov 5 15:49:39.117831 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint Nov 5 15:49:39.118024 kernel: pci 
0000:00:07.0: BAR 0 [io 0xc080-0xc0ff] Nov 5 15:49:39.118193 kernel: pci 0000:00:07.0: BAR 1 [mem 0xfebf5000-0xfebf5fff] Nov 5 15:49:39.118324 kernel: pci 0000:00:07.0: BAR 4 [mem 0xfe814000-0xfe817fff 64bit pref] Nov 5 15:49:39.118485 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint Nov 5 15:49:39.118662 kernel: pci 0000:00:08.0: BAR 0 [io 0xc140-0xc17f] Nov 5 15:49:39.118812 kernel: pci 0000:00:08.0: BAR 4 [mem 0xfe818000-0xfe81bfff 64bit pref] Nov 5 15:49:39.118825 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Nov 5 15:49:39.118836 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Nov 5 15:49:39.118846 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Nov 5 15:49:39.118856 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Nov 5 15:49:39.118866 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Nov 5 15:49:39.118906 kernel: iommu: Default domain type: Translated Nov 5 15:49:39.118918 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Nov 5 15:49:39.118944 kernel: PCI: Using ACPI for IRQ routing Nov 5 15:49:39.118954 kernel: PCI: pci_cache_line_size set to 64 bytes Nov 5 15:49:39.118964 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Nov 5 15:49:39.118974 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff] Nov 5 15:49:39.119129 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Nov 5 15:49:39.119277 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Nov 5 15:49:39.119409 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Nov 5 15:49:39.119422 kernel: vgaarb: loaded Nov 5 15:49:39.119432 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Nov 5 15:49:39.119442 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Nov 5 15:49:39.119452 kernel: clocksource: Switched to clocksource kvm-clock Nov 5 15:49:39.119462 kernel: VFS: Disk quotas dquot_6.6.0 Nov 5 15:49:39.119480 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Nov 5 15:49:39.119490 kernel: pnp: PnP ACPI init Nov 5 15:49:39.119500 kernel: pnp: PnP ACPI: found 4 devices Nov 5 15:49:39.119510 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Nov 5 15:49:39.119520 kernel: NET: Registered PF_INET protocol family Nov 5 15:49:39.119530 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Nov 5 15:49:39.119541 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Nov 5 15:49:39.119557 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Nov 5 15:49:39.119567 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Nov 5 15:49:39.119577 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Nov 5 15:49:39.119587 kernel: TCP: Hash tables configured (established 16384 bind 16384) Nov 5 15:49:39.119597 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Nov 5 15:49:39.119607 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Nov 5 15:49:39.119617 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Nov 5 15:49:39.119633 kernel: NET: Registered PF_XDP protocol family Nov 5 15:49:39.121650 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Nov 5 15:49:39.121799 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Nov 5 15:49:39.121919 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] 
Nov 5 15:49:39.122087 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Nov 5 15:49:39.122207 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window] Nov 5 15:49:39.122357 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Nov 5 15:49:39.122533 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Nov 5 15:49:39.122548 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Nov 5 15:49:39.122700 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x720 took 26065 usecs Nov 5 15:49:39.122715 kernel: PCI: CLS 0 bytes, default 64 Nov 5 15:49:39.122725 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Nov 5 15:49:39.122736 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f3946f721, max_idle_ns: 440795294991 ns Nov 5 15:49:39.122746 kernel: Initialise system trusted keyrings Nov 5 15:49:39.122767 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Nov 5 15:49:39.122777 kernel: Key type asymmetric registered Nov 5 15:49:39.122793 kernel: Asymmetric key parser 'x509' registered Nov 5 15:49:39.122808 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Nov 5 15:49:39.122821 kernel: io scheduler mq-deadline registered Nov 5 15:49:39.122834 kernel: io scheduler kyber registered Nov 5 15:49:39.122847 kernel: io scheduler bfq registered Nov 5 15:49:39.122872 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Nov 5 15:49:39.122886 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10 Nov 5 15:49:39.122900 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Nov 5 15:49:39.122910 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Nov 5 15:49:39.122920 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Nov 5 15:49:39.124789 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Nov 5 15:49:39.124817 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Nov 5 15:49:39.124864 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Nov 5 15:49:39.124882 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Nov 5 15:49:39.125181 kernel: rtc_cmos 00:03: RTC can wake from S4 Nov 5 15:49:39.125208 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Nov 5 15:49:39.125406 kernel: rtc_cmos 00:03: registered as rtc0 Nov 5 15:49:39.125740 kernel: rtc_cmos 00:03: setting system clock to 2025-11-05T15:49:37 UTC (1762357777) Nov 5 15:49:39.126022 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Nov 5 15:49:39.126046 kernel: intel_pstate: CPU model not supported Nov 5 15:49:39.126059 kernel: NET: Registered PF_INET6 protocol family Nov 5 15:49:39.126077 kernel: Segment Routing with IPv6 Nov 5 15:49:39.126095 kernel: In-situ OAM (IOAM) with IPv6 Nov 5 15:49:39.126112 kernel: NET: Registered PF_PACKET protocol family Nov 5 15:49:39.126130 kernel: Key type dns_resolver registered Nov 5 15:49:39.126166 kernel: IPI shorthand broadcast: enabled Nov 5 15:49:39.126184 kernel: sched_clock: Marking stable (1297004357, 160665280)->(1482792956, -25123319) Nov 5 15:49:39.126201 kernel: registered taskstats version 1 Nov 5 15:49:39.126219 kernel: Loading compiled-in X.509 certificates Nov 5 15:49:39.126238 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.54-flatcar: 9f02cc8d588ce542f03b0da66dde47a90a145382' Nov 5 15:49:39.126255 kernel: Demotion targets for Node 0: null Nov 5 15:49:39.126273 kernel: Key type .fscrypt registered Nov 5 15:49:39.126299 kernel: Key type fscrypt-provisioning 
registered Nov 5 15:49:39.126368 kernel: ima: No TPM chip found, activating TPM-bypass! Nov 5 15:49:39.126394 kernel: ima: Allocated hash algorithm: sha1 Nov 5 15:49:39.126412 kernel: ima: No architecture policies found Nov 5 15:49:39.126431 kernel: clk: Disabling unused clocks Nov 5 15:49:39.126449 kernel: Freeing unused kernel image (initmem) memory: 15964K Nov 5 15:49:39.126468 kernel: Write protecting the kernel read-only data: 40960k Nov 5 15:49:39.126494 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K Nov 5 15:49:39.126511 kernel: Run /init as init process Nov 5 15:49:39.126530 kernel: with arguments: Nov 5 15:49:39.126549 kernel: /init Nov 5 15:49:39.126566 kernel: with environment: Nov 5 15:49:39.126584 kernel: HOME=/ Nov 5 15:49:39.126601 kernel: TERM=linux Nov 5 15:49:39.126620 kernel: SCSI subsystem initialized Nov 5 15:49:39.126645 kernel: libata version 3.00 loaded. Nov 5 15:49:39.126891 kernel: ata_piix 0000:00:01.1: version 2.13 Nov 5 15:49:39.129410 kernel: scsi host0: ata_piix Nov 5 15:49:39.129726 kernel: scsi host1: ata_piix Nov 5 15:49:39.129760 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14 lpm-pol 0 Nov 5 15:49:39.129817 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15 lpm-pol 0 Nov 5 15:49:39.129832 kernel: ACPI: bus type USB registered Nov 5 15:49:39.129848 kernel: usbcore: registered new interface driver usbfs Nov 5 15:49:39.129864 kernel: usbcore: registered new interface driver hub Nov 5 15:49:39.129879 kernel: usbcore: registered new device driver usb Nov 5 15:49:39.130104 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller Nov 5 15:49:39.130297 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1 Nov 5 15:49:39.130531 kernel: uhci_hcd 0000:00:01.2: detected 2 ports Nov 5 15:49:39.130728 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180 Nov 5 15:49:39.132571 kernel: hub 1-0:1.0: USB hub found Nov 5 15:49:39.132864 kernel: hub 1-0:1.0: 2 ports detected Nov 5 15:49:39.133181 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues Nov 5 15:49:39.133444 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Nov 5 15:49:39.133475 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Nov 5 15:49:39.133492 kernel: GPT:16515071 != 125829119 Nov 5 15:49:39.133511 kernel: GPT:Alternate GPT header not at the end of the disk. Nov 5 15:49:39.133559 kernel: GPT:16515071 != 125829119 Nov 5 15:49:39.133574 kernel: GPT: Use GNU Parted to correct GPT errors. Nov 5 15:49:39.133590 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 5 15:49:39.133824 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues Nov 5 15:49:39.136591 kernel: virtio_blk virtio5: [vdb] 980 512-byte logical blocks (502 kB/490 KiB) Nov 5 15:49:39.136889 kernel: virtio_scsi virtio3: 2/0/0 default/read/poll queues Nov 5 15:49:39.137299 kernel: scsi host2: Virtio SCSI HBA Nov 5 15:49:39.137334 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Nov 5 15:49:39.137355 kernel: device-mapper: uevent: version 1.0.3 Nov 5 15:49:39.137374 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Nov 5 15:49:39.137391 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Nov 5 15:49:39.137408 kernel: raid6: avx2x4 gen() 17397 MB/s Nov 5 15:49:39.137426 kernel: raid6: avx2x2 gen() 17622 MB/s Nov 5 15:49:39.137465 kernel: raid6: avx2x1 gen() 12733 MB/s Nov 5 15:49:39.137485 kernel: raid6: using algorithm avx2x2 gen() 17622 MB/s Nov 5 15:49:39.137505 kernel: raid6: .... xor() 19786 MB/s, rmw enabled Nov 5 15:49:39.137523 kernel: raid6: using avx2x2 recovery algorithm Nov 5 15:49:39.137555 kernel: xor: automatically using best checksumming function avx Nov 5 15:49:39.137575 kernel: Btrfs loaded, zoned=no, fsverity=no Nov 5 15:49:39.137594 kernel: BTRFS: device fsid a4c7be9c-39f6-471d-8a4c-d50144c6bf01 devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (162) Nov 5 15:49:39.137630 kernel: BTRFS info (device dm-0): first mount of filesystem a4c7be9c-39f6-471d-8a4c-d50144c6bf01 Nov 5 15:49:39.137648 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Nov 5 15:49:39.137666 kernel: BTRFS info (device dm-0): disabling log replay at mount time Nov 5 15:49:39.137684 kernel: BTRFS info (device dm-0): enabling free space tree Nov 5 15:49:39.137703 kernel: loop: module loaded Nov 5 15:49:39.137721 kernel: loop0: detected capacity change from 0 to 100120 Nov 5 15:49:39.137740 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 5 15:49:39.137771 systemd[1]: Successfully made /usr/ read-only. Nov 5 15:49:39.137795 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Nov 5 15:49:39.137816 systemd[1]: Detected virtualization kvm. Nov 5 15:49:39.137833 systemd[1]: Detected architecture x86-64. Nov 5 15:49:39.137852 systemd[1]: Running in initrd. Nov 5 15:49:39.137869 systemd[1]: No hostname configured, using default hostname. Nov 5 15:49:39.137899 systemd[1]: Hostname set to . Nov 5 15:49:39.137916 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Nov 5 15:49:39.139051 systemd[1]: Queued start job for default target initrd.target. Nov 5 15:49:39.139078 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Nov 5 15:49:39.139099 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 5 15:49:39.139119 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 5 15:49:39.139155 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Nov 5 15:49:39.139174 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 5 15:49:39.139191 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Nov 5 15:49:39.139208 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Nov 5 15:49:39.139226 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Nov 5 15:49:39.139243 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 5 15:49:39.139269 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Nov 5 15:49:39.139285 systemd[1]: Reached target paths.target - Path Units. Nov 5 15:49:39.139302 systemd[1]: Reached target slices.target - Slice Units. Nov 5 15:49:39.139319 systemd[1]: Reached target swap.target - Swaps. Nov 5 15:49:39.139336 systemd[1]: Reached target timers.target - Timer Units. Nov 5 15:49:39.139355 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Nov 5 15:49:39.139372 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 5 15:49:39.139393 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 5 15:49:39.139404 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Nov 5 15:49:39.139415 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 5 15:49:39.139433 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 5 15:49:39.139450 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 5 15:49:39.139465 systemd[1]: Reached target sockets.target - Socket Units. Nov 5 15:49:39.139481 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Nov 5 15:49:39.139507 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Nov 5 15:49:39.139524 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 5 15:49:39.139540 systemd[1]: Finished network-cleanup.service - Network Cleanup. Nov 5 15:49:39.139559 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Nov 5 15:49:39.139578 systemd[1]: Starting systemd-fsck-usr.service... Nov 5 15:49:39.139596 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 5 15:49:39.139616 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 5 15:49:39.139627 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 5 15:49:39.139639 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Nov 5 15:49:39.139650 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 5 15:49:39.139668 systemd[1]: Finished systemd-fsck-usr.service. Nov 5 15:49:39.139679 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 5 15:49:39.139771 systemd-journald[297]: Collecting audit messages is disabled. Nov 5 15:49:39.139806 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Nov 5 15:49:39.139819 kernel: Bridge firewalling registered Nov 5 15:49:39.139831 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 5 15:49:39.139843 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 5 15:49:39.139854 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 5 15:49:39.139867 systemd-journald[297]: Journal started Nov 5 15:49:39.139898 systemd-journald[297]: Runtime Journal (/run/log/journal/ec16a3d4e5444c36834ec2fcfe43a5a6) is 4.9M, max 39.2M, 34.3M free. 
Nov 5 15:49:39.118994 systemd-modules-load[299]: Inserted module 'br_netfilter' Nov 5 15:49:39.192959 systemd[1]: Started systemd-journald.service - Journal Service. Nov 5 15:49:39.195229 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 5 15:49:39.202156 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 5 15:49:39.204232 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 5 15:49:39.211277 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 5 15:49:39.212423 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 5 15:49:39.217223 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 5 15:49:39.238200 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 5 15:49:39.248661 systemd-tmpfiles[320]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Nov 5 15:49:39.257649 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 5 15:49:39.266895 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 5 15:49:39.270160 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Nov 5 15:49:39.297407 systemd-resolved[322]: Positive Trust Anchors: Nov 5 15:49:39.297433 systemd-resolved[322]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 5 15:49:39.297439 systemd-resolved[322]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Nov 5 15:49:39.297493 systemd-resolved[322]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 5 15:49:39.317874 dracut-cmdline[339]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=c2a05564bcb92d35bbb2f0ae32fe5ddfa8424368122998dedda8bd375a237cb4 Nov 5 15:49:39.330646 systemd-resolved[322]: Defaulting to hostname 'linux'. Nov 5 15:49:39.332712 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 5 15:49:39.335081 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 5 15:49:39.438968 kernel: Loading iSCSI transport class v2.0-870. Nov 5 15:49:39.455990 kernel: iscsi: registered transport (tcp) Nov 5 15:49:39.482286 kernel: iscsi: registered transport (qla4xxx) Nov 5 15:49:39.482417 kernel: QLogic iSCSI HBA Driver Nov 5 15:49:39.516906 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... 
Nov 5 15:49:39.550912 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 5 15:49:39.555107 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 5 15:49:39.631273 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Nov 5 15:49:39.635237 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Nov 5 15:49:39.638343 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Nov 5 15:49:39.697524 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Nov 5 15:49:39.701225 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 5 15:49:39.744989 systemd-udevd[579]: Using default interface naming scheme 'v257'. Nov 5 15:49:39.762939 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 5 15:49:39.769298 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Nov 5 15:49:39.793121 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 5 15:49:39.797148 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 5 15:49:39.803947 dracut-pre-trigger[660]: rd.md=0: removing MD RAID activation Nov 5 15:49:39.849865 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Nov 5 15:49:39.854198 systemd-networkd[682]: lo: Link UP Nov 5 15:49:39.854206 systemd-networkd[682]: lo: Gained carrier Nov 5 15:49:39.856619 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 5 15:49:39.858363 systemd[1]: Reached target network.target - Network. Nov 5 15:49:39.860680 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 5 15:49:39.995048 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 5 15:49:39.999777 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Nov 5 15:49:40.107321 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Nov 5 15:49:40.138151 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Nov 5 15:49:40.165963 kernel: cryptd: max_cpu_qlen set to 1000 Nov 5 15:49:40.178025 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Nov 5 15:49:40.195007 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Nov 5 15:49:40.192786 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Nov 5 15:49:40.196961 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Nov 5 15:49:40.222957 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 5 15:49:40.223133 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 5 15:49:40.226328 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Nov 5 15:49:40.229030 disk-uuid[747]: Primary Header is updated. Nov 5 15:49:40.229030 disk-uuid[747]: Secondary Entries is updated. Nov 5 15:49:40.229030 disk-uuid[747]: Secondary Header is updated. Nov 5 15:49:40.231044 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Nov 5 15:49:40.264959 kernel: AES CTR mode by8 optimization enabled Nov 5 15:49:40.299543 systemd-networkd[682]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/yy-digitalocean.network Nov 5 15:49:40.299553 systemd-networkd[682]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network. Nov 5 15:49:40.301547 systemd-networkd[682]: eth0: Link UP Nov 5 15:49:40.302963 systemd-networkd[682]: eth0: Gained carrier Nov 5 15:49:40.302976 systemd-networkd[682]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/yy-digitalocean.network Nov 5 15:49:40.313651 systemd-networkd[682]: eth1: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Nov 5 15:49:40.313661 systemd-networkd[682]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 5 15:49:40.314533 systemd-networkd[682]: eth1: Link UP Nov 5 15:49:40.314724 systemd-networkd[682]: eth1: Gained carrier Nov 5 15:49:40.314758 systemd-networkd[682]: eth1: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Nov 5 15:49:40.328085 systemd-networkd[682]: eth0: DHCPv4 address 137.184.121.184/20, gateway 137.184.112.1 acquired from 169.254.169.253 Nov 5 15:49:40.338044 systemd-networkd[682]: eth1: DHCPv4 address 10.124.0.33/20 acquired from 169.254.169.253 Nov 5 15:49:40.404488 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 5 15:49:40.441856 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Nov 5 15:49:40.443066 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Nov 5 15:49:40.443844 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 5 15:49:40.445044 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 5 15:49:40.447486 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Nov 5 15:49:40.480480 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Nov 5 15:49:41.304476 disk-uuid[749]: Warning: The kernel is still using the old partition table. Nov 5 15:49:41.304476 disk-uuid[749]: The new table will be used at the next reboot or after you Nov 5 15:49:41.304476 disk-uuid[749]: run partprobe(8) or kpartx(8) Nov 5 15:49:41.304476 disk-uuid[749]: The operation has completed successfully. Nov 5 15:49:41.312306 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 5 15:49:41.312506 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Nov 5 15:49:41.315669 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
Nov 5 15:49:41.355993 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (835) Nov 5 15:49:41.359966 kernel: BTRFS info (device vda6): first mount of filesystem fa887730-d07b-4714-9f34-65e9489ec2e4 Nov 5 15:49:41.360065 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 5 15:49:41.365183 kernel: BTRFS info (device vda6): turning on async discard Nov 5 15:49:41.365260 kernel: BTRFS info (device vda6): enabling free space tree Nov 5 15:49:41.370073 systemd-networkd[682]: eth1: Gained IPv6LL Nov 5 15:49:41.375961 kernel: BTRFS info (device vda6): last unmount of filesystem fa887730-d07b-4714-9f34-65e9489ec2e4 Nov 5 15:49:41.376579 systemd[1]: Finished ignition-setup.service - Ignition (setup). Nov 5 15:49:41.379488 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Nov 5 15:49:41.620164 ignition[854]: Ignition 2.22.0 Nov 5 15:49:41.620182 ignition[854]: Stage: fetch-offline Nov 5 15:49:41.620226 ignition[854]: no configs at "/usr/lib/ignition/base.d" Nov 5 15:49:41.620240 ignition[854]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Nov 5 15:49:41.620371 ignition[854]: parsed url from cmdline: "" Nov 5 15:49:41.620377 ignition[854]: no config URL provided Nov 5 15:49:41.620387 ignition[854]: reading system config file "/usr/lib/ignition/user.ign" Nov 5 15:49:41.623189 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Nov 5 15:49:41.620403 ignition[854]: no config at "/usr/lib/ignition/user.ign" Nov 5 15:49:41.620411 ignition[854]: failed to fetch config: resource requires networking Nov 5 15:49:41.620692 ignition[854]: Ignition finished successfully Nov 5 15:49:41.627196 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Nov 5 15:49:41.675678 ignition[862]: Ignition 2.22.0 Nov 5 15:49:41.675703 ignition[862]: Stage: fetch Nov 5 15:49:41.676000 ignition[862]: no configs at "/usr/lib/ignition/base.d" Nov 5 15:49:41.676024 ignition[862]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Nov 5 15:49:41.676140 ignition[862]: parsed url from cmdline: "" Nov 5 15:49:41.676144 ignition[862]: no config URL provided Nov 5 15:49:41.676151 ignition[862]: reading system config file "/usr/lib/ignition/user.ign" Nov 5 15:49:41.676160 ignition[862]: no config at "/usr/lib/ignition/user.ign" Nov 5 15:49:41.676193 ignition[862]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1 Nov 5 15:49:41.690392 systemd-networkd[682]: eth0: Gained IPv6LL Nov 5 15:49:41.691467 ignition[862]: GET result: OK Nov 5 15:49:41.692293 ignition[862]: parsing config with SHA512: 959f29f9750e709f70bd686ec96cb156e7af6d39a55311c2adda94c491e22d1fbdc97cc57558dafe40c237375d90fb2b64be4745a3e244e771891d96f2e385a7 Nov 5 15:49:41.702081 unknown[862]: fetched base config from "system" Nov 5 15:49:41.702684 unknown[862]: fetched base config from "system" Nov 5 15:49:41.703156 ignition[862]: fetch: fetch complete Nov 5 15:49:41.702693 unknown[862]: fetched user config from "digitalocean" Nov 5 15:49:41.703162 ignition[862]: fetch: fetch passed Nov 5 15:49:41.703230 ignition[862]: Ignition finished successfully Nov 5 15:49:41.708879 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Nov 5 15:49:41.711399 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Nov 5 15:49:41.762956 ignition[868]: Ignition 2.22.0 Nov 5 15:49:41.763768 ignition[868]: Stage: kargs Nov 5 15:49:41.764413 ignition[868]: no configs at "/usr/lib/ignition/base.d" Nov 5 15:49:41.764429 ignition[868]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Nov 5 15:49:41.768511 ignition[868]: kargs: kargs passed Nov 5 15:49:41.769135 ignition[868]: Ignition finished successfully Nov 5 15:49:41.770687 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Nov 5 15:49:41.773092 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Nov 5 15:49:41.824148 ignition[874]: Ignition 2.22.0 Nov 5 15:49:41.824168 ignition[874]: Stage: disks Nov 5 15:49:41.824395 ignition[874]: no configs at "/usr/lib/ignition/base.d" Nov 5 15:49:41.824410 ignition[874]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Nov 5 15:49:41.825738 ignition[874]: disks: disks passed Nov 5 15:49:41.825823 ignition[874]: Ignition finished successfully Nov 5 15:49:41.830420 systemd[1]: Finished ignition-disks.service - Ignition (disks). Nov 5 15:49:41.833986 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Nov 5 15:49:41.834701 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 5 15:49:41.835960 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 5 15:49:41.837063 systemd[1]: Reached target sysinit.target - System Initialization. Nov 5 15:49:41.838273 systemd[1]: Reached target basic.target - Basic System. Nov 5 15:49:41.840762 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Nov 5 15:49:41.889361 systemd-fsck[883]: ROOT: clean, 15/456736 files, 38230/456704 blocks Nov 5 15:49:41.894179 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Nov 5 15:49:41.895790 systemd[1]: Mounting sysroot.mount - /sysroot... Nov 5 15:49:42.029947 kernel: EXT4-fs (vda9): mounted filesystem f3db699e-c9e0-4f6b-8c2b-aa40a78cd116 r/w with ordered data mode. Quota mode: none. Nov 5 15:49:42.030752 systemd[1]: Mounted sysroot.mount - /sysroot. Nov 5 15:49:42.032317 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Nov 5 15:49:42.034862 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 5 15:49:42.037995 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Nov 5 15:49:42.041160 systemd[1]: Starting flatcar-afterburn-network.service - Flatcar Afterburn network service... Nov 5 15:49:42.048127 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Nov 5 15:49:42.049913 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 5 15:49:42.050054 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Nov 5 15:49:42.064618 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Nov 5 15:49:42.065229 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (891) Nov 5 15:49:42.070594 kernel: BTRFS info (device vda6): first mount of filesystem fa887730-d07b-4714-9f34-65e9489ec2e4 Nov 5 15:49:42.070668 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 5 15:49:42.072543 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Nov 5 15:49:42.079741 kernel: BTRFS info (device vda6): turning on async discard Nov 5 15:49:42.079770 kernel: BTRFS info (device vda6): enabling free space tree Nov 5 15:49:42.087953 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 5 15:49:42.149992 coreos-metadata[894]: Nov 05 15:49:42.147 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Nov 5 15:49:42.161175 coreos-metadata[894]: Nov 05 15:49:42.160 INFO Fetch successful Nov 5 15:49:42.162148 coreos-metadata[893]: Nov 05 15:49:42.162 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Nov 5 15:49:42.164109 initrd-setup-root[922]: cut: /sysroot/etc/passwd: No such file or directory Nov 5 15:49:42.168960 coreos-metadata[894]: Nov 05 15:49:42.168 INFO wrote hostname ci-4487.0.1-2-254db4f49e to /sysroot/etc/hostname Nov 5 15:49:42.170217 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Nov 5 15:49:42.174464 initrd-setup-root[929]: cut: /sysroot/etc/group: No such file or directory Nov 5 15:49:42.175438 coreos-metadata[893]: Nov 05 15:49:42.174 INFO Fetch successful Nov 5 15:49:42.183001 initrd-setup-root[937]: cut: /sysroot/etc/shadow: No such file or directory Nov 5 15:49:42.183741 systemd[1]: flatcar-afterburn-network.service: Deactivated successfully. Nov 5 15:49:42.183895 systemd[1]: Finished flatcar-afterburn-network.service - Flatcar Afterburn network service. Nov 5 15:49:42.189886 initrd-setup-root[945]: cut: /sysroot/etc/gshadow: No such file or directory Nov 5 15:49:42.326024 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Nov 5 15:49:42.328838 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Nov 5 15:49:42.332195 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Nov 5 15:49:42.352481 systemd[1]: sysroot-oem.mount: Deactivated successfully. Nov 5 15:49:42.355218 kernel: BTRFS info (device vda6): last unmount of filesystem fa887730-d07b-4714-9f34-65e9489ec2e4 Nov 5 15:49:42.379770 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Nov 5 15:49:42.402570 ignition[1014]: INFO : Ignition 2.22.0 Nov 5 15:49:42.402570 ignition[1014]: INFO : Stage: mount Nov 5 15:49:42.403863 ignition[1014]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 5 15:49:42.403863 ignition[1014]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Nov 5 15:49:42.406368 ignition[1014]: INFO : mount: mount passed Nov 5 15:49:42.406368 ignition[1014]: INFO : Ignition finished successfully Nov 5 15:49:42.407395 systemd[1]: Finished ignition-mount.service - Ignition (mount). Nov 5 15:49:42.410538 systemd[1]: Starting ignition-files.service - Ignition (files)... Nov 5 15:49:42.433650 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 5 15:49:42.467131 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1024) Nov 5 15:49:42.467206 kernel: BTRFS info (device vda6): first mount of filesystem fa887730-d07b-4714-9f34-65e9489ec2e4 Nov 5 15:49:42.469902 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 5 15:49:42.473981 kernel: BTRFS info (device vda6): turning on async discard Nov 5 15:49:42.474084 kernel: BTRFS info (device vda6): enabling free space tree Nov 5 15:49:42.477325 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Nov 5 15:49:42.517425 ignition[1040]: INFO : Ignition 2.22.0 Nov 5 15:49:42.517425 ignition[1040]: INFO : Stage: files Nov 5 15:49:42.519008 ignition[1040]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 5 15:49:42.519008 ignition[1040]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Nov 5 15:49:42.520201 ignition[1040]: DEBUG : files: compiled without relabeling support, skipping Nov 5 15:49:42.520201 ignition[1040]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 5 15:49:42.520201 ignition[1040]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 5 15:49:42.524626 ignition[1040]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 5 15:49:42.525659 ignition[1040]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 5 15:49:42.526530 ignition[1040]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 5 15:49:42.525788 unknown[1040]: wrote ssh authorized keys file for user: core Nov 5 15:49:42.527966 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Nov 5 15:49:42.527966 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Nov 5 15:49:42.557222 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Nov 5 15:49:42.613174 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Nov 5 15:49:42.613174 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Nov 5 15:49:42.615706 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Nov 5 15:49:42.893853 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Nov 5 15:49:43.622802 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Nov 5 15:49:43.622802 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Nov 5 15:49:43.624885 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Nov 5 15:49:43.624885 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 5 15:49:43.624885 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 5 15:49:43.624885 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 5 15:49:43.624885 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 5 15:49:43.624885 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 5 15:49:43.624885 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 5 15:49:43.630136 
ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 5 15:49:43.630136 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 5 15:49:43.630136 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 5 15:49:43.630136 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 5 15:49:43.630136 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 5 15:49:43.630136 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Nov 5 15:49:44.208527 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Nov 5 15:49:45.906593 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 5 15:49:45.908036 ignition[1040]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Nov 5 15:49:45.909172 ignition[1040]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 5 15:49:45.910781 ignition[1040]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 5 15:49:45.912945 ignition[1040]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Nov 5 15:49:45.912945 ignition[1040]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Nov 5 15:49:45.912945 ignition[1040]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Nov 5 15:49:45.912945 ignition[1040]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 5 15:49:45.912945 ignition[1040]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 5 15:49:45.912945 ignition[1040]: INFO : files: files passed Nov 5 15:49:45.912945 ignition[1040]: INFO : Ignition finished successfully Nov 5 15:49:45.914340 systemd[1]: Finished ignition-files.service - Ignition (files). Nov 5 15:49:45.920119 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 5 15:49:45.922187 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Nov 5 15:49:45.936524 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 5 15:49:45.936652 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
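The Ignition files stage above downloads several payloads (the helm tarball, the cilium CLI, and a Kubernetes sysext image) with per-URL "attempt #N" retries, writes them under /sysroot, and links /etc/extensions/kubernetes.raw to the downloaded image. A simplified Python sketch of that download-and-place pattern; it is not Ignition's actual implementation, and checksum verification plus the unit/preset handling are omitted.

    # Simplified illustration of the files-stage pattern logged above:
    # fetch each URL with retries, write it under the target root, and
    # create the extensions symlink.
    import os
    import time
    import urllib.request

    SYSROOT = "/sysroot"
    FILES = {  # destination -> source URL, taken from the log lines above
        "opt/helm-v3.17.0-linux-amd64.tar.gz":
            "https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz",
        "opt/bin/cilium.tar.gz":
            "https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz",
        "opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw":
            "https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw",
    }

    def fetch(url: str, dest: str, attempts: int = 3) -> None:
        os.makedirs(os.path.dirname(dest), exist_ok=True)
        for attempt in range(1, attempts + 1):
            try:
                urllib.request.urlretrieve(url, dest)   # "GET ...: attempt #N"
                return
            except OSError:
                if attempt == attempts:
                    raise
                time.sleep(2 ** attempt)                # back off and retry

    for rel, url in FILES.items():
        fetch(url, os.path.join(SYSROOT, rel))

    # /etc/extensions/kubernetes.raw -> /opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw
    link = os.path.join(SYSROOT, "etc/extensions/kubernetes.raw")
    os.makedirs(os.path.dirname(link), exist_ok=True)
    if not os.path.islink(link):
        os.symlink("/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw", link)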
Nov 5 15:49:45.947953 initrd-setup-root-after-ignition[1072]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 5 15:49:45.947953 initrd-setup-root-after-ignition[1072]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 5 15:49:45.950172 initrd-setup-root-after-ignition[1076]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 5 15:49:45.952753 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 5 15:49:45.953779 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Nov 5 15:49:45.955706 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Nov 5 15:49:46.012043 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 5 15:49:46.012187 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Nov 5 15:49:46.013387 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Nov 5 15:49:46.014245 systemd[1]: Reached target initrd.target - Initrd Default Target. Nov 5 15:49:46.015656 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Nov 5 15:49:46.016883 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Nov 5 15:49:46.048446 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 5 15:49:46.051216 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Nov 5 15:49:46.075963 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Nov 5 15:49:46.076246 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Nov 5 15:49:46.078212 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 5 15:49:46.078807 systemd[1]: Stopped target timers.target - Timer Units. Nov 5 15:49:46.080847 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 5 15:49:46.081104 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 5 15:49:46.082124 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 5 15:49:46.083263 systemd[1]: Stopped target basic.target - Basic System. Nov 5 15:49:46.084312 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Nov 5 15:49:46.085356 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Nov 5 15:49:46.086427 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 5 15:49:46.087432 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Nov 5 15:49:46.088491 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Nov 5 15:49:46.089498 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Nov 5 15:49:46.090476 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 5 15:49:46.091571 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 5 15:49:46.092530 systemd[1]: Stopped target swap.target - Swaps. Nov 5 15:49:46.093445 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 5 15:49:46.093730 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 5 15:49:46.094637 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 5 15:49:46.095298 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Nov 5 15:49:46.096245 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 5 15:49:46.096468 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 5 15:49:46.097326 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 5 15:49:46.097610 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 5 15:49:46.099028 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 5 15:49:46.099236 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 5 15:49:46.100588 systemd[1]: ignition-files.service: Deactivated successfully. Nov 5 15:49:46.100769 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 5 15:49:46.101831 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Nov 5 15:49:46.102050 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Nov 5 15:49:46.105088 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Nov 5 15:49:46.105725 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 5 15:49:46.105923 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 5 15:49:46.110085 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 5 15:49:46.111089 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 5 15:49:46.111232 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 5 15:49:46.113478 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 5 15:49:46.113599 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 5 15:49:46.114388 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 5 15:49:46.114493 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Nov 5 15:49:46.124112 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 5 15:49:46.124273 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Nov 5 15:49:46.152349 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 5 15:49:46.159990 ignition[1096]: INFO : Ignition 2.22.0 Nov 5 15:49:46.159990 ignition[1096]: INFO : Stage: umount Nov 5 15:49:46.159990 ignition[1096]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 5 15:49:46.159990 ignition[1096]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Nov 5 15:49:46.164018 ignition[1096]: INFO : umount: umount passed Nov 5 15:49:46.164018 ignition[1096]: INFO : Ignition finished successfully Nov 5 15:49:46.166831 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 5 15:49:46.166966 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 5 15:49:46.170358 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 5 15:49:46.170425 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 5 15:49:46.171008 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 5 15:49:46.171082 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 5 15:49:46.173142 systemd[1]: ignition-fetch.service: Deactivated successfully. Nov 5 15:49:46.173201 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Nov 5 15:49:46.173781 systemd[1]: Stopped target network.target - Network. Nov 5 15:49:46.175426 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. 
Nov 5 15:49:46.175490 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 5 15:49:46.175992 systemd[1]: Stopped target paths.target - Path Units. Nov 5 15:49:46.177727 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 5 15:49:46.180994 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 5 15:49:46.181642 systemd[1]: Stopped target slices.target - Slice Units. Nov 5 15:49:46.182055 systemd[1]: Stopped target sockets.target - Socket Units. Nov 5 15:49:46.182466 systemd[1]: iscsid.socket: Deactivated successfully. Nov 5 15:49:46.182515 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 5 15:49:46.185021 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 5 15:49:46.185078 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 5 15:49:46.185829 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 5 15:49:46.185911 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 5 15:49:46.186495 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 5 15:49:46.186561 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Nov 5 15:49:46.189365 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 5 15:49:46.191381 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 5 15:49:46.197808 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 5 15:49:46.198000 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 5 15:49:46.201350 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 5 15:49:46.201524 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 5 15:49:46.203940 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 5 15:49:46.204055 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 5 15:49:46.208317 systemd[1]: Stopped target network-pre.target - Preparation for Network. Nov 5 15:49:46.208831 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 5 15:49:46.208878 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 5 15:49:46.209814 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 5 15:49:46.209879 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 5 15:49:46.211690 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 5 15:49:46.214291 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 5 15:49:46.214371 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 5 15:49:46.214890 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 5 15:49:46.214953 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 5 15:49:46.215420 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 5 15:49:46.215461 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 5 15:49:46.220137 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 5 15:49:46.232803 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 5 15:49:46.233893 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 5 15:49:46.235804 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. 
Nov 5 15:49:46.235870 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 5 15:49:46.237686 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 5 15:49:46.237726 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 5 15:49:46.238209 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 5 15:49:46.238270 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 5 15:49:46.238800 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 5 15:49:46.238843 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 5 15:49:46.241292 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 5 15:49:46.241364 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 5 15:49:46.244246 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 5 15:49:46.245175 systemd[1]: systemd-network-generator.service: Deactivated successfully. Nov 5 15:49:46.245233 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Nov 5 15:49:46.246320 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 5 15:49:46.246392 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 5 15:49:46.246866 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 5 15:49:46.246909 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 5 15:49:46.258008 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 5 15:49:46.258177 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 5 15:49:46.263529 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 5 15:49:46.263651 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 5 15:49:46.265421 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 5 15:49:46.267418 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 5 15:49:46.292073 systemd[1]: Switching root. Nov 5 15:49:46.342938 systemd-journald[297]: Journal stopped Nov 5 15:49:47.483830 systemd-journald[297]: Received SIGTERM from PID 1 (systemd). Nov 5 15:49:47.483910 kernel: SELinux: policy capability network_peer_controls=1 Nov 5 15:49:47.483959 kernel: SELinux: policy capability open_perms=1 Nov 5 15:49:47.483977 kernel: SELinux: policy capability extended_socket_class=1 Nov 5 15:49:47.485319 kernel: SELinux: policy capability always_check_network=0 Nov 5 15:49:47.485347 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 5 15:49:47.485361 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 5 15:49:47.485373 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 5 15:49:47.485392 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 5 15:49:47.485406 kernel: SELinux: policy capability userspace_initial_context=0 Nov 5 15:49:47.485419 kernel: audit: type=1403 audit(1762357786.491:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 5 15:49:47.485474 systemd[1]: Successfully loaded SELinux policy in 79.155ms. Nov 5 15:49:47.485503 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 12.174ms. 
Nov 5 15:49:47.485519 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Nov 5 15:49:47.485534 systemd[1]: Detected virtualization kvm. Nov 5 15:49:47.485548 systemd[1]: Detected architecture x86-64. Nov 5 15:49:47.485573 systemd[1]: Detected first boot. Nov 5 15:49:47.485586 systemd[1]: Hostname set to . Nov 5 15:49:47.485608 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Nov 5 15:49:47.485621 kernel: Guest personality initialized and is inactive Nov 5 15:49:47.485634 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Nov 5 15:49:47.485647 kernel: Initialized host personality Nov 5 15:49:47.485660 zram_generator::config[1144]: No configuration found. Nov 5 15:49:47.485674 kernel: NET: Registered PF_VSOCK protocol family Nov 5 15:49:47.485693 systemd[1]: Populated /etc with preset unit settings. Nov 5 15:49:47.485706 systemd[1]: initrd-switch-root.service: Deactivated successfully. Nov 5 15:49:47.485719 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Nov 5 15:49:47.485735 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Nov 5 15:49:47.485749 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Nov 5 15:49:47.485762 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Nov 5 15:49:47.485780 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Nov 5 15:49:47.485808 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Nov 5 15:49:47.485823 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Nov 5 15:49:47.485837 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Nov 5 15:49:47.485853 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Nov 5 15:49:47.485867 systemd[1]: Created slice user.slice - User and Session Slice. Nov 5 15:49:47.485881 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 5 15:49:47.485896 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 5 15:49:47.485916 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Nov 5 15:49:47.485941 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Nov 5 15:49:47.485955 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Nov 5 15:49:47.485977 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 5 15:49:47.485990 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Nov 5 15:49:47.486010 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 5 15:49:47.486024 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 5 15:49:47.486037 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Nov 5 15:49:47.486050 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Nov 5 15:49:47.486064 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. 
Nov 5 15:49:47.486077 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Nov 5 15:49:47.486091 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 5 15:49:47.486110 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 5 15:49:47.486123 systemd[1]: Reached target slices.target - Slice Units. Nov 5 15:49:47.486149 systemd[1]: Reached target swap.target - Swaps. Nov 5 15:49:47.486163 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Nov 5 15:49:47.486177 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Nov 5 15:49:47.486192 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Nov 5 15:49:47.486205 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 5 15:49:47.486219 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 5 15:49:47.486241 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 5 15:49:47.486254 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Nov 5 15:49:47.486267 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Nov 5 15:49:47.486280 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Nov 5 15:49:47.486295 systemd[1]: Mounting media.mount - External Media Directory... Nov 5 15:49:47.486315 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 5 15:49:47.486335 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Nov 5 15:49:47.486364 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Nov 5 15:49:47.486386 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Nov 5 15:49:47.486408 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 5 15:49:47.486422 systemd[1]: Reached target machines.target - Containers. Nov 5 15:49:47.486436 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Nov 5 15:49:47.486450 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 5 15:49:47.486471 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 5 15:49:47.486486 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Nov 5 15:49:47.486500 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 5 15:49:47.486514 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 5 15:49:47.486528 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 5 15:49:47.486541 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Nov 5 15:49:47.486555 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 5 15:49:47.486575 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 5 15:49:47.486588 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Nov 5 15:49:47.486602 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Nov 5 15:49:47.486615 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. 
Nov 5 15:49:47.486629 systemd[1]: Stopped systemd-fsck-usr.service. Nov 5 15:49:47.486644 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 5 15:49:47.486658 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 5 15:49:47.486680 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 5 15:49:47.486695 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 5 15:49:47.486708 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Nov 5 15:49:47.486722 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Nov 5 15:49:47.486741 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 5 15:49:47.486755 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 5 15:49:47.486769 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Nov 5 15:49:47.486782 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Nov 5 15:49:47.486796 systemd[1]: Mounted media.mount - External Media Directory. Nov 5 15:49:47.486816 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Nov 5 15:49:47.486837 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Nov 5 15:49:47.486850 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Nov 5 15:49:47.486864 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 5 15:49:47.486877 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 5 15:49:47.486891 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Nov 5 15:49:47.486904 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 5 15:49:47.486924 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 5 15:49:47.488341 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 5 15:49:47.488367 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 5 15:49:47.488381 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 5 15:49:47.488436 systemd-journald[1213]: Collecting audit messages is disabled. Nov 5 15:49:47.488499 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 5 15:49:47.488525 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 5 15:49:47.488546 systemd-journald[1213]: Journal started Nov 5 15:49:47.488580 systemd-journald[1213]: Runtime Journal (/run/log/journal/ec16a3d4e5444c36834ec2fcfe43a5a6) is 4.9M, max 39.2M, 34.3M free. Nov 5 15:49:47.148722 systemd[1]: Queued start job for default target multi-user.target. Nov 5 15:49:47.170468 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Nov 5 15:49:47.171277 systemd[1]: systemd-journald.service: Deactivated successfully. Nov 5 15:49:47.496993 systemd[1]: Started systemd-journald.service - Journal Service. Nov 5 15:49:47.497133 kernel: fuse: init (API version 7.41) Nov 5 15:49:47.503045 kernel: ACPI: bus type drm_connector registered Nov 5 15:49:47.502748 systemd[1]: modprobe@fuse.service: Deactivated successfully. 
Nov 5 15:49:47.503101 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Nov 5 15:49:47.504257 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Nov 5 15:49:47.505504 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 5 15:49:47.510296 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 5 15:49:47.511030 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 5 15:49:47.527806 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 5 15:49:47.530734 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Nov 5 15:49:47.531281 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 5 15:49:47.531308 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 5 15:49:47.532714 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Nov 5 15:49:47.534190 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 5 15:49:47.537091 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Nov 5 15:49:47.540336 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Nov 5 15:49:47.541531 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 5 15:49:47.544176 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Nov 5 15:49:47.544726 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 5 15:49:47.546023 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 5 15:49:47.553238 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Nov 5 15:49:47.556128 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Nov 5 15:49:47.589587 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Nov 5 15:49:47.593184 systemd-journald[1213]: Time spent on flushing to /var/log/journal/ec16a3d4e5444c36834ec2fcfe43a5a6 is 41.912ms for 992 entries. Nov 5 15:49:47.593184 systemd-journald[1213]: System Journal (/var/log/journal/ec16a3d4e5444c36834ec2fcfe43a5a6) is 8M, max 163.5M, 155.5M free. Nov 5 15:49:47.653286 systemd-journald[1213]: Received client request to flush runtime journal. Nov 5 15:49:47.653348 kernel: loop1: detected capacity change from 0 to 110984 Nov 5 15:49:47.597047 systemd[1]: Starting systemd-sysusers.service - Create System Users... Nov 5 15:49:47.598515 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 5 15:49:47.613495 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Nov 5 15:49:47.662573 kernel: loop2: detected capacity change from 0 to 224512 Nov 5 15:49:47.615243 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Nov 5 15:49:47.619016 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Nov 5 15:49:47.655304 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Nov 5 15:49:47.663223 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. 
Nov 5 15:49:47.668719 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Nov 5 15:49:47.681494 systemd[1]: Finished systemd-sysusers.service - Create System Users. Nov 5 15:49:47.689196 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 5 15:49:47.692223 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 5 15:49:47.696021 kernel: loop3: detected capacity change from 0 to 128048 Nov 5 15:49:47.729220 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Nov 5 15:49:47.742971 kernel: loop4: detected capacity change from 0 to 8 Nov 5 15:49:47.752875 systemd-tmpfiles[1279]: ACLs are not supported, ignoring. Nov 5 15:49:47.753999 systemd-tmpfiles[1279]: ACLs are not supported, ignoring. Nov 5 15:49:47.765830 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 5 15:49:47.766020 kernel: loop5: detected capacity change from 0 to 110984 Nov 5 15:49:47.787967 kernel: loop6: detected capacity change from 0 to 224512 Nov 5 15:49:47.804970 kernel: loop7: detected capacity change from 0 to 128048 Nov 5 15:49:47.820450 systemd[1]: Started systemd-userdbd.service - User Database Manager. Nov 5 15:49:47.827991 kernel: loop1: detected capacity change from 0 to 8 Nov 5 15:49:47.833421 (sd-merge)[1284]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-digitalocean.raw'. Nov 5 15:49:47.842196 (sd-merge)[1284]: Merged extensions into '/usr'. Nov 5 15:49:47.857302 systemd[1]: Reload requested from client PID 1258 ('systemd-sysext') (unit systemd-sysext.service)... Nov 5 15:49:47.857336 systemd[1]: Reloading... Nov 5 15:49:47.974150 systemd-resolved[1278]: Positive Trust Anchors: Nov 5 15:49:47.974173 systemd-resolved[1278]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 5 15:49:47.974180 systemd-resolved[1278]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Nov 5 15:49:47.974242 systemd-resolved[1278]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 5 15:49:47.997264 systemd-resolved[1278]: Using system hostname 'ci-4487.0.1-2-254db4f49e'. Nov 5 15:49:48.030006 zram_generator::config[1321]: No configuration found. Nov 5 15:49:48.299568 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 5 15:49:48.299895 systemd[1]: Reloading finished in 441 ms. Nov 5 15:49:48.316899 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 5 15:49:48.318655 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Nov 5 15:49:48.321640 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 5 15:49:48.327139 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Nov 5 15:49:48.335144 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Nov 5 15:49:48.347262 systemd[1]: Starting ensure-sysext.service... 
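The (sd-merge) lines above show systemd-sysext overlaying the extension images (containerd-flatcar, docker-flatcar, kubernetes, oem-digitalocean) onto /usr before systemd reloads. A short Python sketch that enumerates the images such a merge would pick up; the search directories listed are an assumption based on systemd-sysext's documented paths, and symlinks like /etc/extensions/kubernetes.raw resolve to the files written during the Ignition files stage.

    # Illustration: list sysext .raw images the way the "(sd-merge)" lines above
    # report them. The directory list is an assumption, not taken from this log.
    from pathlib import Path

    SEARCH_DIRS = [
        Path("/etc/extensions"),
        Path("/run/extensions"),
        Path("/var/lib/extensions"),
        Path("/usr/lib/extensions"),
    ]

    def list_extensions() -> list[tuple[str, str]]:
        found = []
        for d in SEARCH_DIRS:
            if not d.is_dir():
                continue
            for entry in sorted(d.iterdir()):
                if entry.suffix == ".raw":
                    # resolve symlinks such as /etc/extensions/kubernetes.raw
                    target = str(entry.resolve()) if entry.is_symlink() else str(entry)
                    found.append((entry.name, target))
        return found

    for name, target in list_extensions():
        print(f"{name} -> {target}")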
Nov 5 15:49:48.351316 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 5 15:49:48.365879 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Nov 5 15:49:48.366625 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Nov 5 15:49:48.377122 systemd[1]: Reload requested from client PID 1362 ('systemctl') (unit ensure-sysext.service)... Nov 5 15:49:48.377148 systemd[1]: Reloading... Nov 5 15:49:48.415021 systemd-tmpfiles[1363]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Nov 5 15:49:48.415066 systemd-tmpfiles[1363]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Nov 5 15:49:48.415412 systemd-tmpfiles[1363]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 5 15:49:48.415688 systemd-tmpfiles[1363]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Nov 5 15:49:48.419242 systemd-tmpfiles[1363]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 5 15:49:48.419518 systemd-tmpfiles[1363]: ACLs are not supported, ignoring. Nov 5 15:49:48.419574 systemd-tmpfiles[1363]: ACLs are not supported, ignoring. Nov 5 15:49:48.433790 systemd-tmpfiles[1363]: Detected autofs mount point /boot during canonicalization of boot. Nov 5 15:49:48.433813 systemd-tmpfiles[1363]: Skipping /boot Nov 5 15:49:48.456113 systemd-tmpfiles[1363]: Detected autofs mount point /boot during canonicalization of boot. Nov 5 15:49:48.456288 systemd-tmpfiles[1363]: Skipping /boot Nov 5 15:49:48.519960 zram_generator::config[1398]: No configuration found. Nov 5 15:49:48.780809 systemd[1]: Reloading finished in 403 ms. Nov 5 15:49:48.804184 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Nov 5 15:49:48.816215 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 5 15:49:48.827833 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 5 15:49:48.832301 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Nov 5 15:49:48.835393 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Nov 5 15:49:48.842443 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Nov 5 15:49:48.846302 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 5 15:49:48.851029 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Nov 5 15:49:48.860221 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 5 15:49:48.860520 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 5 15:49:48.862414 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 5 15:49:48.871424 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 5 15:49:48.877085 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 5 15:49:48.877916 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Nov 5 15:49:48.879182 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 5 15:49:48.879344 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 5 15:49:48.884403 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 5 15:49:48.884716 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 5 15:49:48.887093 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 5 15:49:48.887282 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 5 15:49:48.887428 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 5 15:49:48.894663 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 5 15:49:48.895223 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 5 15:49:48.898642 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 5 15:49:48.900128 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 5 15:49:48.900307 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 5 15:49:48.900508 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 5 15:49:48.916813 systemd[1]: Finished ensure-sysext.service. Nov 5 15:49:48.957751 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Nov 5 15:49:48.962459 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 5 15:49:48.962764 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 5 15:49:48.964523 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 5 15:49:48.964794 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 5 15:49:48.966264 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 5 15:49:48.966841 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 5 15:49:48.968858 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 5 15:49:48.969132 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 5 15:49:48.978483 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Nov 5 15:49:48.992297 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Nov 5 15:49:48.992393 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 5 15:49:49.046186 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Nov 5 15:49:49.062462 systemd-udevd[1444]: Using default interface naming scheme 'v257'. Nov 5 15:49:49.107851 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 5 15:49:49.111503 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 5 15:49:49.134030 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Nov 5 15:49:49.138442 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 5 15:49:49.149549 augenrules[1486]: No rules Nov 5 15:49:49.151554 systemd[1]: audit-rules.service: Deactivated successfully. Nov 5 15:49:49.157145 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 5 15:49:49.323947 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Nov 5 15:49:49.324905 systemd[1]: Reached target time-set.target - System Time Set. Nov 5 15:49:49.331546 systemd-networkd[1479]: lo: Link UP Nov 5 15:49:49.332004 systemd-networkd[1479]: lo: Gained carrier Nov 5 15:49:49.340220 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 5 15:49:49.343397 systemd[1]: Reached target network.target - Network. Nov 5 15:49:49.353435 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Nov 5 15:49:49.361723 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Nov 5 15:49:49.428742 systemd[1]: Condition check resulted in dev-disk-by\x2dlabel-config\x2d2.device - /dev/disk/by-label/config-2 being skipped. Nov 5 15:49:49.446606 systemd[1]: Mounting media-configdrive.mount - /media/configdrive... Nov 5 15:49:49.448309 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 5 15:49:49.448542 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 5 15:49:49.454225 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 5 15:49:49.459353 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 5 15:49:49.467059 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 5 15:49:49.467811 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 5 15:49:49.467863 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 5 15:49:49.467912 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 5 15:49:49.467966 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Nov 5 15:49:49.491602 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Nov 5 15:49:49.493628 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 5 15:49:49.493887 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 5 15:49:49.496574 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 5 15:49:49.497923 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 5 15:49:49.506630 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 5 15:49:49.513225 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 5 15:49:49.521869 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 5 15:49:49.522167 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 5 15:49:49.559001 kernel: ISO 9660 Extensions: RRIP_1991A Nov 5 15:49:49.570505 systemd[1]: Mounted media-configdrive.mount - /media/configdrive. Nov 5 15:49:49.591769 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Nov 5 15:49:49.594555 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Nov 5 15:49:49.631820 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Nov 5 15:49:49.659072 systemd-networkd[1479]: eth0: Configuring with /run/systemd/network/10-ee:be:2a:aa:76:63.network. Nov 5 15:49:49.662224 systemd-networkd[1479]: eth0: Link UP Nov 5 15:49:49.662422 systemd-networkd[1479]: eth0: Gained carrier Nov 5 15:49:49.671785 systemd-timesyncd[1457]: Network configuration changed, trying to establish connection. Nov 5 15:49:49.675047 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Nov 5 15:49:49.683709 systemd-networkd[1479]: eth1: Configuring with /run/systemd/network/10-66:87:58:1f:77:4f.network. Nov 5 15:49:49.684544 systemd-networkd[1479]: eth1: Link UP Nov 5 15:49:49.686874 systemd-networkd[1479]: eth1: Gained carrier Nov 5 15:49:49.757207 kernel: mousedev: PS/2 mouse device common for all mice Nov 5 15:49:49.772560 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Nov 5 15:49:49.783176 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Nov 5 15:49:49.786352 kernel: ACPI: button: Power Button [PWRF] Nov 5 15:49:49.802218 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Nov 5 15:49:49.876816 systemd-timesyncd[1457]: Contacted time server 99.28.14.242:123 (1.flatcar.pool.ntp.org). Nov 5 15:49:49.876896 systemd-timesyncd[1457]: Initial clock synchronization to Wed 2025-11-05 15:49:50.221203 UTC. Nov 5 15:49:49.889957 ldconfig[1442]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 5 15:49:49.898066 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Nov 5 15:49:49.901684 systemd[1]: Starting systemd-update-done.service - Update is Completed... Nov 5 15:49:49.942082 systemd[1]: Finished systemd-update-done.service - Update is Completed. Nov 5 15:49:49.944507 systemd[1]: Reached target sysinit.target - System Initialization. Nov 5 15:49:49.947264 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. 
Nov 5 15:49:49.949062 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Nov 5 15:49:49.951052 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Nov 5 15:49:49.953279 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 5 15:49:49.956201 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Nov 5 15:49:49.958090 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Nov 5 15:49:49.959858 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 5 15:49:49.959904 systemd[1]: Reached target paths.target - Path Units. Nov 5 15:49:49.962029 systemd[1]: Reached target timers.target - Timer Units. Nov 5 15:49:49.964670 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 5 15:49:49.974185 systemd[1]: Starting docker.socket - Docker Socket for the API... Nov 5 15:49:49.983202 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Nov 5 15:49:49.988300 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Nov 5 15:49:49.990049 systemd[1]: Reached target ssh-access.target - SSH Access Available. Nov 5 15:49:50.007571 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Nov 5 15:49:50.010601 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Nov 5 15:49:50.012227 systemd[1]: Listening on docker.socket - Docker Socket for the API. Nov 5 15:49:50.023665 systemd[1]: Reached target sockets.target - Socket Units. Nov 5 15:49:50.037417 systemd[1]: Reached target basic.target - Basic System. Nov 5 15:49:50.038240 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Nov 5 15:49:50.038430 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Nov 5 15:49:50.042666 systemd[1]: Starting containerd.service - containerd container runtime... Nov 5 15:49:50.048436 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Nov 5 15:49:50.053481 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Nov 5 15:49:50.059798 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Nov 5 15:49:50.066464 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Nov 5 15:49:50.076162 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Nov 5 15:49:50.079102 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Nov 5 15:49:50.085569 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Nov 5 15:49:50.094454 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Nov 5 15:49:50.108848 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Nov 5 15:49:50.118532 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Nov 5 15:49:50.132397 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Nov 5 15:49:50.188320 systemd[1]: Starting systemd-logind.service - User Login Management... 
Nov 5 15:49:50.190092 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 5 15:49:50.192124 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 5 15:49:50.212041 google_oslogin_nss_cache[1559]: oslogin_cache_refresh[1559]: Refreshing passwd entry cache Nov 5 15:49:50.212041 google_oslogin_nss_cache[1559]: oslogin_cache_refresh[1559]: Failure getting users, quitting Nov 5 15:49:50.212041 google_oslogin_nss_cache[1559]: oslogin_cache_refresh[1559]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Nov 5 15:49:50.212041 google_oslogin_nss_cache[1559]: oslogin_cache_refresh[1559]: Refreshing group entry cache Nov 5 15:49:50.212041 google_oslogin_nss_cache[1559]: oslogin_cache_refresh[1559]: Failure getting groups, quitting Nov 5 15:49:50.212041 google_oslogin_nss_cache[1559]: oslogin_cache_refresh[1559]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Nov 5 15:49:50.197402 oslogin_cache_refresh[1559]: Refreshing passwd entry cache Nov 5 15:49:50.199486 systemd[1]: Starting update-engine.service - Update Engine... Nov 5 15:49:50.202586 oslogin_cache_refresh[1559]: Failure getting users, quitting Nov 5 15:49:50.206911 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Nov 5 15:49:50.202610 oslogin_cache_refresh[1559]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Nov 5 15:49:50.202679 oslogin_cache_refresh[1559]: Refreshing group entry cache Nov 5 15:49:50.203845 oslogin_cache_refresh[1559]: Failure getting groups, quitting Nov 5 15:49:50.203859 oslogin_cache_refresh[1559]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Nov 5 15:49:50.229893 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Nov 5 15:49:50.233418 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Nov 5 15:49:50.234375 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Nov 5 15:49:50.246610 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 5 15:49:50.248081 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Nov 5 15:49:50.260634 jq[1557]: false Nov 5 15:49:50.281127 extend-filesystems[1558]: Found /dev/vda6 Nov 5 15:49:50.270241 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 5 15:49:50.271161 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Nov 5 15:49:50.294050 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 5 15:49:50.317219 tar[1576]: linux-amd64/LICENSE Nov 5 15:49:50.317219 tar[1576]: linux-amd64/helm Nov 5 15:49:50.300429 dbus-daemon[1555]: [system] SELinux support is enabled Nov 5 15:49:50.300734 systemd[1]: Started dbus.service - D-Bus System Message Bus. Nov 5 15:49:50.310617 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 5 15:49:50.310882 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
Nov 5 15:49:50.312560 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 5 15:49:50.312816 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean). Nov 5 15:49:50.313021 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 5 15:49:50.336047 jq[1571]: true Nov 5 15:49:50.336417 extend-filesystems[1558]: Found /dev/vda9 Nov 5 15:49:50.346368 extend-filesystems[1558]: Checking size of /dev/vda9 Nov 5 15:49:50.364734 jq[1593]: true Nov 5 15:49:50.392555 systemd[1]: motdgen.service: Deactivated successfully. Nov 5 15:49:50.396103 (ntainerd)[1598]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Nov 5 15:49:50.396485 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Nov 5 15:49:50.440116 extend-filesystems[1558]: Resized partition /dev/vda9 Nov 5 15:49:50.458752 extend-filesystems[1616]: resize2fs 1.47.3 (8-Jul-2025) Nov 5 15:49:50.487208 update_engine[1570]: I20251105 15:49:50.479028 1570 main.cc:92] Flatcar Update Engine starting Nov 5 15:49:50.509214 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 14138363 blocks Nov 5 15:49:50.514095 systemd[1]: Started update-engine.service - Update Engine. Nov 5 15:49:50.525712 update_engine[1570]: I20251105 15:49:50.525254 1570 update_check_scheduler.cc:74] Next update check in 9m39s Nov 5 15:49:50.570262 coreos-metadata[1554]: Nov 05 15:49:50.570 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Nov 5 15:49:50.586361 bash[1622]: Updated "/home/core/.ssh/authorized_keys" Nov 5 15:49:50.586725 coreos-metadata[1554]: Nov 05 15:49:50.586 INFO Fetch successful Nov 5 15:49:50.628188 systemd[1]: Started locksmithd.service - Cluster reboot manager. Nov 5 15:49:50.629846 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 5 15:49:50.646342 systemd[1]: Starting sshkeys.service... Nov 5 15:49:50.699367 kernel: EXT4-fs (vda9): resized filesystem to 14138363 Nov 5 15:49:50.723760 extend-filesystems[1616]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Nov 5 15:49:50.723760 extend-filesystems[1616]: old_desc_blocks = 1, new_desc_blocks = 7 Nov 5 15:49:50.723760 extend-filesystems[1616]: The filesystem on /dev/vda9 is now 14138363 (4k) blocks long. Nov 5 15:49:50.737171 extend-filesystems[1558]: Resized filesystem in /dev/vda9 Nov 5 15:49:50.725240 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 5 15:49:50.750033 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Nov 5 15:49:50.749648 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
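Editor's note: the extend-filesystems.service activity above grows the root ext4 filesystem on /dev/vda9 in place, from 456704 to 14138363 4 KiB blocks (roughly 1.7 GiB to 54 GiB), while it remains mounted on /. Purely as an illustrative sketch of the equivalent manual step (the service does this automatically on first boot, and it assumes the underlying partition has already been enlarged):

    # Illustrative only; extend-filesystems.service performs this automatically.
    resize2fs /dev/vda9    # online-grow the mounted ext4 filesystem to fill its partition
    df -h /                # confirm the new size is visible to the running system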
Nov 5 15:49:50.811230 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Nov 5 15:49:50.902574 systemd-logind[1568]: Watching system buttons on /dev/input/event2 (Power Button) Nov 5 15:49:50.911237 systemd-logind[1568]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Nov 5 15:49:50.916534 kernel: Console: switching to colour dummy device 80x25 Nov 5 15:49:50.916666 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Nov 5 15:49:50.916700 kernel: [drm] features: -context_init Nov 5 15:49:50.924767 kernel: [drm] number of scanouts: 1 Nov 5 15:49:50.924856 kernel: [drm] number of cap sets: 0 Nov 5 15:49:50.924883 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0 Nov 5 15:49:50.926318 systemd-logind[1568]: New seat seat0. Nov 5 15:49:50.927647 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 5 15:49:50.932470 systemd[1]: Started systemd-logind.service - User Login Management. Nov 5 15:49:50.959210 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Nov 5 15:49:50.968268 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Nov 5 15:49:50.980275 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Nov 5 15:49:50.981000 kernel: Console: switching to colour frame buffer device 128x48 Nov 5 15:49:50.998036 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 5 15:49:50.998444 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 5 15:49:51.059277 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Nov 5 15:49:51.059991 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device Nov 5 15:49:51.066406 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 5 15:49:51.089446 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Nov 5 15:49:51.094286 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 5 15:49:51.124381 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 5 15:49:51.124737 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 5 15:49:51.129681 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 5 15:49:51.159390 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 5 15:49:51.165225 systemd-networkd[1479]: eth0: Gained IPv6LL Nov 5 15:49:51.169620 kernel: EDAC MC: Ver: 3.0.0 Nov 5 15:49:51.175878 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 5 15:49:51.181580 systemd[1]: Reached target network-online.target - Network is Online. Nov 5 15:49:51.192468 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 15:49:51.204338 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 5 15:49:51.238832 coreos-metadata[1646]: Nov 05 15:49:51.238 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Nov 5 15:49:51.265483 coreos-metadata[1646]: Nov 05 15:49:51.263 INFO Fetch successful Nov 5 15:49:51.292133 unknown[1646]: wrote ssh authorized keys file for user: core Nov 5 15:49:51.334056 update-ssh-keys[1666]: Updated "/home/core/.ssh/authorized_keys" Nov 5 15:49:51.335412 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). 
Nov 5 15:49:51.342392 systemd[1]: Finished sshkeys.service. Nov 5 15:49:51.431721 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 5 15:49:51.439834 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 5 15:49:51.513838 sshd_keygen[1610]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 5 15:49:51.564943 containerd[1598]: time="2025-11-05T15:49:51Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Nov 5 15:49:51.568051 containerd[1598]: time="2025-11-05T15:49:51.567993542Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Nov 5 15:49:51.587557 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 5 15:49:51.591432 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 5 15:49:51.594970 systemd[1]: Started sshd@0-137.184.121.184:22-139.178.68.195:52676.service - OpenSSH per-connection server daemon (139.178.68.195:52676). Nov 5 15:49:51.633300 locksmithd[1625]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 5 15:49:51.640983 containerd[1598]: time="2025-11-05T15:49:51.638019428Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="13.033µs" Nov 5 15:49:51.642573 containerd[1598]: time="2025-11-05T15:49:51.642524470Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Nov 5 15:49:51.646320 containerd[1598]: time="2025-11-05T15:49:51.644807295Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Nov 5 15:49:51.646320 containerd[1598]: time="2025-11-05T15:49:51.645040632Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Nov 5 15:49:51.646320 containerd[1598]: time="2025-11-05T15:49:51.645062570Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Nov 5 15:49:51.646320 containerd[1598]: time="2025-11-05T15:49:51.645090159Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 5 15:49:51.646320 containerd[1598]: time="2025-11-05T15:49:51.645149881Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 5 15:49:51.646320 containerd[1598]: time="2025-11-05T15:49:51.645161412Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 5 15:49:51.646320 containerd[1598]: time="2025-11-05T15:49:51.645391820Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 5 15:49:51.646320 containerd[1598]: time="2025-11-05T15:49:51.645407245Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 5 15:49:51.646320 containerd[1598]: time="2025-11-05T15:49:51.645418788Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 5 15:49:51.646320 
containerd[1598]: time="2025-11-05T15:49:51.645427509Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Nov 5 15:49:51.646320 containerd[1598]: time="2025-11-05T15:49:51.645509215Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Nov 5 15:49:51.646320 containerd[1598]: time="2025-11-05T15:49:51.645725302Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 5 15:49:51.646671 containerd[1598]: time="2025-11-05T15:49:51.645756679Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 5 15:49:51.646671 containerd[1598]: time="2025-11-05T15:49:51.645768818Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Nov 5 15:49:51.646671 containerd[1598]: time="2025-11-05T15:49:51.645820132Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Nov 5 15:49:51.650879 systemd[1]: issuegen.service: Deactivated successfully. Nov 5 15:49:51.651490 containerd[1598]: time="2025-11-05T15:49:51.650680583Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Nov 5 15:49:51.651490 containerd[1598]: time="2025-11-05T15:49:51.650803596Z" level=info msg="metadata content store policy set" policy=shared Nov 5 15:49:51.651190 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 5 15:49:51.658124 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 5 15:49:51.660921 containerd[1598]: time="2025-11-05T15:49:51.660552672Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Nov 5 15:49:51.660921 containerd[1598]: time="2025-11-05T15:49:51.660615363Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Nov 5 15:49:51.660921 containerd[1598]: time="2025-11-05T15:49:51.660631286Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Nov 5 15:49:51.660921 containerd[1598]: time="2025-11-05T15:49:51.660644585Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Nov 5 15:49:51.660921 containerd[1598]: time="2025-11-05T15:49:51.660671591Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Nov 5 15:49:51.660921 containerd[1598]: time="2025-11-05T15:49:51.660685308Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Nov 5 15:49:51.660921 containerd[1598]: time="2025-11-05T15:49:51.660699278Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Nov 5 15:49:51.660921 containerd[1598]: time="2025-11-05T15:49:51.660711274Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Nov 5 15:49:51.660921 containerd[1598]: time="2025-11-05T15:49:51.660723176Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Nov 5 15:49:51.660921 containerd[1598]: time="2025-11-05T15:49:51.660733070Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service 
type=io.containerd.service.v1 Nov 5 15:49:51.660921 containerd[1598]: time="2025-11-05T15:49:51.660743345Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Nov 5 15:49:51.660921 containerd[1598]: time="2025-11-05T15:49:51.660756105Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Nov 5 15:49:51.664144 containerd[1598]: time="2025-11-05T15:49:51.663126078Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Nov 5 15:49:51.664144 containerd[1598]: time="2025-11-05T15:49:51.663240724Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Nov 5 15:49:51.664144 containerd[1598]: time="2025-11-05T15:49:51.663268200Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Nov 5 15:49:51.664144 containerd[1598]: time="2025-11-05T15:49:51.663280889Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Nov 5 15:49:51.664144 containerd[1598]: time="2025-11-05T15:49:51.663294431Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Nov 5 15:49:51.664144 containerd[1598]: time="2025-11-05T15:49:51.663317152Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Nov 5 15:49:51.664144 containerd[1598]: time="2025-11-05T15:49:51.663329806Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Nov 5 15:49:51.664144 containerd[1598]: time="2025-11-05T15:49:51.663340170Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Nov 5 15:49:51.664144 containerd[1598]: time="2025-11-05T15:49:51.663363934Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Nov 5 15:49:51.664144 containerd[1598]: time="2025-11-05T15:49:51.663374562Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Nov 5 15:49:51.664144 containerd[1598]: time="2025-11-05T15:49:51.663409080Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Nov 5 15:49:51.664144 containerd[1598]: time="2025-11-05T15:49:51.663497224Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Nov 5 15:49:51.664144 containerd[1598]: time="2025-11-05T15:49:51.663510956Z" level=info msg="Start snapshots syncer" Nov 5 15:49:51.664144 containerd[1598]: time="2025-11-05T15:49:51.663534324Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Nov 5 15:49:51.667589 containerd[1598]: time="2025-11-05T15:49:51.665586809Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Nov 5 15:49:51.667589 containerd[1598]: time="2025-11-05T15:49:51.666054666Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Nov 5 15:49:51.668530 containerd[1598]: time="2025-11-05T15:49:51.668300312Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Nov 5 15:49:51.668801 containerd[1598]: time="2025-11-05T15:49:51.668767385Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Nov 5 15:49:51.668914 containerd[1598]: time="2025-11-05T15:49:51.668900631Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Nov 5 15:49:51.670386 containerd[1598]: time="2025-11-05T15:49:51.669179058Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Nov 5 15:49:51.670386 containerd[1598]: time="2025-11-05T15:49:51.669199679Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Nov 5 15:49:51.670386 containerd[1598]: time="2025-11-05T15:49:51.669214815Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Nov 5 15:49:51.670564 containerd[1598]: time="2025-11-05T15:49:51.670542694Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Nov 5 15:49:51.670645 containerd[1598]: time="2025-11-05T15:49:51.670633406Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Nov 5 15:49:51.671118 containerd[1598]: time="2025-11-05T15:49:51.670808684Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Nov 5 15:49:51.671232 containerd[1598]: 
time="2025-11-05T15:49:51.670829709Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Nov 5 15:49:51.672478 containerd[1598]: time="2025-11-05T15:49:51.671883373Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Nov 5 15:49:51.672478 containerd[1598]: time="2025-11-05T15:49:51.671976222Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 5 15:49:51.672478 containerd[1598]: time="2025-11-05T15:49:51.672005420Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 5 15:49:51.672478 containerd[1598]: time="2025-11-05T15:49:51.672019600Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 5 15:49:51.672478 containerd[1598]: time="2025-11-05T15:49:51.672030688Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 5 15:49:51.672478 containerd[1598]: time="2025-11-05T15:49:51.672408534Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Nov 5 15:49:51.672478 containerd[1598]: time="2025-11-05T15:49:51.672425367Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Nov 5 15:49:51.672478 containerd[1598]: time="2025-11-05T15:49:51.672437520Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Nov 5 15:49:51.672478 containerd[1598]: time="2025-11-05T15:49:51.672459634Z" level=info msg="runtime interface created" Nov 5 15:49:51.675461 containerd[1598]: time="2025-11-05T15:49:51.672772701Z" level=info msg="created NRI interface" Nov 5 15:49:51.675461 containerd[1598]: time="2025-11-05T15:49:51.672797246Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Nov 5 15:49:51.675461 containerd[1598]: time="2025-11-05T15:49:51.672824489Z" level=info msg="Connect containerd service" Nov 5 15:49:51.675461 containerd[1598]: time="2025-11-05T15:49:51.674611423Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 5 15:49:51.675775 systemd-networkd[1479]: eth1: Gained IPv6LL Nov 5 15:49:51.686400 containerd[1598]: time="2025-11-05T15:49:51.686241572Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 5 15:49:51.755683 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 5 15:49:51.761389 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 5 15:49:51.767005 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Nov 5 15:49:51.769969 systemd[1]: Reached target getty.target - Login Prompts. Nov 5 15:49:51.907539 sshd[1695]: Accepted publickey for core from 139.178.68.195 port 52676 ssh2: RSA SHA256:6pZ2eqROk+ALbQ+c/ul+tfC2zt1KpSHiHdkR7HgdI30 Nov 5 15:49:51.910118 sshd-session[1695]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:49:51.930692 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
Nov 5 15:49:51.937031 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 5 15:49:51.973384 systemd-logind[1568]: New session 1 of user core. Nov 5 15:49:51.996466 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 5 15:49:52.007917 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 5 15:49:52.029656 (systemd)[1721]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 5 15:49:52.036476 systemd-logind[1568]: New session c1 of user core. Nov 5 15:49:52.052616 containerd[1598]: time="2025-11-05T15:49:52.052568102Z" level=info msg="Start subscribing containerd event" Nov 5 15:49:52.052870 containerd[1598]: time="2025-11-05T15:49:52.052823016Z" level=info msg="Start recovering state" Nov 5 15:49:52.054142 containerd[1598]: time="2025-11-05T15:49:52.053116682Z" level=info msg="Start event monitor" Nov 5 15:49:52.054142 containerd[1598]: time="2025-11-05T15:49:52.053138694Z" level=info msg="Start cni network conf syncer for default" Nov 5 15:49:52.054142 containerd[1598]: time="2025-11-05T15:49:52.053146114Z" level=info msg="Start streaming server" Nov 5 15:49:52.054142 containerd[1598]: time="2025-11-05T15:49:52.053155011Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Nov 5 15:49:52.054142 containerd[1598]: time="2025-11-05T15:49:52.053162127Z" level=info msg="runtime interface starting up..." Nov 5 15:49:52.054142 containerd[1598]: time="2025-11-05T15:49:52.053168154Z" level=info msg="starting plugins..." Nov 5 15:49:52.054142 containerd[1598]: time="2025-11-05T15:49:52.053181442Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Nov 5 15:49:52.055152 containerd[1598]: time="2025-11-05T15:49:52.055121908Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 5 15:49:52.055661 containerd[1598]: time="2025-11-05T15:49:52.055323002Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 5 15:49:52.055798 containerd[1598]: time="2025-11-05T15:49:52.055779620Z" level=info msg="containerd successfully booted in 0.493441s" Nov 5 15:49:52.058050 systemd[1]: Started containerd.service - containerd container runtime. Nov 5 15:49:52.179180 tar[1576]: linux-amd64/README.md Nov 5 15:49:52.219162 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 5 15:49:52.280036 systemd[1721]: Queued start job for default target default.target. Nov 5 15:49:52.287777 systemd[1721]: Created slice app.slice - User Application Slice. Nov 5 15:49:52.287836 systemd[1721]: Reached target paths.target - Paths. Nov 5 15:49:52.287912 systemd[1721]: Reached target timers.target - Timers. Nov 5 15:49:52.290423 systemd[1721]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 5 15:49:52.328802 systemd[1721]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 5 15:49:52.329667 systemd[1721]: Reached target sockets.target - Sockets. Nov 5 15:49:52.329737 systemd[1721]: Reached target basic.target - Basic System. Nov 5 15:49:52.329780 systemd[1721]: Reached target default.target - Main User Target. Nov 5 15:49:52.329813 systemd[1721]: Startup finished in 272ms. Nov 5 15:49:52.330704 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 5 15:49:52.347279 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 5 15:49:52.436101 systemd[1]: Started sshd@1-137.184.121.184:22-139.178.68.195:52686.service - OpenSSH per-connection server daemon (139.178.68.195:52686). 
Nov 5 15:49:52.530989 sshd[1736]: Accepted publickey for core from 139.178.68.195 port 52686 ssh2: RSA SHA256:6pZ2eqROk+ALbQ+c/ul+tfC2zt1KpSHiHdkR7HgdI30 Nov 5 15:49:52.533365 sshd-session[1736]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:49:52.541779 systemd-logind[1568]: New session 2 of user core. Nov 5 15:49:52.555344 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 5 15:49:52.631455 sshd[1739]: Connection closed by 139.178.68.195 port 52686 Nov 5 15:49:52.633224 sshd-session[1736]: pam_unix(sshd:session): session closed for user core Nov 5 15:49:52.645932 systemd[1]: sshd@1-137.184.121.184:22-139.178.68.195:52686.service: Deactivated successfully. Nov 5 15:49:52.648931 systemd[1]: session-2.scope: Deactivated successfully. Nov 5 15:49:52.651121 systemd-logind[1568]: Session 2 logged out. Waiting for processes to exit. Nov 5 15:49:52.655395 systemd[1]: Started sshd@2-137.184.121.184:22-139.178.68.195:52694.service - OpenSSH per-connection server daemon (139.178.68.195:52694). Nov 5 15:49:52.661128 systemd-logind[1568]: Removed session 2. Nov 5 15:49:52.743183 sshd[1745]: Accepted publickey for core from 139.178.68.195 port 52694 ssh2: RSA SHA256:6pZ2eqROk+ALbQ+c/ul+tfC2zt1KpSHiHdkR7HgdI30 Nov 5 15:49:52.745579 sshd-session[1745]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:49:52.753047 systemd-logind[1568]: New session 3 of user core. Nov 5 15:49:52.764270 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 5 15:49:52.843413 sshd[1748]: Connection closed by 139.178.68.195 port 52694 Nov 5 15:49:52.846321 sshd-session[1745]: pam_unix(sshd:session): session closed for user core Nov 5 15:49:52.854674 systemd[1]: sshd@2-137.184.121.184:22-139.178.68.195:52694.service: Deactivated successfully. Nov 5 15:49:52.854737 systemd-logind[1568]: Session 3 logged out. Waiting for processes to exit. Nov 5 15:49:52.860212 systemd[1]: session-3.scope: Deactivated successfully. Nov 5 15:49:52.865565 systemd-logind[1568]: Removed session 3. Nov 5 15:49:53.062284 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 15:49:53.064184 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 5 15:49:53.066138 systemd[1]: Startup finished in 2.428s (kernel) + 7.773s (initrd) + 6.650s (userspace) = 16.852s. Nov 5 15:49:53.082870 (kubelet)[1758]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 5 15:49:53.781831 kubelet[1758]: E1105 15:49:53.781685 1758 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 5 15:49:53.784656 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 5 15:49:53.785077 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 5 15:49:53.785741 systemd[1]: kubelet.service: Consumed 1.320s CPU time, 263.8M memory peak. Nov 5 15:50:03.049270 systemd[1]: Started sshd@3-137.184.121.184:22-139.178.68.195:49270.service - OpenSSH per-connection server daemon (139.178.68.195:49270). 
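Editor's note: the kubelet exit above (and its repeats further down) is caused by the missing /var/lib/kubelet/config.yaml; systemd simply keeps restarting the unit until something provisions that file. The client-rotation and /etc/kubernetes/pki messages later in the log suggest a kubeadm-style bootstrap, in which case kubeadm writes the file during init/join. Purely as a hypothetical minimal sketch consistent with the containerd settings shown earlier (SystemdCgroup=true), not the file this node ends up with:

    # Hypothetical minimal KubeletConfiguration; on a kubeadm-managed node this is
    # generated by `kubeadm init`/`kubeadm join` rather than written by hand.
    cat <<'EOF' >/var/lib/kubelet/config.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd    # matches SystemdCgroup=true in the containerd CRI config above
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    EOF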
Nov 5 15:50:03.168185 sshd[1770]: Accepted publickey for core from 139.178.68.195 port 49270 ssh2: RSA SHA256:6pZ2eqROk+ALbQ+c/ul+tfC2zt1KpSHiHdkR7HgdI30 Nov 5 15:50:03.170801 sshd-session[1770]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:50:03.181533 systemd-logind[1568]: New session 4 of user core. Nov 5 15:50:03.200329 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 5 15:50:03.281176 sshd[1773]: Connection closed by 139.178.68.195 port 49270 Nov 5 15:50:03.282168 sshd-session[1770]: pam_unix(sshd:session): session closed for user core Nov 5 15:50:03.299261 systemd[1]: sshd@3-137.184.121.184:22-139.178.68.195:49270.service: Deactivated successfully. Nov 5 15:50:03.305565 systemd[1]: session-4.scope: Deactivated successfully. Nov 5 15:50:03.308232 systemd-logind[1568]: Session 4 logged out. Waiting for processes to exit. Nov 5 15:50:03.314610 systemd[1]: Started sshd@4-137.184.121.184:22-139.178.68.195:49286.service - OpenSSH per-connection server daemon (139.178.68.195:49286). Nov 5 15:50:03.317584 systemd-logind[1568]: Removed session 4. Nov 5 15:50:03.424087 sshd[1779]: Accepted publickey for core from 139.178.68.195 port 49286 ssh2: RSA SHA256:6pZ2eqROk+ALbQ+c/ul+tfC2zt1KpSHiHdkR7HgdI30 Nov 5 15:50:03.427642 sshd-session[1779]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:50:03.438506 systemd-logind[1568]: New session 5 of user core. Nov 5 15:50:03.454370 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 5 15:50:03.521631 sshd[1782]: Connection closed by 139.178.68.195 port 49286 Nov 5 15:50:03.521357 sshd-session[1779]: pam_unix(sshd:session): session closed for user core Nov 5 15:50:03.541663 systemd[1]: sshd@4-137.184.121.184:22-139.178.68.195:49286.service: Deactivated successfully. Nov 5 15:50:03.546721 systemd[1]: session-5.scope: Deactivated successfully. Nov 5 15:50:03.549147 systemd-logind[1568]: Session 5 logged out. Waiting for processes to exit. Nov 5 15:50:03.554892 systemd[1]: Started sshd@5-137.184.121.184:22-139.178.68.195:49296.service - OpenSSH per-connection server daemon (139.178.68.195:49296). Nov 5 15:50:03.557102 systemd-logind[1568]: Removed session 5. Nov 5 15:50:03.660354 sshd[1788]: Accepted publickey for core from 139.178.68.195 port 49296 ssh2: RSA SHA256:6pZ2eqROk+ALbQ+c/ul+tfC2zt1KpSHiHdkR7HgdI30 Nov 5 15:50:03.662800 sshd-session[1788]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:50:03.676922 systemd-logind[1568]: New session 6 of user core. Nov 5 15:50:03.694369 systemd[1]: Started session-6.scope - Session 6 of User core. Nov 5 15:50:03.770479 sshd[1791]: Connection closed by 139.178.68.195 port 49296 Nov 5 15:50:03.771429 sshd-session[1788]: pam_unix(sshd:session): session closed for user core Nov 5 15:50:03.790330 systemd[1]: sshd@5-137.184.121.184:22-139.178.68.195:49296.service: Deactivated successfully. Nov 5 15:50:03.793062 systemd[1]: session-6.scope: Deactivated successfully. Nov 5 15:50:03.796020 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 5 15:50:03.798583 systemd-logind[1568]: Session 6 logged out. Waiting for processes to exit. Nov 5 15:50:03.802670 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 15:50:03.806452 systemd[1]: Started sshd@6-137.184.121.184:22-139.178.68.195:49306.service - OpenSSH per-connection server daemon (139.178.68.195:49306). Nov 5 15:50:03.809316 systemd-logind[1568]: Removed session 6. 
Nov 5 15:50:03.897682 sshd[1798]: Accepted publickey for core from 139.178.68.195 port 49306 ssh2: RSA SHA256:6pZ2eqROk+ALbQ+c/ul+tfC2zt1KpSHiHdkR7HgdI30 Nov 5 15:50:03.899715 sshd-session[1798]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:50:03.913070 systemd-logind[1568]: New session 7 of user core. Nov 5 15:50:03.915411 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 5 15:50:04.006738 sudo[1804]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 5 15:50:04.007289 sudo[1804]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 5 15:50:04.025187 sudo[1804]: pam_unix(sudo:session): session closed for user root Nov 5 15:50:04.030994 sshd[1803]: Connection closed by 139.178.68.195 port 49306 Nov 5 15:50:04.032566 sshd-session[1798]: pam_unix(sshd:session): session closed for user core Nov 5 15:50:04.049671 systemd[1]: sshd@6-137.184.121.184:22-139.178.68.195:49306.service: Deactivated successfully. Nov 5 15:50:04.054823 systemd[1]: session-7.scope: Deactivated successfully. Nov 5 15:50:04.059236 systemd-logind[1568]: Session 7 logged out. Waiting for processes to exit. Nov 5 15:50:04.068416 systemd[1]: Started sshd@7-137.184.121.184:22-139.178.68.195:49310.service - OpenSSH per-connection server daemon (139.178.68.195:49310). Nov 5 15:50:04.071193 systemd-logind[1568]: Removed session 7. Nov 5 15:50:04.075291 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 15:50:04.086869 (kubelet)[1816]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 5 15:50:04.156710 sshd[1814]: Accepted publickey for core from 139.178.68.195 port 49310 ssh2: RSA SHA256:6pZ2eqROk+ALbQ+c/ul+tfC2zt1KpSHiHdkR7HgdI30 Nov 5 15:50:04.159692 sshd-session[1814]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:50:04.170821 systemd-logind[1568]: New session 8 of user core. Nov 5 15:50:04.176385 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 5 15:50:04.182588 kubelet[1816]: E1105 15:50:04.182466 1816 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 5 15:50:04.190059 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 5 15:50:04.190290 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 5 15:50:04.190962 systemd[1]: kubelet.service: Consumed 283ms CPU time, 110M memory peak. Nov 5 15:50:04.247576 sudo[1827]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 5 15:50:04.248091 sudo[1827]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 5 15:50:04.256078 sudo[1827]: pam_unix(sudo:session): session closed for user root Nov 5 15:50:04.268597 sudo[1826]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Nov 5 15:50:04.269129 sudo[1826]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 5 15:50:04.287050 systemd[1]: Starting audit-rules.service - Load Audit Rules... 
Nov 5 15:50:04.363864 augenrules[1849]: No rules Nov 5 15:50:04.366392 systemd[1]: audit-rules.service: Deactivated successfully. Nov 5 15:50:04.366680 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 5 15:50:04.369198 sudo[1826]: pam_unix(sudo:session): session closed for user root Nov 5 15:50:04.374991 sshd[1824]: Connection closed by 139.178.68.195 port 49310 Nov 5 15:50:04.375332 sshd-session[1814]: pam_unix(sshd:session): session closed for user core Nov 5 15:50:04.388827 systemd[1]: sshd@7-137.184.121.184:22-139.178.68.195:49310.service: Deactivated successfully. Nov 5 15:50:04.394380 systemd[1]: session-8.scope: Deactivated successfully. Nov 5 15:50:04.397379 systemd-logind[1568]: Session 8 logged out. Waiting for processes to exit. Nov 5 15:50:04.402036 systemd[1]: Started sshd@8-137.184.121.184:22-139.178.68.195:49320.service - OpenSSH per-connection server daemon (139.178.68.195:49320). Nov 5 15:50:04.403794 systemd-logind[1568]: Removed session 8. Nov 5 15:50:04.483509 sshd[1858]: Accepted publickey for core from 139.178.68.195 port 49320 ssh2: RSA SHA256:6pZ2eqROk+ALbQ+c/ul+tfC2zt1KpSHiHdkR7HgdI30 Nov 5 15:50:04.486513 sshd-session[1858]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:50:04.496297 systemd-logind[1568]: New session 9 of user core. Nov 5 15:50:04.512358 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 5 15:50:04.579360 sudo[1862]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 5 15:50:04.579686 sudo[1862]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 5 15:50:05.316454 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 5 15:50:05.333646 (dockerd)[1879]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 5 15:50:05.837195 dockerd[1879]: time="2025-11-05T15:50:05.837105769Z" level=info msg="Starting up" Nov 5 15:50:05.838867 dockerd[1879]: time="2025-11-05T15:50:05.838822918Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Nov 5 15:50:05.861899 dockerd[1879]: time="2025-11-05T15:50:05.861791606Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Nov 5 15:50:05.883120 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport101006083-merged.mount: Deactivated successfully. Nov 5 15:50:05.914786 dockerd[1879]: time="2025-11-05T15:50:05.914530449Z" level=info msg="Loading containers: start." Nov 5 15:50:05.926974 kernel: Initializing XFRM netlink socket Nov 5 15:50:06.308729 systemd-networkd[1479]: docker0: Link UP Nov 5 15:50:06.312224 dockerd[1879]: time="2025-11-05T15:50:06.312156981Z" level=info msg="Loading containers: done." Nov 5 15:50:06.337596 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck607777314-merged.mount: Deactivated successfully. 
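Editor's note: by this point dockerd (likely triggered by the install.sh run just above) has created its default bridge, which is why systemd-networkd reports docker0: Link UP. Two illustrative checks, assuming a shell on the host:

    docker network ls        # the default "bridge" network backs the docker0 interface
    ip addr show docker0     # shows the bridge address dockerd assigned (172.17.0.1/16 by default)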
Nov 5 15:50:06.341506 dockerd[1879]: time="2025-11-05T15:50:06.341401424Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 5 15:50:06.341703 dockerd[1879]: time="2025-11-05T15:50:06.341545440Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Nov 5 15:50:06.341764 dockerd[1879]: time="2025-11-05T15:50:06.341716660Z" level=info msg="Initializing buildkit" Nov 5 15:50:06.375653 dockerd[1879]: time="2025-11-05T15:50:06.375572565Z" level=info msg="Completed buildkit initialization" Nov 5 15:50:06.386432 dockerd[1879]: time="2025-11-05T15:50:06.386180237Z" level=info msg="Daemon has completed initialization" Nov 5 15:50:06.386432 dockerd[1879]: time="2025-11-05T15:50:06.386316286Z" level=info msg="API listen on /run/docker.sock" Nov 5 15:50:06.387526 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 5 15:50:07.374202 containerd[1598]: time="2025-11-05T15:50:07.373728992Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\"" Nov 5 15:50:08.007448 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2391973962.mount: Deactivated successfully. Nov 5 15:50:09.429463 containerd[1598]: time="2025-11-05T15:50:09.429352024Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:50:09.431372 containerd[1598]: time="2025-11-05T15:50:09.431301714Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.9: active requests=0, bytes read=28837916" Nov 5 15:50:09.432259 containerd[1598]: time="2025-11-05T15:50:09.432185637Z" level=info msg="ImageCreate event name:\"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:50:09.436856 containerd[1598]: time="2025-11-05T15:50:09.436755428Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:50:09.438991 containerd[1598]: time="2025-11-05T15:50:09.438885367Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.9\" with image id \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\", size \"28834515\" in 2.065097313s" Nov 5 15:50:09.438991 containerd[1598]: time="2025-11-05T15:50:09.438984052Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\"" Nov 5 15:50:09.439890 containerd[1598]: time="2025-11-05T15:50:09.439844210Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\"" Nov 5 15:50:10.996123 containerd[1598]: time="2025-11-05T15:50:10.996039734Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:50:10.997744 containerd[1598]: time="2025-11-05T15:50:10.997507678Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.9: active 
requests=0, bytes read=24787027" Nov 5 15:50:10.998500 containerd[1598]: time="2025-11-05T15:50:10.998450893Z" level=info msg="ImageCreate event name:\"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:50:11.001972 containerd[1598]: time="2025-11-05T15:50:11.001495832Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:50:11.003326 containerd[1598]: time="2025-11-05T15:50:11.003131395Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.9\" with image id \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\", size \"26421706\" in 1.563243678s" Nov 5 15:50:11.003326 containerd[1598]: time="2025-11-05T15:50:11.003192093Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\"" Nov 5 15:50:11.004724 containerd[1598]: time="2025-11-05T15:50:11.004403882Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\"" Nov 5 15:50:12.429746 containerd[1598]: time="2025-11-05T15:50:12.429601130Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:50:12.430978 containerd[1598]: time="2025-11-05T15:50:12.430480295Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.9: active requests=0, bytes read=19176289" Nov 5 15:50:12.432965 containerd[1598]: time="2025-11-05T15:50:12.432276924Z" level=info msg="ImageCreate event name:\"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:50:12.436880 containerd[1598]: time="2025-11-05T15:50:12.436814688Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:50:12.439147 containerd[1598]: time="2025-11-05T15:50:12.439074978Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.9\" with image id \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\", size \"20810986\" in 1.434618715s" Nov 5 15:50:12.439409 containerd[1598]: time="2025-11-05T15:50:12.439381776Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\"" Nov 5 15:50:12.440392 containerd[1598]: time="2025-11-05T15:50:12.440323003Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\"" Nov 5 15:50:13.638749 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4294856496.mount: Deactivated successfully. 
Nov 5 15:50:14.199751 containerd[1598]: time="2025-11-05T15:50:14.199674313Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:50:14.200781 containerd[1598]: time="2025-11-05T15:50:14.200558853Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.9: active requests=0, bytes read=30924206" Nov 5 15:50:14.202003 containerd[1598]: time="2025-11-05T15:50:14.201392375Z" level=info msg="ImageCreate event name:\"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:50:14.203667 containerd[1598]: time="2025-11-05T15:50:14.203606097Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:50:14.204105 containerd[1598]: time="2025-11-05T15:50:14.204069646Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.9\" with image id \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\", repo tag \"registry.k8s.io/kube-proxy:v1.32.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\", size \"30923225\" in 1.763465075s" Nov 5 15:50:14.204105 containerd[1598]: time="2025-11-05T15:50:14.204102274Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\"" Nov 5 15:50:14.204708 containerd[1598]: time="2025-11-05T15:50:14.204561297Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Nov 5 15:50:14.206308 systemd-resolved[1278]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. Nov 5 15:50:14.303891 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Nov 5 15:50:14.306508 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 15:50:14.478418 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 15:50:14.490385 (kubelet)[2181]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 5 15:50:14.547957 kubelet[2181]: E1105 15:50:14.547882 2181 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 5 15:50:14.551116 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 5 15:50:14.551445 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 5 15:50:14.552330 systemd[1]: kubelet.service: Consumed 198ms CPU time, 111M memory peak. Nov 5 15:50:14.799458 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2386371713.mount: Deactivated successfully. 
Nov 5 15:50:15.770090 containerd[1598]: time="2025-11-05T15:50:15.770017466Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:50:15.771028 containerd[1598]: time="2025-11-05T15:50:15.770980414Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Nov 5 15:50:15.772269 containerd[1598]: time="2025-11-05T15:50:15.772196091Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:50:15.776397 containerd[1598]: time="2025-11-05T15:50:15.775624420Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:50:15.778433 containerd[1598]: time="2025-11-05T15:50:15.778365825Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.573773287s" Nov 5 15:50:15.778433 containerd[1598]: time="2025-11-05T15:50:15.778422195Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Nov 5 15:50:15.779367 containerd[1598]: time="2025-11-05T15:50:15.779340467Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Nov 5 15:50:16.345157 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount739289373.mount: Deactivated successfully. 
Nov 5 15:50:16.352226 containerd[1598]: time="2025-11-05T15:50:16.352064662Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 5 15:50:16.353831 containerd[1598]: time="2025-11-05T15:50:16.353756535Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Nov 5 15:50:16.354486 containerd[1598]: time="2025-11-05T15:50:16.354415666Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 5 15:50:16.358739 containerd[1598]: time="2025-11-05T15:50:16.358634562Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 5 15:50:16.361288 containerd[1598]: time="2025-11-05T15:50:16.360237281Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 580.746692ms" Nov 5 15:50:16.361288 containerd[1598]: time="2025-11-05T15:50:16.360306187Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Nov 5 15:50:16.361561 containerd[1598]: time="2025-11-05T15:50:16.361525301Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Nov 5 15:50:16.834669 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1582338627.mount: Deactivated successfully. Nov 5 15:50:17.275160 systemd-resolved[1278]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2. 
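Editor's note: the PullImage/Pulled pairs above show the Kubernetes control-plane images being fetched through containerd's CRI image service; images pulled this way land in containerd's "k8s.io" namespace rather than in Docker's image store. Assuming a shell on the host, they can be listed with:

    ctr --namespace k8s.io images ls    # CRI-pulled images live in the k8s.io namespace
    crictl images                       # same view via the CRI API, if crictl is installed and pointed at containerd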
Nov 5 15:50:18.712297 containerd[1598]: time="2025-11-05T15:50:18.712222324Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:50:18.713632 containerd[1598]: time="2025-11-05T15:50:18.713571340Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682056" Nov 5 15:50:18.714774 containerd[1598]: time="2025-11-05T15:50:18.714725663Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:50:18.719723 containerd[1598]: time="2025-11-05T15:50:18.719651670Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:50:18.721520 containerd[1598]: time="2025-11-05T15:50:18.721460469Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.359886756s" Nov 5 15:50:18.721726 containerd[1598]: time="2025-11-05T15:50:18.721704333Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Nov 5 15:50:21.578643 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 15:50:21.579574 systemd[1]: kubelet.service: Consumed 198ms CPU time, 111M memory peak. Nov 5 15:50:21.583813 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 15:50:21.627647 systemd[1]: Reload requested from client PID 2324 ('systemctl') (unit session-9.scope)... Nov 5 15:50:21.627669 systemd[1]: Reloading... Nov 5 15:50:21.828976 zram_generator::config[2380]: No configuration found. Nov 5 15:50:22.131107 systemd[1]: Reloading finished in 502 ms. Nov 5 15:50:22.209184 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 5 15:50:22.209313 systemd[1]: kubelet.service: Failed with result 'signal'. Nov 5 15:50:22.209788 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 15:50:22.209865 systemd[1]: kubelet.service: Consumed 151ms CPU time, 98.1M memory peak. Nov 5 15:50:22.213647 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 15:50:22.394953 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 15:50:22.410611 (kubelet)[2423]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 5 15:50:22.506840 kubelet[2423]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 5 15:50:22.507965 kubelet[2423]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 5 15:50:22.507965 kubelet[2423]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 5 15:50:22.507965 kubelet[2423]: I1105 15:50:22.507687 2423 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 5 15:50:23.192916 kubelet[2423]: I1105 15:50:23.192838 2423 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Nov 5 15:50:23.193211 kubelet[2423]: I1105 15:50:23.193174 2423 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 5 15:50:23.194452 kubelet[2423]: I1105 15:50:23.194403 2423 server.go:954] "Client rotation is on, will bootstrap in background" Nov 5 15:50:23.247679 kubelet[2423]: I1105 15:50:23.247624 2423 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 5 15:50:23.251983 kubelet[2423]: E1105 15:50:23.251719 2423 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://137.184.121.184:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 137.184.121.184:6443: connect: connection refused" logger="UnhandledError" Nov 5 15:50:23.267347 kubelet[2423]: I1105 15:50:23.267226 2423 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 5 15:50:23.275976 kubelet[2423]: I1105 15:50:23.275903 2423 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Nov 5 15:50:23.282107 kubelet[2423]: I1105 15:50:23.281996 2423 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 5 15:50:23.282482 kubelet[2423]: I1105 15:50:23.282103 2423 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4487.0.1-2-254db4f49e","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 5 15:50:23.284657 kubelet[2423]: I1105 
15:50:23.284586 2423 topology_manager.go:138] "Creating topology manager with none policy" Nov 5 15:50:23.284657 kubelet[2423]: I1105 15:50:23.284644 2423 container_manager_linux.go:304] "Creating device plugin manager" Nov 5 15:50:23.286334 kubelet[2423]: I1105 15:50:23.286261 2423 state_mem.go:36] "Initialized new in-memory state store" Nov 5 15:50:23.290660 kubelet[2423]: I1105 15:50:23.290513 2423 kubelet.go:446] "Attempting to sync node with API server" Nov 5 15:50:23.290660 kubelet[2423]: I1105 15:50:23.290580 2423 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 5 15:50:23.292429 kubelet[2423]: I1105 15:50:23.292011 2423 kubelet.go:352] "Adding apiserver pod source" Nov 5 15:50:23.292429 kubelet[2423]: I1105 15:50:23.292061 2423 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 5 15:50:23.296234 kubelet[2423]: W1105 15:50:23.296098 2423 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://137.184.121.184:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4487.0.1-2-254db4f49e&limit=500&resourceVersion=0": dial tcp 137.184.121.184:6443: connect: connection refused Nov 5 15:50:23.296477 kubelet[2423]: E1105 15:50:23.296441 2423 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://137.184.121.184:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4487.0.1-2-254db4f49e&limit=500&resourceVersion=0\": dial tcp 137.184.121.184:6443: connect: connection refused" logger="UnhandledError" Nov 5 15:50:23.297957 kubelet[2423]: I1105 15:50:23.297827 2423 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Nov 5 15:50:23.301210 kubelet[2423]: I1105 15:50:23.301052 2423 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 5 15:50:23.302011 kubelet[2423]: W1105 15:50:23.301614 2423 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Nov 5 15:50:23.302897 kubelet[2423]: I1105 15:50:23.302865 2423 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 5 15:50:23.303033 kubelet[2423]: I1105 15:50:23.302910 2423 server.go:1287] "Started kubelet" Nov 5 15:50:23.315889 kubelet[2423]: I1105 15:50:23.315838 2423 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 5 15:50:23.321543 kubelet[2423]: I1105 15:50:23.321067 2423 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 5 15:50:23.326741 kubelet[2423]: I1105 15:50:23.326691 2423 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 5 15:50:23.327070 kubelet[2423]: E1105 15:50:23.322025 2423 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://137.184.121.184:6443/api/v1/namespaces/default/events\": dial tcp 137.184.121.184:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4487.0.1-2-254db4f49e.1875271a59195c87 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4487.0.1-2-254db4f49e,UID:ci-4487.0.1-2-254db4f49e,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4487.0.1-2-254db4f49e,},FirstTimestamp:2025-11-05 15:50:23.302884487 +0000 UTC m=+0.884823033,LastTimestamp:2025-11-05 15:50:23.302884487 +0000 UTC m=+0.884823033,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4487.0.1-2-254db4f49e,}" Nov 5 15:50:23.327248 kubelet[2423]: I1105 15:50:23.327152 2423 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Nov 5 15:50:23.327837 kubelet[2423]: E1105 15:50:23.327751 2423 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4487.0.1-2-254db4f49e\" not found" Nov 5 15:50:23.328979 kubelet[2423]: I1105 15:50:23.328774 2423 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 5 15:50:23.329390 kubelet[2423]: I1105 15:50:23.329362 2423 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 5 15:50:23.331510 kubelet[2423]: I1105 15:50:23.330977 2423 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 5 15:50:23.331510 kubelet[2423]: I1105 15:50:23.331053 2423 reconciler.go:26] "Reconciler: start to sync state" Nov 5 15:50:23.331647 kubelet[2423]: I1105 15:50:23.331604 2423 server.go:479] "Adding debug handlers to kubelet server" Nov 5 15:50:23.333760 kubelet[2423]: E1105 15:50:23.332133 2423 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://137.184.121.184:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4487.0.1-2-254db4f49e?timeout=10s\": dial tcp 137.184.121.184:6443: connect: connection refused" interval="200ms" Nov 5 15:50:23.336716 kubelet[2423]: W1105 15:50:23.336637 2423 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://137.184.121.184:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 137.184.121.184:6443: connect: connection refused Nov 5 15:50:23.337026 kubelet[2423]: E1105 15:50:23.336991 2423 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: 
Get \"https://137.184.121.184:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 137.184.121.184:6443: connect: connection refused" logger="UnhandledError" Nov 5 15:50:23.337548 kubelet[2423]: I1105 15:50:23.337522 2423 factory.go:221] Registration of the systemd container factory successfully Nov 5 15:50:23.337837 kubelet[2423]: I1105 15:50:23.337810 2423 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 5 15:50:23.345563 kubelet[2423]: W1105 15:50:23.345458 2423 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://137.184.121.184:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 137.184.121.184:6443: connect: connection refused Nov 5 15:50:23.345563 kubelet[2423]: E1105 15:50:23.345554 2423 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://137.184.121.184:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 137.184.121.184:6443: connect: connection refused" logger="UnhandledError" Nov 5 15:50:23.347926 kubelet[2423]: I1105 15:50:23.347878 2423 factory.go:221] Registration of the containerd container factory successfully Nov 5 15:50:23.351271 kubelet[2423]: E1105 15:50:23.351229 2423 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 5 15:50:23.365578 kubelet[2423]: I1105 15:50:23.365378 2423 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 5 15:50:23.367486 kubelet[2423]: I1105 15:50:23.367438 2423 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 5 15:50:23.368027 kubelet[2423]: I1105 15:50:23.367653 2423 status_manager.go:227] "Starting to sync pod status with apiserver" Nov 5 15:50:23.368027 kubelet[2423]: I1105 15:50:23.367697 2423 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Nov 5 15:50:23.368027 kubelet[2423]: I1105 15:50:23.367707 2423 kubelet.go:2382] "Starting kubelet main sync loop" Nov 5 15:50:23.368027 kubelet[2423]: E1105 15:50:23.367785 2423 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 5 15:50:23.384029 kubelet[2423]: W1105 15:50:23.383755 2423 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://137.184.121.184:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 137.184.121.184:6443: connect: connection refused Nov 5 15:50:23.384029 kubelet[2423]: E1105 15:50:23.383851 2423 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://137.184.121.184:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 137.184.121.184:6443: connect: connection refused" logger="UnhandledError" Nov 5 15:50:23.390254 kubelet[2423]: I1105 15:50:23.390218 2423 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 5 15:50:23.390942 kubelet[2423]: I1105 15:50:23.390657 2423 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 5 15:50:23.390942 kubelet[2423]: I1105 15:50:23.390794 2423 state_mem.go:36] "Initialized new in-memory state store" Nov 5 15:50:23.393580 kubelet[2423]: I1105 15:50:23.393211 2423 policy_none.go:49] "None policy: Start" Nov 5 15:50:23.393580 kubelet[2423]: I1105 15:50:23.393245 2423 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 5 15:50:23.393580 kubelet[2423]: I1105 15:50:23.393263 2423 state_mem.go:35] "Initializing new in-memory state store" Nov 5 15:50:23.401905 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Nov 5 15:50:23.427904 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Nov 5 15:50:23.428441 kubelet[2423]: E1105 15:50:23.428171 2423 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4487.0.1-2-254db4f49e\" not found" Nov 5 15:50:23.435182 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Nov 5 15:50:23.451593 kubelet[2423]: I1105 15:50:23.451472 2423 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 5 15:50:23.451960 kubelet[2423]: I1105 15:50:23.451944 2423 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 5 15:50:23.452079 kubelet[2423]: I1105 15:50:23.452041 2423 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 5 15:50:23.452764 kubelet[2423]: I1105 15:50:23.452637 2423 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 5 15:50:23.455626 kubelet[2423]: E1105 15:50:23.455414 2423 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 5 15:50:23.455626 kubelet[2423]: E1105 15:50:23.455535 2423 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4487.0.1-2-254db4f49e\" not found" Nov 5 15:50:23.479539 systemd[1]: Created slice kubepods-burstable-pod4b7896c2f233ed0c39d05c145bf38809.slice - libcontainer container kubepods-burstable-pod4b7896c2f233ed0c39d05c145bf38809.slice. Nov 5 15:50:23.493165 kubelet[2423]: E1105 15:50:23.493121 2423 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487.0.1-2-254db4f49e\" not found" node="ci-4487.0.1-2-254db4f49e" Nov 5 15:50:23.497841 systemd[1]: Created slice kubepods-burstable-podfe7bc33fc5f336a87da5269eb0aad81a.slice - libcontainer container kubepods-burstable-podfe7bc33fc5f336a87da5269eb0aad81a.slice. Nov 5 15:50:23.518009 kubelet[2423]: E1105 15:50:23.517970 2423 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487.0.1-2-254db4f49e\" not found" node="ci-4487.0.1-2-254db4f49e" Nov 5 15:50:23.523804 systemd[1]: Created slice kubepods-burstable-pod1de67e8b25adff5c7183fd43704b53aa.slice - libcontainer container kubepods-burstable-pod1de67e8b25adff5c7183fd43704b53aa.slice. Nov 5 15:50:23.526333 kubelet[2423]: E1105 15:50:23.526300 2423 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487.0.1-2-254db4f49e\" not found" node="ci-4487.0.1-2-254db4f49e" Nov 5 15:50:23.533098 kubelet[2423]: E1105 15:50:23.533054 2423 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://137.184.121.184:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4487.0.1-2-254db4f49e?timeout=10s\": dial tcp 137.184.121.184:6443: connect: connection refused" interval="400ms" Nov 5 15:50:23.554988 kubelet[2423]: I1105 15:50:23.554374 2423 kubelet_node_status.go:75] "Attempting to register node" node="ci-4487.0.1-2-254db4f49e" Nov 5 15:50:23.554988 kubelet[2423]: E1105 15:50:23.554922 2423 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://137.184.121.184:6443/api/v1/nodes\": dial tcp 137.184.121.184:6443: connect: connection refused" node="ci-4487.0.1-2-254db4f49e" Nov 5 15:50:23.632114 kubelet[2423]: I1105 15:50:23.632043 2423 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fe7bc33fc5f336a87da5269eb0aad81a-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4487.0.1-2-254db4f49e\" (UID: \"fe7bc33fc5f336a87da5269eb0aad81a\") " pod="kube-system/kube-controller-manager-ci-4487.0.1-2-254db4f49e" Nov 5 15:50:23.632114 kubelet[2423]: I1105 15:50:23.632108 2423 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4b7896c2f233ed0c39d05c145bf38809-ca-certs\") pod \"kube-apiserver-ci-4487.0.1-2-254db4f49e\" (UID: \"4b7896c2f233ed0c39d05c145bf38809\") " pod="kube-system/kube-apiserver-ci-4487.0.1-2-254db4f49e" Nov 5 15:50:23.632114 kubelet[2423]: I1105 15:50:23.632139 2423 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4b7896c2f233ed0c39d05c145bf38809-k8s-certs\") pod \"kube-apiserver-ci-4487.0.1-2-254db4f49e\" (UID: 
\"4b7896c2f233ed0c39d05c145bf38809\") " pod="kube-system/kube-apiserver-ci-4487.0.1-2-254db4f49e" Nov 5 15:50:23.632450 kubelet[2423]: I1105 15:50:23.632163 2423 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4b7896c2f233ed0c39d05c145bf38809-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4487.0.1-2-254db4f49e\" (UID: \"4b7896c2f233ed0c39d05c145bf38809\") " pod="kube-system/kube-apiserver-ci-4487.0.1-2-254db4f49e" Nov 5 15:50:23.632450 kubelet[2423]: I1105 15:50:23.632189 2423 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fe7bc33fc5f336a87da5269eb0aad81a-k8s-certs\") pod \"kube-controller-manager-ci-4487.0.1-2-254db4f49e\" (UID: \"fe7bc33fc5f336a87da5269eb0aad81a\") " pod="kube-system/kube-controller-manager-ci-4487.0.1-2-254db4f49e" Nov 5 15:50:23.632450 kubelet[2423]: I1105 15:50:23.632215 2423 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fe7bc33fc5f336a87da5269eb0aad81a-kubeconfig\") pod \"kube-controller-manager-ci-4487.0.1-2-254db4f49e\" (UID: \"fe7bc33fc5f336a87da5269eb0aad81a\") " pod="kube-system/kube-controller-manager-ci-4487.0.1-2-254db4f49e" Nov 5 15:50:23.632450 kubelet[2423]: I1105 15:50:23.632256 2423 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fe7bc33fc5f336a87da5269eb0aad81a-ca-certs\") pod \"kube-controller-manager-ci-4487.0.1-2-254db4f49e\" (UID: \"fe7bc33fc5f336a87da5269eb0aad81a\") " pod="kube-system/kube-controller-manager-ci-4487.0.1-2-254db4f49e" Nov 5 15:50:23.632450 kubelet[2423]: I1105 15:50:23.632282 2423 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fe7bc33fc5f336a87da5269eb0aad81a-flexvolume-dir\") pod \"kube-controller-manager-ci-4487.0.1-2-254db4f49e\" (UID: \"fe7bc33fc5f336a87da5269eb0aad81a\") " pod="kube-system/kube-controller-manager-ci-4487.0.1-2-254db4f49e" Nov 5 15:50:23.632662 kubelet[2423]: I1105 15:50:23.632351 2423 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1de67e8b25adff5c7183fd43704b53aa-kubeconfig\") pod \"kube-scheduler-ci-4487.0.1-2-254db4f49e\" (UID: \"1de67e8b25adff5c7183fd43704b53aa\") " pod="kube-system/kube-scheduler-ci-4487.0.1-2-254db4f49e" Nov 5 15:50:23.756436 kubelet[2423]: I1105 15:50:23.756309 2423 kubelet_node_status.go:75] "Attempting to register node" node="ci-4487.0.1-2-254db4f49e" Nov 5 15:50:23.757416 kubelet[2423]: E1105 15:50:23.757373 2423 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://137.184.121.184:6443/api/v1/nodes\": dial tcp 137.184.121.184:6443: connect: connection refused" node="ci-4487.0.1-2-254db4f49e" Nov 5 15:50:23.794047 kubelet[2423]: E1105 15:50:23.793739 2423 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 5 15:50:23.797368 containerd[1598]: time="2025-11-05T15:50:23.796977549Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-ci-4487.0.1-2-254db4f49e,Uid:4b7896c2f233ed0c39d05c145bf38809,Namespace:kube-system,Attempt:0,}" Nov 5 15:50:23.820192 kubelet[2423]: E1105 15:50:23.820143 2423 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 5 15:50:23.832917 kubelet[2423]: E1105 15:50:23.832345 2423 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 5 15:50:23.834892 containerd[1598]: time="2025-11-05T15:50:23.834815537Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4487.0.1-2-254db4f49e,Uid:1de67e8b25adff5c7183fd43704b53aa,Namespace:kube-system,Attempt:0,}" Nov 5 15:50:23.835982 containerd[1598]: time="2025-11-05T15:50:23.835492219Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4487.0.1-2-254db4f49e,Uid:fe7bc33fc5f336a87da5269eb0aad81a,Namespace:kube-system,Attempt:0,}" Nov 5 15:50:23.919605 containerd[1598]: time="2025-11-05T15:50:23.919538427Z" level=info msg="connecting to shim edcf886bee45c4ad6ec031aa543febd52225e2b1b4667a51ed2b0fbdc3f9089e" address="unix:///run/containerd/s/4f0b06200a2c859b6432a14a93c65f235cdf4ce4aac08a8581b3f645713c7fe4" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:50:23.922238 containerd[1598]: time="2025-11-05T15:50:23.921208486Z" level=info msg="connecting to shim 19a0ee60a2e6530ee773e24e82c353a0d70a3c5cd5c1c4c48432393a38bc1d7c" address="unix:///run/containerd/s/a0093fd8d00594a53b233f30f12f1e44b3c0ae6ac555b28600b13d747a7253de" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:50:23.929188 containerd[1598]: time="2025-11-05T15:50:23.929138867Z" level=info msg="connecting to shim 0cf7e8255713f7b829e941d7782737828dd265f087d87b7b2cdac56153731fd2" address="unix:///run/containerd/s/0f0da4c40a17f2dc7017ae4e2d81c8734ff0648db1d87b41f360a94dedeacdb6" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:50:23.933855 kubelet[2423]: E1105 15:50:23.933729 2423 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://137.184.121.184:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4487.0.1-2-254db4f49e?timeout=10s\": dial tcp 137.184.121.184:6443: connect: connection refused" interval="800ms" Nov 5 15:50:24.062208 systemd[1]: Started cri-containerd-19a0ee60a2e6530ee773e24e82c353a0d70a3c5cd5c1c4c48432393a38bc1d7c.scope - libcontainer container 19a0ee60a2e6530ee773e24e82c353a0d70a3c5cd5c1c4c48432393a38bc1d7c. Nov 5 15:50:24.087182 systemd[1]: Started cri-containerd-0cf7e8255713f7b829e941d7782737828dd265f087d87b7b2cdac56153731fd2.scope - libcontainer container 0cf7e8255713f7b829e941d7782737828dd265f087d87b7b2cdac56153731fd2. Nov 5 15:50:24.090901 systemd[1]: Started cri-containerd-edcf886bee45c4ad6ec031aa543febd52225e2b1b4667a51ed2b0fbdc3f9089e.scope - libcontainer container edcf886bee45c4ad6ec031aa543febd52225e2b1b4667a51ed2b0fbdc3f9089e. 
Nov 5 15:50:24.160060 kubelet[2423]: I1105 15:50:24.159210 2423 kubelet_node_status.go:75] "Attempting to register node" node="ci-4487.0.1-2-254db4f49e" Nov 5 15:50:24.161028 kubelet[2423]: E1105 15:50:24.160907 2423 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://137.184.121.184:6443/api/v1/nodes\": dial tcp 137.184.121.184:6443: connect: connection refused" node="ci-4487.0.1-2-254db4f49e" Nov 5 15:50:24.202388 containerd[1598]: time="2025-11-05T15:50:24.202342245Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4487.0.1-2-254db4f49e,Uid:4b7896c2f233ed0c39d05c145bf38809,Namespace:kube-system,Attempt:0,} returns sandbox id \"19a0ee60a2e6530ee773e24e82c353a0d70a3c5cd5c1c4c48432393a38bc1d7c\"" Nov 5 15:50:24.205839 kubelet[2423]: E1105 15:50:24.205807 2423 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 5 15:50:24.208604 containerd[1598]: time="2025-11-05T15:50:24.208572919Z" level=info msg="CreateContainer within sandbox \"19a0ee60a2e6530ee773e24e82c353a0d70a3c5cd5c1c4c48432393a38bc1d7c\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 5 15:50:24.215029 containerd[1598]: time="2025-11-05T15:50:24.214982438Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4487.0.1-2-254db4f49e,Uid:fe7bc33fc5f336a87da5269eb0aad81a,Namespace:kube-system,Attempt:0,} returns sandbox id \"0cf7e8255713f7b829e941d7782737828dd265f087d87b7b2cdac56153731fd2\"" Nov 5 15:50:24.217044 kubelet[2423]: E1105 15:50:24.216995 2423 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 5 15:50:24.221462 containerd[1598]: time="2025-11-05T15:50:24.221389749Z" level=info msg="CreateContainer within sandbox \"0cf7e8255713f7b829e941d7782737828dd265f087d87b7b2cdac56153731fd2\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 5 15:50:24.222898 containerd[1598]: time="2025-11-05T15:50:24.222855944Z" level=info msg="Container b24f11d1db872d20bcee7cfd3da886c2e3e0ef03de098c6230da4d036032afe7: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:50:24.232355 containerd[1598]: time="2025-11-05T15:50:24.232097819Z" level=info msg="Container 43b8666462268e2f45429c196b1136f84d5315d228144e31d3ecb6bfdce450b8: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:50:24.238593 containerd[1598]: time="2025-11-05T15:50:24.238531390Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4487.0.1-2-254db4f49e,Uid:1de67e8b25adff5c7183fd43704b53aa,Namespace:kube-system,Attempt:0,} returns sandbox id \"edcf886bee45c4ad6ec031aa543febd52225e2b1b4667a51ed2b0fbdc3f9089e\"" Nov 5 15:50:24.240337 kubelet[2423]: E1105 15:50:24.240301 2423 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 5 15:50:24.243018 containerd[1598]: time="2025-11-05T15:50:24.242896792Z" level=info msg="CreateContainer within sandbox \"edcf886bee45c4ad6ec031aa543febd52225e2b1b4667a51ed2b0fbdc3f9089e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 5 15:50:24.245397 containerd[1598]: time="2025-11-05T15:50:24.245197662Z" level=info msg="CreateContainer within sandbox 
\"19a0ee60a2e6530ee773e24e82c353a0d70a3c5cd5c1c4c48432393a38bc1d7c\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"b24f11d1db872d20bcee7cfd3da886c2e3e0ef03de098c6230da4d036032afe7\"" Nov 5 15:50:24.246957 containerd[1598]: time="2025-11-05T15:50:24.246176105Z" level=info msg="StartContainer for \"b24f11d1db872d20bcee7cfd3da886c2e3e0ef03de098c6230da4d036032afe7\"" Nov 5 15:50:24.248559 containerd[1598]: time="2025-11-05T15:50:24.248514690Z" level=info msg="connecting to shim b24f11d1db872d20bcee7cfd3da886c2e3e0ef03de098c6230da4d036032afe7" address="unix:///run/containerd/s/a0093fd8d00594a53b233f30f12f1e44b3c0ae6ac555b28600b13d747a7253de" protocol=ttrpc version=3 Nov 5 15:50:24.249249 containerd[1598]: time="2025-11-05T15:50:24.247924822Z" level=info msg="CreateContainer within sandbox \"0cf7e8255713f7b829e941d7782737828dd265f087d87b7b2cdac56153731fd2\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"43b8666462268e2f45429c196b1136f84d5315d228144e31d3ecb6bfdce450b8\"" Nov 5 15:50:24.250287 containerd[1598]: time="2025-11-05T15:50:24.250252696Z" level=info msg="StartContainer for \"43b8666462268e2f45429c196b1136f84d5315d228144e31d3ecb6bfdce450b8\"" Nov 5 15:50:24.253523 containerd[1598]: time="2025-11-05T15:50:24.253398308Z" level=info msg="connecting to shim 43b8666462268e2f45429c196b1136f84d5315d228144e31d3ecb6bfdce450b8" address="unix:///run/containerd/s/0f0da4c40a17f2dc7017ae4e2d81c8734ff0648db1d87b41f360a94dedeacdb6" protocol=ttrpc version=3 Nov 5 15:50:24.260295 containerd[1598]: time="2025-11-05T15:50:24.260232879Z" level=info msg="Container d2ab8ed58b5c46fa8395f1be75de2d342fdc014dc8b8ccf597db2d06d68ae256: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:50:24.269761 containerd[1598]: time="2025-11-05T15:50:24.269713622Z" level=info msg="CreateContainer within sandbox \"edcf886bee45c4ad6ec031aa543febd52225e2b1b4667a51ed2b0fbdc3f9089e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d2ab8ed58b5c46fa8395f1be75de2d342fdc014dc8b8ccf597db2d06d68ae256\"" Nov 5 15:50:24.271957 containerd[1598]: time="2025-11-05T15:50:24.271541908Z" level=info msg="StartContainer for \"d2ab8ed58b5c46fa8395f1be75de2d342fdc014dc8b8ccf597db2d06d68ae256\"" Nov 5 15:50:24.273988 containerd[1598]: time="2025-11-05T15:50:24.273872856Z" level=info msg="connecting to shim d2ab8ed58b5c46fa8395f1be75de2d342fdc014dc8b8ccf597db2d06d68ae256" address="unix:///run/containerd/s/4f0b06200a2c859b6432a14a93c65f235cdf4ce4aac08a8581b3f645713c7fe4" protocol=ttrpc version=3 Nov 5 15:50:24.288204 systemd[1]: Started cri-containerd-43b8666462268e2f45429c196b1136f84d5315d228144e31d3ecb6bfdce450b8.scope - libcontainer container 43b8666462268e2f45429c196b1136f84d5315d228144e31d3ecb6bfdce450b8. Nov 5 15:50:24.289781 systemd[1]: Started cri-containerd-b24f11d1db872d20bcee7cfd3da886c2e3e0ef03de098c6230da4d036032afe7.scope - libcontainer container b24f11d1db872d20bcee7cfd3da886c2e3e0ef03de098c6230da4d036032afe7. Nov 5 15:50:24.317283 systemd[1]: Started cri-containerd-d2ab8ed58b5c46fa8395f1be75de2d342fdc014dc8b8ccf597db2d06d68ae256.scope - libcontainer container d2ab8ed58b5c46fa8395f1be75de2d342fdc014dc8b8ccf597db2d06d68ae256. 
Nov 5 15:50:24.388607 containerd[1598]: time="2025-11-05T15:50:24.388551648Z" level=info msg="StartContainer for \"b24f11d1db872d20bcee7cfd3da886c2e3e0ef03de098c6230da4d036032afe7\" returns successfully" Nov 5 15:50:24.409720 kubelet[2423]: E1105 15:50:24.409554 2423 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487.0.1-2-254db4f49e\" not found" node="ci-4487.0.1-2-254db4f49e" Nov 5 15:50:24.410272 kubelet[2423]: E1105 15:50:24.410231 2423 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 5 15:50:24.434198 containerd[1598]: time="2025-11-05T15:50:24.434127673Z" level=info msg="StartContainer for \"43b8666462268e2f45429c196b1136f84d5315d228144e31d3ecb6bfdce450b8\" returns successfully" Nov 5 15:50:24.477311 containerd[1598]: time="2025-11-05T15:50:24.477255401Z" level=info msg="StartContainer for \"d2ab8ed58b5c46fa8395f1be75de2d342fdc014dc8b8ccf597db2d06d68ae256\" returns successfully" Nov 5 15:50:24.528638 kubelet[2423]: W1105 15:50:24.528572 2423 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://137.184.121.184:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 137.184.121.184:6443: connect: connection refused Nov 5 15:50:24.528638 kubelet[2423]: E1105 15:50:24.528640 2423 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://137.184.121.184:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 137.184.121.184:6443: connect: connection refused" logger="UnhandledError" Nov 5 15:50:24.604964 kubelet[2423]: W1105 15:50:24.604743 2423 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://137.184.121.184:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4487.0.1-2-254db4f49e&limit=500&resourceVersion=0": dial tcp 137.184.121.184:6443: connect: connection refused Nov 5 15:50:24.604964 kubelet[2423]: E1105 15:50:24.604824 2423 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://137.184.121.184:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4487.0.1-2-254db4f49e&limit=500&resourceVersion=0\": dial tcp 137.184.121.184:6443: connect: connection refused" logger="UnhandledError" Nov 5 15:50:24.628971 kubelet[2423]: W1105 15:50:24.628075 2423 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://137.184.121.184:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 137.184.121.184:6443: connect: connection refused Nov 5 15:50:24.628971 kubelet[2423]: E1105 15:50:24.628178 2423 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://137.184.121.184:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 137.184.121.184:6443: connect: connection refused" logger="UnhandledError" Nov 5 15:50:24.963425 kubelet[2423]: I1105 15:50:24.963286 2423 kubelet_node_status.go:75] "Attempting to register node" node="ci-4487.0.1-2-254db4f49e" Nov 5 15:50:25.415269 kubelet[2423]: E1105 15:50:25.415227 2423 kubelet.go:3190] 
"No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487.0.1-2-254db4f49e\" not found" node="ci-4487.0.1-2-254db4f49e" Nov 5 15:50:25.415444 kubelet[2423]: E1105 15:50:25.415408 2423 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 5 15:50:25.420809 kubelet[2423]: E1105 15:50:25.420773 2423 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487.0.1-2-254db4f49e\" not found" node="ci-4487.0.1-2-254db4f49e" Nov 5 15:50:25.422175 kubelet[2423]: E1105 15:50:25.422150 2423 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487.0.1-2-254db4f49e\" not found" node="ci-4487.0.1-2-254db4f49e" Nov 5 15:50:25.422310 kubelet[2423]: E1105 15:50:25.422294 2423 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 5 15:50:25.422433 kubelet[2423]: E1105 15:50:25.422419 2423 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 5 15:50:26.422238 kubelet[2423]: E1105 15:50:26.422203 2423 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487.0.1-2-254db4f49e\" not found" node="ci-4487.0.1-2-254db4f49e" Nov 5 15:50:26.422644 kubelet[2423]: E1105 15:50:26.422331 2423 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 5 15:50:26.422644 kubelet[2423]: E1105 15:50:26.422598 2423 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487.0.1-2-254db4f49e\" not found" node="ci-4487.0.1-2-254db4f49e" Nov 5 15:50:26.422711 kubelet[2423]: E1105 15:50:26.422697 2423 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 5 15:50:27.031734 kubelet[2423]: I1105 15:50:27.031677 2423 kubelet_node_status.go:78] "Successfully registered node" node="ci-4487.0.1-2-254db4f49e" Nov 5 15:50:27.031734 kubelet[2423]: E1105 15:50:27.031729 2423 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4487.0.1-2-254db4f49e\": node \"ci-4487.0.1-2-254db4f49e\" not found" Nov 5 15:50:27.064571 kubelet[2423]: E1105 15:50:27.064430 2423 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4487.0.1-2-254db4f49e.1875271a59195c87 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4487.0.1-2-254db4f49e,UID:ci-4487.0.1-2-254db4f49e,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4487.0.1-2-254db4f49e,},FirstTimestamp:2025-11-05 15:50:23.302884487 +0000 UTC m=+0.884823033,LastTimestamp:2025-11-05 15:50:23.302884487 +0000 UTC m=+0.884823033,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4487.0.1-2-254db4f49e,}" Nov 5 15:50:27.094247 kubelet[2423]: E1105 15:50:27.094187 2423 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-node-lease\" not found" interval="1.6s" Nov 5 15:50:27.129461 kubelet[2423]: I1105 15:50:27.129412 2423 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4487.0.1-2-254db4f49e" Nov 5 15:50:27.142448 kubelet[2423]: E1105 15:50:27.142339 2423 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4487.0.1-2-254db4f49e\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4487.0.1-2-254db4f49e" Nov 5 15:50:27.142448 kubelet[2423]: I1105 15:50:27.142396 2423 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4487.0.1-2-254db4f49e" Nov 5 15:50:27.146800 kubelet[2423]: E1105 15:50:27.146720 2423 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4487.0.1-2-254db4f49e\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4487.0.1-2-254db4f49e" Nov 5 15:50:27.147176 kubelet[2423]: I1105 15:50:27.146953 2423 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4487.0.1-2-254db4f49e" Nov 5 15:50:27.151107 kubelet[2423]: E1105 15:50:27.150657 2423 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4487.0.1-2-254db4f49e\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4487.0.1-2-254db4f49e" Nov 5 15:50:27.313462 kubelet[2423]: I1105 15:50:27.313403 2423 apiserver.go:52] "Watching apiserver" Nov 5 15:50:27.331389 kubelet[2423]: I1105 15:50:27.331342 2423 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 5 15:50:27.423629 kubelet[2423]: I1105 15:50:27.423248 2423 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4487.0.1-2-254db4f49e" Nov 5 15:50:27.427582 kubelet[2423]: E1105 15:50:27.427544 2423 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4487.0.1-2-254db4f49e\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4487.0.1-2-254db4f49e" Nov 5 15:50:27.427943 kubelet[2423]: E1105 15:50:27.427912 2423 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 5 15:50:27.717250 kubelet[2423]: I1105 15:50:27.717109 2423 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4487.0.1-2-254db4f49e" Nov 5 15:50:27.721084 kubelet[2423]: E1105 15:50:27.720761 2423 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4487.0.1-2-254db4f49e\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4487.0.1-2-254db4f49e" Nov 5 15:50:27.721084 kubelet[2423]: E1105 15:50:27.720970 2423 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 5 15:50:27.863632 kubelet[2423]: I1105 15:50:27.863582 2423 kubelet.go:3194] "Creating a 
mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4487.0.1-2-254db4f49e" Nov 5 15:50:27.867182 kubelet[2423]: E1105 15:50:27.867136 2423 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4487.0.1-2-254db4f49e\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4487.0.1-2-254db4f49e" Nov 5 15:50:27.867395 kubelet[2423]: E1105 15:50:27.867353 2423 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 5 15:50:29.390171 systemd[1]: Reload requested from client PID 2691 ('systemctl') (unit session-9.scope)... Nov 5 15:50:29.390197 systemd[1]: Reloading... Nov 5 15:50:29.528001 zram_generator::config[2747]: No configuration found. Nov 5 15:50:29.718573 kubelet[2423]: I1105 15:50:29.718449 2423 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4487.0.1-2-254db4f49e" Nov 5 15:50:29.729040 kubelet[2423]: W1105 15:50:29.729004 2423 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 5 15:50:29.729291 kubelet[2423]: E1105 15:50:29.729271 2423 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 5 15:50:29.776161 systemd[1]: Reloading finished in 385 ms. Nov 5 15:50:29.812199 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 15:50:29.826639 systemd[1]: kubelet.service: Deactivated successfully. Nov 5 15:50:29.827346 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 15:50:29.827625 systemd[1]: kubelet.service: Consumed 1.390s CPU time, 129.9M memory peak. Nov 5 15:50:29.830611 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 15:50:30.014559 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 15:50:30.027819 (kubelet)[2786]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 5 15:50:30.104729 kubelet[2786]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 5 15:50:30.104729 kubelet[2786]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 5 15:50:30.104729 kubelet[2786]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 5 15:50:30.105183 kubelet[2786]: I1105 15:50:30.104839 2786 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 5 15:50:30.115419 kubelet[2786]: I1105 15:50:30.115374 2786 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Nov 5 15:50:30.115419 kubelet[2786]: I1105 15:50:30.115403 2786 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 5 15:50:30.115765 kubelet[2786]: I1105 15:50:30.115743 2786 server.go:954] "Client rotation is on, will bootstrap in background" Nov 5 15:50:30.118050 kubelet[2786]: I1105 15:50:30.118018 2786 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Nov 5 15:50:30.122575 kubelet[2786]: I1105 15:50:30.122001 2786 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 5 15:50:30.127694 kubelet[2786]: I1105 15:50:30.127657 2786 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 5 15:50:30.133693 kubelet[2786]: I1105 15:50:30.133506 2786 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Nov 5 15:50:30.135123 kubelet[2786]: I1105 15:50:30.134027 2786 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 5 15:50:30.135123 kubelet[2786]: I1105 15:50:30.134064 2786 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4487.0.1-2-254db4f49e","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 5 15:50:30.135123 kubelet[2786]: I1105 15:50:30.134258 2786 topology_manager.go:138] "Creating topology manager with none policy" Nov 5 15:50:30.135123 kubelet[2786]: I1105 15:50:30.134269 2786 container_manager_linux.go:304] "Creating device plugin manager" Nov 5 15:50:30.135347 kubelet[2786]: I1105 15:50:30.134321 2786 state_mem.go:36] "Initialized new in-memory state store" Nov 5 15:50:30.135347 kubelet[2786]: I1105 
15:50:30.134504 2786 kubelet.go:446] "Attempting to sync node with API server" Nov 5 15:50:30.136012 kubelet[2786]: I1105 15:50:30.135857 2786 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 5 15:50:30.136012 kubelet[2786]: I1105 15:50:30.135906 2786 kubelet.go:352] "Adding apiserver pod source" Nov 5 15:50:30.136012 kubelet[2786]: I1105 15:50:30.135918 2786 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 5 15:50:30.150950 kubelet[2786]: I1105 15:50:30.149160 2786 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Nov 5 15:50:30.150950 kubelet[2786]: I1105 15:50:30.149546 2786 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 5 15:50:30.150950 kubelet[2786]: I1105 15:50:30.149979 2786 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 5 15:50:30.150950 kubelet[2786]: I1105 15:50:30.150012 2786 server.go:1287] "Started kubelet" Nov 5 15:50:30.159637 kubelet[2786]: I1105 15:50:30.159593 2786 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 5 15:50:30.177692 kubelet[2786]: I1105 15:50:30.177579 2786 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Nov 5 15:50:30.178800 kubelet[2786]: I1105 15:50:30.178740 2786 server.go:479] "Adding debug handlers to kubelet server" Nov 5 15:50:30.180340 kubelet[2786]: I1105 15:50:30.180261 2786 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 5 15:50:30.180558 kubelet[2786]: I1105 15:50:30.180469 2786 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 5 15:50:30.180767 kubelet[2786]: I1105 15:50:30.180680 2786 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 5 15:50:30.182220 kubelet[2786]: I1105 15:50:30.182190 2786 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 5 15:50:30.182481 kubelet[2786]: E1105 15:50:30.182397 2786 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4487.0.1-2-254db4f49e\" not found" Nov 5 15:50:30.182967 kubelet[2786]: I1105 15:50:30.182840 2786 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 5 15:50:30.184962 kubelet[2786]: I1105 15:50:30.184113 2786 reconciler.go:26] "Reconciler: start to sync state" Nov 5 15:50:30.186361 kubelet[2786]: I1105 15:50:30.186336 2786 factory.go:221] Registration of the systemd container factory successfully Nov 5 15:50:30.186602 kubelet[2786]: I1105 15:50:30.186447 2786 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 5 15:50:30.190965 kubelet[2786]: I1105 15:50:30.190536 2786 factory.go:221] Registration of the containerd container factory successfully Nov 5 15:50:30.193925 kubelet[2786]: E1105 15:50:30.193878 2786 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 5 15:50:30.198767 kubelet[2786]: I1105 15:50:30.198722 2786 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 5 15:50:30.201849 kubelet[2786]: I1105 15:50:30.201804 2786 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Nov 5 15:50:30.201849 kubelet[2786]: I1105 15:50:30.201842 2786 status_manager.go:227] "Starting to sync pod status with apiserver" Nov 5 15:50:30.201849 kubelet[2786]: I1105 15:50:30.201863 2786 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 5 15:50:30.202203 kubelet[2786]: I1105 15:50:30.201870 2786 kubelet.go:2382] "Starting kubelet main sync loop" Nov 5 15:50:30.202203 kubelet[2786]: E1105 15:50:30.201921 2786 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 5 15:50:30.247266 kubelet[2786]: I1105 15:50:30.247232 2786 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 5 15:50:30.247266 kubelet[2786]: I1105 15:50:30.247251 2786 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 5 15:50:30.247266 kubelet[2786]: I1105 15:50:30.247274 2786 state_mem.go:36] "Initialized new in-memory state store" Nov 5 15:50:30.247463 kubelet[2786]: I1105 15:50:30.247448 2786 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 5 15:50:30.247491 kubelet[2786]: I1105 15:50:30.247460 2786 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 5 15:50:30.247491 kubelet[2786]: I1105 15:50:30.247479 2786 policy_none.go:49] "None policy: Start" Nov 5 15:50:30.247491 kubelet[2786]: I1105 15:50:30.247489 2786 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 5 15:50:30.247623 kubelet[2786]: I1105 15:50:30.247498 2786 state_mem.go:35] "Initializing new in-memory state store" Nov 5 15:50:30.247649 kubelet[2786]: I1105 15:50:30.247642 2786 state_mem.go:75] "Updated machine memory state" Nov 5 15:50:30.252227 kubelet[2786]: I1105 15:50:30.251993 2786 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 5 15:50:30.252378 kubelet[2786]: I1105 15:50:30.252342 2786 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 5 15:50:30.252378 kubelet[2786]: I1105 15:50:30.252354 2786 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 5 15:50:30.253979 kubelet[2786]: I1105 15:50:30.252740 2786 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 5 15:50:30.256977 kubelet[2786]: E1105 15:50:30.255502 2786 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 5 15:50:30.303845 kubelet[2786]: I1105 15:50:30.303499 2786 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4487.0.1-2-254db4f49e" Nov 5 15:50:30.306192 kubelet[2786]: I1105 15:50:30.305777 2786 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4487.0.1-2-254db4f49e" Nov 5 15:50:30.306192 kubelet[2786]: I1105 15:50:30.305996 2786 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4487.0.1-2-254db4f49e" Nov 5 15:50:30.319029 kubelet[2786]: W1105 15:50:30.318985 2786 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 5 15:50:30.319529 kubelet[2786]: W1105 15:50:30.319511 2786 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 5 15:50:30.319793 kubelet[2786]: E1105 15:50:30.319756 2786 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4487.0.1-2-254db4f49e\" already exists" pod="kube-system/kube-scheduler-ci-4487.0.1-2-254db4f49e" Nov 5 15:50:30.319980 kubelet[2786]: W1105 15:50:30.319894 2786 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 5 15:50:30.354090 kubelet[2786]: I1105 15:50:30.354007 2786 kubelet_node_status.go:75] "Attempting to register node" node="ci-4487.0.1-2-254db4f49e" Nov 5 15:50:30.368964 kubelet[2786]: I1105 15:50:30.368904 2786 kubelet_node_status.go:124] "Node was previously registered" node="ci-4487.0.1-2-254db4f49e" Nov 5 15:50:30.369165 kubelet[2786]: I1105 15:50:30.369147 2786 kubelet_node_status.go:78] "Successfully registered node" node="ci-4487.0.1-2-254db4f49e" Nov 5 15:50:30.376427 sudo[2820]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Nov 5 15:50:30.376736 sudo[2820]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Nov 5 15:50:30.385543 kubelet[2786]: I1105 15:50:30.385496 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4b7896c2f233ed0c39d05c145bf38809-ca-certs\") pod \"kube-apiserver-ci-4487.0.1-2-254db4f49e\" (UID: \"4b7896c2f233ed0c39d05c145bf38809\") " pod="kube-system/kube-apiserver-ci-4487.0.1-2-254db4f49e" Nov 5 15:50:30.385543 kubelet[2786]: I1105 15:50:30.385542 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4b7896c2f233ed0c39d05c145bf38809-k8s-certs\") pod \"kube-apiserver-ci-4487.0.1-2-254db4f49e\" (UID: \"4b7896c2f233ed0c39d05c145bf38809\") " pod="kube-system/kube-apiserver-ci-4487.0.1-2-254db4f49e" Nov 5 15:50:30.385726 kubelet[2786]: I1105 15:50:30.385564 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4b7896c2f233ed0c39d05c145bf38809-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4487.0.1-2-254db4f49e\" (UID: \"4b7896c2f233ed0c39d05c145bf38809\") " pod="kube-system/kube-apiserver-ci-4487.0.1-2-254db4f49e" Nov 5 15:50:30.385726 kubelet[2786]: I1105 15:50:30.385581 2786 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fe7bc33fc5f336a87da5269eb0aad81a-kubeconfig\") pod \"kube-controller-manager-ci-4487.0.1-2-254db4f49e\" (UID: \"fe7bc33fc5f336a87da5269eb0aad81a\") " pod="kube-system/kube-controller-manager-ci-4487.0.1-2-254db4f49e" Nov 5 15:50:30.385726 kubelet[2786]: I1105 15:50:30.385598 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fe7bc33fc5f336a87da5269eb0aad81a-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4487.0.1-2-254db4f49e\" (UID: \"fe7bc33fc5f336a87da5269eb0aad81a\") " pod="kube-system/kube-controller-manager-ci-4487.0.1-2-254db4f49e" Nov 5 15:50:30.385726 kubelet[2786]: I1105 15:50:30.385624 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1de67e8b25adff5c7183fd43704b53aa-kubeconfig\") pod \"kube-scheduler-ci-4487.0.1-2-254db4f49e\" (UID: \"1de67e8b25adff5c7183fd43704b53aa\") " pod="kube-system/kube-scheduler-ci-4487.0.1-2-254db4f49e" Nov 5 15:50:30.385726 kubelet[2786]: I1105 15:50:30.385652 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fe7bc33fc5f336a87da5269eb0aad81a-ca-certs\") pod \"kube-controller-manager-ci-4487.0.1-2-254db4f49e\" (UID: \"fe7bc33fc5f336a87da5269eb0aad81a\") " pod="kube-system/kube-controller-manager-ci-4487.0.1-2-254db4f49e" Nov 5 15:50:30.385864 kubelet[2786]: I1105 15:50:30.385674 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fe7bc33fc5f336a87da5269eb0aad81a-flexvolume-dir\") pod \"kube-controller-manager-ci-4487.0.1-2-254db4f49e\" (UID: \"fe7bc33fc5f336a87da5269eb0aad81a\") " pod="kube-system/kube-controller-manager-ci-4487.0.1-2-254db4f49e" Nov 5 15:50:30.385864 kubelet[2786]: I1105 15:50:30.385691 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fe7bc33fc5f336a87da5269eb0aad81a-k8s-certs\") pod \"kube-controller-manager-ci-4487.0.1-2-254db4f49e\" (UID: \"fe7bc33fc5f336a87da5269eb0aad81a\") " pod="kube-system/kube-controller-manager-ci-4487.0.1-2-254db4f49e" Nov 5 15:50:30.620895 kubelet[2786]: E1105 15:50:30.620763 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 5 15:50:30.622165 kubelet[2786]: E1105 15:50:30.622115 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 5 15:50:30.622276 kubelet[2786]: E1105 15:50:30.622262 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 5 15:50:30.912480 sudo[2820]: pam_unix(sudo:session): session closed for user root Nov 5 15:50:31.148694 kubelet[2786]: I1105 15:50:31.148647 2786 apiserver.go:52] "Watching apiserver" Nov 5 15:50:31.183965 kubelet[2786]: I1105 15:50:31.183810 2786 desired_state_of_world_populator.go:158] "Finished 
populating initial desired state of world" Nov 5 15:50:31.224498 kubelet[2786]: I1105 15:50:31.224451 2786 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4487.0.1-2-254db4f49e" Nov 5 15:50:31.225768 kubelet[2786]: I1105 15:50:31.225742 2786 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4487.0.1-2-254db4f49e" Nov 5 15:50:31.226185 kubelet[2786]: I1105 15:50:31.226097 2786 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4487.0.1-2-254db4f49e" Nov 5 15:50:31.243967 kubelet[2786]: W1105 15:50:31.241994 2786 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 5 15:50:31.243967 kubelet[2786]: E1105 15:50:31.242104 2786 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4487.0.1-2-254db4f49e\" already exists" pod="kube-system/kube-controller-manager-ci-4487.0.1-2-254db4f49e" Nov 5 15:50:31.243967 kubelet[2786]: E1105 15:50:31.242376 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 5 15:50:31.243967 kubelet[2786]: W1105 15:50:31.243178 2786 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 5 15:50:31.243967 kubelet[2786]: E1105 15:50:31.243231 2786 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4487.0.1-2-254db4f49e\" already exists" pod="kube-system/kube-scheduler-ci-4487.0.1-2-254db4f49e" Nov 5 15:50:31.243967 kubelet[2786]: E1105 15:50:31.243389 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 5 15:50:31.243967 kubelet[2786]: W1105 15:50:31.243482 2786 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 5 15:50:31.243967 kubelet[2786]: E1105 15:50:31.243506 2786 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4487.0.1-2-254db4f49e\" already exists" pod="kube-system/kube-apiserver-ci-4487.0.1-2-254db4f49e" Nov 5 15:50:31.243967 kubelet[2786]: E1105 15:50:31.243659 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 5 15:50:31.285375 kubelet[2786]: I1105 15:50:31.285293 2786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4487.0.1-2-254db4f49e" podStartSLOduration=1.2852275610000001 podStartE2EDuration="1.285227561s" podCreationTimestamp="2025-11-05 15:50:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 15:50:31.28368578 +0000 UTC m=+1.248535272" watchObservedRunningTime="2025-11-05 15:50:31.285227561 +0000 UTC m=+1.250077054" Nov 5 15:50:31.297758 kubelet[2786]: I1105 15:50:31.297694 2786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4487.0.1-2-254db4f49e" podStartSLOduration=2.297673728 
podStartE2EDuration="2.297673728s" podCreationTimestamp="2025-11-05 15:50:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 15:50:31.296712259 +0000 UTC m=+1.261561756" watchObservedRunningTime="2025-11-05 15:50:31.297673728 +0000 UTC m=+1.262523220" Nov 5 15:50:32.227403 kubelet[2786]: E1105 15:50:32.226971 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 5 15:50:32.229194 kubelet[2786]: E1105 15:50:32.229160 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 5 15:50:32.229767 kubelet[2786]: E1105 15:50:32.229511 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 5 15:50:33.231743 kubelet[2786]: E1105 15:50:33.230753 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 5 15:50:33.684056 sudo[1862]: pam_unix(sudo:session): session closed for user root Nov 5 15:50:33.691001 sshd[1861]: Connection closed by 139.178.68.195 port 49320 Nov 5 15:50:33.692178 sshd-session[1858]: pam_unix(sshd:session): session closed for user core Nov 5 15:50:33.698610 systemd[1]: sshd@8-137.184.121.184:22-139.178.68.195:49320.service: Deactivated successfully. Nov 5 15:50:33.701357 systemd[1]: session-9.scope: Deactivated successfully. Nov 5 15:50:33.701721 systemd[1]: session-9.scope: Consumed 6.205s CPU time, 219.5M memory peak. Nov 5 15:50:33.703809 systemd-logind[1568]: Session 9 logged out. Waiting for processes to exit. Nov 5 15:50:33.705542 systemd-logind[1568]: Removed session 9. Nov 5 15:50:34.032106 kubelet[2786]: I1105 15:50:34.031960 2786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4487.0.1-2-254db4f49e" podStartSLOduration=4.031925888 podStartE2EDuration="4.031925888s" podCreationTimestamp="2025-11-05 15:50:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 15:50:31.313251106 +0000 UTC m=+1.278100599" watchObservedRunningTime="2025-11-05 15:50:34.031925888 +0000 UTC m=+3.996775379" Nov 5 15:50:34.232850 kubelet[2786]: E1105 15:50:34.232621 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 5 15:50:35.148353 kubelet[2786]: I1105 15:50:35.148309 2786 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 5 15:50:35.149093 containerd[1598]: time="2025-11-05T15:50:35.149057328Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Nov 5 15:50:35.149941 kubelet[2786]: I1105 15:50:35.149653 2786 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 5 15:50:35.237652 kubelet[2786]: E1105 15:50:35.237592 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 5 15:50:35.934966 kubelet[2786]: I1105 15:50:35.934848 2786 status_manager.go:890] "Failed to get status for pod" podUID="82b4e0c5-57ad-4bf5-aac6-9fccd7b54b85" pod="kube-system/kube-proxy-x7mlx" err="pods \"kube-proxy-x7mlx\" is forbidden: User \"system:node:ci-4487.0.1-2-254db4f49e\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4487.0.1-2-254db4f49e' and this object" Nov 5 15:50:35.959285 systemd[1]: Created slice kubepods-besteffort-pod82b4e0c5_57ad_4bf5_aac6_9fccd7b54b85.slice - libcontainer container kubepods-besteffort-pod82b4e0c5_57ad_4bf5_aac6_9fccd7b54b85.slice. Nov 5 15:50:35.983450 systemd[1]: Created slice kubepods-burstable-pod600e642a_4a9d_43b4_99e9_77f1e45a228b.slice - libcontainer container kubepods-burstable-pod600e642a_4a9d_43b4_99e9_77f1e45a228b.slice. Nov 5 15:50:36.022367 kubelet[2786]: I1105 15:50:36.022319 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/600e642a-4a9d-43b4-99e9-77f1e45a228b-cilium-run\") pod \"cilium-kvbjc\" (UID: \"600e642a-4a9d-43b4-99e9-77f1e45a228b\") " pod="kube-system/cilium-kvbjc" Nov 5 15:50:36.022823 kubelet[2786]: I1105 15:50:36.022786 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/600e642a-4a9d-43b4-99e9-77f1e45a228b-bpf-maps\") pod \"cilium-kvbjc\" (UID: \"600e642a-4a9d-43b4-99e9-77f1e45a228b\") " pod="kube-system/cilium-kvbjc" Nov 5 15:50:36.023054 kubelet[2786]: I1105 15:50:36.023030 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/600e642a-4a9d-43b4-99e9-77f1e45a228b-hubble-tls\") pod \"cilium-kvbjc\" (UID: \"600e642a-4a9d-43b4-99e9-77f1e45a228b\") " pod="kube-system/cilium-kvbjc" Nov 5 15:50:36.023264 kubelet[2786]: I1105 15:50:36.023242 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4rqf\" (UniqueName: \"kubernetes.io/projected/600e642a-4a9d-43b4-99e9-77f1e45a228b-kube-api-access-t4rqf\") pod \"cilium-kvbjc\" (UID: \"600e642a-4a9d-43b4-99e9-77f1e45a228b\") " pod="kube-system/cilium-kvbjc" Nov 5 15:50:36.023465 kubelet[2786]: I1105 15:50:36.023420 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/600e642a-4a9d-43b4-99e9-77f1e45a228b-host-proc-sys-kernel\") pod \"cilium-kvbjc\" (UID: \"600e642a-4a9d-43b4-99e9-77f1e45a228b\") " pod="kube-system/cilium-kvbjc" Nov 5 15:50:36.023563 kubelet[2786]: I1105 15:50:36.023453 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/82b4e0c5-57ad-4bf5-aac6-9fccd7b54b85-kube-proxy\") pod \"kube-proxy-x7mlx\" (UID: \"82b4e0c5-57ad-4bf5-aac6-9fccd7b54b85\") " pod="kube-system/kube-proxy-x7mlx" Nov 5 15:50:36.024112 kubelet[2786]: I1105 
15:50:36.023632 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/600e642a-4a9d-43b4-99e9-77f1e45a228b-cilium-cgroup\") pod \"cilium-kvbjc\" (UID: \"600e642a-4a9d-43b4-99e9-77f1e45a228b\") " pod="kube-system/cilium-kvbjc" Nov 5 15:50:36.024112 kubelet[2786]: I1105 15:50:36.023657 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/600e642a-4a9d-43b4-99e9-77f1e45a228b-cilium-config-path\") pod \"cilium-kvbjc\" (UID: \"600e642a-4a9d-43b4-99e9-77f1e45a228b\") " pod="kube-system/cilium-kvbjc" Nov 5 15:50:36.024112 kubelet[2786]: I1105 15:50:36.023687 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/82b4e0c5-57ad-4bf5-aac6-9fccd7b54b85-lib-modules\") pod \"kube-proxy-x7mlx\" (UID: \"82b4e0c5-57ad-4bf5-aac6-9fccd7b54b85\") " pod="kube-system/kube-proxy-x7mlx" Nov 5 15:50:36.024112 kubelet[2786]: I1105 15:50:36.023710 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/600e642a-4a9d-43b4-99e9-77f1e45a228b-cni-path\") pod \"cilium-kvbjc\" (UID: \"600e642a-4a9d-43b4-99e9-77f1e45a228b\") " pod="kube-system/cilium-kvbjc" Nov 5 15:50:36.024112 kubelet[2786]: I1105 15:50:36.023734 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7t7x\" (UniqueName: \"kubernetes.io/projected/82b4e0c5-57ad-4bf5-aac6-9fccd7b54b85-kube-api-access-q7t7x\") pod \"kube-proxy-x7mlx\" (UID: \"82b4e0c5-57ad-4bf5-aac6-9fccd7b54b85\") " pod="kube-system/kube-proxy-x7mlx" Nov 5 15:50:36.024112 kubelet[2786]: I1105 15:50:36.023759 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/600e642a-4a9d-43b4-99e9-77f1e45a228b-etc-cni-netd\") pod \"cilium-kvbjc\" (UID: \"600e642a-4a9d-43b4-99e9-77f1e45a228b\") " pod="kube-system/cilium-kvbjc" Nov 5 15:50:36.024417 kubelet[2786]: I1105 15:50:36.023787 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/600e642a-4a9d-43b4-99e9-77f1e45a228b-clustermesh-secrets\") pod \"cilium-kvbjc\" (UID: \"600e642a-4a9d-43b4-99e9-77f1e45a228b\") " pod="kube-system/cilium-kvbjc" Nov 5 15:50:36.024417 kubelet[2786]: I1105 15:50:36.023811 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/600e642a-4a9d-43b4-99e9-77f1e45a228b-host-proc-sys-net\") pod \"cilium-kvbjc\" (UID: \"600e642a-4a9d-43b4-99e9-77f1e45a228b\") " pod="kube-system/cilium-kvbjc" Nov 5 15:50:36.024417 kubelet[2786]: I1105 15:50:36.023841 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/82b4e0c5-57ad-4bf5-aac6-9fccd7b54b85-xtables-lock\") pod \"kube-proxy-x7mlx\" (UID: \"82b4e0c5-57ad-4bf5-aac6-9fccd7b54b85\") " pod="kube-system/kube-proxy-x7mlx" Nov 5 15:50:36.024417 kubelet[2786]: I1105 15:50:36.023862 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/600e642a-4a9d-43b4-99e9-77f1e45a228b-lib-modules\") pod \"cilium-kvbjc\" (UID: \"600e642a-4a9d-43b4-99e9-77f1e45a228b\") " pod="kube-system/cilium-kvbjc" Nov 5 15:50:36.024417 kubelet[2786]: I1105 15:50:36.023885 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/600e642a-4a9d-43b4-99e9-77f1e45a228b-xtables-lock\") pod \"cilium-kvbjc\" (UID: \"600e642a-4a9d-43b4-99e9-77f1e45a228b\") " pod="kube-system/cilium-kvbjc" Nov 5 15:50:36.024417 kubelet[2786]: I1105 15:50:36.023925 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/600e642a-4a9d-43b4-99e9-77f1e45a228b-hostproc\") pod \"cilium-kvbjc\" (UID: \"600e642a-4a9d-43b4-99e9-77f1e45a228b\") " pod="kube-system/cilium-kvbjc" Nov 5 15:50:36.217922 update_engine[1570]: I20251105 15:50:36.216012 1570 update_attempter.cc:509] Updating boot flags... Nov 5 15:50:36.221619 kubelet[2786]: I1105 15:50:36.221174 2786 status_manager.go:890] "Failed to get status for pod" podUID="5ec11585-2830-4ec3-85e4-bd5daee68c9a" pod="kube-system/cilium-operator-6c4d7847fc-p8kft" err="pods \"cilium-operator-6c4d7847fc-p8kft\" is forbidden: User \"system:node:ci-4487.0.1-2-254db4f49e\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4487.0.1-2-254db4f49e' and this object" Nov 5 15:50:36.225094 systemd[1]: Created slice kubepods-besteffort-pod5ec11585_2830_4ec3_85e4_bd5daee68c9a.slice - libcontainer container kubepods-besteffort-pod5ec11585_2830_4ec3_85e4_bd5daee68c9a.slice. Nov 5 15:50:36.228164 kubelet[2786]: I1105 15:50:36.225726 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5ec11585-2830-4ec3-85e4-bd5daee68c9a-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-p8kft\" (UID: \"5ec11585-2830-4ec3-85e4-bd5daee68c9a\") " pod="kube-system/cilium-operator-6c4d7847fc-p8kft" Nov 5 15:50:36.228164 kubelet[2786]: I1105 15:50:36.225783 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8flxf\" (UniqueName: \"kubernetes.io/projected/5ec11585-2830-4ec3-85e4-bd5daee68c9a-kube-api-access-8flxf\") pod \"cilium-operator-6c4d7847fc-p8kft\" (UID: \"5ec11585-2830-4ec3-85e4-bd5daee68c9a\") " pod="kube-system/cilium-operator-6c4d7847fc-p8kft" Nov 5 15:50:36.240587 kubelet[2786]: E1105 15:50:36.240545 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 5 15:50:36.282424 kubelet[2786]: E1105 15:50:36.280321 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 5 15:50:36.285248 containerd[1598]: time="2025-11-05T15:50:36.285147288Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-x7mlx,Uid:82b4e0c5-57ad-4bf5-aac6-9fccd7b54b85,Namespace:kube-system,Attempt:0,}" Nov 5 15:50:36.291361 kubelet[2786]: E1105 15:50:36.291320 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 5 
15:50:36.292617 containerd[1598]: time="2025-11-05T15:50:36.292505463Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kvbjc,Uid:600e642a-4a9d-43b4-99e9-77f1e45a228b,Namespace:kube-system,Attempt:0,}" Nov 5 15:50:36.356158 containerd[1598]: time="2025-11-05T15:50:36.352959800Z" level=info msg="connecting to shim b22e374448d24910ca86f22d87aee5e68e1caaf01d42bafa46d6160025036c6a" address="unix:///run/containerd/s/52b7f0ae4981d59a1d653edc23680f86bdb63d57dfc481131c9731473933711a" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:50:36.391044 containerd[1598]: time="2025-11-05T15:50:36.390312572Z" level=info msg="connecting to shim 908af049d4cc66f321f67bc4a3d0d299e781ec2b1fa3f835e93230d62bbd54a2" address="unix:///run/containerd/s/192a9942400f32632e7c35d04bdf61da9905b8d83646814dcbcdd9e5aa2e3eab" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:50:36.486144 systemd[1]: Started cri-containerd-b22e374448d24910ca86f22d87aee5e68e1caaf01d42bafa46d6160025036c6a.scope - libcontainer container b22e374448d24910ca86f22d87aee5e68e1caaf01d42bafa46d6160025036c6a. Nov 5 15:50:36.502386 systemd[1]: Started cri-containerd-908af049d4cc66f321f67bc4a3d0d299e781ec2b1fa3f835e93230d62bbd54a2.scope - libcontainer container 908af049d4cc66f321f67bc4a3d0d299e781ec2b1fa3f835e93230d62bbd54a2. Nov 5 15:50:36.537830 kubelet[2786]: E1105 15:50:36.535709 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 5 15:50:36.541217 containerd[1598]: time="2025-11-05T15:50:36.538150147Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-p8kft,Uid:5ec11585-2830-4ec3-85e4-bd5daee68c9a,Namespace:kube-system,Attempt:0,}" Nov 5 15:50:36.593723 containerd[1598]: time="2025-11-05T15:50:36.593669421Z" level=info msg="connecting to shim 708a2299ac97e4b0f835a65746c9cfa8b3b375d9d27baaaffc50ee80e54aac28" address="unix:///run/containerd/s/53b9681c82fdabba406d736c376625daca59680c0ffc83f32bf1f6c056758dd5" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:50:36.676826 containerd[1598]: time="2025-11-05T15:50:36.676779668Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-x7mlx,Uid:82b4e0c5-57ad-4bf5-aac6-9fccd7b54b85,Namespace:kube-system,Attempt:0,} returns sandbox id \"908af049d4cc66f321f67bc4a3d0d299e781ec2b1fa3f835e93230d62bbd54a2\"" Nov 5 15:50:36.679959 kubelet[2786]: E1105 15:50:36.679648 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 5 15:50:36.681548 containerd[1598]: time="2025-11-05T15:50:36.681463299Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kvbjc,Uid:600e642a-4a9d-43b4-99e9-77f1e45a228b,Namespace:kube-system,Attempt:0,} returns sandbox id \"b22e374448d24910ca86f22d87aee5e68e1caaf01d42bafa46d6160025036c6a\"" Nov 5 15:50:36.685669 kubelet[2786]: E1105 15:50:36.685635 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 5 15:50:36.686186 containerd[1598]: time="2025-11-05T15:50:36.686148639Z" level=info msg="CreateContainer within sandbox \"908af049d4cc66f321f67bc4a3d0d299e781ec2b1fa3f835e93230d62bbd54a2\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 5 15:50:36.691079 containerd[1598]: 
time="2025-11-05T15:50:36.691015823Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Nov 5 15:50:36.696729 systemd-resolved[1278]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.2. Nov 5 15:50:36.709429 systemd[1]: Started cri-containerd-708a2299ac97e4b0f835a65746c9cfa8b3b375d9d27baaaffc50ee80e54aac28.scope - libcontainer container 708a2299ac97e4b0f835a65746c9cfa8b3b375d9d27baaaffc50ee80e54aac28. Nov 5 15:50:36.718114 containerd[1598]: time="2025-11-05T15:50:36.718062162Z" level=info msg="Container 9d4ea5943cb15b99e87a34422b8fbe636c47feb69f345c2d6568b38806fe8df9: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:50:36.730394 containerd[1598]: time="2025-11-05T15:50:36.730328566Z" level=info msg="CreateContainer within sandbox \"908af049d4cc66f321f67bc4a3d0d299e781ec2b1fa3f835e93230d62bbd54a2\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"9d4ea5943cb15b99e87a34422b8fbe636c47feb69f345c2d6568b38806fe8df9\"" Nov 5 15:50:36.731147 containerd[1598]: time="2025-11-05T15:50:36.731115050Z" level=info msg="StartContainer for \"9d4ea5943cb15b99e87a34422b8fbe636c47feb69f345c2d6568b38806fe8df9\"" Nov 5 15:50:36.734292 containerd[1598]: time="2025-11-05T15:50:36.734194185Z" level=info msg="connecting to shim 9d4ea5943cb15b99e87a34422b8fbe636c47feb69f345c2d6568b38806fe8df9" address="unix:///run/containerd/s/192a9942400f32632e7c35d04bdf61da9905b8d83646814dcbcdd9e5aa2e3eab" protocol=ttrpc version=3 Nov 5 15:50:36.764309 systemd[1]: Started cri-containerd-9d4ea5943cb15b99e87a34422b8fbe636c47feb69f345c2d6568b38806fe8df9.scope - libcontainer container 9d4ea5943cb15b99e87a34422b8fbe636c47feb69f345c2d6568b38806fe8df9. Nov 5 15:50:36.786550 containerd[1598]: time="2025-11-05T15:50:36.786500262Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-p8kft,Uid:5ec11585-2830-4ec3-85e4-bd5daee68c9a,Namespace:kube-system,Attempt:0,} returns sandbox id \"708a2299ac97e4b0f835a65746c9cfa8b3b375d9d27baaaffc50ee80e54aac28\"" Nov 5 15:50:36.788194 kubelet[2786]: E1105 15:50:36.788153 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 5 15:50:36.828960 containerd[1598]: time="2025-11-05T15:50:36.828875668Z" level=info msg="StartContainer for \"9d4ea5943cb15b99e87a34422b8fbe636c47feb69f345c2d6568b38806fe8df9\" returns successfully" Nov 5 15:50:37.248260 kubelet[2786]: E1105 15:50:37.248217 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 5 15:50:37.280300 kubelet[2786]: I1105 15:50:37.280000 2786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-x7mlx" podStartSLOduration=2.279977289 podStartE2EDuration="2.279977289s" podCreationTimestamp="2025-11-05 15:50:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 15:50:37.268507968 +0000 UTC m=+7.233357460" watchObservedRunningTime="2025-11-05 15:50:37.279977289 +0000 UTC m=+7.244826782" Nov 5 15:50:38.568044 kubelet[2786]: E1105 15:50:38.566374 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver 
line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 5 15:50:39.200259 kubelet[2786]: E1105 15:50:39.199450 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 5 15:50:39.260046 kubelet[2786]: E1105 15:50:39.259789 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 5 15:50:39.261564 kubelet[2786]: E1105 15:50:39.260534 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 5 15:50:41.529200 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount665114584.mount: Deactivated successfully. Nov 5 15:50:43.798916 containerd[1598]: time="2025-11-05T15:50:43.798828976Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:50:43.800192 containerd[1598]: time="2025-11-05T15:50:43.800129563Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Nov 5 15:50:43.810969 containerd[1598]: time="2025-11-05T15:50:43.809278529Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:50:43.812048 containerd[1598]: time="2025-11-05T15:50:43.811992305Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 7.120935067s" Nov 5 15:50:43.812048 containerd[1598]: time="2025-11-05T15:50:43.812043053Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Nov 5 15:50:43.816348 containerd[1598]: time="2025-11-05T15:50:43.816284601Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Nov 5 15:50:43.818308 containerd[1598]: time="2025-11-05T15:50:43.818222652Z" level=info msg="CreateContainer within sandbox \"b22e374448d24910ca86f22d87aee5e68e1caaf01d42bafa46d6160025036c6a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Nov 5 15:50:43.871222 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3439086585.mount: Deactivated successfully. Nov 5 15:50:43.876662 containerd[1598]: time="2025-11-05T15:50:43.876573113Z" level=info msg="Container 0a6b4503482af33f9a9e24fa1bcf4c93beb29de1fa0a4141dc46899843e082b8: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:50:43.882054 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1985853209.mount: Deactivated successfully. 
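The pull above reports 166,730,503 bytes read for the Cilium agent image and a total pull time of 7.120935067s. A quick throughput estimate from those two figures, pure arithmetic on values already in the log:

    # Rough pull-throughput estimate for quay.io/cilium/cilium:v1.12.5, using the
    # numbers reported by containerd above.
    bytes_read   = 166_730_503        # "bytes read" when the pull stopped
    pull_seconds = 7.120935067        # from the "Pulled image ... in 7.120935067s" line

    rate = bytes_read / pull_seconds
    print(f"{rate / 1e6:.1f} MB/s ({rate / 2**20:.1f} MiB/s)")   # ~23.4 MB/s (~22.3 MiB/s)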
Nov 5 15:50:43.894091 containerd[1598]: time="2025-11-05T15:50:43.894008942Z" level=info msg="CreateContainer within sandbox \"b22e374448d24910ca86f22d87aee5e68e1caaf01d42bafa46d6160025036c6a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"0a6b4503482af33f9a9e24fa1bcf4c93beb29de1fa0a4141dc46899843e082b8\"" Nov 5 15:50:43.895179 containerd[1598]: time="2025-11-05T15:50:43.895127050Z" level=info msg="StartContainer for \"0a6b4503482af33f9a9e24fa1bcf4c93beb29de1fa0a4141dc46899843e082b8\"" Nov 5 15:50:43.897369 containerd[1598]: time="2025-11-05T15:50:43.896911708Z" level=info msg="connecting to shim 0a6b4503482af33f9a9e24fa1bcf4c93beb29de1fa0a4141dc46899843e082b8" address="unix:///run/containerd/s/52b7f0ae4981d59a1d653edc23680f86bdb63d57dfc481131c9731473933711a" protocol=ttrpc version=3 Nov 5 15:50:43.936335 systemd[1]: Started cri-containerd-0a6b4503482af33f9a9e24fa1bcf4c93beb29de1fa0a4141dc46899843e082b8.scope - libcontainer container 0a6b4503482af33f9a9e24fa1bcf4c93beb29de1fa0a4141dc46899843e082b8. Nov 5 15:50:43.989740 containerd[1598]: time="2025-11-05T15:50:43.989670679Z" level=info msg="StartContainer for \"0a6b4503482af33f9a9e24fa1bcf4c93beb29de1fa0a4141dc46899843e082b8\" returns successfully" Nov 5 15:50:44.003832 systemd[1]: cri-containerd-0a6b4503482af33f9a9e24fa1bcf4c93beb29de1fa0a4141dc46899843e082b8.scope: Deactivated successfully. Nov 5 15:50:44.024375 containerd[1598]: time="2025-11-05T15:50:44.024311245Z" level=info msg="received exit event container_id:\"0a6b4503482af33f9a9e24fa1bcf4c93beb29de1fa0a4141dc46899843e082b8\" id:\"0a6b4503482af33f9a9e24fa1bcf4c93beb29de1fa0a4141dc46899843e082b8\" pid:3216 exited_at:{seconds:1762357844 nanos:10099480}" Nov 5 15:50:44.040575 containerd[1598]: time="2025-11-05T15:50:44.039743560Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0a6b4503482af33f9a9e24fa1bcf4c93beb29de1fa0a4141dc46899843e082b8\" id:\"0a6b4503482af33f9a9e24fa1bcf4c93beb29de1fa0a4141dc46899843e082b8\" pid:3216 exited_at:{seconds:1762357844 nanos:10099480}" Nov 5 15:50:44.283673 kubelet[2786]: E1105 15:50:44.283423 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 5 15:50:44.288447 containerd[1598]: time="2025-11-05T15:50:44.288305188Z" level=info msg="CreateContainer within sandbox \"b22e374448d24910ca86f22d87aee5e68e1caaf01d42bafa46d6160025036c6a\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Nov 5 15:50:44.298956 containerd[1598]: time="2025-11-05T15:50:44.298391257Z" level=info msg="Container 518062574cbab2c6ba832301c72b212958e7788c25a864f2c07c30bd21ce6bd3: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:50:44.315376 containerd[1598]: time="2025-11-05T15:50:44.315317865Z" level=info msg="CreateContainer within sandbox \"b22e374448d24910ca86f22d87aee5e68e1caaf01d42bafa46d6160025036c6a\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"518062574cbab2c6ba832301c72b212958e7788c25a864f2c07c30bd21ce6bd3\"" Nov 5 15:50:44.316419 containerd[1598]: time="2025-11-05T15:50:44.316379515Z" level=info msg="StartContainer for \"518062574cbab2c6ba832301c72b212958e7788c25a864f2c07c30bd21ce6bd3\"" Nov 5 15:50:44.319601 containerd[1598]: time="2025-11-05T15:50:44.319553956Z" level=info msg="connecting to shim 518062574cbab2c6ba832301c72b212958e7788c25a864f2c07c30bd21ce6bd3" 
address="unix:///run/containerd/s/52b7f0ae4981d59a1d653edc23680f86bdb63d57dfc481131c9731473933711a" protocol=ttrpc version=3 Nov 5 15:50:44.348185 systemd[1]: Started cri-containerd-518062574cbab2c6ba832301c72b212958e7788c25a864f2c07c30bd21ce6bd3.scope - libcontainer container 518062574cbab2c6ba832301c72b212958e7788c25a864f2c07c30bd21ce6bd3. Nov 5 15:50:44.391663 containerd[1598]: time="2025-11-05T15:50:44.391582250Z" level=info msg="StartContainer for \"518062574cbab2c6ba832301c72b212958e7788c25a864f2c07c30bd21ce6bd3\" returns successfully" Nov 5 15:50:44.416121 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 5 15:50:44.416420 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 5 15:50:44.416499 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Nov 5 15:50:44.420396 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 5 15:50:44.422647 systemd[1]: cri-containerd-518062574cbab2c6ba832301c72b212958e7788c25a864f2c07c30bd21ce6bd3.scope: Deactivated successfully. Nov 5 15:50:44.425176 containerd[1598]: time="2025-11-05T15:50:44.424962849Z" level=info msg="received exit event container_id:\"518062574cbab2c6ba832301c72b212958e7788c25a864f2c07c30bd21ce6bd3\" id:\"518062574cbab2c6ba832301c72b212958e7788c25a864f2c07c30bd21ce6bd3\" pid:3262 exited_at:{seconds:1762357844 nanos:423622595}" Nov 5 15:50:44.426493 containerd[1598]: time="2025-11-05T15:50:44.426395028Z" level=info msg="TaskExit event in podsandbox handler container_id:\"518062574cbab2c6ba832301c72b212958e7788c25a864f2c07c30bd21ce6bd3\" id:\"518062574cbab2c6ba832301c72b212958e7788c25a864f2c07c30bd21ce6bd3\" pid:3262 exited_at:{seconds:1762357844 nanos:423622595}" Nov 5 15:50:44.461351 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 5 15:50:44.865619 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0a6b4503482af33f9a9e24fa1bcf4c93beb29de1fa0a4141dc46899843e082b8-rootfs.mount: Deactivated successfully. Nov 5 15:50:45.049482 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2358334714.mount: Deactivated successfully. 
Nov 5 15:50:45.295711 kubelet[2786]: E1105 15:50:45.295587 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 5 15:50:45.311403 containerd[1598]: time="2025-11-05T15:50:45.311327790Z" level=info msg="CreateContainer within sandbox \"b22e374448d24910ca86f22d87aee5e68e1caaf01d42bafa46d6160025036c6a\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Nov 5 15:50:45.338793 containerd[1598]: time="2025-11-05T15:50:45.338570196Z" level=info msg="Container bd09933f076ab487df99ba4e46179a390831af0992d356f05ca6978b6a82cf94: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:50:45.359706 containerd[1598]: time="2025-11-05T15:50:45.359504586Z" level=info msg="CreateContainer within sandbox \"b22e374448d24910ca86f22d87aee5e68e1caaf01d42bafa46d6160025036c6a\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"bd09933f076ab487df99ba4e46179a390831af0992d356f05ca6978b6a82cf94\"" Nov 5 15:50:45.361892 containerd[1598]: time="2025-11-05T15:50:45.361775781Z" level=info msg="StartContainer for \"bd09933f076ab487df99ba4e46179a390831af0992d356f05ca6978b6a82cf94\"" Nov 5 15:50:45.365373 containerd[1598]: time="2025-11-05T15:50:45.365315556Z" level=info msg="connecting to shim bd09933f076ab487df99ba4e46179a390831af0992d356f05ca6978b6a82cf94" address="unix:///run/containerd/s/52b7f0ae4981d59a1d653edc23680f86bdb63d57dfc481131c9731473933711a" protocol=ttrpc version=3 Nov 5 15:50:45.399363 systemd[1]: Started cri-containerd-bd09933f076ab487df99ba4e46179a390831af0992d356f05ca6978b6a82cf94.scope - libcontainer container bd09933f076ab487df99ba4e46179a390831af0992d356f05ca6978b6a82cf94. Nov 5 15:50:45.474106 systemd[1]: cri-containerd-bd09933f076ab487df99ba4e46179a390831af0992d356f05ca6978b6a82cf94.scope: Deactivated successfully. 
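mount-bpf-fs, started above, is the Cilium init step that makes sure a BPF filesystem is mounted on the host, conventionally at /sys/fs/bpf. A quick host-side check for that mount, as an illustrative stdlib-only sketch rather than anything Cilium itself runs:

    # Check whether a BPF filesystem is mounted, the condition the mount-bpf-fs
    # init container above takes care of. Illustrative sketch only, not Cilium code.
    def bpf_mounts(path="/proc/mounts"):
        with open(path) as f:
            for line in f:
                _dev, mountpoint, fstype, *_rest = line.split()
                if fstype == "bpf":
                    yield mountpoint

    found = list(bpf_mounts())
    print("bpffs mounted at:", found if found else "nowhere yet")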
Nov 5 15:50:45.478799 containerd[1598]: time="2025-11-05T15:50:45.478703936Z" level=info msg="StartContainer for \"bd09933f076ab487df99ba4e46179a390831af0992d356f05ca6978b6a82cf94\" returns successfully" Nov 5 15:50:45.482777 containerd[1598]: time="2025-11-05T15:50:45.482713920Z" level=info msg="received exit event container_id:\"bd09933f076ab487df99ba4e46179a390831af0992d356f05ca6978b6a82cf94\" id:\"bd09933f076ab487df99ba4e46179a390831af0992d356f05ca6978b6a82cf94\" pid:3318 exited_at:{seconds:1762357845 nanos:481567939}" Nov 5 15:50:45.483386 containerd[1598]: time="2025-11-05T15:50:45.483346223Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bd09933f076ab487df99ba4e46179a390831af0992d356f05ca6978b6a82cf94\" id:\"bd09933f076ab487df99ba4e46179a390831af0992d356f05ca6978b6a82cf94\" pid:3318 exited_at:{seconds:1762357845 nanos:481567939}" Nov 5 15:50:45.731954 containerd[1598]: time="2025-11-05T15:50:45.731876923Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:50:45.735192 containerd[1598]: time="2025-11-05T15:50:45.735061036Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Nov 5 15:50:45.748038 containerd[1598]: time="2025-11-05T15:50:45.747988086Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:50:45.750068 containerd[1598]: time="2025-11-05T15:50:45.749533739Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 1.932835962s" Nov 5 15:50:45.750068 containerd[1598]: time="2025-11-05T15:50:45.749581559Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Nov 5 15:50:45.753585 containerd[1598]: time="2025-11-05T15:50:45.753439769Z" level=info msg="CreateContainer within sandbox \"708a2299ac97e4b0f835a65746c9cfa8b3b375d9d27baaaffc50ee80e54aac28\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Nov 5 15:50:45.759987 containerd[1598]: time="2025-11-05T15:50:45.759867761Z" level=info msg="Container 469b696d7d45599ed0a1386b3bae8404e9d5cb5b238cb4e649d5fdcfba7d6d20: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:50:45.775298 containerd[1598]: time="2025-11-05T15:50:45.775168039Z" level=info msg="CreateContainer within sandbox \"708a2299ac97e4b0f835a65746c9cfa8b3b375d9d27baaaffc50ee80e54aac28\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"469b696d7d45599ed0a1386b3bae8404e9d5cb5b238cb4e649d5fdcfba7d6d20\"" Nov 5 15:50:45.776534 containerd[1598]: time="2025-11-05T15:50:45.776486586Z" level=info msg="StartContainer for \"469b696d7d45599ed0a1386b3bae8404e9d5cb5b238cb4e649d5fdcfba7d6d20\"" Nov 5 15:50:45.779527 containerd[1598]: time="2025-11-05T15:50:45.779390920Z" 
level=info msg="connecting to shim 469b696d7d45599ed0a1386b3bae8404e9d5cb5b238cb4e649d5fdcfba7d6d20" address="unix:///run/containerd/s/53b9681c82fdabba406d736c376625daca59680c0ffc83f32bf1f6c056758dd5" protocol=ttrpc version=3 Nov 5 15:50:45.806331 systemd[1]: Started cri-containerd-469b696d7d45599ed0a1386b3bae8404e9d5cb5b238cb4e649d5fdcfba7d6d20.scope - libcontainer container 469b696d7d45599ed0a1386b3bae8404e9d5cb5b238cb4e649d5fdcfba7d6d20. Nov 5 15:50:45.847206 containerd[1598]: time="2025-11-05T15:50:45.847157599Z" level=info msg="StartContainer for \"469b696d7d45599ed0a1386b3bae8404e9d5cb5b238cb4e649d5fdcfba7d6d20\" returns successfully" Nov 5 15:50:46.299311 kubelet[2786]: E1105 15:50:46.299268 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 5 15:50:46.319326 kubelet[2786]: E1105 15:50:46.319284 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 5 15:50:46.324980 containerd[1598]: time="2025-11-05T15:50:46.322995849Z" level=info msg="CreateContainer within sandbox \"b22e374448d24910ca86f22d87aee5e68e1caaf01d42bafa46d6160025036c6a\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Nov 5 15:50:46.358822 containerd[1598]: time="2025-11-05T15:50:46.357475662Z" level=info msg="Container 44706daef2ff86d33f5c3fb52d4f0d292e53909df54ef0990d161388ac35d6ae: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:50:46.369566 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount935504712.mount: Deactivated successfully. Nov 5 15:50:46.373639 kubelet[2786]: I1105 15:50:46.373572 2786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-p8kft" podStartSLOduration=1.413358637 podStartE2EDuration="10.373547409s" podCreationTimestamp="2025-11-05 15:50:36 +0000 UTC" firstStartedPulling="2025-11-05 15:50:36.79041399 +0000 UTC m=+6.755263475" lastFinishedPulling="2025-11-05 15:50:45.750602761 +0000 UTC m=+15.715452247" observedRunningTime="2025-11-05 15:50:46.315665292 +0000 UTC m=+16.280514784" watchObservedRunningTime="2025-11-05 15:50:46.373547409 +0000 UTC m=+16.338396905" Nov 5 15:50:46.375898 containerd[1598]: time="2025-11-05T15:50:46.375857919Z" level=info msg="CreateContainer within sandbox \"b22e374448d24910ca86f22d87aee5e68e1caaf01d42bafa46d6160025036c6a\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"44706daef2ff86d33f5c3fb52d4f0d292e53909df54ef0990d161388ac35d6ae\"" Nov 5 15:50:46.377436 containerd[1598]: time="2025-11-05T15:50:46.376950111Z" level=info msg="StartContainer for \"44706daef2ff86d33f5c3fb52d4f0d292e53909df54ef0990d161388ac35d6ae\"" Nov 5 15:50:46.378361 containerd[1598]: time="2025-11-05T15:50:46.378289788Z" level=info msg="connecting to shim 44706daef2ff86d33f5c3fb52d4f0d292e53909df54ef0990d161388ac35d6ae" address="unix:///run/containerd/s/52b7f0ae4981d59a1d653edc23680f86bdb63d57dfc481131c9731473933711a" protocol=ttrpc version=3 Nov 5 15:50:46.421235 systemd[1]: Started cri-containerd-44706daef2ff86d33f5c3fb52d4f0d292e53909df54ef0990d161388ac35d6ae.scope - libcontainer container 44706daef2ff86d33f5c3fb52d4f0d292e53909df54ef0990d161388ac35d6ae. 
Nov 5 15:50:46.525498 systemd[1]: cri-containerd-44706daef2ff86d33f5c3fb52d4f0d292e53909df54ef0990d161388ac35d6ae.scope: Deactivated successfully. Nov 5 15:50:46.528076 containerd[1598]: time="2025-11-05T15:50:46.527812397Z" level=info msg="received exit event container_id:\"44706daef2ff86d33f5c3fb52d4f0d292e53909df54ef0990d161388ac35d6ae\" id:\"44706daef2ff86d33f5c3fb52d4f0d292e53909df54ef0990d161388ac35d6ae\" pid:3397 exited_at:{seconds:1762357846 nanos:526726967}" Nov 5 15:50:46.531114 containerd[1598]: time="2025-11-05T15:50:46.531036956Z" level=info msg="TaskExit event in podsandbox handler container_id:\"44706daef2ff86d33f5c3fb52d4f0d292e53909df54ef0990d161388ac35d6ae\" id:\"44706daef2ff86d33f5c3fb52d4f0d292e53909df54ef0990d161388ac35d6ae\" pid:3397 exited_at:{seconds:1762357846 nanos:526726967}" Nov 5 15:50:46.548713 containerd[1598]: time="2025-11-05T15:50:46.548601899Z" level=info msg="StartContainer for \"44706daef2ff86d33f5c3fb52d4f0d292e53909df54ef0990d161388ac35d6ae\" returns successfully" Nov 5 15:50:46.864367 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-44706daef2ff86d33f5c3fb52d4f0d292e53909df54ef0990d161388ac35d6ae-rootfs.mount: Deactivated successfully. Nov 5 15:50:47.328007 kubelet[2786]: E1105 15:50:47.327560 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 5 15:50:47.328007 kubelet[2786]: E1105 15:50:47.327645 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 5 15:50:47.333042 containerd[1598]: time="2025-11-05T15:50:47.332917020Z" level=info msg="CreateContainer within sandbox \"b22e374448d24910ca86f22d87aee5e68e1caaf01d42bafa46d6160025036c6a\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Nov 5 15:50:47.367934 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount678212341.mount: Deactivated successfully. Nov 5 15:50:47.372144 containerd[1598]: time="2025-11-05T15:50:47.371206139Z" level=info msg="Container 7dacc059177b20e5fbb696e1a24c3ee88b11abeb62917c006267dd4a3f5c6919: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:50:47.373495 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2558774241.mount: Deactivated successfully. Nov 5 15:50:47.385295 containerd[1598]: time="2025-11-05T15:50:47.385234623Z" level=info msg="CreateContainer within sandbox \"b22e374448d24910ca86f22d87aee5e68e1caaf01d42bafa46d6160025036c6a\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"7dacc059177b20e5fbb696e1a24c3ee88b11abeb62917c006267dd4a3f5c6919\"" Nov 5 15:50:47.386096 containerd[1598]: time="2025-11-05T15:50:47.386060292Z" level=info msg="StartContainer for \"7dacc059177b20e5fbb696e1a24c3ee88b11abeb62917c006267dd4a3f5c6919\"" Nov 5 15:50:47.391102 containerd[1598]: time="2025-11-05T15:50:47.391051025Z" level=info msg="connecting to shim 7dacc059177b20e5fbb696e1a24c3ee88b11abeb62917c006267dd4a3f5c6919" address="unix:///run/containerd/s/52b7f0ae4981d59a1d653edc23680f86bdb63d57dfc481131c9731473933711a" protocol=ttrpc version=3 Nov 5 15:50:47.430352 systemd[1]: Started cri-containerd-7dacc059177b20e5fbb696e1a24c3ee88b11abeb62917c006267dd4a3f5c6919.scope - libcontainer container 7dacc059177b20e5fbb696e1a24c3ee88b11abeb62917c006267dd4a3f5c6919. 
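The pod_startup_latency_tracker record above reports both podStartE2EDuration and podStartSLOduration for cilium-operator; the figures are consistent with the SLO value being the end-to-end startup time minus the image-pull window. Checking that relation with the numbers copied from the record:

    # Re-derive cilium-operator's podStartSLOduration from the kubelet record above.
    # Values are copied verbatim from the log (the two pull timestamps are given as
    # seconds past 15:50:00 UTC).
    e2e           = 10.373547409      # podStartE2EDuration
    first_pulling = 36.790413990      # firstStartedPulling  15:50:36.79041399
    last_pulled   = 45.750602761      # lastFinishedPulling  15:50:45.750602761

    slo = e2e - (last_pulled - first_pulling)
    print(round(slo, 9))              # -> 1.413358638; the log reports 1.413358637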
Nov 5 15:50:47.509205 containerd[1598]: time="2025-11-05T15:50:47.509146983Z" level=info msg="StartContainer for \"7dacc059177b20e5fbb696e1a24c3ee88b11abeb62917c006267dd4a3f5c6919\" returns successfully" Nov 5 15:50:47.664804 containerd[1598]: time="2025-11-05T15:50:47.664658196Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7dacc059177b20e5fbb696e1a24c3ee88b11abeb62917c006267dd4a3f5c6919\" id:\"f9559223008618ea1de2e809dbdec2728221940e44cdb91167c88b3eeb9515da\" pid:3468 exited_at:{seconds:1762357847 nanos:660633384}" Nov 5 15:50:47.756758 kubelet[2786]: I1105 15:50:47.756716 2786 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Nov 5 15:50:47.813037 systemd[1]: Created slice kubepods-burstable-podbc848d3d_cb26_4afa_95ce_551dcef759dc.slice - libcontainer container kubepods-burstable-podbc848d3d_cb26_4afa_95ce_551dcef759dc.slice. Nov 5 15:50:47.825609 systemd[1]: Created slice kubepods-burstable-pod1e6910d6_7ddf_4518_8146_21294a62053e.slice - libcontainer container kubepods-burstable-pod1e6910d6_7ddf_4518_8146_21294a62053e.slice. Nov 5 15:50:47.826982 kubelet[2786]: I1105 15:50:47.826951 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1e6910d6-7ddf-4518-8146-21294a62053e-config-volume\") pod \"coredns-668d6bf9bc-nvm9h\" (UID: \"1e6910d6-7ddf-4518-8146-21294a62053e\") " pod="kube-system/coredns-668d6bf9bc-nvm9h" Nov 5 15:50:47.827479 kubelet[2786]: I1105 15:50:47.827219 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bc848d3d-cb26-4afa-95ce-551dcef759dc-config-volume\") pod \"coredns-668d6bf9bc-v96j4\" (UID: \"bc848d3d-cb26-4afa-95ce-551dcef759dc\") " pod="kube-system/coredns-668d6bf9bc-v96j4" Nov 5 15:50:47.827987 kubelet[2786]: I1105 15:50:47.827904 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jxp2\" (UniqueName: \"kubernetes.io/projected/bc848d3d-cb26-4afa-95ce-551dcef759dc-kube-api-access-2jxp2\") pod \"coredns-668d6bf9bc-v96j4\" (UID: \"bc848d3d-cb26-4afa-95ce-551dcef759dc\") " pod="kube-system/coredns-668d6bf9bc-v96j4" Nov 5 15:50:47.829266 kubelet[2786]: I1105 15:50:47.829231 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6g4q\" (UniqueName: \"kubernetes.io/projected/1e6910d6-7ddf-4518-8146-21294a62053e-kube-api-access-d6g4q\") pod \"coredns-668d6bf9bc-nvm9h\" (UID: \"1e6910d6-7ddf-4518-8146-21294a62053e\") " pod="kube-system/coredns-668d6bf9bc-nvm9h" Nov 5 15:50:48.120124 kubelet[2786]: E1105 15:50:48.120073 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 5 15:50:48.121400 containerd[1598]: time="2025-11-05T15:50:48.121340570Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-v96j4,Uid:bc848d3d-cb26-4afa-95ce-551dcef759dc,Namespace:kube-system,Attempt:0,}" Nov 5 15:50:48.133648 kubelet[2786]: E1105 15:50:48.133134 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 5 15:50:48.135518 containerd[1598]: time="2025-11-05T15:50:48.134799606Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-nvm9h,Uid:1e6910d6-7ddf-4518-8146-21294a62053e,Namespace:kube-system,Attempt:0,}" Nov 5 15:50:48.346733 kubelet[2786]: E1105 15:50:48.346251 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 5 15:50:49.349905 kubelet[2786]: E1105 15:50:49.349868 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 5 15:50:50.042846 systemd-networkd[1479]: cilium_host: Link UP Nov 5 15:50:50.044375 systemd-networkd[1479]: cilium_net: Link UP Nov 5 15:50:50.044585 systemd-networkd[1479]: cilium_net: Gained carrier Nov 5 15:50:50.044743 systemd-networkd[1479]: cilium_host: Gained carrier Nov 5 15:50:50.217207 systemd-networkd[1479]: cilium_vxlan: Link UP Nov 5 15:50:50.217221 systemd-networkd[1479]: cilium_vxlan: Gained carrier Nov 5 15:50:50.355286 kubelet[2786]: E1105 15:50:50.354119 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 5 15:50:50.386283 systemd-networkd[1479]: cilium_host: Gained IPv6LL Nov 5 15:50:50.450116 systemd-networkd[1479]: cilium_net: Gained IPv6LL Nov 5 15:50:50.737344 kernel: NET: Registered PF_ALG protocol family Nov 5 15:50:51.451696 systemd-networkd[1479]: cilium_vxlan: Gained IPv6LL Nov 5 15:50:51.793835 systemd-networkd[1479]: lxc_health: Link UP Nov 5 15:50:51.795776 systemd-networkd[1479]: lxc_health: Gained carrier Nov 5 15:50:52.284026 kernel: eth0: renamed from tmpa1008 Nov 5 15:50:52.290003 kernel: eth0: renamed from tmp4e094 Nov 5 15:50:52.290032 systemd-networkd[1479]: lxcea2d571cd19c: Link UP Nov 5 15:50:52.290617 systemd-networkd[1479]: lxca304c67b0d35: Link UP Nov 5 15:50:52.294463 systemd-networkd[1479]: lxca304c67b0d35: Gained carrier Nov 5 15:50:52.296581 kubelet[2786]: E1105 15:50:52.295456 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 5 15:50:52.296808 systemd-networkd[1479]: lxcea2d571cd19c: Gained carrier Nov 5 15:50:52.365219 kubelet[2786]: I1105 15:50:52.361656 2786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-kvbjc" podStartSLOduration=10.236143183 podStartE2EDuration="17.361525009s" podCreationTimestamp="2025-11-05 15:50:35 +0000 UTC" firstStartedPulling="2025-11-05 15:50:36.688617292 +0000 UTC m=+6.653466763" lastFinishedPulling="2025-11-05 15:50:43.813999099 +0000 UTC m=+13.778848589" observedRunningTime="2025-11-05 15:50:48.370617789 +0000 UTC m=+18.335467279" watchObservedRunningTime="2025-11-05 15:50:52.361525009 +0000 UTC m=+22.326374502" Nov 5 15:50:52.860258 systemd-networkd[1479]: lxc_health: Gained IPv6LL Nov 5 15:50:53.562267 systemd-networkd[1479]: lxcea2d571cd19c: Gained IPv6LL Nov 5 15:50:53.946228 systemd-networkd[1479]: lxca304c67b0d35: Gained IPv6LL Nov 5 15:50:57.070959 containerd[1598]: time="2025-11-05T15:50:57.070726889Z" level=info msg="connecting to shim 4e094331c24c040354122a662db06f179b5cbae0376d461c080a2ba603ebbf10" address="unix:///run/containerd/s/3f62792aaf460a4caa8b3d4b437b104e463f43df37cf017f6af998351fe42d89" namespace=k8s.io protocol=ttrpc version=3 
Nov 5 15:50:57.113451 containerd[1598]: time="2025-11-05T15:50:57.113193316Z" level=info msg="connecting to shim a10086444e58561b0ba1a51a3e99a3610a97d7236bab35d5b1f8193c4887df3f" address="unix:///run/containerd/s/8a52e60429f7008f18a304ed3bdcba421325b6b855d7a18b4a4dfedd91364107" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:50:57.162185 systemd[1]: Started cri-containerd-a10086444e58561b0ba1a51a3e99a3610a97d7236bab35d5b1f8193c4887df3f.scope - libcontainer container a10086444e58561b0ba1a51a3e99a3610a97d7236bab35d5b1f8193c4887df3f. Nov 5 15:50:57.171805 systemd[1]: Started cri-containerd-4e094331c24c040354122a662db06f179b5cbae0376d461c080a2ba603ebbf10.scope - libcontainer container 4e094331c24c040354122a662db06f179b5cbae0376d461c080a2ba603ebbf10. Nov 5 15:50:57.295100 containerd[1598]: time="2025-11-05T15:50:57.295035036Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-v96j4,Uid:bc848d3d-cb26-4afa-95ce-551dcef759dc,Namespace:kube-system,Attempt:0,} returns sandbox id \"4e094331c24c040354122a662db06f179b5cbae0376d461c080a2ba603ebbf10\"" Nov 5 15:50:57.299277 kubelet[2786]: E1105 15:50:57.299208 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 5 15:50:57.312620 containerd[1598]: time="2025-11-05T15:50:57.312447775Z" level=info msg="CreateContainer within sandbox \"4e094331c24c040354122a662db06f179b5cbae0376d461c080a2ba603ebbf10\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 5 15:50:57.348385 containerd[1598]: time="2025-11-05T15:50:57.348152540Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-nvm9h,Uid:1e6910d6-7ddf-4518-8146-21294a62053e,Namespace:kube-system,Attempt:0,} returns sandbox id \"a10086444e58561b0ba1a51a3e99a3610a97d7236bab35d5b1f8193c4887df3f\"" Nov 5 15:50:57.357322 kubelet[2786]: E1105 15:50:57.356551 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 5 15:50:57.361261 containerd[1598]: time="2025-11-05T15:50:57.361091340Z" level=info msg="CreateContainer within sandbox \"a10086444e58561b0ba1a51a3e99a3610a97d7236bab35d5b1f8193c4887df3f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 5 15:50:57.362083 containerd[1598]: time="2025-11-05T15:50:57.361396793Z" level=info msg="Container 8993af4f891bc7696e90e468c6c178845e26f306507133494f38b99d69355191: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:50:57.380776 containerd[1598]: time="2025-11-05T15:50:57.380699749Z" level=info msg="CreateContainer within sandbox \"4e094331c24c040354122a662db06f179b5cbae0376d461c080a2ba603ebbf10\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8993af4f891bc7696e90e468c6c178845e26f306507133494f38b99d69355191\"" Nov 5 15:50:57.383036 containerd[1598]: time="2025-11-05T15:50:57.382674393Z" level=info msg="StartContainer for \"8993af4f891bc7696e90e468c6c178845e26f306507133494f38b99d69355191\"" Nov 5 15:50:57.387055 containerd[1598]: time="2025-11-05T15:50:57.386889291Z" level=info msg="connecting to shim 8993af4f891bc7696e90e468c6c178845e26f306507133494f38b99d69355191" address="unix:///run/containerd/s/3f62792aaf460a4caa8b3d4b437b104e463f43df37cf017f6af998351fe42d89" protocol=ttrpc version=3 Nov 5 15:50:57.391384 containerd[1598]: time="2025-11-05T15:50:57.391217685Z" level=info msg="Container 
32479674301a64cf4ef2c60aa1cf5ec946cce02002e094e58c08bbbb75b9025c: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:50:57.402178 containerd[1598]: time="2025-11-05T15:50:57.402115952Z" level=info msg="CreateContainer within sandbox \"a10086444e58561b0ba1a51a3e99a3610a97d7236bab35d5b1f8193c4887df3f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"32479674301a64cf4ef2c60aa1cf5ec946cce02002e094e58c08bbbb75b9025c\"" Nov 5 15:50:57.405335 containerd[1598]: time="2025-11-05T15:50:57.405269186Z" level=info msg="StartContainer for \"32479674301a64cf4ef2c60aa1cf5ec946cce02002e094e58c08bbbb75b9025c\"" Nov 5 15:50:57.412362 containerd[1598]: time="2025-11-05T15:50:57.412161481Z" level=info msg="connecting to shim 32479674301a64cf4ef2c60aa1cf5ec946cce02002e094e58c08bbbb75b9025c" address="unix:///run/containerd/s/8a52e60429f7008f18a304ed3bdcba421325b6b855d7a18b4a4dfedd91364107" protocol=ttrpc version=3 Nov 5 15:50:57.426261 systemd[1]: Started cri-containerd-8993af4f891bc7696e90e468c6c178845e26f306507133494f38b99d69355191.scope - libcontainer container 8993af4f891bc7696e90e468c6c178845e26f306507133494f38b99d69355191. Nov 5 15:50:57.455286 systemd[1]: Started cri-containerd-32479674301a64cf4ef2c60aa1cf5ec946cce02002e094e58c08bbbb75b9025c.scope - libcontainer container 32479674301a64cf4ef2c60aa1cf5ec946cce02002e094e58c08bbbb75b9025c. Nov 5 15:50:57.495131 containerd[1598]: time="2025-11-05T15:50:57.495000476Z" level=info msg="StartContainer for \"8993af4f891bc7696e90e468c6c178845e26f306507133494f38b99d69355191\" returns successfully" Nov 5 15:50:57.514187 containerd[1598]: time="2025-11-05T15:50:57.514084103Z" level=info msg="StartContainer for \"32479674301a64cf4ef2c60aa1cf5ec946cce02002e094e58c08bbbb75b9025c\" returns successfully" Nov 5 15:50:58.051995 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount810385752.mount: Deactivated successfully. 
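[Annotation] The containerd messages above trace the CRI call order for the two coredns pods: RunPodSandbox returns a sandbox id, CreateContainer within that sandbox returns a container id, and StartContainer reports success. The Go sketch below only illustrates that sequence against the CRI runtime service; the helper name and the configs passed in are placeholders, not code taken from kubelet or containerd.

    // Illustrative sketch of the RunPodSandbox -> CreateContainer -> StartContainer
    // order recorded in the log above. Hypothetical helper; configs are placeholders.
    package crisketch

    import (
    	"context"

    	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    // startPodContainer mirrors the sandbox/create/start steps visible in the log.
    func startPodContainer(ctx context.Context, rt runtimeapi.RuntimeServiceClient,
    	sandboxCfg *runtimeapi.PodSandboxConfig, ctrCfg *runtimeapi.ContainerConfig) (string, error) {

    	// "RunPodSandbox ... returns sandbox id" in the log.
    	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
    	if err != nil {
    		return "", err
    	}

    	// "CreateContainer within sandbox ... returns container id".
    	cr, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
    		PodSandboxId:  sb.PodSandboxId,
    		Config:        ctrCfg,
    		SandboxConfig: sandboxCfg,
    	})
    	if err != nil {
    		return "", err
    	}

    	// "StartContainer ... returns successfully".
    	_, err = rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: cr.ContainerId})
    	return cr.ContainerId, err
    }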
Nov 5 15:50:58.402649 kubelet[2786]: E1105 15:50:58.402152 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 5 15:50:58.407435 kubelet[2786]: E1105 15:50:58.407252 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 5 15:50:58.428571 kubelet[2786]: I1105 15:50:58.428496 2786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-v96j4" podStartSLOduration=22.428463427 podStartE2EDuration="22.428463427s" podCreationTimestamp="2025-11-05 15:50:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 15:50:58.424603114 +0000 UTC m=+28.389452609" watchObservedRunningTime="2025-11-05 15:50:58.428463427 +0000 UTC m=+28.393312919" Nov 5 15:50:59.411145 kubelet[2786]: E1105 15:50:59.410959 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 5 15:50:59.411145 kubelet[2786]: E1105 15:50:59.411023 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 5 15:51:00.411605 kubelet[2786]: E1105 15:51:00.411311 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 5 15:51:00.411605 kubelet[2786]: E1105 15:51:00.411445 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 5 15:51:01.286031 kubelet[2786]: I1105 15:51:01.285589 2786 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 5 15:51:01.286576 kubelet[2786]: E1105 15:51:01.286360 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 5 15:51:01.324797 kubelet[2786]: I1105 15:51:01.324226 2786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-nvm9h" podStartSLOduration=25.324205909 podStartE2EDuration="25.324205909s" podCreationTimestamp="2025-11-05 15:50:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 15:50:58.471462935 +0000 UTC m=+28.436312427" watchObservedRunningTime="2025-11-05 15:51:01.324205909 +0000 UTC m=+31.289055396" Nov 5 15:51:01.414167 kubelet[2786]: E1105 15:51:01.414118 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 5 15:51:12.298580 systemd[1]: Started sshd@9-137.184.121.184:22-139.178.68.195:48646.service - OpenSSH per-connection server daemon (139.178.68.195:48646). 
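[Annotation] The recurring dns.go:153 warning above is emitted because the resolv.conf handed to pods lists more nameserver entries than the resolver limit of three; kubelet drops the extras and logs the nameserver line it actually applied. A minimal Go sketch of that truncation, assuming a plain cap of three entries (hypothetical helper, not kubelet's implementation):

    // Sketch of the truncation behind "some nameservers have been omitted".
    package dnssketch

    const maxNameservers = 3 // resolver limit the kubelet warning refers to

    // capNameservers keeps only the first maxNameservers entries, the way the
    // "applied nameserver line" in the log ends up with three addresses.
    func capNameservers(servers []string) (applied []string, omitted int) {
    	if len(servers) <= maxNameservers {
    		return servers, 0
    	}
    	return servers[:maxNameservers], len(servers) - maxNameservers
    }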
Nov 5 15:51:12.471817 sshd[4113]: Accepted publickey for core from 139.178.68.195 port 48646 ssh2: RSA SHA256:6pZ2eqROk+ALbQ+c/ul+tfC2zt1KpSHiHdkR7HgdI30 Nov 5 15:51:12.475111 sshd-session[4113]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:51:12.483033 systemd-logind[1568]: New session 10 of user core. Nov 5 15:51:12.488359 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 5 15:51:13.142863 sshd[4116]: Connection closed by 139.178.68.195 port 48646 Nov 5 15:51:13.141821 sshd-session[4113]: pam_unix(sshd:session): session closed for user core Nov 5 15:51:13.149461 systemd[1]: sshd@9-137.184.121.184:22-139.178.68.195:48646.service: Deactivated successfully. Nov 5 15:51:13.152243 systemd[1]: session-10.scope: Deactivated successfully. Nov 5 15:51:13.153495 systemd-logind[1568]: Session 10 logged out. Waiting for processes to exit. Nov 5 15:51:13.155700 systemd-logind[1568]: Removed session 10. Nov 5 15:51:18.158818 systemd[1]: Started sshd@10-137.184.121.184:22-139.178.68.195:55638.service - OpenSSH per-connection server daemon (139.178.68.195:55638). Nov 5 15:51:18.233822 sshd[4131]: Accepted publickey for core from 139.178.68.195 port 55638 ssh2: RSA SHA256:6pZ2eqROk+ALbQ+c/ul+tfC2zt1KpSHiHdkR7HgdI30 Nov 5 15:51:18.235717 sshd-session[4131]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:51:18.243805 systemd-logind[1568]: New session 11 of user core. Nov 5 15:51:18.250217 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 5 15:51:18.391575 sshd[4134]: Connection closed by 139.178.68.195 port 55638 Nov 5 15:51:18.392541 sshd-session[4131]: pam_unix(sshd:session): session closed for user core Nov 5 15:51:18.398012 systemd-logind[1568]: Session 11 logged out. Waiting for processes to exit. Nov 5 15:51:18.398916 systemd[1]: sshd@10-137.184.121.184:22-139.178.68.195:55638.service: Deactivated successfully. Nov 5 15:51:18.401734 systemd[1]: session-11.scope: Deactivated successfully. Nov 5 15:51:18.405548 systemd-logind[1568]: Removed session 11. Nov 5 15:51:23.414957 systemd[1]: Started sshd@11-137.184.121.184:22-139.178.68.195:54852.service - OpenSSH per-connection server daemon (139.178.68.195:54852). Nov 5 15:51:23.492289 sshd[4147]: Accepted publickey for core from 139.178.68.195 port 54852 ssh2: RSA SHA256:6pZ2eqROk+ALbQ+c/ul+tfC2zt1KpSHiHdkR7HgdI30 Nov 5 15:51:23.493951 sshd-session[4147]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:51:23.499750 systemd-logind[1568]: New session 12 of user core. Nov 5 15:51:23.506569 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 5 15:51:23.667915 sshd[4150]: Connection closed by 139.178.68.195 port 54852 Nov 5 15:51:23.669298 sshd-session[4147]: pam_unix(sshd:session): session closed for user core Nov 5 15:51:23.677405 systemd[1]: sshd@11-137.184.121.184:22-139.178.68.195:54852.service: Deactivated successfully. Nov 5 15:51:23.682294 systemd[1]: session-12.scope: Deactivated successfully. Nov 5 15:51:23.684245 systemd-logind[1568]: Session 12 logged out. Waiting for processes to exit. Nov 5 15:51:23.687211 systemd-logind[1568]: Removed session 12. Nov 5 15:51:28.685115 systemd[1]: Started sshd@12-137.184.121.184:22-139.178.68.195:54854.service - OpenSSH per-connection server daemon (139.178.68.195:54854). 
Nov 5 15:51:28.776015 sshd[4164]: Accepted publickey for core from 139.178.68.195 port 54854 ssh2: RSA SHA256:6pZ2eqROk+ALbQ+c/ul+tfC2zt1KpSHiHdkR7HgdI30 Nov 5 15:51:28.777831 sshd-session[4164]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:51:28.786869 systemd-logind[1568]: New session 13 of user core. Nov 5 15:51:28.798268 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 5 15:51:28.946689 sshd[4167]: Connection closed by 139.178.68.195 port 54854 Nov 5 15:51:28.945705 sshd-session[4164]: pam_unix(sshd:session): session closed for user core Nov 5 15:51:28.962521 systemd[1]: sshd@12-137.184.121.184:22-139.178.68.195:54854.service: Deactivated successfully. Nov 5 15:51:28.966344 systemd[1]: session-13.scope: Deactivated successfully. Nov 5 15:51:28.971671 systemd-logind[1568]: Session 13 logged out. Waiting for processes to exit. Nov 5 15:51:28.978154 systemd[1]: Started sshd@13-137.184.121.184:22-139.178.68.195:54870.service - OpenSSH per-connection server daemon (139.178.68.195:54870). Nov 5 15:51:28.980198 systemd-logind[1568]: Removed session 13. Nov 5 15:51:29.056626 sshd[4180]: Accepted publickey for core from 139.178.68.195 port 54870 ssh2: RSA SHA256:6pZ2eqROk+ALbQ+c/ul+tfC2zt1KpSHiHdkR7HgdI30 Nov 5 15:51:29.058508 sshd-session[4180]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:51:29.063752 systemd-logind[1568]: New session 14 of user core. Nov 5 15:51:29.076292 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 5 15:51:29.275310 sshd[4183]: Connection closed by 139.178.68.195 port 54870 Nov 5 15:51:29.276294 sshd-session[4180]: pam_unix(sshd:session): session closed for user core Nov 5 15:51:29.288628 systemd[1]: sshd@13-137.184.121.184:22-139.178.68.195:54870.service: Deactivated successfully. Nov 5 15:51:29.293444 systemd[1]: session-14.scope: Deactivated successfully. Nov 5 15:51:29.295092 systemd-logind[1568]: Session 14 logged out. Waiting for processes to exit. Nov 5 15:51:29.302246 systemd[1]: Started sshd@14-137.184.121.184:22-139.178.68.195:54884.service - OpenSSH per-connection server daemon (139.178.68.195:54884). Nov 5 15:51:29.305021 systemd-logind[1568]: Removed session 14. Nov 5 15:51:29.383008 sshd[4193]: Accepted publickey for core from 139.178.68.195 port 54884 ssh2: RSA SHA256:6pZ2eqROk+ALbQ+c/ul+tfC2zt1KpSHiHdkR7HgdI30 Nov 5 15:51:29.384735 sshd-session[4193]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:51:29.391287 systemd-logind[1568]: New session 15 of user core. Nov 5 15:51:29.402288 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 5 15:51:29.566600 sshd[4196]: Connection closed by 139.178.68.195 port 54884 Nov 5 15:51:29.567225 sshd-session[4193]: pam_unix(sshd:session): session closed for user core Nov 5 15:51:29.571651 systemd[1]: sshd@14-137.184.121.184:22-139.178.68.195:54884.service: Deactivated successfully. Nov 5 15:51:29.574580 systemd[1]: session-15.scope: Deactivated successfully. Nov 5 15:51:29.578311 systemd-logind[1568]: Session 15 logged out. Waiting for processes to exit. Nov 5 15:51:29.579420 systemd-logind[1568]: Removed session 15. Nov 5 15:51:34.585060 systemd[1]: Started sshd@15-137.184.121.184:22-139.178.68.195:41168.service - OpenSSH per-connection server daemon (139.178.68.195:41168). 
Nov 5 15:51:34.664222 sshd[4210]: Accepted publickey for core from 139.178.68.195 port 41168 ssh2: RSA SHA256:6pZ2eqROk+ALbQ+c/ul+tfC2zt1KpSHiHdkR7HgdI30 Nov 5 15:51:34.665913 sshd-session[4210]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:51:34.672570 systemd-logind[1568]: New session 16 of user core. Nov 5 15:51:34.681261 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 5 15:51:34.823221 sshd[4213]: Connection closed by 139.178.68.195 port 41168 Nov 5 15:51:34.823891 sshd-session[4210]: pam_unix(sshd:session): session closed for user core Nov 5 15:51:34.829484 systemd[1]: sshd@15-137.184.121.184:22-139.178.68.195:41168.service: Deactivated successfully. Nov 5 15:51:34.832388 systemd[1]: session-16.scope: Deactivated successfully. Nov 5 15:51:34.835718 systemd-logind[1568]: Session 16 logged out. Waiting for processes to exit. Nov 5 15:51:34.837432 systemd-logind[1568]: Removed session 16. Nov 5 15:51:39.202576 kubelet[2786]: E1105 15:51:39.202473 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 5 15:51:39.846266 systemd[1]: Started sshd@16-137.184.121.184:22-139.178.68.195:41180.service - OpenSSH per-connection server daemon (139.178.68.195:41180). Nov 5 15:51:39.937632 sshd[4228]: Accepted publickey for core from 139.178.68.195 port 41180 ssh2: RSA SHA256:6pZ2eqROk+ALbQ+c/ul+tfC2zt1KpSHiHdkR7HgdI30 Nov 5 15:51:39.939848 sshd-session[4228]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:51:39.945617 systemd-logind[1568]: New session 17 of user core. Nov 5 15:51:39.957312 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 5 15:51:40.113013 sshd[4231]: Connection closed by 139.178.68.195 port 41180 Nov 5 15:51:40.110814 sshd-session[4228]: pam_unix(sshd:session): session closed for user core Nov 5 15:51:40.120762 systemd[1]: sshd@16-137.184.121.184:22-139.178.68.195:41180.service: Deactivated successfully. Nov 5 15:51:40.126235 systemd[1]: session-17.scope: Deactivated successfully. Nov 5 15:51:40.129397 systemd-logind[1568]: Session 17 logged out. Waiting for processes to exit. Nov 5 15:51:40.133668 systemd[1]: Started sshd@17-137.184.121.184:22-139.178.68.195:41186.service - OpenSSH per-connection server daemon (139.178.68.195:41186). Nov 5 15:51:40.137117 systemd-logind[1568]: Removed session 17. Nov 5 15:51:40.210623 sshd[4243]: Accepted publickey for core from 139.178.68.195 port 41186 ssh2: RSA SHA256:6pZ2eqROk+ALbQ+c/ul+tfC2zt1KpSHiHdkR7HgdI30 Nov 5 15:51:40.213005 sshd-session[4243]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:51:40.219079 systemd-logind[1568]: New session 18 of user core. Nov 5 15:51:40.231349 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 5 15:51:40.705261 sshd[4246]: Connection closed by 139.178.68.195 port 41186 Nov 5 15:51:40.706615 sshd-session[4243]: pam_unix(sshd:session): session closed for user core Nov 5 15:51:40.725474 systemd[1]: sshd@17-137.184.121.184:22-139.178.68.195:41186.service: Deactivated successfully. Nov 5 15:51:40.728790 systemd[1]: session-18.scope: Deactivated successfully. Nov 5 15:51:40.730312 systemd-logind[1568]: Session 18 logged out. Waiting for processes to exit. 
Nov 5 15:51:40.735224 systemd[1]: Started sshd@18-137.184.121.184:22-139.178.68.195:41200.service - OpenSSH per-connection server daemon (139.178.68.195:41200). Nov 5 15:51:40.737622 systemd-logind[1568]: Removed session 18. Nov 5 15:51:40.837143 sshd[4256]: Accepted publickey for core from 139.178.68.195 port 41200 ssh2: RSA SHA256:6pZ2eqROk+ALbQ+c/ul+tfC2zt1KpSHiHdkR7HgdI30 Nov 5 15:51:40.840091 sshd-session[4256]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:51:40.847625 systemd-logind[1568]: New session 19 of user core. Nov 5 15:51:40.855364 systemd[1]: Started session-19.scope - Session 19 of User core. Nov 5 15:51:41.718046 sshd[4259]: Connection closed by 139.178.68.195 port 41200 Nov 5 15:51:41.717607 sshd-session[4256]: pam_unix(sshd:session): session closed for user core Nov 5 15:51:41.735355 systemd[1]: sshd@18-137.184.121.184:22-139.178.68.195:41200.service: Deactivated successfully. Nov 5 15:51:41.743146 systemd[1]: session-19.scope: Deactivated successfully. Nov 5 15:51:41.748507 systemd-logind[1568]: Session 19 logged out. Waiting for processes to exit. Nov 5 15:51:41.756393 systemd[1]: Started sshd@19-137.184.121.184:22-139.178.68.195:41202.service - OpenSSH per-connection server daemon (139.178.68.195:41202). Nov 5 15:51:41.759214 systemd-logind[1568]: Removed session 19. Nov 5 15:51:41.853878 sshd[4276]: Accepted publickey for core from 139.178.68.195 port 41202 ssh2: RSA SHA256:6pZ2eqROk+ALbQ+c/ul+tfC2zt1KpSHiHdkR7HgdI30 Nov 5 15:51:41.855618 sshd-session[4276]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:51:41.861791 systemd-logind[1568]: New session 20 of user core. Nov 5 15:51:41.874236 systemd[1]: Started session-20.scope - Session 20 of User core. Nov 5 15:51:42.223258 sshd[4279]: Connection closed by 139.178.68.195 port 41202 Nov 5 15:51:42.223731 sshd-session[4276]: pam_unix(sshd:session): session closed for user core Nov 5 15:51:42.239785 systemd[1]: sshd@19-137.184.121.184:22-139.178.68.195:41202.service: Deactivated successfully. Nov 5 15:51:42.245897 systemd[1]: session-20.scope: Deactivated successfully. Nov 5 15:51:42.250450 systemd-logind[1568]: Session 20 logged out. Waiting for processes to exit. Nov 5 15:51:42.258597 systemd[1]: Started sshd@20-137.184.121.184:22-139.178.68.195:41206.service - OpenSSH per-connection server daemon (139.178.68.195:41206). Nov 5 15:51:42.261107 systemd-logind[1568]: Removed session 20. Nov 5 15:51:42.348990 sshd[4289]: Accepted publickey for core from 139.178.68.195 port 41206 ssh2: RSA SHA256:6pZ2eqROk+ALbQ+c/ul+tfC2zt1KpSHiHdkR7HgdI30 Nov 5 15:51:42.350382 sshd-session[4289]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:51:42.358436 systemd-logind[1568]: New session 21 of user core. Nov 5 15:51:42.377296 systemd[1]: Started session-21.scope - Session 21 of User core. Nov 5 15:51:42.564431 sshd[4292]: Connection closed by 139.178.68.195 port 41206 Nov 5 15:51:42.563737 sshd-session[4289]: pam_unix(sshd:session): session closed for user core Nov 5 15:51:42.570447 systemd[1]: sshd@20-137.184.121.184:22-139.178.68.195:41206.service: Deactivated successfully. Nov 5 15:51:42.574680 systemd[1]: session-21.scope: Deactivated successfully. Nov 5 15:51:42.576299 systemd-logind[1568]: Session 21 logged out. Waiting for processes to exit. Nov 5 15:51:42.578588 systemd-logind[1568]: Removed session 21. 
Nov 5 15:51:43.204994 kubelet[2786]: E1105 15:51:43.204837 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 5 15:51:44.203459 kubelet[2786]: E1105 15:51:44.203413 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 5 15:51:47.580610 systemd[1]: Started sshd@21-137.184.121.184:22-139.178.68.195:47212.service - OpenSSH per-connection server daemon (139.178.68.195:47212). Nov 5 15:51:47.660517 sshd[4305]: Accepted publickey for core from 139.178.68.195 port 47212 ssh2: RSA SHA256:6pZ2eqROk+ALbQ+c/ul+tfC2zt1KpSHiHdkR7HgdI30 Nov 5 15:51:47.663242 sshd-session[4305]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:51:47.668654 systemd-logind[1568]: New session 22 of user core. Nov 5 15:51:47.681320 systemd[1]: Started session-22.scope - Session 22 of User core. Nov 5 15:51:47.817346 sshd[4308]: Connection closed by 139.178.68.195 port 47212 Nov 5 15:51:47.818008 sshd-session[4305]: pam_unix(sshd:session): session closed for user core Nov 5 15:51:47.824170 systemd[1]: sshd@21-137.184.121.184:22-139.178.68.195:47212.service: Deactivated successfully. Nov 5 15:51:47.826820 systemd[1]: session-22.scope: Deactivated successfully. Nov 5 15:51:47.828250 systemd-logind[1568]: Session 22 logged out. Waiting for processes to exit. Nov 5 15:51:47.830278 systemd-logind[1568]: Removed session 22. Nov 5 15:51:50.204581 kubelet[2786]: E1105 15:51:50.203739 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 5 15:51:52.838915 systemd[1]: Started sshd@22-137.184.121.184:22-139.178.68.195:47222.service - OpenSSH per-connection server daemon (139.178.68.195:47222). Nov 5 15:51:52.910145 sshd[4319]: Accepted publickey for core from 139.178.68.195 port 47222 ssh2: RSA SHA256:6pZ2eqROk+ALbQ+c/ul+tfC2zt1KpSHiHdkR7HgdI30 Nov 5 15:51:52.911683 sshd-session[4319]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:51:52.917105 systemd-logind[1568]: New session 23 of user core. Nov 5 15:51:52.927231 systemd[1]: Started session-23.scope - Session 23 of User core. Nov 5 15:51:53.063067 sshd[4322]: Connection closed by 139.178.68.195 port 47222 Nov 5 15:51:53.062463 sshd-session[4319]: pam_unix(sshd:session): session closed for user core Nov 5 15:51:53.067074 systemd[1]: sshd@22-137.184.121.184:22-139.178.68.195:47222.service: Deactivated successfully. Nov 5 15:51:53.070054 systemd[1]: session-23.scope: Deactivated successfully. Nov 5 15:51:53.072734 systemd-logind[1568]: Session 23 logged out. Waiting for processes to exit. Nov 5 15:51:53.074778 systemd-logind[1568]: Removed session 23. Nov 5 15:51:58.078732 systemd[1]: Started sshd@23-137.184.121.184:22-139.178.68.195:41804.service - OpenSSH per-connection server daemon (139.178.68.195:41804). Nov 5 15:51:58.154600 sshd[4335]: Accepted publickey for core from 139.178.68.195 port 41804 ssh2: RSA SHA256:6pZ2eqROk+ALbQ+c/ul+tfC2zt1KpSHiHdkR7HgdI30 Nov 5 15:51:58.156672 sshd-session[4335]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:51:58.163739 systemd-logind[1568]: New session 24 of user core. 
Nov 5 15:51:58.169222 systemd[1]: Started session-24.scope - Session 24 of User core. Nov 5 15:51:58.319389 sshd[4338]: Connection closed by 139.178.68.195 port 41804 Nov 5 15:51:58.320106 sshd-session[4335]: pam_unix(sshd:session): session closed for user core Nov 5 15:51:58.335117 systemd[1]: sshd@23-137.184.121.184:22-139.178.68.195:41804.service: Deactivated successfully. Nov 5 15:51:58.338439 systemd[1]: session-24.scope: Deactivated successfully. Nov 5 15:51:58.339918 systemd-logind[1568]: Session 24 logged out. Waiting for processes to exit. Nov 5 15:51:58.346113 systemd[1]: Started sshd@24-137.184.121.184:22-139.178.68.195:41820.service - OpenSSH per-connection server daemon (139.178.68.195:41820). Nov 5 15:51:58.347995 systemd-logind[1568]: Removed session 24. Nov 5 15:51:58.425038 sshd[4350]: Accepted publickey for core from 139.178.68.195 port 41820 ssh2: RSA SHA256:6pZ2eqROk+ALbQ+c/ul+tfC2zt1KpSHiHdkR7HgdI30 Nov 5 15:51:58.427287 sshd-session[4350]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:51:58.436523 systemd-logind[1568]: New session 25 of user core. Nov 5 15:51:58.446265 systemd[1]: Started session-25.scope - Session 25 of User core. Nov 5 15:51:59.939381 containerd[1598]: time="2025-11-05T15:51:59.939171245Z" level=info msg="StopContainer for \"469b696d7d45599ed0a1386b3bae8404e9d5cb5b238cb4e649d5fdcfba7d6d20\" with timeout 30 (s)" Nov 5 15:51:59.942520 containerd[1598]: time="2025-11-05T15:51:59.940596994Z" level=info msg="Stop container \"469b696d7d45599ed0a1386b3bae8404e9d5cb5b238cb4e649d5fdcfba7d6d20\" with signal terminated" Nov 5 15:52:00.016730 containerd[1598]: time="2025-11-05T15:52:00.016676137Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 5 15:52:00.037543 containerd[1598]: time="2025-11-05T15:52:00.037493009Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7dacc059177b20e5fbb696e1a24c3ee88b11abeb62917c006267dd4a3f5c6919\" id:\"62d36a009442efdf67b16f89f0c6bb85fe889bb5a760a640b77a5017d3de38b1\" pid:4378 exited_at:{seconds:1762357920 nanos:37049277}" Nov 5 15:52:00.043552 containerd[1598]: time="2025-11-05T15:52:00.043506151Z" level=info msg="StopContainer for \"7dacc059177b20e5fbb696e1a24c3ee88b11abeb62917c006267dd4a3f5c6919\" with timeout 2 (s)" Nov 5 15:52:00.043950 containerd[1598]: time="2025-11-05T15:52:00.043911459Z" level=info msg="Stop container \"7dacc059177b20e5fbb696e1a24c3ee88b11abeb62917c006267dd4a3f5c6919\" with signal terminated" Nov 5 15:52:00.049430 systemd[1]: cri-containerd-469b696d7d45599ed0a1386b3bae8404e9d5cb5b238cb4e649d5fdcfba7d6d20.scope: Deactivated successfully. 
Nov 5 15:52:00.055612 containerd[1598]: time="2025-11-05T15:52:00.055553190Z" level=info msg="received exit event container_id:\"469b696d7d45599ed0a1386b3bae8404e9d5cb5b238cb4e649d5fdcfba7d6d20\" id:\"469b696d7d45599ed0a1386b3bae8404e9d5cb5b238cb4e649d5fdcfba7d6d20\" pid:3364 exited_at:{seconds:1762357920 nanos:54096052}" Nov 5 15:52:00.058335 containerd[1598]: time="2025-11-05T15:52:00.058250058Z" level=info msg="TaskExit event in podsandbox handler container_id:\"469b696d7d45599ed0a1386b3bae8404e9d5cb5b238cb4e649d5fdcfba7d6d20\" id:\"469b696d7d45599ed0a1386b3bae8404e9d5cb5b238cb4e649d5fdcfba7d6d20\" pid:3364 exited_at:{seconds:1762357920 nanos:54096052}" Nov 5 15:52:00.084412 systemd-networkd[1479]: lxc_health: Link DOWN Nov 5 15:52:00.084428 systemd-networkd[1479]: lxc_health: Lost carrier Nov 5 15:52:00.117605 systemd[1]: cri-containerd-7dacc059177b20e5fbb696e1a24c3ee88b11abeb62917c006267dd4a3f5c6919.scope: Deactivated successfully. Nov 5 15:52:00.118068 systemd[1]: cri-containerd-7dacc059177b20e5fbb696e1a24c3ee88b11abeb62917c006267dd4a3f5c6919.scope: Consumed 8.750s CPU time, 195M memory peak, 74.9M read from disk, 13.3M written to disk. Nov 5 15:52:00.123886 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-469b696d7d45599ed0a1386b3bae8404e9d5cb5b238cb4e649d5fdcfba7d6d20-rootfs.mount: Deactivated successfully. Nov 5 15:52:00.125574 containerd[1598]: time="2025-11-05T15:52:00.124105268Z" level=info msg="received exit event container_id:\"7dacc059177b20e5fbb696e1a24c3ee88b11abeb62917c006267dd4a3f5c6919\" id:\"7dacc059177b20e5fbb696e1a24c3ee88b11abeb62917c006267dd4a3f5c6919\" pid:3435 exited_at:{seconds:1762357920 nanos:120470305}" Nov 5 15:52:00.125652 containerd[1598]: time="2025-11-05T15:52:00.124431557Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7dacc059177b20e5fbb696e1a24c3ee88b11abeb62917c006267dd4a3f5c6919\" id:\"7dacc059177b20e5fbb696e1a24c3ee88b11abeb62917c006267dd4a3f5c6919\" pid:3435 exited_at:{seconds:1762357920 nanos:120470305}" Nov 5 15:52:00.140794 containerd[1598]: time="2025-11-05T15:52:00.140748617Z" level=info msg="StopContainer for \"469b696d7d45599ed0a1386b3bae8404e9d5cb5b238cb4e649d5fdcfba7d6d20\" returns successfully" Nov 5 15:52:00.146282 containerd[1598]: time="2025-11-05T15:52:00.146087186Z" level=info msg="StopPodSandbox for \"708a2299ac97e4b0f835a65746c9cfa8b3b375d9d27baaaffc50ee80e54aac28\"" Nov 5 15:52:00.146649 containerd[1598]: time="2025-11-05T15:52:00.146504640Z" level=info msg="Container to stop \"469b696d7d45599ed0a1386b3bae8404e9d5cb5b238cb4e649d5fdcfba7d6d20\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 5 15:52:00.158539 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7dacc059177b20e5fbb696e1a24c3ee88b11abeb62917c006267dd4a3f5c6919-rootfs.mount: Deactivated successfully. Nov 5 15:52:00.165151 systemd[1]: cri-containerd-708a2299ac97e4b0f835a65746c9cfa8b3b375d9d27baaaffc50ee80e54aac28.scope: Deactivated successfully. 
Nov 5 15:52:00.169948 containerd[1598]: time="2025-11-05T15:52:00.169879408Z" level=info msg="StopContainer for \"7dacc059177b20e5fbb696e1a24c3ee88b11abeb62917c006267dd4a3f5c6919\" returns successfully" Nov 5 15:52:00.170742 containerd[1598]: time="2025-11-05T15:52:00.170699030Z" level=info msg="StopPodSandbox for \"b22e374448d24910ca86f22d87aee5e68e1caaf01d42bafa46d6160025036c6a\"" Nov 5 15:52:00.171363 containerd[1598]: time="2025-11-05T15:52:00.170895129Z" level=info msg="Container to stop \"7dacc059177b20e5fbb696e1a24c3ee88b11abeb62917c006267dd4a3f5c6919\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 5 15:52:00.171623 containerd[1598]: time="2025-11-05T15:52:00.171346269Z" level=info msg="Container to stop \"0a6b4503482af33f9a9e24fa1bcf4c93beb29de1fa0a4141dc46899843e082b8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 5 15:52:00.171944 containerd[1598]: time="2025-11-05T15:52:00.171879610Z" level=info msg="Container to stop \"bd09933f076ab487df99ba4e46179a390831af0992d356f05ca6978b6a82cf94\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 5 15:52:00.171944 containerd[1598]: time="2025-11-05T15:52:00.171913194Z" level=info msg="Container to stop \"518062574cbab2c6ba832301c72b212958e7788c25a864f2c07c30bd21ce6bd3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 5 15:52:00.172221 containerd[1598]: time="2025-11-05T15:52:00.171925956Z" level=info msg="Container to stop \"44706daef2ff86d33f5c3fb52d4f0d292e53909df54ef0990d161388ac35d6ae\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 5 15:52:00.172617 containerd[1598]: time="2025-11-05T15:52:00.172225870Z" level=info msg="TaskExit event in podsandbox handler container_id:\"708a2299ac97e4b0f835a65746c9cfa8b3b375d9d27baaaffc50ee80e54aac28\" id:\"708a2299ac97e4b0f835a65746c9cfa8b3b375d9d27baaaffc50ee80e54aac28\" pid:3001 exit_status:137 exited_at:{seconds:1762357920 nanos:171293966}" Nov 5 15:52:00.181388 systemd[1]: cri-containerd-b22e374448d24910ca86f22d87aee5e68e1caaf01d42bafa46d6160025036c6a.scope: Deactivated successfully. Nov 5 15:52:00.223225 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b22e374448d24910ca86f22d87aee5e68e1caaf01d42bafa46d6160025036c6a-rootfs.mount: Deactivated successfully. Nov 5 15:52:00.229256 containerd[1598]: time="2025-11-05T15:52:00.229203192Z" level=info msg="shim disconnected" id=b22e374448d24910ca86f22d87aee5e68e1caaf01d42bafa46d6160025036c6a namespace=k8s.io Nov 5 15:52:00.229256 containerd[1598]: time="2025-11-05T15:52:00.229245743Z" level=warning msg="cleaning up after shim disconnected" id=b22e374448d24910ca86f22d87aee5e68e1caaf01d42bafa46d6160025036c6a namespace=k8s.io Nov 5 15:52:00.242154 containerd[1598]: time="2025-11-05T15:52:00.229253752Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 5 15:52:00.253171 containerd[1598]: time="2025-11-05T15:52:00.251867562Z" level=info msg="shim disconnected" id=708a2299ac97e4b0f835a65746c9cfa8b3b375d9d27baaaffc50ee80e54aac28 namespace=k8s.io Nov 5 15:52:00.252637 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-708a2299ac97e4b0f835a65746c9cfa8b3b375d9d27baaaffc50ee80e54aac28-rootfs.mount: Deactivated successfully. 
Nov 5 15:52:00.256079 containerd[1598]: time="2025-11-05T15:52:00.253513636Z" level=warning msg="cleaning up after shim disconnected" id=708a2299ac97e4b0f835a65746c9cfa8b3b375d9d27baaaffc50ee80e54aac28 namespace=k8s.io Nov 5 15:52:00.256481 containerd[1598]: time="2025-11-05T15:52:00.256353329Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 5 15:52:00.269713 containerd[1598]: time="2025-11-05T15:52:00.269661897Z" level=info msg="received exit event sandbox_id:\"b22e374448d24910ca86f22d87aee5e68e1caaf01d42bafa46d6160025036c6a\" exit_status:137 exited_at:{seconds:1762357920 nanos:184677671}" Nov 5 15:52:00.271279 containerd[1598]: time="2025-11-05T15:52:00.271234555Z" level=info msg="TearDown network for sandbox \"b22e374448d24910ca86f22d87aee5e68e1caaf01d42bafa46d6160025036c6a\" successfully" Nov 5 15:52:00.271279 containerd[1598]: time="2025-11-05T15:52:00.271262498Z" level=info msg="StopPodSandbox for \"b22e374448d24910ca86f22d87aee5e68e1caaf01d42bafa46d6160025036c6a\" returns successfully" Nov 5 15:52:00.278880 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b22e374448d24910ca86f22d87aee5e68e1caaf01d42bafa46d6160025036c6a-shm.mount: Deactivated successfully. Nov 5 15:52:00.282099 kubelet[2786]: E1105 15:52:00.281979 2786 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Nov 5 15:52:00.308228 containerd[1598]: time="2025-11-05T15:52:00.307103968Z" level=info msg="received exit event sandbox_id:\"708a2299ac97e4b0f835a65746c9cfa8b3b375d9d27baaaffc50ee80e54aac28\" exit_status:137 exited_at:{seconds:1762357920 nanos:171293966}" Nov 5 15:52:00.309220 containerd[1598]: time="2025-11-05T15:52:00.307229665Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b22e374448d24910ca86f22d87aee5e68e1caaf01d42bafa46d6160025036c6a\" id:\"b22e374448d24910ca86f22d87aee5e68e1caaf01d42bafa46d6160025036c6a\" pid:2938 exit_status:137 exited_at:{seconds:1762357920 nanos:184677671}" Nov 5 15:52:00.309220 containerd[1598]: time="2025-11-05T15:52:00.307563856Z" level=info msg="TearDown network for sandbox \"708a2299ac97e4b0f835a65746c9cfa8b3b375d9d27baaaffc50ee80e54aac28\" successfully" Nov 5 15:52:00.309220 containerd[1598]: time="2025-11-05T15:52:00.308778535Z" level=info msg="StopPodSandbox for \"708a2299ac97e4b0f835a65746c9cfa8b3b375d9d27baaaffc50ee80e54aac28\" returns successfully" Nov 5 15:52:00.330167 kubelet[2786]: I1105 15:52:00.330123 2786 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/600e642a-4a9d-43b4-99e9-77f1e45a228b-hostproc\") pod \"600e642a-4a9d-43b4-99e9-77f1e45a228b\" (UID: \"600e642a-4a9d-43b4-99e9-77f1e45a228b\") " Nov 5 15:52:00.330596 kubelet[2786]: I1105 15:52:00.330557 2786 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/600e642a-4a9d-43b4-99e9-77f1e45a228b-bpf-maps\") pod \"600e642a-4a9d-43b4-99e9-77f1e45a228b\" (UID: \"600e642a-4a9d-43b4-99e9-77f1e45a228b\") " Nov 5 15:52:00.330849 kubelet[2786]: I1105 15:52:00.330481 2786 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/600e642a-4a9d-43b4-99e9-77f1e45a228b-hostproc" (OuterVolumeSpecName: "hostproc") pod "600e642a-4a9d-43b4-99e9-77f1e45a228b" (UID: "600e642a-4a9d-43b4-99e9-77f1e45a228b"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 5 15:52:00.330849 kubelet[2786]: I1105 15:52:00.330820 2786 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/600e642a-4a9d-43b4-99e9-77f1e45a228b-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "600e642a-4a9d-43b4-99e9-77f1e45a228b" (UID: "600e642a-4a9d-43b4-99e9-77f1e45a228b"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 5 15:52:00.331089 kubelet[2786]: I1105 15:52:00.331040 2786 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/600e642a-4a9d-43b4-99e9-77f1e45a228b-host-proc-sys-kernel\") pod \"600e642a-4a9d-43b4-99e9-77f1e45a228b\" (UID: \"600e642a-4a9d-43b4-99e9-77f1e45a228b\") " Nov 5 15:52:00.331262 kubelet[2786]: I1105 15:52:00.331197 2786 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/600e642a-4a9d-43b4-99e9-77f1e45a228b-cilium-cgroup\") pod \"600e642a-4a9d-43b4-99e9-77f1e45a228b\" (UID: \"600e642a-4a9d-43b4-99e9-77f1e45a228b\") " Nov 5 15:52:00.331412 kubelet[2786]: I1105 15:52:00.331362 2786 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/600e642a-4a9d-43b4-99e9-77f1e45a228b-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "600e642a-4a9d-43b4-99e9-77f1e45a228b" (UID: "600e642a-4a9d-43b4-99e9-77f1e45a228b"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 5 15:52:00.332767 kubelet[2786]: I1105 15:52:00.331516 2786 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/600e642a-4a9d-43b4-99e9-77f1e45a228b-cilium-config-path\") pod \"600e642a-4a9d-43b4-99e9-77f1e45a228b\" (UID: \"600e642a-4a9d-43b4-99e9-77f1e45a228b\") " Nov 5 15:52:00.332767 kubelet[2786]: I1105 15:52:00.331553 2786 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/600e642a-4a9d-43b4-99e9-77f1e45a228b-lib-modules\") pod \"600e642a-4a9d-43b4-99e9-77f1e45a228b\" (UID: \"600e642a-4a9d-43b4-99e9-77f1e45a228b\") " Nov 5 15:52:00.332767 kubelet[2786]: I1105 15:52:00.331574 2786 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/600e642a-4a9d-43b4-99e9-77f1e45a228b-xtables-lock\") pod \"600e642a-4a9d-43b4-99e9-77f1e45a228b\" (UID: \"600e642a-4a9d-43b4-99e9-77f1e45a228b\") " Nov 5 15:52:00.332767 kubelet[2786]: I1105 15:52:00.331613 2786 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t4rqf\" (UniqueName: \"kubernetes.io/projected/600e642a-4a9d-43b4-99e9-77f1e45a228b-kube-api-access-t4rqf\") pod \"600e642a-4a9d-43b4-99e9-77f1e45a228b\" (UID: \"600e642a-4a9d-43b4-99e9-77f1e45a228b\") " Nov 5 15:52:00.332767 kubelet[2786]: I1105 15:52:00.331640 2786 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/600e642a-4a9d-43b4-99e9-77f1e45a228b-cilium-run\") pod \"600e642a-4a9d-43b4-99e9-77f1e45a228b\" (UID: \"600e642a-4a9d-43b4-99e9-77f1e45a228b\") " Nov 5 15:52:00.332767 kubelet[2786]: I1105 15:52:00.331668 2786 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" 
(UniqueName: \"kubernetes.io/projected/600e642a-4a9d-43b4-99e9-77f1e45a228b-hubble-tls\") pod \"600e642a-4a9d-43b4-99e9-77f1e45a228b\" (UID: \"600e642a-4a9d-43b4-99e9-77f1e45a228b\") " Nov 5 15:52:00.333004 kubelet[2786]: I1105 15:52:00.331689 2786 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/600e642a-4a9d-43b4-99e9-77f1e45a228b-cni-path\") pod \"600e642a-4a9d-43b4-99e9-77f1e45a228b\" (UID: \"600e642a-4a9d-43b4-99e9-77f1e45a228b\") " Nov 5 15:52:00.333004 kubelet[2786]: I1105 15:52:00.331721 2786 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/600e642a-4a9d-43b4-99e9-77f1e45a228b-clustermesh-secrets\") pod \"600e642a-4a9d-43b4-99e9-77f1e45a228b\" (UID: \"600e642a-4a9d-43b4-99e9-77f1e45a228b\") " Nov 5 15:52:00.333004 kubelet[2786]: I1105 15:52:00.331744 2786 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/600e642a-4a9d-43b4-99e9-77f1e45a228b-host-proc-sys-net\") pod \"600e642a-4a9d-43b4-99e9-77f1e45a228b\" (UID: \"600e642a-4a9d-43b4-99e9-77f1e45a228b\") " Nov 5 15:52:00.333004 kubelet[2786]: I1105 15:52:00.331775 2786 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/600e642a-4a9d-43b4-99e9-77f1e45a228b-etc-cni-netd\") pod \"600e642a-4a9d-43b4-99e9-77f1e45a228b\" (UID: \"600e642a-4a9d-43b4-99e9-77f1e45a228b\") " Nov 5 15:52:00.333004 kubelet[2786]: I1105 15:52:00.331841 2786 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/600e642a-4a9d-43b4-99e9-77f1e45a228b-hostproc\") on node \"ci-4487.0.1-2-254db4f49e\" DevicePath \"\"" Nov 5 15:52:00.333004 kubelet[2786]: I1105 15:52:00.331859 2786 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/600e642a-4a9d-43b4-99e9-77f1e45a228b-bpf-maps\") on node \"ci-4487.0.1-2-254db4f49e\" DevicePath \"\"" Nov 5 15:52:00.333004 kubelet[2786]: I1105 15:52:00.331871 2786 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/600e642a-4a9d-43b4-99e9-77f1e45a228b-host-proc-sys-kernel\") on node \"ci-4487.0.1-2-254db4f49e\" DevicePath \"\"" Nov 5 15:52:00.333183 kubelet[2786]: I1105 15:52:00.331910 2786 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/600e642a-4a9d-43b4-99e9-77f1e45a228b-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "600e642a-4a9d-43b4-99e9-77f1e45a228b" (UID: "600e642a-4a9d-43b4-99e9-77f1e45a228b"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 5 15:52:00.333183 kubelet[2786]: I1105 15:52:00.331960 2786 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/600e642a-4a9d-43b4-99e9-77f1e45a228b-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "600e642a-4a9d-43b4-99e9-77f1e45a228b" (UID: "600e642a-4a9d-43b4-99e9-77f1e45a228b"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 5 15:52:00.333183 kubelet[2786]: I1105 15:52:00.332806 2786 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/600e642a-4a9d-43b4-99e9-77f1e45a228b-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "600e642a-4a9d-43b4-99e9-77f1e45a228b" (UID: "600e642a-4a9d-43b4-99e9-77f1e45a228b"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 5 15:52:00.333183 kubelet[2786]: I1105 15:52:00.332854 2786 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/600e642a-4a9d-43b4-99e9-77f1e45a228b-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "600e642a-4a9d-43b4-99e9-77f1e45a228b" (UID: "600e642a-4a9d-43b4-99e9-77f1e45a228b"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 5 15:52:00.333183 kubelet[2786]: I1105 15:52:00.332878 2786 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/600e642a-4a9d-43b4-99e9-77f1e45a228b-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "600e642a-4a9d-43b4-99e9-77f1e45a228b" (UID: "600e642a-4a9d-43b4-99e9-77f1e45a228b"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 5 15:52:00.338225 kubelet[2786]: I1105 15:52:00.338146 2786 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/600e642a-4a9d-43b4-99e9-77f1e45a228b-cni-path" (OuterVolumeSpecName: "cni-path") pod "600e642a-4a9d-43b4-99e9-77f1e45a228b" (UID: "600e642a-4a9d-43b4-99e9-77f1e45a228b"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 5 15:52:00.342731 kubelet[2786]: I1105 15:52:00.342583 2786 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/600e642a-4a9d-43b4-99e9-77f1e45a228b-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "600e642a-4a9d-43b4-99e9-77f1e45a228b" (UID: "600e642a-4a9d-43b4-99e9-77f1e45a228b"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 5 15:52:00.343082 kubelet[2786]: I1105 15:52:00.343020 2786 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/600e642a-4a9d-43b4-99e9-77f1e45a228b-kube-api-access-t4rqf" (OuterVolumeSpecName: "kube-api-access-t4rqf") pod "600e642a-4a9d-43b4-99e9-77f1e45a228b" (UID: "600e642a-4a9d-43b4-99e9-77f1e45a228b"). InnerVolumeSpecName "kube-api-access-t4rqf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 5 15:52:00.348185 kubelet[2786]: I1105 15:52:00.348138 2786 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/600e642a-4a9d-43b4-99e9-77f1e45a228b-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "600e642a-4a9d-43b4-99e9-77f1e45a228b" (UID: "600e642a-4a9d-43b4-99e9-77f1e45a228b"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 5 15:52:00.348869 kubelet[2786]: I1105 15:52:00.348828 2786 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/600e642a-4a9d-43b4-99e9-77f1e45a228b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "600e642a-4a9d-43b4-99e9-77f1e45a228b" (UID: "600e642a-4a9d-43b4-99e9-77f1e45a228b"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 5 15:52:00.349267 kubelet[2786]: I1105 15:52:00.349232 2786 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/600e642a-4a9d-43b4-99e9-77f1e45a228b-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "600e642a-4a9d-43b4-99e9-77f1e45a228b" (UID: "600e642a-4a9d-43b4-99e9-77f1e45a228b"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 5 15:52:00.432623 kubelet[2786]: I1105 15:52:00.432077 2786 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5ec11585-2830-4ec3-85e4-bd5daee68c9a-cilium-config-path\") pod \"5ec11585-2830-4ec3-85e4-bd5daee68c9a\" (UID: \"5ec11585-2830-4ec3-85e4-bd5daee68c9a\") " Nov 5 15:52:00.432623 kubelet[2786]: I1105 15:52:00.432133 2786 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8flxf\" (UniqueName: \"kubernetes.io/projected/5ec11585-2830-4ec3-85e4-bd5daee68c9a-kube-api-access-8flxf\") pod \"5ec11585-2830-4ec3-85e4-bd5daee68c9a\" (UID: \"5ec11585-2830-4ec3-85e4-bd5daee68c9a\") " Nov 5 15:52:00.432623 kubelet[2786]: I1105 15:52:00.432173 2786 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/600e642a-4a9d-43b4-99e9-77f1e45a228b-cilium-cgroup\") on node \"ci-4487.0.1-2-254db4f49e\" DevicePath \"\"" Nov 5 15:52:00.432623 kubelet[2786]: I1105 15:52:00.432185 2786 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/600e642a-4a9d-43b4-99e9-77f1e45a228b-cilium-config-path\") on node \"ci-4487.0.1-2-254db4f49e\" DevicePath \"\"" Nov 5 15:52:00.432623 kubelet[2786]: I1105 15:52:00.432196 2786 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/600e642a-4a9d-43b4-99e9-77f1e45a228b-lib-modules\") on node \"ci-4487.0.1-2-254db4f49e\" DevicePath \"\"" Nov 5 15:52:00.432623 kubelet[2786]: I1105 15:52:00.432206 2786 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/600e642a-4a9d-43b4-99e9-77f1e45a228b-xtables-lock\") on node \"ci-4487.0.1-2-254db4f49e\" DevicePath \"\"" Nov 5 15:52:00.432623 kubelet[2786]: I1105 15:52:00.432215 2786 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-t4rqf\" (UniqueName: \"kubernetes.io/projected/600e642a-4a9d-43b4-99e9-77f1e45a228b-kube-api-access-t4rqf\") on node \"ci-4487.0.1-2-254db4f49e\" DevicePath \"\"" Nov 5 15:52:00.432966 kubelet[2786]: I1105 15:52:00.432226 2786 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/600e642a-4a9d-43b4-99e9-77f1e45a228b-cilium-run\") on node \"ci-4487.0.1-2-254db4f49e\" DevicePath \"\"" Nov 5 15:52:00.432966 kubelet[2786]: I1105 15:52:00.432239 2786 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/600e642a-4a9d-43b4-99e9-77f1e45a228b-hubble-tls\") on node \"ci-4487.0.1-2-254db4f49e\" DevicePath \"\"" Nov 5 15:52:00.432966 kubelet[2786]: I1105 15:52:00.432246 2786 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/600e642a-4a9d-43b4-99e9-77f1e45a228b-cni-path\") on node \"ci-4487.0.1-2-254db4f49e\" DevicePath \"\"" Nov 5 15:52:00.432966 kubelet[2786]: I1105 15:52:00.432254 2786 
reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/600e642a-4a9d-43b4-99e9-77f1e45a228b-clustermesh-secrets\") on node \"ci-4487.0.1-2-254db4f49e\" DevicePath \"\"" Nov 5 15:52:00.432966 kubelet[2786]: I1105 15:52:00.432265 2786 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/600e642a-4a9d-43b4-99e9-77f1e45a228b-host-proc-sys-net\") on node \"ci-4487.0.1-2-254db4f49e\" DevicePath \"\"" Nov 5 15:52:00.432966 kubelet[2786]: I1105 15:52:00.432274 2786 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/600e642a-4a9d-43b4-99e9-77f1e45a228b-etc-cni-netd\") on node \"ci-4487.0.1-2-254db4f49e\" DevicePath \"\"" Nov 5 15:52:00.436090 kubelet[2786]: I1105 15:52:00.436034 2786 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ec11585-2830-4ec3-85e4-bd5daee68c9a-kube-api-access-8flxf" (OuterVolumeSpecName: "kube-api-access-8flxf") pod "5ec11585-2830-4ec3-85e4-bd5daee68c9a" (UID: "5ec11585-2830-4ec3-85e4-bd5daee68c9a"). InnerVolumeSpecName "kube-api-access-8flxf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 5 15:52:00.436303 kubelet[2786]: I1105 15:52:00.436104 2786 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ec11585-2830-4ec3-85e4-bd5daee68c9a-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "5ec11585-2830-4ec3-85e4-bd5daee68c9a" (UID: "5ec11585-2830-4ec3-85e4-bd5daee68c9a"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 5 15:52:00.533547 kubelet[2786]: I1105 15:52:00.533394 2786 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5ec11585-2830-4ec3-85e4-bd5daee68c9a-cilium-config-path\") on node \"ci-4487.0.1-2-254db4f49e\" DevicePath \"\"" Nov 5 15:52:00.533547 kubelet[2786]: I1105 15:52:00.533432 2786 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8flxf\" (UniqueName: \"kubernetes.io/projected/5ec11585-2830-4ec3-85e4-bd5daee68c9a-kube-api-access-8flxf\") on node \"ci-4487.0.1-2-254db4f49e\" DevicePath \"\"" Nov 5 15:52:00.573151 kubelet[2786]: I1105 15:52:00.573098 2786 scope.go:117] "RemoveContainer" containerID="469b696d7d45599ed0a1386b3bae8404e9d5cb5b238cb4e649d5fdcfba7d6d20" Nov 5 15:52:00.582276 systemd[1]: Removed slice kubepods-besteffort-pod5ec11585_2830_4ec3_85e4_bd5daee68c9a.slice - libcontainer container kubepods-besteffort-pod5ec11585_2830_4ec3_85e4_bd5daee68c9a.slice. 
Nov 5 15:52:00.583985 containerd[1598]: time="2025-11-05T15:52:00.583784857Z" level=info msg="RemoveContainer for \"469b696d7d45599ed0a1386b3bae8404e9d5cb5b238cb4e649d5fdcfba7d6d20\"" Nov 5 15:52:00.594866 containerd[1598]: time="2025-11-05T15:52:00.594826571Z" level=info msg="RemoveContainer for \"469b696d7d45599ed0a1386b3bae8404e9d5cb5b238cb4e649d5fdcfba7d6d20\" returns successfully" Nov 5 15:52:00.602053 kubelet[2786]: I1105 15:52:00.602012 2786 scope.go:117] "RemoveContainer" containerID="469b696d7d45599ed0a1386b3bae8404e9d5cb5b238cb4e649d5fdcfba7d6d20" Nov 5 15:52:00.605191 containerd[1598]: time="2025-11-05T15:52:00.603994349Z" level=error msg="ContainerStatus for \"469b696d7d45599ed0a1386b3bae8404e9d5cb5b238cb4e649d5fdcfba7d6d20\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"469b696d7d45599ed0a1386b3bae8404e9d5cb5b238cb4e649d5fdcfba7d6d20\": not found" Nov 5 15:52:00.610359 kubelet[2786]: E1105 15:52:00.609986 2786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"469b696d7d45599ed0a1386b3bae8404e9d5cb5b238cb4e649d5fdcfba7d6d20\": not found" containerID="469b696d7d45599ed0a1386b3bae8404e9d5cb5b238cb4e649d5fdcfba7d6d20" Nov 5 15:52:00.610359 kubelet[2786]: I1105 15:52:00.610065 2786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"469b696d7d45599ed0a1386b3bae8404e9d5cb5b238cb4e649d5fdcfba7d6d20"} err="failed to get container status \"469b696d7d45599ed0a1386b3bae8404e9d5cb5b238cb4e649d5fdcfba7d6d20\": rpc error: code = NotFound desc = an error occurred when try to find container \"469b696d7d45599ed0a1386b3bae8404e9d5cb5b238cb4e649d5fdcfba7d6d20\": not found" Nov 5 15:52:00.610359 kubelet[2786]: I1105 15:52:00.610179 2786 scope.go:117] "RemoveContainer" containerID="7dacc059177b20e5fbb696e1a24c3ee88b11abeb62917c006267dd4a3f5c6919" Nov 5 15:52:00.616973 systemd[1]: Removed slice kubepods-burstable-pod600e642a_4a9d_43b4_99e9_77f1e45a228b.slice - libcontainer container kubepods-burstable-pod600e642a_4a9d_43b4_99e9_77f1e45a228b.slice. Nov 5 15:52:00.617163 systemd[1]: kubepods-burstable-pod600e642a_4a9d_43b4_99e9_77f1e45a228b.slice: Consumed 8.868s CPU time, 195.3M memory peak, 74.9M read from disk, 13.3M written to disk. 
Nov 5 15:52:00.628189 containerd[1598]: time="2025-11-05T15:52:00.628129256Z" level=info msg="RemoveContainer for \"7dacc059177b20e5fbb696e1a24c3ee88b11abeb62917c006267dd4a3f5c6919\"" Nov 5 15:52:00.637844 containerd[1598]: time="2025-11-05T15:52:00.637705281Z" level=info msg="RemoveContainer for \"7dacc059177b20e5fbb696e1a24c3ee88b11abeb62917c006267dd4a3f5c6919\" returns successfully" Nov 5 15:52:00.638207 kubelet[2786]: I1105 15:52:00.638104 2786 scope.go:117] "RemoveContainer" containerID="44706daef2ff86d33f5c3fb52d4f0d292e53909df54ef0990d161388ac35d6ae" Nov 5 15:52:00.641880 containerd[1598]: time="2025-11-05T15:52:00.641847052Z" level=info msg="RemoveContainer for \"44706daef2ff86d33f5c3fb52d4f0d292e53909df54ef0990d161388ac35d6ae\"" Nov 5 15:52:00.647201 containerd[1598]: time="2025-11-05T15:52:00.647118830Z" level=info msg="RemoveContainer for \"44706daef2ff86d33f5c3fb52d4f0d292e53909df54ef0990d161388ac35d6ae\" returns successfully" Nov 5 15:52:00.648212 kubelet[2786]: I1105 15:52:00.648174 2786 scope.go:117] "RemoveContainer" containerID="bd09933f076ab487df99ba4e46179a390831af0992d356f05ca6978b6a82cf94" Nov 5 15:52:00.652761 containerd[1598]: time="2025-11-05T15:52:00.652709302Z" level=info msg="RemoveContainer for \"bd09933f076ab487df99ba4e46179a390831af0992d356f05ca6978b6a82cf94\"" Nov 5 15:52:00.657201 containerd[1598]: time="2025-11-05T15:52:00.657137921Z" level=info msg="RemoveContainer for \"bd09933f076ab487df99ba4e46179a390831af0992d356f05ca6978b6a82cf94\" returns successfully" Nov 5 15:52:00.657572 kubelet[2786]: I1105 15:52:00.657532 2786 scope.go:117] "RemoveContainer" containerID="518062574cbab2c6ba832301c72b212958e7788c25a864f2c07c30bd21ce6bd3" Nov 5 15:52:00.660427 containerd[1598]: time="2025-11-05T15:52:00.660366155Z" level=info msg="RemoveContainer for \"518062574cbab2c6ba832301c72b212958e7788c25a864f2c07c30bd21ce6bd3\"" Nov 5 15:52:00.663734 containerd[1598]: time="2025-11-05T15:52:00.663693060Z" level=info msg="RemoveContainer for \"518062574cbab2c6ba832301c72b212958e7788c25a864f2c07c30bd21ce6bd3\" returns successfully" Nov 5 15:52:00.664142 kubelet[2786]: I1105 15:52:00.664111 2786 scope.go:117] "RemoveContainer" containerID="0a6b4503482af33f9a9e24fa1bcf4c93beb29de1fa0a4141dc46899843e082b8" Nov 5 15:52:00.667042 containerd[1598]: time="2025-11-05T15:52:00.667006637Z" level=info msg="RemoveContainer for \"0a6b4503482af33f9a9e24fa1bcf4c93beb29de1fa0a4141dc46899843e082b8\"" Nov 5 15:52:00.669829 containerd[1598]: time="2025-11-05T15:52:00.669792589Z" level=info msg="RemoveContainer for \"0a6b4503482af33f9a9e24fa1bcf4c93beb29de1fa0a4141dc46899843e082b8\" returns successfully" Nov 5 15:52:00.670068 kubelet[2786]: I1105 15:52:00.670041 2786 scope.go:117] "RemoveContainer" containerID="7dacc059177b20e5fbb696e1a24c3ee88b11abeb62917c006267dd4a3f5c6919" Nov 5 15:52:00.670340 containerd[1598]: time="2025-11-05T15:52:00.670277414Z" level=error msg="ContainerStatus for \"7dacc059177b20e5fbb696e1a24c3ee88b11abeb62917c006267dd4a3f5c6919\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7dacc059177b20e5fbb696e1a24c3ee88b11abeb62917c006267dd4a3f5c6919\": not found" Nov 5 15:52:00.670572 kubelet[2786]: E1105 15:52:00.670545 2786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7dacc059177b20e5fbb696e1a24c3ee88b11abeb62917c006267dd4a3f5c6919\": not found" containerID="7dacc059177b20e5fbb696e1a24c3ee88b11abeb62917c006267dd4a3f5c6919" Nov 5 
15:52:00.670640 kubelet[2786]: I1105 15:52:00.670577 2786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7dacc059177b20e5fbb696e1a24c3ee88b11abeb62917c006267dd4a3f5c6919"} err="failed to get container status \"7dacc059177b20e5fbb696e1a24c3ee88b11abeb62917c006267dd4a3f5c6919\": rpc error: code = NotFound desc = an error occurred when try to find container \"7dacc059177b20e5fbb696e1a24c3ee88b11abeb62917c006267dd4a3f5c6919\": not found" Nov 5 15:52:00.670640 kubelet[2786]: I1105 15:52:00.670609 2786 scope.go:117] "RemoveContainer" containerID="44706daef2ff86d33f5c3fb52d4f0d292e53909df54ef0990d161388ac35d6ae" Nov 5 15:52:00.670913 containerd[1598]: time="2025-11-05T15:52:00.670874294Z" level=error msg="ContainerStatus for \"44706daef2ff86d33f5c3fb52d4f0d292e53909df54ef0990d161388ac35d6ae\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"44706daef2ff86d33f5c3fb52d4f0d292e53909df54ef0990d161388ac35d6ae\": not found" Nov 5 15:52:00.671146 kubelet[2786]: E1105 15:52:00.671122 2786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"44706daef2ff86d33f5c3fb52d4f0d292e53909df54ef0990d161388ac35d6ae\": not found" containerID="44706daef2ff86d33f5c3fb52d4f0d292e53909df54ef0990d161388ac35d6ae" Nov 5 15:52:00.671214 kubelet[2786]: I1105 15:52:00.671149 2786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"44706daef2ff86d33f5c3fb52d4f0d292e53909df54ef0990d161388ac35d6ae"} err="failed to get container status \"44706daef2ff86d33f5c3fb52d4f0d292e53909df54ef0990d161388ac35d6ae\": rpc error: code = NotFound desc = an error occurred when try to find container \"44706daef2ff86d33f5c3fb52d4f0d292e53909df54ef0990d161388ac35d6ae\": not found" Nov 5 15:52:00.671214 kubelet[2786]: I1105 15:52:00.671172 2786 scope.go:117] "RemoveContainer" containerID="bd09933f076ab487df99ba4e46179a390831af0992d356f05ca6978b6a82cf94" Nov 5 15:52:00.671457 containerd[1598]: time="2025-11-05T15:52:00.671427962Z" level=error msg="ContainerStatus for \"bd09933f076ab487df99ba4e46179a390831af0992d356f05ca6978b6a82cf94\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bd09933f076ab487df99ba4e46179a390831af0992d356f05ca6978b6a82cf94\": not found" Nov 5 15:52:00.671578 kubelet[2786]: E1105 15:52:00.671555 2786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bd09933f076ab487df99ba4e46179a390831af0992d356f05ca6978b6a82cf94\": not found" containerID="bd09933f076ab487df99ba4e46179a390831af0992d356f05ca6978b6a82cf94" Nov 5 15:52:00.671614 kubelet[2786]: I1105 15:52:00.671578 2786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bd09933f076ab487df99ba4e46179a390831af0992d356f05ca6978b6a82cf94"} err="failed to get container status \"bd09933f076ab487df99ba4e46179a390831af0992d356f05ca6978b6a82cf94\": rpc error: code = NotFound desc = an error occurred when try to find container \"bd09933f076ab487df99ba4e46179a390831af0992d356f05ca6978b6a82cf94\": not found" Nov 5 15:52:00.671614 kubelet[2786]: I1105 15:52:00.671610 2786 scope.go:117] "RemoveContainer" containerID="518062574cbab2c6ba832301c72b212958e7788c25a864f2c07c30bd21ce6bd3" Nov 5 15:52:00.671898 containerd[1598]: time="2025-11-05T15:52:00.671867821Z" level=error 
msg="ContainerStatus for \"518062574cbab2c6ba832301c72b212958e7788c25a864f2c07c30bd21ce6bd3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"518062574cbab2c6ba832301c72b212958e7788c25a864f2c07c30bd21ce6bd3\": not found" Nov 5 15:52:00.672035 kubelet[2786]: E1105 15:52:00.672013 2786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"518062574cbab2c6ba832301c72b212958e7788c25a864f2c07c30bd21ce6bd3\": not found" containerID="518062574cbab2c6ba832301c72b212958e7788c25a864f2c07c30bd21ce6bd3" Nov 5 15:52:00.672120 kubelet[2786]: I1105 15:52:00.672052 2786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"518062574cbab2c6ba832301c72b212958e7788c25a864f2c07c30bd21ce6bd3"} err="failed to get container status \"518062574cbab2c6ba832301c72b212958e7788c25a864f2c07c30bd21ce6bd3\": rpc error: code = NotFound desc = an error occurred when try to find container \"518062574cbab2c6ba832301c72b212958e7788c25a864f2c07c30bd21ce6bd3\": not found" Nov 5 15:52:00.672120 kubelet[2786]: I1105 15:52:00.672068 2786 scope.go:117] "RemoveContainer" containerID="0a6b4503482af33f9a9e24fa1bcf4c93beb29de1fa0a4141dc46899843e082b8" Nov 5 15:52:00.672266 containerd[1598]: time="2025-11-05T15:52:00.672214391Z" level=error msg="ContainerStatus for \"0a6b4503482af33f9a9e24fa1bcf4c93beb29de1fa0a4141dc46899843e082b8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0a6b4503482af33f9a9e24fa1bcf4c93beb29de1fa0a4141dc46899843e082b8\": not found" Nov 5 15:52:00.672335 kubelet[2786]: E1105 15:52:00.672303 2786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0a6b4503482af33f9a9e24fa1bcf4c93beb29de1fa0a4141dc46899843e082b8\": not found" containerID="0a6b4503482af33f9a9e24fa1bcf4c93beb29de1fa0a4141dc46899843e082b8" Nov 5 15:52:00.672335 kubelet[2786]: I1105 15:52:00.672319 2786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0a6b4503482af33f9a9e24fa1bcf4c93beb29de1fa0a4141dc46899843e082b8"} err="failed to get container status \"0a6b4503482af33f9a9e24fa1bcf4c93beb29de1fa0a4141dc46899843e082b8\": rpc error: code = NotFound desc = an error occurred when try to find container \"0a6b4503482af33f9a9e24fa1bcf4c93beb29de1fa0a4141dc46899843e082b8\": not found" Nov 5 15:52:01.122891 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-708a2299ac97e4b0f835a65746c9cfa8b3b375d9d27baaaffc50ee80e54aac28-shm.mount: Deactivated successfully. Nov 5 15:52:01.123413 systemd[1]: var-lib-kubelet-pods-5ec11585\x2d2830\x2d4ec3\x2d85e4\x2dbd5daee68c9a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8flxf.mount: Deactivated successfully. Nov 5 15:52:01.123494 systemd[1]: var-lib-kubelet-pods-600e642a\x2d4a9d\x2d43b4\x2d99e9\x2d77f1e45a228b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dt4rqf.mount: Deactivated successfully. Nov 5 15:52:01.123559 systemd[1]: var-lib-kubelet-pods-600e642a\x2d4a9d\x2d43b4\x2d99e9\x2d77f1e45a228b-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Nov 5 15:52:01.123637 systemd[1]: var-lib-kubelet-pods-600e642a\x2d4a9d\x2d43b4\x2d99e9\x2d77f1e45a228b-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Nov 5 15:52:01.877778 sshd[4353]: Connection closed by 139.178.68.195 port 41820 Nov 5 15:52:01.877241 sshd-session[4350]: pam_unix(sshd:session): session closed for user core Nov 5 15:52:01.891176 systemd[1]: sshd@24-137.184.121.184:22-139.178.68.195:41820.service: Deactivated successfully. Nov 5 15:52:01.894857 systemd[1]: session-25.scope: Deactivated successfully. Nov 5 15:52:01.897028 systemd-logind[1568]: Session 25 logged out. Waiting for processes to exit. Nov 5 15:52:01.903481 systemd[1]: Started sshd@25-137.184.121.184:22-139.178.68.195:41834.service - OpenSSH per-connection server daemon (139.178.68.195:41834). Nov 5 15:52:01.906039 systemd-logind[1568]: Removed session 25. Nov 5 15:52:01.978827 sshd[4500]: Accepted publickey for core from 139.178.68.195 port 41834 ssh2: RSA SHA256:6pZ2eqROk+ALbQ+c/ul+tfC2zt1KpSHiHdkR7HgdI30 Nov 5 15:52:01.981835 sshd-session[4500]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:52:01.992357 systemd-logind[1568]: New session 26 of user core. Nov 5 15:52:01.997306 systemd[1]: Started session-26.scope - Session 26 of User core. Nov 5 15:52:02.211618 kubelet[2786]: I1105 15:52:02.211028 2786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ec11585-2830-4ec3-85e4-bd5daee68c9a" path="/var/lib/kubelet/pods/5ec11585-2830-4ec3-85e4-bd5daee68c9a/volumes" Nov 5 15:52:02.213209 kubelet[2786]: I1105 15:52:02.213164 2786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="600e642a-4a9d-43b4-99e9-77f1e45a228b" path="/var/lib/kubelet/pods/600e642a-4a9d-43b4-99e9-77f1e45a228b/volumes" Nov 5 15:52:02.329956 kubelet[2786]: I1105 15:52:02.329858 2786 setters.go:602] "Node became not ready" node="ci-4487.0.1-2-254db4f49e" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-05T15:52:02Z","lastTransitionTime":"2025-11-05T15:52:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Nov 5 15:52:02.823007 sshd[4503]: Connection closed by 139.178.68.195 port 41834 Nov 5 15:52:02.824197 sshd-session[4500]: pam_unix(sshd:session): session closed for user core Nov 5 15:52:02.845558 systemd[1]: sshd@25-137.184.121.184:22-139.178.68.195:41834.service: Deactivated successfully. Nov 5 15:52:02.854398 systemd[1]: session-26.scope: Deactivated successfully. Nov 5 15:52:02.856492 systemd-logind[1568]: Session 26 logged out. Waiting for processes to exit. Nov 5 15:52:02.866537 systemd[1]: Started sshd@26-137.184.121.184:22-139.178.68.195:41842.service - OpenSSH per-connection server daemon (139.178.68.195:41842). Nov 5 15:52:02.871996 systemd-logind[1568]: Removed session 26. Nov 5 15:52:02.886990 kubelet[2786]: I1105 15:52:02.886876 2786 memory_manager.go:355] "RemoveStaleState removing state" podUID="600e642a-4a9d-43b4-99e9-77f1e45a228b" containerName="cilium-agent" Nov 5 15:52:02.887404 kubelet[2786]: I1105 15:52:02.887189 2786 memory_manager.go:355] "RemoveStaleState removing state" podUID="5ec11585-2830-4ec3-85e4-bd5daee68c9a" containerName="cilium-operator" Nov 5 15:52:02.920482 systemd[1]: Created slice kubepods-burstable-podb16252c8_483b_4854_9afe_2f164005d07c.slice - libcontainer container kubepods-burstable-podb16252c8_483b_4854_9afe_2f164005d07c.slice. 
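Above, the kubelet flips the node's Ready condition to False with reason KubeletNotReady because the CNI plugin is not initialized: the cilium agent was just torn down and its replacement has not started yet. The condition can be read back with client-go; a short sketch under the assumption of a reachable kubeconfig, with the node name taken from this log:

    package main

    import (
        "context"
        "fmt"
        "log"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumes the default kubeconfig location; swap in rest.InClusterConfig() for in-cluster use.
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        node, err := cs.CoreV1().Nodes().Get(context.Background(), "ci-4487.0.1-2-254db4f49e", metav1.GetOptions{})
        if err != nil {
            log.Fatal(err)
        }
        // Print the Ready condition the kubelet set in the log line above.
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeReady {
                fmt.Printf("Ready=%s reason=%s message=%s\n", c.Status, c.Reason, c.Message)
            }
        }
    }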
Nov 5 15:52:02.946813 kubelet[2786]: I1105 15:52:02.946377 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b16252c8-483b-4854-9afe-2f164005d07c-hostproc\") pod \"cilium-mp4x9\" (UID: \"b16252c8-483b-4854-9afe-2f164005d07c\") " pod="kube-system/cilium-mp4x9" Nov 5 15:52:02.946813 kubelet[2786]: I1105 15:52:02.946431 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b16252c8-483b-4854-9afe-2f164005d07c-cni-path\") pod \"cilium-mp4x9\" (UID: \"b16252c8-483b-4854-9afe-2f164005d07c\") " pod="kube-system/cilium-mp4x9" Nov 5 15:52:02.946813 kubelet[2786]: I1105 15:52:02.946456 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b16252c8-483b-4854-9afe-2f164005d07c-cilium-run\") pod \"cilium-mp4x9\" (UID: \"b16252c8-483b-4854-9afe-2f164005d07c\") " pod="kube-system/cilium-mp4x9" Nov 5 15:52:02.946813 kubelet[2786]: I1105 15:52:02.946479 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b16252c8-483b-4854-9afe-2f164005d07c-cilium-cgroup\") pod \"cilium-mp4x9\" (UID: \"b16252c8-483b-4854-9afe-2f164005d07c\") " pod="kube-system/cilium-mp4x9" Nov 5 15:52:02.946813 kubelet[2786]: I1105 15:52:02.946501 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b16252c8-483b-4854-9afe-2f164005d07c-lib-modules\") pod \"cilium-mp4x9\" (UID: \"b16252c8-483b-4854-9afe-2f164005d07c\") " pod="kube-system/cilium-mp4x9" Nov 5 15:52:02.946813 kubelet[2786]: I1105 15:52:02.946519 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b16252c8-483b-4854-9afe-2f164005d07c-clustermesh-secrets\") pod \"cilium-mp4x9\" (UID: \"b16252c8-483b-4854-9afe-2f164005d07c\") " pod="kube-system/cilium-mp4x9" Nov 5 15:52:02.947139 kubelet[2786]: I1105 15:52:02.946534 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b16252c8-483b-4854-9afe-2f164005d07c-bpf-maps\") pod \"cilium-mp4x9\" (UID: \"b16252c8-483b-4854-9afe-2f164005d07c\") " pod="kube-system/cilium-mp4x9" Nov 5 15:52:02.947139 kubelet[2786]: I1105 15:52:02.946549 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b16252c8-483b-4854-9afe-2f164005d07c-cilium-config-path\") pod \"cilium-mp4x9\" (UID: \"b16252c8-483b-4854-9afe-2f164005d07c\") " pod="kube-system/cilium-mp4x9" Nov 5 15:52:02.947139 kubelet[2786]: I1105 15:52:02.946568 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b16252c8-483b-4854-9afe-2f164005d07c-host-proc-sys-kernel\") pod \"cilium-mp4x9\" (UID: \"b16252c8-483b-4854-9afe-2f164005d07c\") " pod="kube-system/cilium-mp4x9" Nov 5 15:52:02.947139 kubelet[2786]: I1105 15:52:02.946594 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/b16252c8-483b-4854-9afe-2f164005d07c-hubble-tls\") pod \"cilium-mp4x9\" (UID: \"b16252c8-483b-4854-9afe-2f164005d07c\") " pod="kube-system/cilium-mp4x9" Nov 5 15:52:02.947139 kubelet[2786]: I1105 15:52:02.946618 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b16252c8-483b-4854-9afe-2f164005d07c-xtables-lock\") pod \"cilium-mp4x9\" (UID: \"b16252c8-483b-4854-9afe-2f164005d07c\") " pod="kube-system/cilium-mp4x9" Nov 5 15:52:02.947139 kubelet[2786]: I1105 15:52:02.946637 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6q8dh\" (UniqueName: \"kubernetes.io/projected/b16252c8-483b-4854-9afe-2f164005d07c-kube-api-access-6q8dh\") pod \"cilium-mp4x9\" (UID: \"b16252c8-483b-4854-9afe-2f164005d07c\") " pod="kube-system/cilium-mp4x9" Nov 5 15:52:02.947290 kubelet[2786]: I1105 15:52:02.946652 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b16252c8-483b-4854-9afe-2f164005d07c-etc-cni-netd\") pod \"cilium-mp4x9\" (UID: \"b16252c8-483b-4854-9afe-2f164005d07c\") " pod="kube-system/cilium-mp4x9" Nov 5 15:52:02.947290 kubelet[2786]: I1105 15:52:02.946671 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b16252c8-483b-4854-9afe-2f164005d07c-cilium-ipsec-secrets\") pod \"cilium-mp4x9\" (UID: \"b16252c8-483b-4854-9afe-2f164005d07c\") " pod="kube-system/cilium-mp4x9" Nov 5 15:52:02.947290 kubelet[2786]: I1105 15:52:02.946693 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b16252c8-483b-4854-9afe-2f164005d07c-host-proc-sys-net\") pod \"cilium-mp4x9\" (UID: \"b16252c8-483b-4854-9afe-2f164005d07c\") " pod="kube-system/cilium-mp4x9" Nov 5 15:52:02.965589 sshd[4514]: Accepted publickey for core from 139.178.68.195 port 41842 ssh2: RSA SHA256:6pZ2eqROk+ALbQ+c/ul+tfC2zt1KpSHiHdkR7HgdI30 Nov 5 15:52:02.969671 sshd-session[4514]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:52:02.981251 systemd-logind[1568]: New session 27 of user core. Nov 5 15:52:02.987104 systemd[1]: Started session-27.scope - Session 27 of User core. Nov 5 15:52:03.052634 sshd[4517]: Connection closed by 139.178.68.195 port 41842 Nov 5 15:52:03.052634 sshd-session[4514]: pam_unix(sshd:session): session closed for user core Nov 5 15:52:03.111562 systemd[1]: sshd@26-137.184.121.184:22-139.178.68.195:41842.service: Deactivated successfully. Nov 5 15:52:03.117569 systemd[1]: session-27.scope: Deactivated successfully. Nov 5 15:52:03.120487 systemd-logind[1568]: Session 27 logged out. Waiting for processes to exit. Nov 5 15:52:03.124166 systemd-logind[1568]: Removed session 27. Nov 5 15:52:03.126349 systemd[1]: Started sshd@27-137.184.121.184:22-139.178.68.195:57888.service - OpenSSH per-connection server daemon (139.178.68.195:57888). 
Nov 5 15:52:03.197974 sshd[4528]: Accepted publickey for core from 139.178.68.195 port 57888 ssh2: RSA SHA256:6pZ2eqROk+ALbQ+c/ul+tfC2zt1KpSHiHdkR7HgdI30 Nov 5 15:52:03.200766 sshd-session[4528]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:52:03.204223 kubelet[2786]: E1105 15:52:03.203651 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 5 15:52:03.213034 systemd-logind[1568]: New session 28 of user core. Nov 5 15:52:03.220194 systemd[1]: Started session-28.scope - Session 28 of User core. Nov 5 15:52:03.229415 kubelet[2786]: E1105 15:52:03.229363 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 5 15:52:03.231991 containerd[1598]: time="2025-11-05T15:52:03.230479475Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mp4x9,Uid:b16252c8-483b-4854-9afe-2f164005d07c,Namespace:kube-system,Attempt:0,}" Nov 5 15:52:03.254850 containerd[1598]: time="2025-11-05T15:52:03.254759198Z" level=info msg="connecting to shim 25f928c5ec6982adeacabe527af07e9beffd282852a7bb91d7db6a3b5ea68eca" address="unix:///run/containerd/s/39f7dbbd52bd912a0d61c98ff5e8ce11598eac4462b0231e80a18fc672a4f723" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:52:03.286541 systemd[1]: Started cri-containerd-25f928c5ec6982adeacabe527af07e9beffd282852a7bb91d7db6a3b5ea68eca.scope - libcontainer container 25f928c5ec6982adeacabe527af07e9beffd282852a7bb91d7db6a3b5ea68eca. Nov 5 15:52:03.332722 containerd[1598]: time="2025-11-05T15:52:03.332677886Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mp4x9,Uid:b16252c8-483b-4854-9afe-2f164005d07c,Namespace:kube-system,Attempt:0,} returns sandbox id \"25f928c5ec6982adeacabe527af07e9beffd282852a7bb91d7db6a3b5ea68eca\"" Nov 5 15:52:03.336235 kubelet[2786]: E1105 15:52:03.336203 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 5 15:52:03.342095 containerd[1598]: time="2025-11-05T15:52:03.341704896Z" level=info msg="CreateContainer within sandbox \"25f928c5ec6982adeacabe527af07e9beffd282852a7bb91d7db6a3b5ea68eca\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Nov 5 15:52:03.363589 containerd[1598]: time="2025-11-05T15:52:03.362598095Z" level=info msg="Container 6950492916580f17a347c1394b5eb94b344b6928a9317e081397a86d501cc6a0: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:52:03.371700 containerd[1598]: time="2025-11-05T15:52:03.371654934Z" level=info msg="CreateContainer within sandbox \"25f928c5ec6982adeacabe527af07e9beffd282852a7bb91d7db6a3b5ea68eca\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"6950492916580f17a347c1394b5eb94b344b6928a9317e081397a86d501cc6a0\"" Nov 5 15:52:03.373975 containerd[1598]: time="2025-11-05T15:52:03.373921243Z" level=info msg="StartContainer for \"6950492916580f17a347c1394b5eb94b344b6928a9317e081397a86d501cc6a0\"" Nov 5 15:52:03.375203 containerd[1598]: time="2025-11-05T15:52:03.375159111Z" level=info msg="connecting to shim 6950492916580f17a347c1394b5eb94b344b6928a9317e081397a86d501cc6a0" address="unix:///run/containerd/s/39f7dbbd52bd912a0d61c98ff5e8ce11598eac4462b0231e80a18fc672a4f723" protocol=ttrpc version=3 
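The dns.go:153 warnings repeated through this section mean the resolv.conf handed to the pod sandbox listed more nameserver entries than the kubelet will apply; it keeps at most three (mirroring the libc limit) and drops the rest, which is why the "applied nameserver line" above shows only three addresses. A rough sketch of that trimming, a simplification rather than the kubelet's actual code:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    func main() {
        const maxNameservers = 3 // the cap the kubelet warning refers to

        f, err := os.Open("/etc/resolv.conf")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        defer f.Close()

        // Collect every "nameserver <addr>" line from the host's resolv.conf.
        var servers []string
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            fields := strings.Fields(sc.Text())
            if len(fields) >= 2 && fields[0] == "nameserver" {
                servers = append(servers, fields[1])
            }
        }

        // Keep the first three and report the rest as omitted, like the log line above.
        if len(servers) > maxNameservers {
            fmt.Printf("nameserver limits exceeded, omitting %v\n", servers[maxNameservers:])
            servers = servers[:maxNameservers]
        }
        fmt.Println("applied nameserver line:", strings.Join(servers, " "))
    }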
Nov 5 15:52:03.399690 systemd[1]: Started cri-containerd-6950492916580f17a347c1394b5eb94b344b6928a9317e081397a86d501cc6a0.scope - libcontainer container 6950492916580f17a347c1394b5eb94b344b6928a9317e081397a86d501cc6a0. Nov 5 15:52:03.468156 containerd[1598]: time="2025-11-05T15:52:03.468086539Z" level=info msg="StartContainer for \"6950492916580f17a347c1394b5eb94b344b6928a9317e081397a86d501cc6a0\" returns successfully" Nov 5 15:52:03.482689 systemd[1]: cri-containerd-6950492916580f17a347c1394b5eb94b344b6928a9317e081397a86d501cc6a0.scope: Deactivated successfully. Nov 5 15:52:03.483185 systemd[1]: cri-containerd-6950492916580f17a347c1394b5eb94b344b6928a9317e081397a86d501cc6a0.scope: Consumed 30ms CPU time, 9.3M memory peak, 2.9M read from disk. Nov 5 15:52:03.486911 containerd[1598]: time="2025-11-05T15:52:03.486825904Z" level=info msg="received exit event container_id:\"6950492916580f17a347c1394b5eb94b344b6928a9317e081397a86d501cc6a0\" id:\"6950492916580f17a347c1394b5eb94b344b6928a9317e081397a86d501cc6a0\" pid:4596 exited_at:{seconds:1762357923 nanos:486028669}" Nov 5 15:52:03.487420 containerd[1598]: time="2025-11-05T15:52:03.487261891Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6950492916580f17a347c1394b5eb94b344b6928a9317e081397a86d501cc6a0\" id:\"6950492916580f17a347c1394b5eb94b344b6928a9317e081397a86d501cc6a0\" pid:4596 exited_at:{seconds:1762357923 nanos:486028669}" Nov 5 15:52:03.612407 kubelet[2786]: E1105 15:52:03.612332 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 5 15:52:03.619499 containerd[1598]: time="2025-11-05T15:52:03.619093265Z" level=info msg="CreateContainer within sandbox \"25f928c5ec6982adeacabe527af07e9beffd282852a7bb91d7db6a3b5ea68eca\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Nov 5 15:52:03.630492 containerd[1598]: time="2025-11-05T15:52:03.630428771Z" level=info msg="Container cb01791efb283e21ab0c0e3affdb9226f802e89f73f5198acfff1aee6f159aed: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:52:03.637210 containerd[1598]: time="2025-11-05T15:52:03.637142805Z" level=info msg="CreateContainer within sandbox \"25f928c5ec6982adeacabe527af07e9beffd282852a7bb91d7db6a3b5ea68eca\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"cb01791efb283e21ab0c0e3affdb9226f802e89f73f5198acfff1aee6f159aed\"" Nov 5 15:52:03.638393 containerd[1598]: time="2025-11-05T15:52:03.638187693Z" level=info msg="StartContainer for \"cb01791efb283e21ab0c0e3affdb9226f802e89f73f5198acfff1aee6f159aed\"" Nov 5 15:52:03.644156 containerd[1598]: time="2025-11-05T15:52:03.644090364Z" level=info msg="connecting to shim cb01791efb283e21ab0c0e3affdb9226f802e89f73f5198acfff1aee6f159aed" address="unix:///run/containerd/s/39f7dbbd52bd912a0d61c98ff5e8ce11598eac4462b0231e80a18fc672a4f723" protocol=ttrpc version=3 Nov 5 15:52:03.678361 systemd[1]: Started cri-containerd-cb01791efb283e21ab0c0e3affdb9226f802e89f73f5198acfff1aee6f159aed.scope - libcontainer container cb01791efb283e21ab0c0e3affdb9226f802e89f73f5198acfff1aee6f159aed. 
Nov 5 15:52:03.717902 containerd[1598]: time="2025-11-05T15:52:03.717845320Z" level=info msg="StartContainer for \"cb01791efb283e21ab0c0e3affdb9226f802e89f73f5198acfff1aee6f159aed\" returns successfully" Nov 5 15:52:03.730913 systemd[1]: cri-containerd-cb01791efb283e21ab0c0e3affdb9226f802e89f73f5198acfff1aee6f159aed.scope: Deactivated successfully. Nov 5 15:52:03.731441 systemd[1]: cri-containerd-cb01791efb283e21ab0c0e3affdb9226f802e89f73f5198acfff1aee6f159aed.scope: Consumed 26ms CPU time, 7.3M memory peak, 2.2M read from disk. Nov 5 15:52:03.732705 containerd[1598]: time="2025-11-05T15:52:03.732539750Z" level=info msg="received exit event container_id:\"cb01791efb283e21ab0c0e3affdb9226f802e89f73f5198acfff1aee6f159aed\" id:\"cb01791efb283e21ab0c0e3affdb9226f802e89f73f5198acfff1aee6f159aed\" pid:4641 exited_at:{seconds:1762357923 nanos:731788211}" Nov 5 15:52:03.732907 containerd[1598]: time="2025-11-05T15:52:03.732876890Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cb01791efb283e21ab0c0e3affdb9226f802e89f73f5198acfff1aee6f159aed\" id:\"cb01791efb283e21ab0c0e3affdb9226f802e89f73f5198acfff1aee6f159aed\" pid:4641 exited_at:{seconds:1762357923 nanos:731788211}" Nov 5 15:52:04.617751 kubelet[2786]: E1105 15:52:04.617699 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 5 15:52:04.626485 containerd[1598]: time="2025-11-05T15:52:04.626428177Z" level=info msg="CreateContainer within sandbox \"25f928c5ec6982adeacabe527af07e9beffd282852a7bb91d7db6a3b5ea68eca\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Nov 5 15:52:04.648627 containerd[1598]: time="2025-11-05T15:52:04.647718970Z" level=info msg="Container 5a62345a15aadfa142993fc90f5f16f48a06721453a7f1927f20d7759ce6cd2f: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:52:04.660994 containerd[1598]: time="2025-11-05T15:52:04.660298341Z" level=info msg="CreateContainer within sandbox \"25f928c5ec6982adeacabe527af07e9beffd282852a7bb91d7db6a3b5ea68eca\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"5a62345a15aadfa142993fc90f5f16f48a06721453a7f1927f20d7759ce6cd2f\"" Nov 5 15:52:04.662417 containerd[1598]: time="2025-11-05T15:52:04.662025270Z" level=info msg="StartContainer for \"5a62345a15aadfa142993fc90f5f16f48a06721453a7f1927f20d7759ce6cd2f\"" Nov 5 15:52:04.665388 containerd[1598]: time="2025-11-05T15:52:04.665251811Z" level=info msg="connecting to shim 5a62345a15aadfa142993fc90f5f16f48a06721453a7f1927f20d7759ce6cd2f" address="unix:///run/containerd/s/39f7dbbd52bd912a0d61c98ff5e8ce11598eac4462b0231e80a18fc672a4f723" protocol=ttrpc version=3 Nov 5 15:52:04.698384 systemd[1]: Started cri-containerd-5a62345a15aadfa142993fc90f5f16f48a06721453a7f1927f20d7759ce6cd2f.scope - libcontainer container 5a62345a15aadfa142993fc90f5f16f48a06721453a7f1927f20d7759ce6cd2f. Nov 5 15:52:04.751196 containerd[1598]: time="2025-11-05T15:52:04.751039094Z" level=info msg="StartContainer for \"5a62345a15aadfa142993fc90f5f16f48a06721453a7f1927f20d7759ce6cd2f\" returns successfully" Nov 5 15:52:04.755771 systemd[1]: cri-containerd-5a62345a15aadfa142993fc90f5f16f48a06721453a7f1927f20d7759ce6cd2f.scope: Deactivated successfully. 
Nov 5 15:52:04.758731 containerd[1598]: time="2025-11-05T15:52:04.758681505Z" level=info msg="received exit event container_id:\"5a62345a15aadfa142993fc90f5f16f48a06721453a7f1927f20d7759ce6cd2f\" id:\"5a62345a15aadfa142993fc90f5f16f48a06721453a7f1927f20d7759ce6cd2f\" pid:4685 exited_at:{seconds:1762357924 nanos:758170633}" Nov 5 15:52:04.759736 containerd[1598]: time="2025-11-05T15:52:04.759676621Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5a62345a15aadfa142993fc90f5f16f48a06721453a7f1927f20d7759ce6cd2f\" id:\"5a62345a15aadfa142993fc90f5f16f48a06721453a7f1927f20d7759ce6cd2f\" pid:4685 exited_at:{seconds:1762357924 nanos:758170633}" Nov 5 15:52:04.793409 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5a62345a15aadfa142993fc90f5f16f48a06721453a7f1927f20d7759ce6cd2f-rootfs.mount: Deactivated successfully. Nov 5 15:52:05.283760 kubelet[2786]: E1105 15:52:05.283639 2786 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Nov 5 15:52:05.625448 kubelet[2786]: E1105 15:52:05.625108 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 5 15:52:05.631876 containerd[1598]: time="2025-11-05T15:52:05.631827409Z" level=info msg="CreateContainer within sandbox \"25f928c5ec6982adeacabe527af07e9beffd282852a7bb91d7db6a3b5ea68eca\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Nov 5 15:52:05.644977 containerd[1598]: time="2025-11-05T15:52:05.644750726Z" level=info msg="Container 4eb5d2c9a76133c8491164c21558bb8b2477489e1b8db6127d2192b7f82cafd4: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:52:05.651350 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount704212757.mount: Deactivated successfully. Nov 5 15:52:05.657593 containerd[1598]: time="2025-11-05T15:52:05.657532020Z" level=info msg="CreateContainer within sandbox \"25f928c5ec6982adeacabe527af07e9beffd282852a7bb91d7db6a3b5ea68eca\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"4eb5d2c9a76133c8491164c21558bb8b2477489e1b8db6127d2192b7f82cafd4\"" Nov 5 15:52:05.658905 containerd[1598]: time="2025-11-05T15:52:05.658727582Z" level=info msg="StartContainer for \"4eb5d2c9a76133c8491164c21558bb8b2477489e1b8db6127d2192b7f82cafd4\"" Nov 5 15:52:05.660452 containerd[1598]: time="2025-11-05T15:52:05.660407193Z" level=info msg="connecting to shim 4eb5d2c9a76133c8491164c21558bb8b2477489e1b8db6127d2192b7f82cafd4" address="unix:///run/containerd/s/39f7dbbd52bd912a0d61c98ff5e8ce11598eac4462b0231e80a18fc672a4f723" protocol=ttrpc version=3 Nov 5 15:52:05.710259 systemd[1]: Started cri-containerd-4eb5d2c9a76133c8491164c21558bb8b2477489e1b8db6127d2192b7f82cafd4.scope - libcontainer container 4eb5d2c9a76133c8491164c21558bb8b2477489e1b8db6127d2192b7f82cafd4. Nov 5 15:52:05.747980 systemd[1]: cri-containerd-4eb5d2c9a76133c8491164c21558bb8b2477489e1b8db6127d2192b7f82cafd4.scope: Deactivated successfully. 
Nov 5 15:52:05.750222 containerd[1598]: time="2025-11-05T15:52:05.750181887Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4eb5d2c9a76133c8491164c21558bb8b2477489e1b8db6127d2192b7f82cafd4\" id:\"4eb5d2c9a76133c8491164c21558bb8b2477489e1b8db6127d2192b7f82cafd4\" pid:4726 exited_at:{seconds:1762357925 nanos:749327252}" Nov 5 15:52:05.751167 containerd[1598]: time="2025-11-05T15:52:05.751022391Z" level=info msg="received exit event container_id:\"4eb5d2c9a76133c8491164c21558bb8b2477489e1b8db6127d2192b7f82cafd4\" id:\"4eb5d2c9a76133c8491164c21558bb8b2477489e1b8db6127d2192b7f82cafd4\" pid:4726 exited_at:{seconds:1762357925 nanos:749327252}" Nov 5 15:52:05.764323 containerd[1598]: time="2025-11-05T15:52:05.763283702Z" level=info msg="StartContainer for \"4eb5d2c9a76133c8491164c21558bb8b2477489e1b8db6127d2192b7f82cafd4\" returns successfully" Nov 5 15:52:05.779084 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4eb5d2c9a76133c8491164c21558bb8b2477489e1b8db6127d2192b7f82cafd4-rootfs.mount: Deactivated successfully. Nov 5 15:52:06.635086 kubelet[2786]: E1105 15:52:06.635033 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 5 15:52:06.644980 containerd[1598]: time="2025-11-05T15:52:06.642545655Z" level=info msg="CreateContainer within sandbox \"25f928c5ec6982adeacabe527af07e9beffd282852a7bb91d7db6a3b5ea68eca\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Nov 5 15:52:06.657427 containerd[1598]: time="2025-11-05T15:52:06.657348927Z" level=info msg="Container 8f47351f5cdd019882494424aa91bb869d67b0c3cca81a4063a6d3af53860ac9: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:52:06.667791 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2279006622.mount: Deactivated successfully. Nov 5 15:52:06.672250 containerd[1598]: time="2025-11-05T15:52:06.672187272Z" level=info msg="CreateContainer within sandbox \"25f928c5ec6982adeacabe527af07e9beffd282852a7bb91d7db6a3b5ea68eca\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"8f47351f5cdd019882494424aa91bb869d67b0c3cca81a4063a6d3af53860ac9\"" Nov 5 15:52:06.673844 containerd[1598]: time="2025-11-05T15:52:06.673306976Z" level=info msg="StartContainer for \"8f47351f5cdd019882494424aa91bb869d67b0c3cca81a4063a6d3af53860ac9\"" Nov 5 15:52:06.674705 containerd[1598]: time="2025-11-05T15:52:06.674652636Z" level=info msg="connecting to shim 8f47351f5cdd019882494424aa91bb869d67b0c3cca81a4063a6d3af53860ac9" address="unix:///run/containerd/s/39f7dbbd52bd912a0d61c98ff5e8ce11598eac4462b0231e80a18fc672a4f723" protocol=ttrpc version=3 Nov 5 15:52:06.719260 systemd[1]: Started cri-containerd-8f47351f5cdd019882494424aa91bb869d67b0c3cca81a4063a6d3af53860ac9.scope - libcontainer container 8f47351f5cdd019882494424aa91bb869d67b0c3cca81a4063a6d3af53860ac9. 
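Each of the short-lived containers above (mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state) is a cilium init container: containerd creates it in the 25f928c5... sandbox, it runs for a few tens of milliseconds, and its exit is published on containerd's event bus, which is what the paired "received exit event"/"TaskExit event" entries record before the long-running cilium-agent container starts. A minimal sketch of watching those exit events with the containerd Go client; the socket path and the k8s.io namespace are the usual CRI defaults, and exact package layouts vary a little between containerd client versions:

    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/containerd/containerd"
        apievents "github.com/containerd/containerd/api/events"
        "github.com/containerd/containerd/namespaces"
        "github.com/containerd/typeurl/v2"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        // The CRI plugin keeps Kubernetes containers in the "k8s.io" namespace.
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        // Subscribe only to task exit events, the topic behind the TaskExit log lines.
        envelopes, errs := client.Subscribe(ctx, `topic=="/tasks/exit"`)
        for {
            select {
            case env := <-envelopes:
                ev, err := typeurl.UnmarshalAny(env.Event)
                if err != nil {
                    continue
                }
                if exit, ok := ev.(*apievents.TaskExit); ok {
                    fmt.Printf("task %s (container %s) exited with status %d\n",
                        exit.ID, exit.ContainerID, exit.ExitStatus)
                }
            case err := <-errs:
                log.Fatal(err)
            }
        }
    }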
Nov 5 15:52:06.763911 containerd[1598]: time="2025-11-05T15:52:06.763842663Z" level=info msg="StartContainer for \"8f47351f5cdd019882494424aa91bb869d67b0c3cca81a4063a6d3af53860ac9\" returns successfully" Nov 5 15:52:06.877663 containerd[1598]: time="2025-11-05T15:52:06.877553996Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8f47351f5cdd019882494424aa91bb869d67b0c3cca81a4063a6d3af53860ac9\" id:\"987d823e4268aebef3b76868ab0881980062b7a9488400e9ffd1b07efaad62d3\" pid:4791 exited_at:{seconds:1762357926 nanos:876228267}" Nov 5 15:52:07.314012 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx)) Nov 5 15:52:07.642041 kubelet[2786]: E1105 15:52:07.641790 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 5 15:52:07.667452 kubelet[2786]: I1105 15:52:07.667365 2786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-mp4x9" podStartSLOduration=5.666705255 podStartE2EDuration="5.666705255s" podCreationTimestamp="2025-11-05 15:52:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 15:52:07.665923501 +0000 UTC m=+97.630773006" watchObservedRunningTime="2025-11-05 15:52:07.666705255 +0000 UTC m=+97.631554790" Nov 5 15:52:09.232464 kubelet[2786]: E1105 15:52:09.232407 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 5 15:52:09.868639 containerd[1598]: time="2025-11-05T15:52:09.868587764Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8f47351f5cdd019882494424aa91bb869d67b0c3cca81a4063a6d3af53860ac9\" id:\"3c0656112e13ea40e099d06d09e64f0f6143114c10bdf1a609bd4d3078ef0925\" pid:5009 exit_status:1 exited_at:{seconds:1762357929 nanos:868253885}" Nov 5 15:52:11.030625 systemd-networkd[1479]: lxc_health: Link UP Nov 5 15:52:11.038246 systemd-networkd[1479]: lxc_health: Gained carrier Nov 5 15:52:11.232256 kubelet[2786]: E1105 15:52:11.232206 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 5 15:52:11.655628 kubelet[2786]: E1105 15:52:11.655483 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 5 15:52:12.245144 containerd[1598]: time="2025-11-05T15:52:12.245090492Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8f47351f5cdd019882494424aa91bb869d67b0c3cca81a4063a6d3af53860ac9\" id:\"78dcb0d026d5d60ed339984321c4de51d2f3fd613860b5f60f4a41265f4921b2\" pid:5352 exited_at:{seconds:1762357932 nanos:242867164}" Nov 5 15:52:12.603366 systemd-networkd[1479]: lxc_health: Gained IPv6LL Nov 5 15:52:12.658957 kubelet[2786]: E1105 15:52:12.658902 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 5 15:52:14.431725 containerd[1598]: time="2025-11-05T15:52:14.431667560Z" level=info msg="TaskExit event in podsandbox handler 
container_id:\"8f47351f5cdd019882494424aa91bb869d67b0c3cca81a4063a6d3af53860ac9\" id:\"6e0c1d30be24b79fb33bc6a25190b73fa4f7703072c33b2a59c1d471c63c07e2\" pid:5382 exited_at:{seconds:1762357934 nanos:430762243}" Nov 5 15:52:16.594706 containerd[1598]: time="2025-11-05T15:52:16.594605796Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8f47351f5cdd019882494424aa91bb869d67b0c3cca81a4063a6d3af53860ac9\" id:\"9c1b914c7f4ec085ae4a56cae8a5b059a40d88e9334aff19dfcad0ecd9583e45\" pid:5407 exited_at:{seconds:1762357936 nanos:590112059}" Nov 5 15:52:16.609080 sshd[4532]: Connection closed by 139.178.68.195 port 57888 Nov 5 15:52:16.610494 sshd-session[4528]: pam_unix(sshd:session): session closed for user core Nov 5 15:52:16.628522 systemd-logind[1568]: Session 28 logged out. Waiting for processes to exit. Nov 5 15:52:16.628680 systemd[1]: sshd@27-137.184.121.184:22-139.178.68.195:57888.service: Deactivated successfully. Nov 5 15:52:16.633285 systemd[1]: session-28.scope: Deactivated successfully. Nov 5 15:52:16.637119 systemd-logind[1568]: Removed session 28.