Aug 12 23:55:40.941260 kernel: Linux version 6.6.100-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Tue Aug 12 21:47:31 -00 2025
Aug 12 23:55:40.941308 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=ca71ea747c3f0d1de8a5ffcd0cfb9d0a1a4c4755719a09093b0248fa3902b433
Aug 12 23:55:40.941324 kernel: BIOS-provided physical RAM map:
Aug 12 23:55:40.941331 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Aug 12 23:55:40.941338 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Aug 12 23:55:40.941345 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Aug 12 23:55:40.941353 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable
Aug 12 23:55:40.941360 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved
Aug 12 23:55:40.941367 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Aug 12 23:55:40.941374 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Aug 12 23:55:40.941384 kernel: NX (Execute Disable) protection: active
Aug 12 23:55:40.941391 kernel: APIC: Static calls initialized
Aug 12 23:55:40.941403 kernel: SMBIOS 2.8 present.
Aug 12 23:55:40.941411 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
Aug 12 23:55:40.941420 kernel: Hypervisor detected: KVM
Aug 12 23:55:40.941427 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Aug 12 23:55:40.941441 kernel: kvm-clock: using sched offset of 3054379623 cycles
Aug 12 23:55:40.941450 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Aug 12 23:55:40.941458 kernel: tsc: Detected 2494.140 MHz processor
Aug 12 23:55:40.941466 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Aug 12 23:55:40.941475 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Aug 12 23:55:40.941483 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
Aug 12 23:55:40.941491 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Aug 12 23:55:40.941505 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Aug 12 23:55:40.941516 kernel: ACPI: Early table checksum verification disabled
Aug 12 23:55:40.941524 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS )
Aug 12 23:55:40.941533 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 12 23:55:40.941540 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 12 23:55:40.941548 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 12 23:55:40.941556 kernel: ACPI: FACS 0x000000007FFE0000 000040
Aug 12 23:55:40.941565 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 12 23:55:40.941573 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 12 23:55:40.941581 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 12 23:55:40.941592 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 12 23:55:40.941600 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
Aug 12 23:55:40.941608 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769]
Aug 12 23:55:40.941616 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Aug 12 23:55:40.941624 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d]
Aug 12 23:55:40.941632 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895]
Aug 12 23:55:40.941640 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d]
Aug 12 23:55:40.941652 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985]
Aug 12 23:55:40.941663 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Aug 12 23:55:40.941672 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Aug 12 23:55:40.941680 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Aug 12 23:55:40.941689 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Aug 12 23:55:40.941699 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00000000-0x7ffdafff]
Aug 12 23:55:40.941708 kernel: NODE_DATA(0) allocated [mem 0x7ffd5000-0x7ffdafff]
Aug 12 23:55:40.941719 kernel: Zone ranges:
Aug 12 23:55:40.941727 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Aug 12 23:55:40.941736 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff]
Aug 12 23:55:40.941744 kernel: Normal empty
Aug 12 23:55:40.941752 kernel: Movable zone start for each node
Aug 12 23:55:40.941761 kernel: Early memory node ranges
Aug 12 23:55:40.941769 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Aug 12 23:55:40.941777 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff]
Aug 12 23:55:40.941786 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff]
Aug 12 23:55:40.941794 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Aug 12 23:55:40.941805 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Aug 12 23:55:40.941815 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges
Aug 12 23:55:40.941824 kernel: ACPI: PM-Timer IO Port: 0x608
Aug 12 23:55:40.941833 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Aug 12 23:55:40.941841 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Aug 12 23:55:40.941849 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Aug 12 23:55:40.941858 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Aug 12 23:55:40.941866 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Aug 12 23:55:40.941874 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Aug 12 23:55:40.941890 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Aug 12 23:55:40.941899 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Aug 12 23:55:40.941907 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Aug 12 23:55:40.941915 kernel: TSC deadline timer available
Aug 12 23:55:40.941923 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Aug 12 23:55:40.941932 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Aug 12 23:55:40.941940 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Aug 12 23:55:40.941951 kernel: Booting paravirtualized kernel on KVM
Aug 12 23:55:40.941960 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Aug 12 23:55:40.941971 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Aug 12 23:55:40.941979 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u1048576
Aug 12 23:55:40.941988 kernel: pcpu-alloc: s197096 r8192 d32280 u1048576 alloc=1*2097152
Aug 12 23:55:40.941996 kernel: pcpu-alloc: [0] 0 1
Aug 12 23:55:40.942004 kernel: kvm-guest: PV spinlocks disabled, no host support
Aug 12 23:55:40.942013 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=ca71ea747c3f0d1de8a5ffcd0cfb9d0a1a4c4755719a09093b0248fa3902b433
Aug 12 23:55:40.943082 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Aug 12 23:55:40.943094 kernel: random: crng init done
Aug 12 23:55:40.943109 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Aug 12 23:55:40.943118 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Aug 12 23:55:40.943127 kernel: Fallback order for Node 0: 0
Aug 12 23:55:40.943136 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515803
Aug 12 23:55:40.943144 kernel: Policy zone: DMA32
Aug 12 23:55:40.943153 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Aug 12 23:55:40.943162 kernel: Memory: 1969156K/2096612K available (14336K kernel code, 2295K rwdata, 22872K rodata, 43504K init, 1572K bss, 127196K reserved, 0K cma-reserved)
Aug 12 23:55:40.943171 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Aug 12 23:55:40.943180 kernel: Kernel/User page tables isolation: enabled
Aug 12 23:55:40.943191 kernel: ftrace: allocating 37942 entries in 149 pages
Aug 12 23:55:40.943199 kernel: ftrace: allocated 149 pages with 4 groups
Aug 12 23:55:40.943208 kernel: Dynamic Preempt: voluntary
Aug 12 23:55:40.943216 kernel: rcu: Preemptible hierarchical RCU implementation.
Aug 12 23:55:40.943230 kernel: rcu: RCU event tracing is enabled.
Aug 12 23:55:40.943239 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Aug 12 23:55:40.943248 kernel: Trampoline variant of Tasks RCU enabled.
Aug 12 23:55:40.943256 kernel: Rude variant of Tasks RCU enabled.
Aug 12 23:55:40.943265 kernel: Tracing variant of Tasks RCU enabled.
Aug 12 23:55:40.943276 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Aug 12 23:55:40.943285 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Aug 12 23:55:40.943293 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Aug 12 23:55:40.943302 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Aug 12 23:55:40.943315 kernel: Console: colour VGA+ 80x25
Aug 12 23:55:40.943324 kernel: printk: console [tty0] enabled
Aug 12 23:55:40.943332 kernel: printk: console [ttyS0] enabled
Aug 12 23:55:40.943341 kernel: ACPI: Core revision 20230628
Aug 12 23:55:40.943350 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Aug 12 23:55:40.943361 kernel: APIC: Switch to symmetric I/O mode setup
Aug 12 23:55:40.943370 kernel: x2apic enabled
Aug 12 23:55:40.943378 kernel: APIC: Switched APIC routing to: physical x2apic
Aug 12 23:55:40.943387 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Aug 12 23:55:40.943396 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f39a1d859, max_idle_ns: 440795326830 ns
Aug 12 23:55:40.943404 kernel: Calibrating delay loop (skipped) preset value.. 4988.28 BogoMIPS (lpj=2494140)
Aug 12 23:55:40.943439 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Aug 12 23:55:40.943448 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Aug 12 23:55:40.943468 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Aug 12 23:55:40.943477 kernel: Spectre V2 : Mitigation: Retpolines
Aug 12 23:55:40.943486 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Aug 12 23:55:40.943495 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Aug 12 23:55:40.943506 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Aug 12 23:55:40.943515 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Aug 12 23:55:40.943524 kernel: MDS: Mitigation: Clear CPU buffers
Aug 12 23:55:40.943533 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Aug 12 23:55:40.943542 kernel: ITS: Mitigation: Aligned branch/return thunks
Aug 12 23:55:40.943557 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Aug 12 23:55:40.943567 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Aug 12 23:55:40.943576 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Aug 12 23:55:40.943585 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Aug 12 23:55:40.943594 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Aug 12 23:55:40.943603 kernel: Freeing SMP alternatives memory: 32K
Aug 12 23:55:40.943612 kernel: pid_max: default: 32768 minimum: 301
Aug 12 23:55:40.943621 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Aug 12 23:55:40.943632 kernel: landlock: Up and running.
Aug 12 23:55:40.943641 kernel: SELinux: Initializing.
Aug 12 23:55:40.943650 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Aug 12 23:55:40.943659 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Aug 12 23:55:40.943668 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
Aug 12 23:55:40.943677 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Aug 12 23:55:40.943686 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Aug 12 23:55:40.943695 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Aug 12 23:55:40.943704 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
Aug 12 23:55:40.943716 kernel: signal: max sigframe size: 1776
Aug 12 23:55:40.943725 kernel: rcu: Hierarchical SRCU implementation.
Aug 12 23:55:40.943734 kernel: rcu: Max phase no-delay instances is 400.
Aug 12 23:55:40.943743 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Aug 12 23:55:40.943752 kernel: smp: Bringing up secondary CPUs ...
Aug 12 23:55:40.943760 kernel: smpboot: x86: Booting SMP configuration:
Aug 12 23:55:40.943769 kernel: .... node #0, CPUs: #1
Aug 12 23:55:40.943778 kernel: smp: Brought up 1 node, 2 CPUs
Aug 12 23:55:40.943789 kernel: smpboot: Max logical packages: 1
Aug 12 23:55:40.943801 kernel: smpboot: Total of 2 processors activated (9976.56 BogoMIPS)
Aug 12 23:55:40.943810 kernel: devtmpfs: initialized
Aug 12 23:55:40.943819 kernel: x86/mm: Memory block size: 128MB
Aug 12 23:55:40.943828 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Aug 12 23:55:40.943837 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Aug 12 23:55:40.943846 kernel: pinctrl core: initialized pinctrl subsystem
Aug 12 23:55:40.943855 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Aug 12 23:55:40.943863 kernel: audit: initializing netlink subsys (disabled)
Aug 12 23:55:40.943873 kernel: audit: type=2000 audit(1755042939.375:1): state=initialized audit_enabled=0 res=1
Aug 12 23:55:40.943884 kernel: thermal_sys: Registered thermal governor 'step_wise'
Aug 12 23:55:40.943893 kernel: thermal_sys: Registered thermal governor 'user_space'
Aug 12 23:55:40.943902 kernel: cpuidle: using governor menu
Aug 12 23:55:40.943911 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Aug 12 23:55:40.943919 kernel: dca service started, version 1.12.1
Aug 12 23:55:40.943928 kernel: PCI: Using configuration type 1 for base access
Aug 12 23:55:40.943937 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Aug 12 23:55:40.943946 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Aug 12 23:55:40.943955 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Aug 12 23:55:40.943966 kernel: ACPI: Added _OSI(Module Device)
Aug 12 23:55:40.943975 kernel: ACPI: Added _OSI(Processor Device)
Aug 12 23:55:40.943984 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Aug 12 23:55:40.943993 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Aug 12 23:55:40.944002 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Aug 12 23:55:40.944011 kernel: ACPI: Interpreter enabled
Aug 12 23:55:40.945121 kernel: ACPI: PM: (supports S0 S5)
Aug 12 23:55:40.945139 kernel: ACPI: Using IOAPIC for interrupt routing
Aug 12 23:55:40.945148 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Aug 12 23:55:40.945163 kernel: PCI: Using E820 reservations for host bridge windows
Aug 12 23:55:40.945173 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Aug 12 23:55:40.945182 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Aug 12 23:55:40.945389 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Aug 12 23:55:40.945516 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Aug 12 23:55:40.945672 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Aug 12 23:55:40.945691 kernel: acpiphp: Slot [3] registered
Aug 12 23:55:40.945712 kernel: acpiphp: Slot [4] registered
Aug 12 23:55:40.945726 kernel: acpiphp: Slot [5] registered
Aug 12 23:55:40.945741 kernel: acpiphp: Slot [6] registered
Aug 12 23:55:40.945756 kernel: acpiphp: Slot [7] registered
Aug 12 23:55:40.945771 kernel: acpiphp: Slot [8] registered
Aug 12 23:55:40.945788 kernel: acpiphp: Slot [9] registered
Aug 12 23:55:40.945804 kernel: acpiphp: Slot [10] registered
Aug 12 23:55:40.945819 kernel: acpiphp: Slot [11] registered
Aug 12 23:55:40.945834 kernel: acpiphp: Slot [12] registered
Aug 12 23:55:40.945850 kernel: acpiphp: Slot [13] registered
Aug 12 23:55:40.945870 kernel: acpiphp: Slot [14] registered
Aug 12 23:55:40.945902 kernel: acpiphp: Slot [15] registered
Aug 12 23:55:40.945917 kernel: acpiphp: Slot [16] registered
Aug 12 23:55:40.945932 kernel: acpiphp: Slot [17] registered
Aug 12 23:55:40.945948 kernel: acpiphp: Slot [18] registered
Aug 12 23:55:40.945963 kernel: acpiphp: Slot [19] registered
Aug 12 23:55:40.945978 kernel: acpiphp: Slot [20] registered
Aug 12 23:55:40.945993 kernel: acpiphp: Slot [21] registered
Aug 12 23:55:40.946009 kernel: acpiphp: Slot [22] registered
Aug 12 23:55:40.946052 kernel: acpiphp: Slot [23] registered
Aug 12 23:55:40.946068 kernel: acpiphp: Slot [24] registered
Aug 12 23:55:40.946083 kernel: acpiphp: Slot [25] registered
Aug 12 23:55:40.946098 kernel: acpiphp: Slot [26] registered
Aug 12 23:55:40.946112 kernel: acpiphp: Slot [27] registered
Aug 12 23:55:40.946127 kernel: acpiphp: Slot [28] registered
Aug 12 23:55:40.946142 kernel: acpiphp: Slot [29] registered
Aug 12 23:55:40.946158 kernel: acpiphp: Slot [30] registered
Aug 12 23:55:40.946173 kernel: acpiphp: Slot [31] registered
Aug 12 23:55:40.946188 kernel: PCI host bridge to bus 0000:00
Aug 12 23:55:40.946363 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Aug 12 23:55:40.946459 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Aug 12 23:55:40.946548 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Aug 12 23:55:40.946636 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Aug 12 23:55:40.946723 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Aug 12 23:55:40.946839 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Aug 12 23:55:40.946966 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Aug 12 23:55:40.949286 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Aug 12 23:55:40.949523 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Aug 12 23:55:40.949684 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef]
Aug 12 23:55:40.949848 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Aug 12 23:55:40.949996 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Aug 12 23:55:40.951238 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Aug 12 23:55:40.951419 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Aug 12 23:55:40.951595 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300
Aug 12 23:55:40.951777 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f]
Aug 12 23:55:40.951954 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Aug 12 23:55:40.954106 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Aug 12 23:55:40.954227 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Aug 12 23:55:40.954371 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Aug 12 23:55:40.954472 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Aug 12 23:55:40.954571 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Aug 12 23:55:40.954674 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff]
Aug 12 23:55:40.954826 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Aug 12 23:55:40.954974 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Aug 12 23:55:40.955175 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Aug 12 23:55:40.955288 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf]
Aug 12 23:55:40.955385 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff]
Aug 12 23:55:40.955481 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Aug 12 23:55:40.955594 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Aug 12 23:55:40.955693 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df]
Aug 12 23:55:40.955790 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff]
Aug 12 23:55:40.955893 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Aug 12 23:55:40.956011 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000
Aug 12 23:55:40.957761 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f]
Aug 12 23:55:40.957872 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff]
Aug 12 23:55:40.957973 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Aug 12 23:55:40.959106 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000
Aug 12 23:55:40.959225 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f]
Aug 12 23:55:40.959325 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff]
Aug 12 23:55:40.959431 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Aug 12 23:55:40.959546 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000
Aug 12 23:55:40.959646 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff]
Aug 12 23:55:40.959743 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff]
Aug 12 23:55:40.959840 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref]
Aug 12 23:55:40.961091 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00
Aug 12 23:55:40.961233 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f]
Aug 12 23:55:40.961345 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref]
Aug 12 23:55:40.961358 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Aug 12 23:55:40.961368 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Aug 12 23:55:40.961378 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Aug 12 23:55:40.961387 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Aug 12 23:55:40.961397 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Aug 12 23:55:40.961406 kernel: iommu: Default domain type: Translated
Aug 12 23:55:40.961419 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Aug 12 23:55:40.961428 kernel: PCI: Using ACPI for IRQ routing
Aug 12 23:55:40.961437 kernel: PCI: pci_cache_line_size set to 64 bytes
Aug 12 23:55:40.961446 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Aug 12 23:55:40.961455 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff]
Aug 12 23:55:40.961557 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Aug 12 23:55:40.961734 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Aug 12 23:55:40.961879 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Aug 12 23:55:40.961899 kernel: vgaarb: loaded
Aug 12 23:55:40.961909 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Aug 12 23:55:40.961918 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Aug 12 23:55:40.961928 kernel: clocksource: Switched to clocksource kvm-clock
Aug 12 23:55:40.961937 kernel: VFS: Disk quotas dquot_6.6.0
Aug 12 23:55:40.961946 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Aug 12 23:55:40.961955 kernel: pnp: PnP ACPI init
Aug 12 23:55:40.961965 kernel: pnp: PnP ACPI: found 4 devices
Aug 12 23:55:40.961974 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Aug 12 23:55:40.961986 kernel: NET: Registered PF_INET protocol family
Aug 12 23:55:40.961995 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Aug 12 23:55:40.962004 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Aug 12 23:55:40.962013 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Aug 12 23:55:40.962035 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Aug 12 23:55:40.962292 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Aug 12 23:55:40.962302 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Aug 12 23:55:40.962311 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Aug 12 23:55:40.962320 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Aug 12 23:55:40.962333 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Aug 12 23:55:40.962342 kernel: NET: Registered PF_XDP protocol family
Aug 12 23:55:40.962458 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Aug 12 23:55:40.962570 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Aug 12 23:55:40.962689 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Aug 12 23:55:40.962778 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Aug 12 23:55:40.962873 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Aug 12 23:55:40.962986 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Aug 12 23:55:40.963246 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Aug 12 23:55:40.963264 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Aug 12 23:55:40.963377 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7a0 took 31069 usecs
Aug 12 23:55:40.963393 kernel: PCI: CLS 0 bytes, default 64
Aug 12 23:55:40.963404 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Aug 12 23:55:40.963413 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f39a1d859, max_idle_ns: 440795326830 ns
Aug 12 23:55:40.963423 kernel: Initialise system trusted keyrings
Aug 12 23:55:40.963432 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Aug 12 23:55:40.963441 kernel: Key type asymmetric registered
Aug 12 23:55:40.963456 kernel: Asymmetric key parser 'x509' registered
Aug 12 23:55:40.963465 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Aug 12 23:55:40.963474 kernel: io scheduler mq-deadline registered
Aug 12 23:55:40.963483 kernel: io scheduler kyber registered
Aug 12 23:55:40.963493 kernel: io scheduler bfq registered
Aug 12 23:55:40.963502 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Aug 12 23:55:40.963512 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Aug 12 23:55:40.963521 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Aug 12 23:55:40.963530 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Aug 12 23:55:40.963542 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Aug 12 23:55:40.963551 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Aug 12 23:55:40.963560 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Aug 12 23:55:40.963569 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Aug 12 23:55:40.963579 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Aug 12 23:55:40.963727 kernel: rtc_cmos 00:03: RTC can wake from S4
Aug 12 23:55:40.963743 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Aug 12 23:55:40.963833 kernel: rtc_cmos 00:03: registered as rtc0
Aug 12 23:55:40.963934 kernel: rtc_cmos 00:03: setting system clock to 2025-08-12T23:55:40 UTC (1755042940)
Aug 12 23:55:40.964038 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Aug 12 23:55:40.964063 kernel: intel_pstate: CPU model not supported
Aug 12 23:55:40.964072 kernel: NET: Registered PF_INET6 protocol family
Aug 12 23:55:40.964081 kernel: Segment Routing with IPv6
Aug 12 23:55:40.964090 kernel: In-situ OAM (IOAM) with IPv6
Aug 12 23:55:40.964099 kernel: NET: Registered PF_PACKET protocol family
Aug 12 23:55:40.964108 kernel: Key type dns_resolver registered
Aug 12 23:55:40.964117 kernel: IPI shorthand broadcast: enabled
Aug 12 23:55:40.964130 kernel: sched_clock: Marking stable (941003027, 89562764)->(1125365359, -94799568)
Aug 12 23:55:40.964140 kernel: registered taskstats version 1
Aug 12 23:55:40.964149 kernel: Loading compiled-in X.509 certificates
Aug 12 23:55:40.964158 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.100-flatcar: dfd2b306eb54324ea79eea0261f8d493924aeeeb'
Aug 12 23:55:40.964167 kernel: Key type .fscrypt registered
Aug 12 23:55:40.964176 kernel: Key type fscrypt-provisioning registered
Aug 12 23:55:40.964185 kernel: ima: No TPM chip found, activating TPM-bypass!
Aug 12 23:55:40.964194 kernel: ima: Allocated hash algorithm: sha1
Aug 12 23:55:40.964207 kernel: ima: No architecture policies found
Aug 12 23:55:40.964216 kernel: clk: Disabling unused clocks
Aug 12 23:55:40.964225 kernel: Freeing unused kernel image (initmem) memory: 43504K
Aug 12 23:55:40.964234 kernel: Write protecting the kernel read-only data: 38912k
Aug 12 23:55:40.964243 kernel: Freeing unused kernel image (rodata/data gap) memory: 1704K
Aug 12 23:55:40.964270 kernel: Run /init as init process
Aug 12 23:55:40.964282 kernel: with arguments:
Aug 12 23:55:40.964292 kernel: /init
Aug 12 23:55:40.964301 kernel: with environment:
Aug 12 23:55:40.964313 kernel: HOME=/
Aug 12 23:55:40.964322 kernel: TERM=linux
Aug 12 23:55:40.964331 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Aug 12 23:55:40.964342 systemd[1]: Successfully made /usr/ read-only.
Aug 12 23:55:40.964356 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Aug 12 23:55:40.964367 systemd[1]: Detected virtualization kvm.
Aug 12 23:55:40.964376 systemd[1]: Detected architecture x86-64.
Aug 12 23:55:40.964386 systemd[1]: Running in initrd.
Aug 12 23:55:40.964398 systemd[1]: No hostname configured, using default hostname.
Aug 12 23:55:40.964410 systemd[1]: Hostname set to .
Aug 12 23:55:40.964425 systemd[1]: Initializing machine ID from VM UUID.
Aug 12 23:55:40.964439 systemd[1]: Queued start job for default target initrd.target.
Aug 12 23:55:40.964455 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 12 23:55:40.964466 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 12 23:55:40.964477 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Aug 12 23:55:40.964487 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug 12 23:55:40.964501 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Aug 12 23:55:40.964512 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Aug 12 23:55:40.964526 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Aug 12 23:55:40.964536 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Aug 12 23:55:40.964546 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 12 23:55:40.964556 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Aug 12 23:55:40.964565 systemd[1]: Reached target paths.target - Path Units.
Aug 12 23:55:40.964578 systemd[1]: Reached target slices.target - Slice Units.
Aug 12 23:55:40.964589 systemd[1]: Reached target swap.target - Swaps.
Aug 12 23:55:40.964602 systemd[1]: Reached target timers.target - Timer Units.
Aug 12 23:55:40.964612 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Aug 12 23:55:40.964622 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Aug 12 23:55:40.964634 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Aug 12 23:55:40.964644 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Aug 12 23:55:40.964654 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Aug 12 23:55:40.964664 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Aug 12 23:55:40.964674 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Aug 12 23:55:40.964683 systemd[1]: Reached target sockets.target - Socket Units. Aug 12 23:55:40.964694 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Aug 12 23:55:40.964703 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Aug 12 23:55:40.964713 systemd[1]: Finished network-cleanup.service - Network Cleanup. Aug 12 23:55:40.964726 systemd[1]: Starting systemd-fsck-usr.service... Aug 12 23:55:40.964736 systemd[1]: Starting systemd-journald.service - Journal Service... Aug 12 23:55:40.964746 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Aug 12 23:55:40.964756 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 12 23:55:40.964765 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Aug 12 23:55:40.964775 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Aug 12 23:55:40.964827 systemd-journald[183]: Collecting audit messages is disabled. Aug 12 23:55:40.964881 systemd[1]: Finished systemd-fsck-usr.service. Aug 12 23:55:40.964896 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Aug 12 23:55:40.964908 systemd-journald[183]: Journal started Aug 12 23:55:40.964935 systemd-journald[183]: Runtime Journal (/run/log/journal/fb81189b58dd4a7c8fd48a8bc1aa1d52) is 4.9M, max 39.3M, 34.4M free. Aug 12 23:55:40.936368 systemd-modules-load[184]: Inserted module 'overlay' Aug 12 23:55:41.011800 systemd[1]: Started systemd-journald.service - Journal Service. 
Aug 12 23:55:41.011855 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Aug 12 23:55:41.011896 kernel: Bridge firewalling registered Aug 12 23:55:40.986568 systemd-modules-load[184]: Inserted module 'br_netfilter' Aug 12 23:55:41.011781 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Aug 12 23:55:41.012671 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 12 23:55:41.017540 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Aug 12 23:55:41.026392 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 12 23:55:41.028319 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 12 23:55:41.038735 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Aug 12 23:55:41.046426 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Aug 12 23:55:41.062404 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 12 23:55:41.065529 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 12 23:55:41.070884 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 12 23:55:41.078425 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Aug 12 23:55:41.080759 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 12 23:55:41.091278 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Aug 12 23:55:41.103617 dracut-cmdline[218]: dracut-dracut-053 Aug 12 23:55:41.108462 dracut-cmdline[218]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=ca71ea747c3f0d1de8a5ffcd0cfb9d0a1a4c4755719a09093b0248fa3902b433 Aug 12 23:55:41.140865 systemd-resolved[221]: Positive Trust Anchors: Aug 12 23:55:41.140885 systemd-resolved[221]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 12 23:55:41.140924 systemd-resolved[221]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Aug 12 23:55:41.145604 systemd-resolved[221]: Defaulting to hostname 'linux'. Aug 12 23:55:41.147824 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Aug 12 23:55:41.148415 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Aug 12 23:55:41.204075 kernel: SCSI subsystem initialized Aug 12 23:55:41.215055 kernel: Loading iSCSI transport class v2.0-870. 
Aug 12 23:55:41.227066 kernel: iscsi: registered transport (tcp) Aug 12 23:55:41.250068 kernel: iscsi: registered transport (qla4xxx) Aug 12 23:55:41.250158 kernel: QLogic iSCSI HBA Driver Aug 12 23:55:41.306747 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Aug 12 23:55:41.312373 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Aug 12 23:55:41.343067 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Aug 12 23:55:41.343165 kernel: device-mapper: uevent: version 1.0.3 Aug 12 23:55:41.344346 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Aug 12 23:55:41.391098 kernel: raid6: avx2x4 gen() 16441 MB/s Aug 12 23:55:41.406081 kernel: raid6: avx2x2 gen() 17550 MB/s Aug 12 23:55:41.423475 kernel: raid6: avx2x1 gen() 13198 MB/s Aug 12 23:55:41.423605 kernel: raid6: using algorithm avx2x2 gen() 17550 MB/s Aug 12 23:55:41.441236 kernel: raid6: .... xor() 18708 MB/s, rmw enabled Aug 12 23:55:41.441354 kernel: raid6: using avx2x2 recovery algorithm Aug 12 23:55:41.463077 kernel: xor: automatically using best checksumming function avx Aug 12 23:55:41.635119 kernel: Btrfs loaded, zoned=no, fsverity=no Aug 12 23:55:41.655084 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Aug 12 23:55:41.662512 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 12 23:55:41.693169 systemd-udevd[404]: Using default interface naming scheme 'v255'. Aug 12 23:55:41.700165 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 12 23:55:41.710394 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Aug 12 23:55:41.729802 dracut-pre-trigger[410]: rd.md=0: removing MD RAID activation Aug 12 23:55:41.768627 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Aug 12 23:55:41.775386 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Aug 12 23:55:41.843256 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Aug 12 23:55:41.853444 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Aug 12 23:55:41.897485 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Aug 12 23:55:41.902801 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Aug 12 23:55:41.903505 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 12 23:55:41.904984 systemd[1]: Reached target remote-fs.target - Remote File Systems. Aug 12 23:55:41.916039 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Aug 12 23:55:41.952078 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Aug 12 23:55:42.000048 kernel: libata version 3.00 loaded. Aug 12 23:55:42.002070 kernel: ata_piix 0000:00:01.1: version 2.13 Aug 12 23:55:42.011113 kernel: scsi host1: Virtio SCSI HBA Aug 12 23:55:42.040633 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues Aug 12 23:55:42.040873 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Aug 12 23:55:42.041214 kernel: scsi host0: ata_piix Aug 12 23:55:42.041458 kernel: cryptd: max_cpu_qlen set to 1000 Aug 12 23:55:42.041497 kernel: scsi host2: ata_piix Aug 12 23:55:42.041703 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14 Aug 12 23:55:42.041725 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15 Aug 12 23:55:42.045110 kernel: ACPI: bus type USB registered Aug 12 23:55:42.045212 kernel: usbcore: registered new interface driver usbfs Aug 12 23:55:42.045234 kernel: usbcore: registered new interface driver hub Aug 12 23:55:42.046307 kernel: usbcore: registered new device driver usb
Aug 12 23:55:42.054555 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Aug 12 23:55:42.054679 kernel: GPT:9289727 != 125829119 Aug 12 23:55:42.054726 kernel: GPT:Alternate GPT header not at the end of the disk. Aug 12 23:55:42.054744 kernel: GPT:9289727 != 125829119 Aug 12 23:55:42.054762 kernel: GPT: Use GNU Parted to correct GPT errors. Aug 12 23:55:42.054780 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Aug 12 23:55:42.065071 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues Aug 12 23:55:42.068244 kernel: virtio_blk virtio5: [vdb] 976 512-byte logical blocks (500 kB/488 KiB) Aug 12 23:55:42.071083 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Aug 12 23:55:42.071300 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 12 23:55:42.072673 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 12 23:55:42.073216 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 12 23:55:42.073525 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 12 23:55:42.074379 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Aug 12 23:55:42.081697 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 12 23:55:42.082910 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Aug 12 23:55:42.145643 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 12 23:55:42.153400 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 12 23:55:42.184201 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 12 23:55:42.214051 kernel: AVX2 version of gcm_enc/dec engaged.
Aug 12 23:55:42.218085 kernel: AES CTR mode by8 optimization enabled Aug 12 23:55:42.258160 kernel: BTRFS: device fsid 88a9bed3-d26b-40c9-82ba-dbb7d44acae7 devid 1 transid 45 /dev/vda3 scanned by (udev-worker) (450) Aug 12 23:55:42.277316 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller Aug 12 23:55:42.282312 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by (udev-worker) (464) Aug 12 23:55:42.282343 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1 Aug 12 23:55:42.285050 kernel: uhci_hcd 0000:00:01.2: detected 2 ports Aug 12 23:55:42.287055 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180 Aug 12 23:55:42.294172 kernel: hub 1-0:1.0: USB hub found Aug 12 23:55:42.294550 kernel: hub 1-0:1.0: 2 ports detected Aug 12 23:55:42.315917 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Aug 12 23:55:42.332091 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Aug 12 23:55:42.333788 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Aug 12 23:55:42.346843 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Aug 12 23:55:42.356927 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Aug 12 23:55:42.364350 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Aug 12 23:55:42.372044 disk-uuid[550]: Primary Header is updated. Aug 12 23:55:42.372044 disk-uuid[550]: Secondary Entries is updated. Aug 12 23:55:42.372044 disk-uuid[550]: Secondary Header is updated. Aug 12 23:55:42.388060 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Aug 12 23:55:42.405087 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Aug 12 23:55:43.413966 disk-uuid[551]: The operation has completed successfully. 
Aug 12 23:55:43.414796 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Aug 12 23:55:43.476106 systemd[1]: disk-uuid.service: Deactivated successfully. Aug 12 23:55:43.476303 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Aug 12 23:55:43.540437 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Aug 12 23:55:43.544793 sh[563]: Success Aug 12 23:55:43.562146 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Aug 12 23:55:43.636771 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Aug 12 23:55:43.650590 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Aug 12 23:55:43.654891 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Aug 12 23:55:43.677047 kernel: BTRFS info (device dm-0): first mount of filesystem 88a9bed3-d26b-40c9-82ba-dbb7d44acae7 Aug 12 23:55:43.680943 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Aug 12 23:55:43.681090 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Aug 12 23:55:43.681119 kernel: BTRFS info (device dm-0): disabling log replay at mount time Aug 12 23:55:43.681139 kernel: BTRFS info (device dm-0): using free space tree Aug 12 23:55:43.690055 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Aug 12 23:55:43.691283 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Aug 12 23:55:43.697459 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Aug 12 23:55:43.701361 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Aug 12 23:55:43.724585 kernel: BTRFS info (device vda6): first mount of filesystem fdf7217d-4a76-4a93-98b1-684d9c141517 Aug 12 23:55:43.724666 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Aug 12 23:55:43.724681 kernel: BTRFS info (device vda6): using free space tree Aug 12 23:55:43.728137 kernel: BTRFS info (device vda6): auto enabling async discard Aug 12 23:55:43.735118 kernel: BTRFS info (device vda6): last unmount of filesystem fdf7217d-4a76-4a93-98b1-684d9c141517 Aug 12 23:55:43.737762 systemd[1]: Finished ignition-setup.service - Ignition (setup). Aug 12 23:55:43.745690 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Aug 12 23:55:43.880927 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Aug 12 23:55:43.890343 systemd[1]: Starting systemd-networkd.service - Network Configuration... Aug 12 23:55:43.902162 ignition[637]: Ignition 2.20.0 Aug 12 23:55:43.902174 ignition[637]: Stage: fetch-offline Aug 12 23:55:43.902229 ignition[637]: no configs at "/usr/lib/ignition/base.d" Aug 12 23:55:43.903863 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Aug 12 23:55:43.902239 ignition[637]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Aug 12 23:55:43.902349 ignition[637]: parsed url from cmdline: "" Aug 12 23:55:43.902354 ignition[637]: no config URL provided Aug 12 23:55:43.902360 ignition[637]: reading system config file "/usr/lib/ignition/user.ign" Aug 12 23:55:43.902370 ignition[637]: no config at "/usr/lib/ignition/user.ign" Aug 12 23:55:43.902376 ignition[637]: failed to fetch config: resource requires networking Aug 12 23:55:43.902732 ignition[637]: Ignition finished successfully Aug 12 23:55:43.927686 systemd-networkd[748]: lo: Link UP Aug 12 23:55:43.927698 systemd-networkd[748]: lo: Gained carrier Aug 12 23:55:43.930501 systemd-networkd[748]: Enumeration completed Aug 12 23:55:43.930643 systemd[1]: Started systemd-networkd.service - Network Configuration. Aug 12 23:55:43.931174 systemd[1]: Reached target network.target - Network. Aug 12 23:55:43.931570 systemd-networkd[748]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. Aug 12 23:55:43.931575 systemd-networkd[748]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network. Aug 12 23:55:43.932712 systemd-networkd[748]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 12 23:55:43.932716 systemd-networkd[748]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 12 23:55:43.933521 systemd-networkd[748]: eth0: Link UP Aug 12 23:55:43.933525 systemd-networkd[748]: eth0: Gained carrier Aug 12 23:55:43.933536 systemd-networkd[748]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. 
Aug 12 23:55:43.938175 systemd-networkd[748]: eth1: Link UP Aug 12 23:55:43.938180 systemd-networkd[748]: eth1: Gained carrier Aug 12 23:55:43.938195 systemd-networkd[748]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 12 23:55:43.942275 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Aug 12 23:55:43.952128 systemd-networkd[748]: eth0: DHCPv4 address 137.184.234.76/20, gateway 137.184.224.1 acquired from 169.254.169.253 Aug 12 23:55:43.957158 systemd-networkd[748]: eth1: DHCPv4 address 10.124.0.28/20 acquired from 169.254.169.253 Aug 12 23:55:43.971757 ignition[752]: Ignition 2.20.0 Aug 12 23:55:43.972701 ignition[752]: Stage: fetch Aug 12 23:55:43.973076 ignition[752]: no configs at "/usr/lib/ignition/base.d" Aug 12 23:55:43.973099 ignition[752]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Aug 12 23:55:43.973257 ignition[752]: parsed url from cmdline: "" Aug 12 23:55:43.973264 ignition[752]: no config URL provided Aug 12 23:55:43.973274 ignition[752]: reading system config file "/usr/lib/ignition/user.ign" Aug 12 23:55:43.973290 ignition[752]: no config at "/usr/lib/ignition/user.ign" Aug 12 23:55:43.973326 ignition[752]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1 Aug 12 23:55:43.996587 ignition[752]: GET result: OK Aug 12 23:55:43.996897 ignition[752]: parsing config with SHA512: fb425ea4a1904879840cf401678776ed39bfa86726dc587b3d2dc3bf33043fd1102d79cf9aae4b22a476c0b93627006646581d9f718231770e7dfac19cd4a3fe Aug 12 23:55:44.005283 unknown[752]: fetched base config from "system" Aug 12 23:55:44.005299 unknown[752]: fetched base config from "system" Aug 12 23:55:44.005307 unknown[752]: fetched user config from "digitalocean" Aug 12 23:55:44.006012 ignition[752]: fetch: fetch complete Aug 12 23:55:44.006032 ignition[752]: fetch: fetch passed Aug 12 23:55:44.006095 ignition[752]: Ignition finished successfully
Aug 12 23:55:44.007746 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Aug 12 23:55:44.013280 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Aug 12 23:55:44.042662 ignition[759]: Ignition 2.20.0 Aug 12 23:55:44.042680 ignition[759]: Stage: kargs Aug 12 23:55:44.042907 ignition[759]: no configs at "/usr/lib/ignition/base.d" Aug 12 23:55:44.042920 ignition[759]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Aug 12 23:55:44.043975 ignition[759]: kargs: kargs passed Aug 12 23:55:44.044084 ignition[759]: Ignition finished successfully Aug 12 23:55:44.046784 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Aug 12 23:55:44.053303 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Aug 12 23:55:44.076627 ignition[765]: Ignition 2.20.0 Aug 12 23:55:44.076639 ignition[765]: Stage: disks Aug 12 23:55:44.076878 ignition[765]: no configs at "/usr/lib/ignition/base.d" Aug 12 23:55:44.079990 systemd[1]: Finished ignition-disks.service - Ignition (disks). Aug 12 23:55:44.076892 ignition[765]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Aug 12 23:55:44.085255 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Aug 12 23:55:44.077791 ignition[765]: disks: disks passed Aug 12 23:55:44.085775 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Aug 12 23:55:44.077847 ignition[765]: Ignition finished successfully Aug 12 23:55:44.086599 systemd[1]: Reached target local-fs.target - Local File Systems. Aug 12 23:55:44.087416 systemd[1]: Reached target sysinit.target - System Initialization. Aug 12 23:55:44.088340 systemd[1]: Reached target basic.target - Basic System. Aug 12 23:55:44.096394 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Aug 12 23:55:44.117154 systemd-fsck[773]: ROOT: clean, 14/553520 files, 52654/553472 blocks Aug 12 23:55:44.120450 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Aug 12 23:55:44.125639 systemd[1]: Mounting sysroot.mount - /sysroot... Aug 12 23:55:44.230133 kernel: EXT4-fs (vda9): mounted filesystem 27db109b-2440-48a3-909e-fd8973275523 r/w with ordered data mode. Quota mode: none. Aug 12 23:55:44.230920 systemd[1]: Mounted sysroot.mount - /sysroot. Aug 12 23:55:44.231927 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Aug 12 23:55:44.245303 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Aug 12 23:55:44.248169 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Aug 12 23:55:44.250241 systemd[1]: Starting flatcar-afterburn-network.service - Flatcar Afterburn network service... Aug 12 23:55:44.259131 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by mount (781) Aug 12 23:55:44.259575 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Aug 12 23:55:44.266360 kernel: BTRFS info (device vda6): first mount of filesystem fdf7217d-4a76-4a93-98b1-684d9c141517 Aug 12 23:55:44.266394 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Aug 12 23:55:44.266407 kernel: BTRFS info (device vda6): using free space tree Aug 12 23:55:44.266582 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Aug 12 23:55:44.266631 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Aug 12 23:55:44.271058 kernel: BTRFS info (device vda6): auto enabling async discard Aug 12 23:55:44.276709 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Aug 12 23:55:44.278091 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. 
Aug 12 23:55:44.282345 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Aug 12 23:55:44.348437 coreos-metadata[783]: Aug 12 23:55:44.348 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Aug 12 23:55:44.358048 initrd-setup-root[811]: cut: /sysroot/etc/passwd: No such file or directory Aug 12 23:55:44.364312 coreos-metadata[784]: Aug 12 23:55:44.364 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Aug 12 23:55:44.366569 initrd-setup-root[818]: cut: /sysroot/etc/group: No such file or directory Aug 12 23:55:44.368391 coreos-metadata[783]: Aug 12 23:55:44.367 INFO Fetch successful Aug 12 23:55:44.374058 systemd[1]: flatcar-afterburn-network.service: Deactivated successfully. Aug 12 23:55:44.374923 coreos-metadata[784]: Aug 12 23:55:44.374 INFO Fetch successful Aug 12 23:55:44.374921 systemd[1]: Finished flatcar-afterburn-network.service - Flatcar Afterburn network service. Aug 12 23:55:44.375867 initrd-setup-root[825]: cut: /sysroot/etc/shadow: No such file or directory Aug 12 23:55:44.381915 initrd-setup-root[833]: cut: /sysroot/etc/gshadow: No such file or directory Aug 12 23:55:44.384747 coreos-metadata[784]: Aug 12 23:55:44.384 INFO wrote hostname ci-4230.2.2-9-8f36bdb456 to /sysroot/etc/hostname Aug 12 23:55:44.387347 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Aug 12 23:55:44.497012 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Aug 12 23:55:44.502184 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Aug 12 23:55:44.504227 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Aug 12 23:55:44.518047 kernel: BTRFS info (device vda6): last unmount of filesystem fdf7217d-4a76-4a93-98b1-684d9c141517 Aug 12 23:55:44.539079 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Aug 12 23:55:44.544688 ignition[901]: INFO : Ignition 2.20.0 Aug 12 23:55:44.545627 ignition[901]: INFO : Stage: mount Aug 12 23:55:44.546556 ignition[901]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 12 23:55:44.546556 ignition[901]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Aug 12 23:55:44.547584 ignition[901]: INFO : mount: mount passed Aug 12 23:55:44.547584 ignition[901]: INFO : Ignition finished successfully Aug 12 23:55:44.549701 systemd[1]: Finished ignition-mount.service - Ignition (mount). Aug 12 23:55:44.557254 systemd[1]: Starting ignition-files.service - Ignition (files)... Aug 12 23:55:44.677315 systemd[1]: sysroot-oem.mount: Deactivated successfully. Aug 12 23:55:44.684373 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Aug 12 23:55:44.696063 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (914) Aug 12 23:55:44.698351 kernel: BTRFS info (device vda6): first mount of filesystem fdf7217d-4a76-4a93-98b1-684d9c141517 Aug 12 23:55:44.698433 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Aug 12 23:55:44.698447 kernel: BTRFS info (device vda6): using free space tree Aug 12 23:55:44.705063 kernel: BTRFS info (device vda6): auto enabling async discard Aug 12 23:55:44.707685 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Aug 12 23:55:44.733764 ignition[931]: INFO : Ignition 2.20.0 Aug 12 23:55:44.733764 ignition[931]: INFO : Stage: files Aug 12 23:55:44.734918 ignition[931]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 12 23:55:44.734918 ignition[931]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Aug 12 23:55:44.734918 ignition[931]: DEBUG : files: compiled without relabeling support, skipping Aug 12 23:55:44.736782 ignition[931]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Aug 12 23:55:44.736782 ignition[931]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Aug 12 23:55:44.739044 ignition[931]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Aug 12 23:55:44.739750 ignition[931]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Aug 12 23:55:44.739750 ignition[931]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Aug 12 23:55:44.739560 unknown[931]: wrote ssh authorized keys file for user: core Aug 12 23:55:44.741969 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Aug 12 23:55:44.742842 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Aug 12 23:55:44.770657 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Aug 12 23:55:44.911797 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Aug 12 23:55:44.911797 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Aug 12 23:55:44.911797 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Aug 12 23:55:44.993078 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Aug 12 23:55:45.401270 systemd-networkd[748]: eth1: Gained IPv6LL Aug 12 23:55:45.529502 systemd-networkd[748]: eth0: Gained IPv6LL Aug 12 23:55:48.886085 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Aug 12 23:55:48.886085 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Aug 12 23:55:48.887787 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Aug 12 23:55:48.887787 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Aug 12 23:55:48.889876 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Aug 12 23:55:48.890489 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 12 23:55:48.890489 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 12 23:55:48.890489 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 12 23:55:48.890489 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 12 23:55:48.890489 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Aug 12 23:55:48.897905 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Aug 12 23:55:48.897905 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Aug 12 23:55:48.897905 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Aug 12 23:55:48.897905 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Aug 12 23:55:48.897905 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 Aug 12 23:55:49.188528 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Aug 12 23:56:06.879136 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Aug 12 23:56:06.879136 ignition[931]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Aug 12 23:56:06.881076 ignition[931]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 12 23:56:06.881076 ignition[931]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 12 23:56:06.881076 ignition[931]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Aug 12 23:56:06.881076 ignition[931]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Aug 12 23:56:06.881076 ignition[931]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Aug 12 23:56:06.885144 ignition[931]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Aug 12 23:56:06.885144 ignition[931]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Aug 12 23:56:06.885144 ignition[931]: INFO : files: files passed Aug 12 23:56:06.885144 ignition[931]: INFO : Ignition finished successfully Aug 12 23:56:06.884743 systemd[1]: Finished ignition-files.service - Ignition (files). Aug 12 23:56:06.892333 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Aug 12 23:56:06.895392 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Aug 12 23:56:06.899444 systemd[1]: ignition-quench.service: Deactivated successfully. Aug 12 23:56:06.899594 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Aug 12 23:56:06.921216 initrd-setup-root-after-ignition[960]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 12 23:56:06.921216 initrd-setup-root-after-ignition[960]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Aug 12 23:56:06.923142 initrd-setup-root-after-ignition[964]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 12 23:56:06.925901 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 12 23:56:06.926954 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Aug 12 23:56:06.933315 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Aug 12 23:56:06.972973 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Aug 12 23:56:06.973134 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Aug 12 23:56:06.974633 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Aug 12 23:56:06.975054 systemd[1]: Reached target initrd.target - Initrd Default Target.
Aug 12 23:56:06.975799 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Aug 12 23:56:06.977078 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Aug 12 23:56:07.008357 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Aug 12 23:56:07.015337 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Aug 12 23:56:07.037541 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Aug 12 23:56:07.038742 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 12 23:56:07.039812 systemd[1]: Stopped target timers.target - Timer Units.
Aug 12 23:56:07.040350 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Aug 12 23:56:07.040552 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Aug 12 23:56:07.041701 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Aug 12 23:56:07.042262 systemd[1]: Stopped target basic.target - Basic System.
Aug 12 23:56:07.043082 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Aug 12 23:56:07.043829 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Aug 12 23:56:07.044651 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Aug 12 23:56:07.045632 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Aug 12 23:56:07.046499 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Aug 12 23:56:07.047416 systemd[1]: Stopped target sysinit.target - System Initialization.
Aug 12 23:56:07.048259 systemd[1]: Stopped target local-fs.target - Local File Systems.
Aug 12 23:56:07.049138 systemd[1]: Stopped target swap.target - Swaps.
Aug 12 23:56:07.049878 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Aug 12 23:56:07.050101 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Aug 12 23:56:07.051178 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Aug 12 23:56:07.052072 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 12 23:56:07.052884 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Aug 12 23:56:07.053238 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 12 23:56:07.053924 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Aug 12 23:56:07.054132 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Aug 12 23:56:07.055266 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Aug 12 23:56:07.055521 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Aug 12 23:56:07.056447 systemd[1]: ignition-files.service: Deactivated successfully.
Aug 12 23:56:07.056671 systemd[1]: Stopped ignition-files.service - Ignition (files).
Aug 12 23:56:07.057752 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Aug 12 23:56:07.057923 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Aug 12 23:56:07.065359 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Aug 12 23:56:07.065927 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Aug 12 23:56:07.066212 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 12 23:56:07.070339 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Aug 12 23:56:07.071388 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Aug 12 23:56:07.071611 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 12 23:56:07.075380 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Aug 12 23:56:07.075571 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Aug 12 23:56:07.088731 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Aug 12 23:56:07.090998 ignition[984]: INFO : Ignition 2.20.0
Aug 12 23:56:07.090998 ignition[984]: INFO : Stage: umount
Aug 12 23:56:07.103167 ignition[984]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 12 23:56:07.103167 ignition[984]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Aug 12 23:56:07.103167 ignition[984]: INFO : umount: umount passed
Aug 12 23:56:07.103167 ignition[984]: INFO : Ignition finished successfully
Aug 12 23:56:07.095398 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Aug 12 23:56:07.101070 systemd[1]: ignition-mount.service: Deactivated successfully.
Aug 12 23:56:07.101216 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Aug 12 23:56:07.103246 systemd[1]: ignition-disks.service: Deactivated successfully.
Aug 12 23:56:07.103346 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Aug 12 23:56:07.105448 systemd[1]: ignition-kargs.service: Deactivated successfully.
Aug 12 23:56:07.105543 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Aug 12 23:56:07.107497 systemd[1]: ignition-fetch.service: Deactivated successfully.
Aug 12 23:56:07.107604 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Aug 12 23:56:07.109876 systemd[1]: Stopped target network.target - Network.
Aug 12 23:56:07.110265 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Aug 12 23:56:07.110358 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Aug 12 23:56:07.110828 systemd[1]: Stopped target paths.target - Path Units.
Aug 12 23:56:07.112150 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Aug 12 23:56:07.116341 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 12 23:56:07.117072 systemd[1]: Stopped target slices.target - Slice Units.
Aug 12 23:56:07.117362 systemd[1]: Stopped target sockets.target - Socket Units.
Aug 12 23:56:07.117737 systemd[1]: iscsid.socket: Deactivated successfully.
Aug 12 23:56:07.118428 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Aug 12 23:56:07.119211 systemd[1]: iscsiuio.socket: Deactivated successfully.
Aug 12 23:56:07.119266 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Aug 12 23:56:07.119661 systemd[1]: ignition-setup.service: Deactivated successfully.
Aug 12 23:56:07.119748 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Aug 12 23:56:07.121206 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Aug 12 23:56:07.121275 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Aug 12 23:56:07.122308 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Aug 12 23:56:07.122795 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Aug 12 23:56:07.125241 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Aug 12 23:56:07.125893 systemd[1]: sysroot-boot.service: Deactivated successfully.
Aug 12 23:56:07.125983 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Aug 12 23:56:07.129469 systemd[1]: systemd-resolved.service: Deactivated successfully.
Aug 12 23:56:07.129655 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Aug 12 23:56:07.133818 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Aug 12 23:56:07.136178 systemd[1]: systemd-networkd.service: Deactivated successfully.
Aug 12 23:56:07.136321 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Aug 12 23:56:07.138590 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Aug 12 23:56:07.139583 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Aug 12 23:56:07.139678 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Aug 12 23:56:07.140270 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Aug 12 23:56:07.140360 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Aug 12 23:56:07.148220 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Aug 12 23:56:07.148619 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Aug 12 23:56:07.148718 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Aug 12 23:56:07.150654 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Aug 12 23:56:07.150723 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Aug 12 23:56:07.151259 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Aug 12 23:56:07.151311 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Aug 12 23:56:07.152111 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Aug 12 23:56:07.152171 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Aug 12 23:56:07.152987 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 12 23:56:07.154678 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Aug 12 23:56:07.154750 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Aug 12 23:56:07.163537 systemd[1]: systemd-udevd.service: Deactivated successfully.
Aug 12 23:56:07.163751 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 12 23:56:07.165058 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Aug 12 23:56:07.165150 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Aug 12 23:56:07.165867 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Aug 12 23:56:07.165917 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 12 23:56:07.166678 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Aug 12 23:56:07.166748 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Aug 12 23:56:07.167930 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Aug 12 23:56:07.167993 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Aug 12 23:56:07.169362 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Aug 12 23:56:07.169417 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 12 23:56:07.177456 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Aug 12 23:56:07.178218 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Aug 12 23:56:07.178312 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 12 23:56:07.181131 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 12 23:56:07.181224 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 12 23:56:07.182808 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Aug 12 23:56:07.182885 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Aug 12 23:56:07.185336 systemd[1]: network-cleanup.service: Deactivated successfully.
Aug 12 23:56:07.185445 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Aug 12 23:56:07.186641 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Aug 12 23:56:07.186776 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Aug 12 23:56:07.188117 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Aug 12 23:56:07.192294 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Aug 12 23:56:07.204431 systemd[1]: Switching root.
Aug 12 23:56:07.266547 systemd-journald[183]: Received SIGTERM from PID 1 (systemd).
Aug 12 23:56:07.266655 systemd-journald[183]: Journal stopped
Aug 12 23:56:08.679109 kernel: SELinux: policy capability network_peer_controls=1
Aug 12 23:56:08.679176 kernel: SELinux: policy capability open_perms=1
Aug 12 23:56:08.679190 kernel: SELinux: policy capability extended_socket_class=1
Aug 12 23:56:08.679205 kernel: SELinux: policy capability always_check_network=0
Aug 12 23:56:08.680421 kernel: SELinux: policy capability cgroup_seclabel=1
Aug 12 23:56:08.680460 kernel: SELinux: policy capability nnp_nosuid_transition=1
Aug 12 23:56:08.680485 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Aug 12 23:56:08.680498 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Aug 12 23:56:08.680511 kernel: audit: type=1403 audit(1755042967.409:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Aug 12 23:56:08.680526 systemd[1]: Successfully loaded SELinux policy in 50.492ms.
Aug 12 23:56:08.680551 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 20.296ms.
Aug 12 23:56:08.684110 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Aug 12 23:56:08.684141 systemd[1]: Detected virtualization kvm.
Aug 12 23:56:08.684154 systemd[1]: Detected architecture x86-64.
Aug 12 23:56:08.684172 systemd[1]: Detected first boot.
Aug 12 23:56:08.684185 systemd[1]: Hostname set to .
Aug 12 23:56:08.684199 systemd[1]: Initializing machine ID from VM UUID.
Aug 12 23:56:08.684214 zram_generator::config[1029]: No configuration found.
Aug 12 23:56:08.684230 kernel: Guest personality initialized and is inactive
Aug 12 23:56:08.684244 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Aug 12 23:56:08.684258 kernel: Initialized host personality
Aug 12 23:56:08.684271 kernel: NET: Registered PF_VSOCK protocol family
Aug 12 23:56:08.684283 systemd[1]: Populated /etc with preset unit settings.
Aug 12 23:56:08.684297 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Aug 12 23:56:08.684310 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Aug 12 23:56:08.684323 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Aug 12 23:56:08.684335 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Aug 12 23:56:08.684348 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Aug 12 23:56:08.684361 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Aug 12 23:56:08.684378 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Aug 12 23:56:08.684390 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Aug 12 23:56:08.684404 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Aug 12 23:56:08.684417 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Aug 12 23:56:08.684430 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Aug 12 23:56:08.684442 systemd[1]: Created slice user.slice - User and Session Slice.
Aug 12 23:56:08.684454 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 12 23:56:08.684467 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 12 23:56:08.684486 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Aug 12 23:56:08.684500 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Aug 12 23:56:08.684513 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Aug 12 23:56:08.684526 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug 12 23:56:08.684538 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Aug 12 23:56:08.684735 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 12 23:56:08.684754 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Aug 12 23:56:08.684768 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Aug 12 23:56:08.684780 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Aug 12 23:56:08.684793 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Aug 12 23:56:08.684823 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 12 23:56:08.684842 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Aug 12 23:56:08.684860 systemd[1]: Reached target slices.target - Slice Units.
Aug 12 23:56:08.684875 systemd[1]: Reached target swap.target - Swaps.
Aug 12 23:56:08.684887 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Aug 12 23:56:08.684900 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Aug 12 23:56:08.684916 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Aug 12 23:56:08.684929 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Aug 12 23:56:08.684942 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Aug 12 23:56:08.684954 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 12 23:56:08.684967 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Aug 12 23:56:08.684980 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Aug 12 23:56:08.684994 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Aug 12 23:56:08.685006 systemd[1]: Mounting media.mount - External Media Directory...
Aug 12 23:56:08.686119 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 12 23:56:08.686169 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Aug 12 23:56:08.686183 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Aug 12 23:56:08.686196 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Aug 12 23:56:08.686215 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Aug 12 23:56:08.686228 systemd[1]: Reached target machines.target - Containers.
Aug 12 23:56:08.686241 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Aug 12 23:56:08.686254 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 12 23:56:08.686266 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Aug 12 23:56:08.686282 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Aug 12 23:56:08.686295 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 12 23:56:08.686307 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Aug 12 23:56:08.686320 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 12 23:56:08.686333 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Aug 12 23:56:08.686345 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 12 23:56:08.686359 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Aug 12 23:56:08.686372 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Aug 12 23:56:08.686388 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Aug 12 23:56:08.686400 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Aug 12 23:56:08.686413 systemd[1]: Stopped systemd-fsck-usr.service.
Aug 12 23:56:08.686426 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Aug 12 23:56:08.686439 systemd[1]: Starting systemd-journald.service - Journal Service...
Aug 12 23:56:08.686453 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Aug 12 23:56:08.686465 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Aug 12 23:56:08.686477 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Aug 12 23:56:08.686490 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Aug 12 23:56:08.686505 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Aug 12 23:56:08.686518 kernel: ACPI: bus type drm_connector registered
Aug 12 23:56:08.686532 systemd[1]: verity-setup.service: Deactivated successfully.
Aug 12 23:56:08.686548 systemd[1]: Stopped verity-setup.service.
Aug 12 23:56:08.686564 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 12 23:56:08.686576 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Aug 12 23:56:08.686589 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Aug 12 23:56:08.686602 kernel: loop: module loaded
Aug 12 23:56:08.686614 systemd[1]: Mounted media.mount - External Media Directory.
Aug 12 23:56:08.686627 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Aug 12 23:56:08.686643 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Aug 12 23:56:08.686655 kernel: fuse: init (API version 7.39)
Aug 12 23:56:08.686667 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Aug 12 23:56:08.686679 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 12 23:56:08.686691 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Aug 12 23:56:08.686704 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Aug 12 23:56:08.686717 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 12 23:56:08.686729 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 12 23:56:08.686741 systemd[1]: modprobe@drm.service: Deactivated successfully.
Aug 12 23:56:08.686757 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Aug 12 23:56:08.686770 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 12 23:56:08.686783 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 12 23:56:08.686796 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Aug 12 23:56:08.686809 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Aug 12 23:56:08.686822 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 12 23:56:08.686834 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 12 23:56:08.686847 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Aug 12 23:56:08.686859 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Aug 12 23:56:08.686875 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Aug 12 23:56:08.686888 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Aug 12 23:56:08.686901 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Aug 12 23:56:08.686914 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Aug 12 23:56:08.686929 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Aug 12 23:56:08.686943 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Aug 12 23:56:08.686955 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Aug 12 23:56:08.686968 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Aug 12 23:56:08.686980 systemd[1]: Reached target network-pre.target - Preparation for Network.
Aug 12 23:56:08.686996 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Aug 12 23:56:08.687009 systemd[1]: Reached target local-fs.target - Local File Systems.
Aug 12 23:56:08.693106 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Aug 12 23:56:08.693147 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Aug 12 23:56:08.693209 systemd-journald[1110]: Collecting audit messages is disabled.
Aug 12 23:56:08.693240 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Aug 12 23:56:08.693262 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 12 23:56:08.693276 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Aug 12 23:56:08.693289 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug 12 23:56:08.693302 systemd-journald[1110]: Journal started
Aug 12 23:56:08.693327 systemd-journald[1110]: Runtime Journal (/run/log/journal/fb81189b58dd4a7c8fd48a8bc1aa1d52) is 4.9M, max 39.3M, 34.4M free.
Aug 12 23:56:08.283655 systemd[1]: Queued start job for default target multi-user.target.
Aug 12 23:56:08.294912 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Aug 12 23:56:08.295444 systemd[1]: systemd-journald.service: Deactivated successfully.
Aug 12 23:56:08.698136 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Aug 12 23:56:08.704474 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Aug 12 23:56:08.715470 systemd[1]: Started systemd-journald.service - Journal Service.
Aug 12 23:56:08.725328 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Aug 12 23:56:08.730366 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 12 23:56:08.733120 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Aug 12 23:56:08.734079 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Aug 12 23:56:08.734923 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Aug 12 23:56:08.751869 kernel: loop0: detected capacity change from 0 to 8
Aug 12 23:56:08.764398 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Aug 12 23:56:08.775241 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Aug 12 23:56:08.773313 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Aug 12 23:56:08.777786 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Aug 12 23:56:08.788125 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Aug 12 23:56:08.790954 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Aug 12 23:56:08.809928 systemd-journald[1110]: Time spent on flushing to /var/log/journal/fb81189b58dd4a7c8fd48a8bc1aa1d52 is 108.676ms for 1011 entries.
Aug 12 23:56:08.809928 systemd-journald[1110]: System Journal (/var/log/journal/fb81189b58dd4a7c8fd48a8bc1aa1d52) is 8M, max 195.6M, 187.6M free.
Aug 12 23:56:08.936609 systemd-journald[1110]: Received client request to flush runtime journal.
Aug 12 23:56:08.936712 kernel: loop1: detected capacity change from 0 to 147912
Aug 12 23:56:08.936738 kernel: loop2: detected capacity change from 0 to 221472
Aug 12 23:56:08.872708 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Aug 12 23:56:08.881302 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Aug 12 23:56:08.904452 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Aug 12 23:56:08.907261 udevadm[1168]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Aug 12 23:56:08.922910 systemd-tmpfiles[1171]: ACLs are not supported, ignoring.
Aug 12 23:56:08.922931 systemd-tmpfiles[1171]: ACLs are not supported, ignoring.
Aug 12 23:56:08.938498 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 12 23:56:08.940588 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Aug 12 23:56:08.993548 kernel: loop3: detected capacity change from 0 to 138176 Aug 12 23:56:09.093305 kernel: loop4: detected capacity change from 0 to 8 Aug 12 23:56:09.100077 kernel: loop5: detected capacity change from 0 to 147912 Aug 12 23:56:09.123056 kernel: loop6: detected capacity change from 0 to 221472 Aug 12 23:56:09.146041 kernel: loop7: detected capacity change from 0 to 138176 Aug 12 23:56:09.169387 (sd-merge)[1182]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'. Aug 12 23:56:09.169965 (sd-merge)[1182]: Merged extensions into '/usr'. Aug 12 23:56:09.186674 systemd[1]: Reload requested from client PID 1137 ('systemd-sysext') (unit systemd-sysext.service)... Aug 12 23:56:09.186693 systemd[1]: Reloading... Aug 12 23:56:09.443834 zram_generator::config[1210]: No configuration found. Aug 12 23:56:09.534996 ldconfig[1134]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Aug 12 23:56:09.646185 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 12 23:56:09.731495 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Aug 12 23:56:09.732093 systemd[1]: Reloading finished in 544 ms. Aug 12 23:56:09.762340 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Aug 12 23:56:09.763829 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Aug 12 23:56:09.782305 systemd[1]: Starting ensure-sysext.service... Aug 12 23:56:09.786269 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Aug 12 23:56:09.838812 systemd[1]: Reload requested from client PID 1253 ('systemctl') (unit ensure-sysext.service)... Aug 12 23:56:09.838947 systemd[1]: Reloading... 
Aug 12 23:56:09.876678 systemd-tmpfiles[1254]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Aug 12 23:56:09.877288 systemd-tmpfiles[1254]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Aug 12 23:56:09.878973 systemd-tmpfiles[1254]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Aug 12 23:56:09.879400 systemd-tmpfiles[1254]: ACLs are not supported, ignoring.
Aug 12 23:56:09.879536 systemd-tmpfiles[1254]: ACLs are not supported, ignoring.
Aug 12 23:56:09.885362 systemd-tmpfiles[1254]: Detected autofs mount point /boot during canonicalization of boot.
Aug 12 23:56:09.885502 systemd-tmpfiles[1254]: Skipping /boot
Aug 12 23:56:09.945435 systemd-tmpfiles[1254]: Detected autofs mount point /boot during canonicalization of boot.
Aug 12 23:56:09.945450 systemd-tmpfiles[1254]: Skipping /boot
Aug 12 23:56:09.955050 zram_generator::config[1283]: No configuration found.
Aug 12 23:56:10.100820 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 12 23:56:10.184138 systemd[1]: Reloading finished in 344 ms.
Aug 12 23:56:10.204379 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Aug 12 23:56:10.216000 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Aug 12 23:56:10.230683 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Aug 12 23:56:10.234392 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Aug 12 23:56:10.238532 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Aug 12 23:56:10.253193 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Aug 12 23:56:10.263337 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 12 23:56:10.267366 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Aug 12 23:56:10.273750 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 12 23:56:10.273943 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 12 23:56:10.277363 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 12 23:56:10.281308 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 12 23:56:10.285460 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 12 23:56:10.286799 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 12 23:56:10.287041 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Aug 12 23:56:10.301533 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Aug 12 23:56:10.303317 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 12 23:56:10.309206 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Aug 12 23:56:10.317841 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 12 23:56:10.319284 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 12 23:56:10.319610 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 12 23:56:10.319765 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Aug 12 23:56:10.323449 systemd-udevd[1338]: Using default interface naming scheme 'v255'.
Aug 12 23:56:10.327443 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Aug 12 23:56:10.328108 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 12 23:56:10.336904 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 12 23:56:10.338331 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 12 23:56:10.354503 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Aug 12 23:56:10.355245 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 12 23:56:10.355417 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Aug 12 23:56:10.355618 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 12 23:56:10.362781 systemd[1]: Finished ensure-sysext.service.
Aug 12 23:56:10.376291 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Aug 12 23:56:10.385085 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Aug 12 23:56:10.392374 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 12 23:56:10.393145 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 12 23:56:10.414923 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Aug 12 23:56:10.420376 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 12 23:56:10.424012 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 12 23:56:10.424899 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug 12 23:56:10.425819 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 12 23:56:10.426301 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 12 23:56:10.428015 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Aug 12 23:56:10.435853 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 12 23:56:10.447451 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Aug 12 23:56:10.448086 augenrules[1369]: No rules
Aug 12 23:56:10.449476 systemd[1]: modprobe@drm.service: Deactivated successfully.
Aug 12 23:56:10.449718 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Aug 12 23:56:10.450489 systemd[1]: audit-rules.service: Deactivated successfully.
Aug 12 23:56:10.451175 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Aug 12 23:56:10.458137 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Aug 12 23:56:10.460441 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Aug 12 23:56:10.473778 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Aug 12 23:56:10.544176 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Aug 12 23:56:10.573573 systemd[1]: Condition check resulted in dev-disk-by\x2dlabel-config\x2d2.device - /dev/disk/by-label/config-2 being skipped.
Aug 12 23:56:10.583243 systemd[1]: Mounting media-configdrive.mount - /media/configdrive...
Aug 12 23:56:10.583728 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 12 23:56:10.583948 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 12 23:56:10.587511 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 12 23:56:10.598283 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 12 23:56:10.604297 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 12 23:56:10.604788 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 12 23:56:10.604861 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Aug 12 23:56:10.604906 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Aug 12 23:56:10.604929 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 12 23:56:10.638046 kernel: ISO 9660 Extensions: RRIP_1991A
Aug 12 23:56:10.640189 systemd[1]: Mounted media-configdrive.mount - /media/configdrive.
Aug 12 23:56:10.644496 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 12 23:56:10.644708 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 12 23:56:10.655624 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 12 23:56:10.655834 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 12 23:56:10.656390 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug 12 23:56:10.679050 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 12 23:56:10.679266 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 12 23:56:10.679946 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Aug 12 23:56:10.686619 systemd-resolved[1334]: Positive Trust Anchors:
Aug 12 23:56:10.686638 systemd-resolved[1334]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 12 23:56:10.686687 systemd-resolved[1334]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Aug 12 23:56:10.692905 systemd-resolved[1334]: Using system hostname 'ci-4230.2.2-9-8f36bdb456'.
Aug 12 23:56:10.695575 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Aug 12 23:56:10.696511 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Aug 12 23:56:10.705214 systemd-networkd[1370]: lo: Link UP
Aug 12 23:56:10.705590 systemd-networkd[1370]: lo: Gained carrier
Aug 12 23:56:10.707818 systemd-networkd[1370]: Enumeration completed
Aug 12 23:56:10.707965 systemd[1]: Started systemd-networkd.service - Network Configuration.
Aug 12 23:56:10.709564 systemd[1]: Reached target network.target - Network.
Aug 12 23:56:10.719352 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Aug 12 23:56:10.728314 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Aug 12 23:56:10.730241 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Aug 12 23:56:10.739432 systemd[1]: Reached target time-set.target - System Time Set.
Aug 12 23:56:10.775178 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 45 scanned by (udev-worker) (1388)
Aug 12 23:56:10.788084 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Aug 12 23:56:10.843981 systemd-networkd[1370]: eth1: Configuring with /run/systemd/network/10-6e:1d:d7:b8:8b:cd.network.
Aug 12 23:56:10.846044 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Aug 12 23:56:10.846793 systemd-networkd[1370]: eth1: Link UP
Aug 12 23:56:10.846802 systemd-networkd[1370]: eth1: Gained carrier
Aug 12 23:56:10.853327 systemd-timesyncd[1356]: Network configuration changed, trying to establish connection.
Aug 12 23:56:10.865336 systemd-networkd[1370]: eth0: Configuring with /run/systemd/network/10-c2:e7:a1:a3:93:da.network.
Aug 12 23:56:10.866312 systemd-networkd[1370]: eth0: Link UP
Aug 12 23:56:10.866320 systemd-networkd[1370]: eth0: Gained carrier
Aug 12 23:56:10.871064 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Aug 12 23:56:10.887688 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Aug 12 23:56:10.888732 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Aug 12 23:56:10.888777 kernel: ACPI: button: Power Button [PWRF]
Aug 12 23:56:10.894280 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Aug 12 23:56:10.926132 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Aug 12 23:56:10.958154 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 12 23:56:11.018047 kernel: mousedev: PS/2 mouse device common for all mice
Aug 12 23:56:11.109055 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Aug 12 23:56:11.113067 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Aug 12 23:56:11.121435 kernel: Console: switching to colour dummy device 80x25
Aug 12 23:56:11.119782 systemd-vconsole-setup[1429]: KD_FONT_OP_GET failed while trying to read the font data: Function not implemented
Aug 12 23:56:11.119795 systemd-vconsole-setup[1429]: Fonts will not be copied to remaining consoles
Aug 12 23:56:11.128057 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Aug 12 23:56:11.128183 kernel: [drm] features: -context_init
Aug 12 23:56:11.129222 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 12 23:56:11.131101 kernel: [drm] number of scanouts: 1
Aug 12 23:56:11.131175 kernel: [drm] number of cap sets: 0
Aug 12 23:56:11.133584 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 12 23:56:11.134509 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 12 23:56:11.135195 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Aug 12 23:56:11.138211 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0
Aug 12 23:56:11.142815 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 12 23:56:11.151151 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Aug 12 23:56:11.163217 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Aug 12 23:56:11.163336 kernel: Console: switching to colour frame buffer device 128x48
Aug 12 23:56:11.187510 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Aug 12 23:56:11.191090 kernel: EDAC MC: Ver: 3.0.0
Aug 12 23:56:11.207606 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 12 23:56:11.208831 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 12 23:56:11.213280 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Aug 12 23:56:11.225379 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 12 23:56:11.231705 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Aug 12 23:56:11.248386 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Aug 12 23:56:11.263868 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 12 23:56:11.268099 lvm[1446]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Aug 12 23:56:11.297635 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Aug 12 23:56:11.299238 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Aug 12 23:56:11.299410 systemd[1]: Reached target sysinit.target - System Initialization.
Aug 12 23:56:11.299651 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Aug 12 23:56:11.299779 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Aug 12 23:56:11.300152 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Aug 12 23:56:11.300424 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Aug 12 23:56:11.300525 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Aug 12 23:56:11.300614 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Aug 12 23:56:11.300649 systemd[1]: Reached target paths.target - Path Units.
Aug 12 23:56:11.300723 systemd[1]: Reached target timers.target - Timer Units.
Aug 12 23:56:11.303159 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Aug 12 23:56:11.305594 systemd[1]: Starting docker.socket - Docker Socket for the API...
Aug 12 23:56:11.311686 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Aug 12 23:56:11.314245 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Aug 12 23:56:11.316222 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Aug 12 23:56:11.330542 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Aug 12 23:56:11.333435 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Aug 12 23:56:11.341350 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Aug 12 23:56:11.346291 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Aug 12 23:56:11.347269 systemd[1]: Reached target sockets.target - Socket Units.
Aug 12 23:56:11.347792 systemd[1]: Reached target basic.target - Basic System.
Aug 12 23:56:11.349302 lvm[1454]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Aug 12 23:56:11.349871 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Aug 12 23:56:11.349905 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Aug 12 23:56:11.357264 systemd[1]: Starting containerd.service - containerd container runtime...
Aug 12 23:56:11.363436 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Aug 12 23:56:11.372352 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Aug 12 23:56:11.378703 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Aug 12 23:56:11.386342 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Aug 12 23:56:11.386975 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Aug 12 23:56:11.390304 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Aug 12 23:56:11.397216 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Aug 12 23:56:11.407814 jq[1458]: false
Aug 12 23:56:11.403291 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Aug 12 23:56:11.414412 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Aug 12 23:56:11.425898 extend-filesystems[1461]: Found loop4
Aug 12 23:56:11.425898 extend-filesystems[1461]: Found loop5
Aug 12 23:56:11.425898 extend-filesystems[1461]: Found loop6
Aug 12 23:56:11.425898 extend-filesystems[1461]: Found loop7
Aug 12 23:56:11.425898 extend-filesystems[1461]: Found vda
Aug 12 23:56:11.425898 extend-filesystems[1461]: Found vda1
Aug 12 23:56:11.425898 extend-filesystems[1461]: Found vda2
Aug 12 23:56:11.425898 extend-filesystems[1461]: Found vda3
Aug 12 23:56:11.425898 extend-filesystems[1461]: Found usr
Aug 12 23:56:11.425898 extend-filesystems[1461]: Found vda4
Aug 12 23:56:11.425898 extend-filesystems[1461]: Found vda6
Aug 12 23:56:11.425898 extend-filesystems[1461]: Found vda7
Aug 12 23:56:11.425898 extend-filesystems[1461]: Found vda9
Aug 12 23:56:11.425898 extend-filesystems[1461]: Checking size of /dev/vda9
Aug 12 23:56:11.520239 extend-filesystems[1461]: Resized partition /dev/vda9
Aug 12 23:56:11.527247 coreos-metadata[1456]: Aug 12 23:56:11.483 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Aug 12 23:56:11.527247 coreos-metadata[1456]: Aug 12 23:56:11.520 INFO Fetch successful
Aug 12 23:56:11.427327 systemd[1]: Starting systemd-logind.service - User Login Management...
Aug 12 23:56:11.530547 extend-filesystems[1476]: resize2fs 1.47.1 (20-May-2024)
Aug 12 23:56:11.433289 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Aug 12 23:56:11.538115 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks
Aug 12 23:56:11.439737 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Aug 12 23:56:11.442041 systemd[1]: Starting update-engine.service - Update Engine...
Aug 12 23:56:11.460132 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Aug 12 23:56:11.488491 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Aug 12 23:56:11.493841 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Aug 12 23:56:11.494199 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Aug 12 23:56:11.541450 jq[1471]: true
Aug 12 23:56:11.494667 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Aug 12 23:56:11.494996 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Aug 12 23:56:11.543096 dbus-daemon[1457]: [system] SELinux support is enabled
Aug 12 23:56:11.543910 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Aug 12 23:56:11.550928 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Aug 12 23:56:11.550964 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Aug 12 23:56:11.553753 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Aug 12 23:56:11.553862 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean).
Aug 12 23:56:11.553882 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Aug 12 23:56:11.567753 update_engine[1469]: I20250812 23:56:11.565398 1469 main.cc:92] Flatcar Update Engine starting
Aug 12 23:56:11.586722 systemd[1]: motdgen.service: Deactivated successfully.
Aug 12 23:56:11.587079 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Aug 12 23:56:11.592293 jq[1483]: true
Aug 12 23:56:11.595580 tar[1482]: linux-amd64/helm
Aug 12 23:56:11.597008 update_engine[1469]: I20250812 23:56:11.596279 1469 update_check_scheduler.cc:74] Next update check in 5m10s
Aug 12 23:56:11.601517 systemd[1]: Started update-engine.service - Update Engine.
Aug 12 23:56:11.615300 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Aug 12 23:56:11.640658 (ntainerd)[1494]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Aug 12 23:56:11.664085 kernel: EXT4-fs (vda9): resized filesystem to 15121403
Aug 12 23:56:11.678268 extend-filesystems[1476]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Aug 12 23:56:11.678268 extend-filesystems[1476]: old_desc_blocks = 1, new_desc_blocks = 8
Aug 12 23:56:11.678268 extend-filesystems[1476]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long.
Aug 12 23:56:11.729957 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 45 scanned by (udev-worker) (1395)
Aug 12 23:56:11.731366 extend-filesystems[1461]: Resized filesystem in /dev/vda9
Aug 12 23:56:11.731366 extend-filesystems[1461]: Found vdb
Aug 12 23:56:11.678420 systemd[1]: extend-filesystems.service: Deactivated successfully.
Aug 12 23:56:11.678657 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Aug 12 23:56:11.684501 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Aug 12 23:56:11.723459 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Aug 12 23:56:11.812204 bash[1521]: Updated "/home/core/.ssh/authorized_keys"
Aug 12 23:56:11.808568 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Aug 12 23:56:11.827622 systemd[1]: Starting sshkeys.service...
Aug 12 23:56:11.853293 systemd-logind[1467]: New seat seat0.
Aug 12 23:56:11.861334 systemd-logind[1467]: Watching system buttons on /dev/input/event1 (Power Button)
Aug 12 23:56:11.861362 systemd-logind[1467]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Aug 12 23:56:11.862010 systemd[1]: Started systemd-logind.service - User Login Management.
Aug 12 23:56:11.891803 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Aug 12 23:56:11.902240 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Aug 12 23:56:12.002679 coreos-metadata[1524]: Aug 12 23:56:12.002 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Aug 12 23:56:12.004573 sshd_keygen[1493]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Aug 12 23:56:12.015711 coreos-metadata[1524]: Aug 12 23:56:12.014 INFO Fetch successful
Aug 12 23:56:12.027082 unknown[1524]: wrote ssh authorized keys file for user: core
Aug 12 23:56:12.064714 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Aug 12 23:56:12.065866 update-ssh-keys[1537]: Updated "/home/core/.ssh/authorized_keys"
Aug 12 23:56:12.070147 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Aug 12 23:56:12.083799 systemd[1]: Starting issuegen.service - Generate /run/issue...
Aug 12 23:56:12.088618 systemd[1]: Finished sshkeys.service.
Aug 12 23:56:12.113593 systemd[1]: issuegen.service: Deactivated successfully.
Aug 12 23:56:12.113844 systemd[1]: Finished issuegen.service - Generate /run/issue.
Aug 12 23:56:12.127565 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Aug 12 23:56:12.151983 locksmithd[1501]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Aug 12 23:56:12.164910 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Aug 12 23:56:12.177439 systemd[1]: Started getty@tty1.service - Getty on tty1.
Aug 12 23:56:12.189220 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Aug 12 23:56:12.189897 systemd[1]: Reached target getty.target - Login Prompts.
Aug 12 23:56:12.243152 containerd[1494]: time="2025-08-12T23:56:12.242984957Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Aug 12 23:56:12.276055 containerd[1494]: time="2025-08-12T23:56:12.275589366Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Aug 12 23:56:12.278366 containerd[1494]: time="2025-08-12T23:56:12.278072309Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.100-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Aug 12 23:56:12.278366 containerd[1494]: time="2025-08-12T23:56:12.278117180Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Aug 12 23:56:12.278366 containerd[1494]: time="2025-08-12T23:56:12.278137776Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Aug 12 23:56:12.278366 containerd[1494]: time="2025-08-12T23:56:12.278324881Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Aug 12 23:56:12.278366 containerd[1494]: time="2025-08-12T23:56:12.278342222Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Aug 12 23:56:12.278652 containerd[1494]: time="2025-08-12T23:56:12.278402721Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Aug 12 23:56:12.278652 containerd[1494]: time="2025-08-12T23:56:12.278418096Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Aug 12 23:56:12.278652 containerd[1494]: time="2025-08-12T23:56:12.278633397Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Aug 12 23:56:12.278652 containerd[1494]: time="2025-08-12T23:56:12.278648729Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Aug 12 23:56:12.278780 containerd[1494]: time="2025-08-12T23:56:12.278661826Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Aug 12 23:56:12.278780 containerd[1494]: time="2025-08-12T23:56:12.278673295Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Aug 12 23:56:12.278780 containerd[1494]: time="2025-08-12T23:56:12.278757413Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Aug 12 23:56:12.279805 containerd[1494]: time="2025-08-12T23:56:12.279006357Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Aug 12 23:56:12.279805 containerd[1494]: time="2025-08-12T23:56:12.279208562Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Aug 12 23:56:12.279805 containerd[1494]: time="2025-08-12T23:56:12.279228053Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Aug 12 23:56:12.279805 containerd[1494]: time="2025-08-12T23:56:12.279322371Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Aug 12 23:56:12.279805 containerd[1494]: time="2025-08-12T23:56:12.279371998Z" level=info msg="metadata content store policy set" policy=shared
Aug 12 23:56:12.282524 containerd[1494]: time="2025-08-12T23:56:12.282475381Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Aug 12 23:56:12.282785 containerd[1494]: time="2025-08-12T23:56:12.282549067Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Aug 12 23:56:12.282785 containerd[1494]: time="2025-08-12T23:56:12.282568896Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Aug 12 23:56:12.282785 containerd[1494]: time="2025-08-12T23:56:12.282613233Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Aug 12 23:56:12.282785 containerd[1494]: time="2025-08-12T23:56:12.282629615Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Aug 12 23:56:12.282942 containerd[1494]: time="2025-08-12T23:56:12.282819158Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Aug 12 23:56:12.287143 containerd[1494]: time="2025-08-12T23:56:12.286511849Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Aug 12 23:56:12.287143 containerd[1494]: time="2025-08-12T23:56:12.286742663Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Aug 12 23:56:12.287143 containerd[1494]: time="2025-08-12T23:56:12.286764102Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Aug 12 23:56:12.287143 containerd[1494]: time="2025-08-12T23:56:12.286781458Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Aug 12 23:56:12.287143 containerd[1494]: time="2025-08-12T23:56:12.286796428Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Aug 12 23:56:12.287143 containerd[1494]: time="2025-08-12T23:56:12.286809096Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Aug 12 23:56:12.287143 containerd[1494]: time="2025-08-12T23:56:12.286823937Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Aug 12 23:56:12.287143 containerd[1494]: time="2025-08-12T23:56:12.286838254Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Aug 12 23:56:12.287143 containerd[1494]: time="2025-08-12T23:56:12.286853345Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Aug 12 23:56:12.287143 containerd[1494]: time="2025-08-12T23:56:12.286866624Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Aug 12 23:56:12.287143 containerd[1494]: time="2025-08-12T23:56:12.286879732Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Aug 12 23:56:12.287143 containerd[1494]: time="2025-08-12T23:56:12.286890448Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Aug 12 23:56:12.287143 containerd[1494]: time="2025-08-12T23:56:12.286913473Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Aug 12 23:56:12.287143 containerd[1494]: time="2025-08-12T23:56:12.286928213Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..."
type=io.containerd.grpc.v1 Aug 12 23:56:12.288003 containerd[1494]: time="2025-08-12T23:56:12.286940598Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Aug 12 23:56:12.288003 containerd[1494]: time="2025-08-12T23:56:12.286968994Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Aug 12 23:56:12.288003 containerd[1494]: time="2025-08-12T23:56:12.286984396Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Aug 12 23:56:12.288003 containerd[1494]: time="2025-08-12T23:56:12.286998036Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Aug 12 23:56:12.288003 containerd[1494]: time="2025-08-12T23:56:12.287010339Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Aug 12 23:56:12.288003 containerd[1494]: time="2025-08-12T23:56:12.287033968Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Aug 12 23:56:12.288003 containerd[1494]: time="2025-08-12T23:56:12.287047628Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Aug 12 23:56:12.288003 containerd[1494]: time="2025-08-12T23:56:12.287063856Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Aug 12 23:56:12.289093 containerd[1494]: time="2025-08-12T23:56:12.287076654Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Aug 12 23:56:12.289177 containerd[1494]: time="2025-08-12T23:56:12.289109030Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Aug 12 23:56:12.289177 containerd[1494]: time="2025-08-12T23:56:12.289133372Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Aug 12 23:56:12.289177 containerd[1494]: time="2025-08-12T23:56:12.289150176Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Aug 12 23:56:12.289252 containerd[1494]: time="2025-08-12T23:56:12.289179711Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Aug 12 23:56:12.289252 containerd[1494]: time="2025-08-12T23:56:12.289194265Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Aug 12 23:56:12.289252 containerd[1494]: time="2025-08-12T23:56:12.289208227Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Aug 12 23:56:12.289325 containerd[1494]: time="2025-08-12T23:56:12.289286646Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Aug 12 23:56:12.289325 containerd[1494]: time="2025-08-12T23:56:12.289310176Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Aug 12 23:56:12.289325 containerd[1494]: time="2025-08-12T23:56:12.289321825Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Aug 12 23:56:12.289388 containerd[1494]: time="2025-08-12T23:56:12.289333303Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Aug 12 23:56:12.289388 containerd[1494]: time="2025-08-12T23:56:12.289342629Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Aug 12 23:56:12.289388 containerd[1494]: time="2025-08-12T23:56:12.289354213Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." 
type=io.containerd.nri.v1 Aug 12 23:56:12.289388 containerd[1494]: time="2025-08-12T23:56:12.289365113Z" level=info msg="NRI interface is disabled by configuration." Aug 12 23:56:12.289388 containerd[1494]: time="2025-08-12T23:56:12.289375404Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Aug 12 23:56:12.289753 containerd[1494]: time="2025-08-12T23:56:12.289684485Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true 
SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Aug 12 23:56:12.289753 containerd[1494]: time="2025-08-12T23:56:12.289736664Z" level=info msg="Connect containerd service" Aug 12 23:56:12.289753 containerd[1494]: time="2025-08-12T23:56:12.289767600Z" level=info msg="using legacy CRI server" Aug 12 23:56:12.290163 containerd[1494]: time="2025-08-12T23:56:12.289774758Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Aug 12 23:56:12.290163 containerd[1494]: time="2025-08-12T23:56:12.289925869Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Aug 12 23:56:12.293039 containerd[1494]: time="2025-08-12T23:56:12.292252279Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 12 23:56:12.293604 containerd[1494]: time="2025-08-12T23:56:12.293566712Z" level=info msg="Start subscribing containerd event" Aug 12 
23:56:12.293870 containerd[1494]: time="2025-08-12T23:56:12.293847351Z" level=info msg="Start recovering state" Aug 12 23:56:12.294254 containerd[1494]: time="2025-08-12T23:56:12.294104406Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Aug 12 23:56:12.294349 containerd[1494]: time="2025-08-12T23:56:12.294331034Z" level=info msg=serving... address=/run/containerd/containerd.sock Aug 12 23:56:12.294407 containerd[1494]: time="2025-08-12T23:56:12.294389086Z" level=info msg="Start event monitor" Aug 12 23:56:12.294463 containerd[1494]: time="2025-08-12T23:56:12.294452488Z" level=info msg="Start snapshots syncer" Aug 12 23:56:12.294507 containerd[1494]: time="2025-08-12T23:56:12.294498845Z" level=info msg="Start cni network conf syncer for default" Aug 12 23:56:12.294550 containerd[1494]: time="2025-08-12T23:56:12.294541834Z" level=info msg="Start streaming server" Aug 12 23:56:12.294678 containerd[1494]: time="2025-08-12T23:56:12.294666560Z" level=info msg="containerd successfully booted in 0.054026s" Aug 12 23:56:12.294816 systemd[1]: Started containerd.service - containerd container runtime. Aug 12 23:56:12.473163 systemd-networkd[1370]: eth1: Gained IPv6LL Aug 12 23:56:12.477703 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Aug 12 23:56:12.481760 systemd[1]: Reached target network-online.target - Network is Online. Aug 12 23:56:12.484456 tar[1482]: linux-amd64/LICENSE Aug 12 23:56:12.484456 tar[1482]: linux-amd64/README.md Aug 12 23:56:12.498524 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 12 23:56:12.502310 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Aug 12 23:56:12.524131 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Aug 12 23:56:12.546989 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
Aug 12 23:56:12.857225 systemd-networkd[1370]: eth0: Gained IPv6LL Aug 12 23:56:13.633443 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 12 23:56:13.634525 systemd[1]: Reached target multi-user.target - Multi-User System. Aug 12 23:56:13.636542 systemd[1]: Startup finished in 1.077s (kernel) + 26.701s (initrd) + 6.275s (userspace) = 34.054s. Aug 12 23:56:13.644648 (kubelet)[1580]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 12 23:56:14.242326 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Aug 12 23:56:14.250636 systemd[1]: Started sshd@0-137.184.234.76:22-139.178.68.195:44862.service - OpenSSH per-connection server daemon (139.178.68.195:44862). Aug 12 23:56:14.349727 sshd[1591]: Accepted publickey for core from 139.178.68.195 port 44862 ssh2: RSA SHA256:Yd4cJaNOPrEdOKjK3Hl1fuqro0lLX1aY5TKeqt+Qp+4 Aug 12 23:56:14.351191 sshd-session[1591]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:56:14.362165 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Aug 12 23:56:14.368055 kubelet[1580]: E0812 23:56:14.366800 1580 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 12 23:56:14.370468 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Aug 12 23:56:14.370912 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 12 23:56:14.371140 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 12 23:56:14.371540 systemd[1]: kubelet.service: Consumed 1.340s CPU time, 266.6M memory peak. 
Aug 12 23:56:14.385101 systemd-logind[1467]: New session 1 of user core. Aug 12 23:56:14.395137 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Aug 12 23:56:14.405514 systemd[1]: Starting user@500.service - User Manager for UID 500... Aug 12 23:56:14.410129 (systemd)[1596]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Aug 12 23:56:14.414472 systemd-logind[1467]: New session c1 of user core. Aug 12 23:56:14.590880 systemd[1596]: Queued start job for default target default.target. Aug 12 23:56:14.596422 systemd[1596]: Created slice app.slice - User Application Slice. Aug 12 23:56:14.596491 systemd[1596]: Reached target paths.target - Paths. Aug 12 23:56:14.596562 systemd[1596]: Reached target timers.target - Timers. Aug 12 23:56:14.600207 systemd[1596]: Starting dbus.socket - D-Bus User Message Bus Socket... Aug 12 23:56:14.614866 systemd[1596]: Listening on dbus.socket - D-Bus User Message Bus Socket. Aug 12 23:56:14.615069 systemd[1596]: Reached target sockets.target - Sockets. Aug 12 23:56:14.615143 systemd[1596]: Reached target basic.target - Basic System. Aug 12 23:56:14.615198 systemd[1596]: Reached target default.target - Main User Target. Aug 12 23:56:14.615236 systemd[1596]: Startup finished in 191ms. Aug 12 23:56:14.615597 systemd[1]: Started user@500.service - User Manager for UID 500. Aug 12 23:56:14.625467 systemd[1]: Started session-1.scope - Session 1 of User core. Aug 12 23:56:14.703072 systemd[1]: Started sshd@1-137.184.234.76:22-139.178.68.195:44866.service - OpenSSH per-connection server daemon (139.178.68.195:44866). Aug 12 23:56:14.756282 sshd[1607]: Accepted publickey for core from 139.178.68.195 port 44866 ssh2: RSA SHA256:Yd4cJaNOPrEdOKjK3Hl1fuqro0lLX1aY5TKeqt+Qp+4 Aug 12 23:56:14.758296 sshd-session[1607]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:56:14.764924 systemd-logind[1467]: New session 2 of user core. 
Aug 12 23:56:14.771369 systemd[1]: Started session-2.scope - Session 2 of User core. Aug 12 23:56:14.834649 sshd[1609]: Connection closed by 139.178.68.195 port 44866 Aug 12 23:56:14.835320 sshd-session[1607]: pam_unix(sshd:session): session closed for user core Aug 12 23:56:14.855733 systemd[1]: sshd@1-137.184.234.76:22-139.178.68.195:44866.service: Deactivated successfully. Aug 12 23:56:14.858949 systemd[1]: session-2.scope: Deactivated successfully. Aug 12 23:56:14.861539 systemd-logind[1467]: Session 2 logged out. Waiting for processes to exit. Aug 12 23:56:14.876707 systemd[1]: Started sshd@2-137.184.234.76:22-139.178.68.195:44874.service - OpenSSH per-connection server daemon (139.178.68.195:44874). Aug 12 23:56:14.879107 systemd-logind[1467]: Removed session 2. Aug 12 23:56:14.927692 sshd[1614]: Accepted publickey for core from 139.178.68.195 port 44874 ssh2: RSA SHA256:Yd4cJaNOPrEdOKjK3Hl1fuqro0lLX1aY5TKeqt+Qp+4 Aug 12 23:56:14.929400 sshd-session[1614]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:56:14.937722 systemd-logind[1467]: New session 3 of user core. Aug 12 23:56:14.940271 systemd[1]: Started session-3.scope - Session 3 of User core. Aug 12 23:56:14.998232 sshd[1617]: Connection closed by 139.178.68.195 port 44874 Aug 12 23:56:14.998999 sshd-session[1614]: pam_unix(sshd:session): session closed for user core Aug 12 23:56:15.014444 systemd[1]: sshd@2-137.184.234.76:22-139.178.68.195:44874.service: Deactivated successfully. Aug 12 23:56:15.016674 systemd[1]: session-3.scope: Deactivated successfully. Aug 12 23:56:15.019195 systemd-logind[1467]: Session 3 logged out. Waiting for processes to exit. Aug 12 23:56:15.024911 systemd[1]: Started sshd@3-137.184.234.76:22-139.178.68.195:44878.service - OpenSSH per-connection server daemon (139.178.68.195:44878). Aug 12 23:56:15.026750 systemd-logind[1467]: Removed session 3. 
Aug 12 23:56:15.082420 sshd[1622]: Accepted publickey for core from 139.178.68.195 port 44878 ssh2: RSA SHA256:Yd4cJaNOPrEdOKjK3Hl1fuqro0lLX1aY5TKeqt+Qp+4 Aug 12 23:56:15.084394 sshd-session[1622]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:56:15.092774 systemd-logind[1467]: New session 4 of user core. Aug 12 23:56:15.102323 systemd[1]: Started session-4.scope - Session 4 of User core. Aug 12 23:56:15.163656 sshd[1625]: Connection closed by 139.178.68.195 port 44878 Aug 12 23:56:15.164409 sshd-session[1622]: pam_unix(sshd:session): session closed for user core Aug 12 23:56:15.181609 systemd[1]: sshd@3-137.184.234.76:22-139.178.68.195:44878.service: Deactivated successfully. Aug 12 23:56:15.183817 systemd[1]: session-4.scope: Deactivated successfully. Aug 12 23:56:15.185570 systemd-logind[1467]: Session 4 logged out. Waiting for processes to exit. Aug 12 23:56:15.197546 systemd[1]: Started sshd@4-137.184.234.76:22-139.178.68.195:44886.service - OpenSSH per-connection server daemon (139.178.68.195:44886). Aug 12 23:56:15.199308 systemd-logind[1467]: Removed session 4. Aug 12 23:56:15.248417 sshd[1630]: Accepted publickey for core from 139.178.68.195 port 44886 ssh2: RSA SHA256:Yd4cJaNOPrEdOKjK3Hl1fuqro0lLX1aY5TKeqt+Qp+4 Aug 12 23:56:15.250830 sshd-session[1630]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:56:15.257279 systemd-logind[1467]: New session 5 of user core. Aug 12 23:56:15.265358 systemd[1]: Started session-5.scope - Session 5 of User core. 
Aug 12 23:56:15.340177 sudo[1634]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Aug 12 23:56:15.340585 sudo[1634]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 12 23:56:15.358268 sudo[1634]: pam_unix(sudo:session): session closed for user root Aug 12 23:56:15.361820 sshd[1633]: Connection closed by 139.178.68.195 port 44886 Aug 12 23:56:15.363412 sshd-session[1630]: pam_unix(sshd:session): session closed for user core Aug 12 23:56:15.373607 systemd[1]: sshd@4-137.184.234.76:22-139.178.68.195:44886.service: Deactivated successfully. Aug 12 23:56:15.375612 systemd[1]: session-5.scope: Deactivated successfully. Aug 12 23:56:15.378271 systemd-logind[1467]: Session 5 logged out. Waiting for processes to exit. Aug 12 23:56:15.385413 systemd[1]: Started sshd@5-137.184.234.76:22-139.178.68.195:44896.service - OpenSSH per-connection server daemon (139.178.68.195:44896). Aug 12 23:56:15.387878 systemd-logind[1467]: Removed session 5. Aug 12 23:56:15.434140 sshd[1639]: Accepted publickey for core from 139.178.68.195 port 44896 ssh2: RSA SHA256:Yd4cJaNOPrEdOKjK3Hl1fuqro0lLX1aY5TKeqt+Qp+4 Aug 12 23:56:15.435491 sshd-session[1639]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:56:15.443805 systemd-logind[1467]: New session 6 of user core. Aug 12 23:56:15.449371 systemd[1]: Started session-6.scope - Session 6 of User core. 
Aug 12 23:56:15.510912 sudo[1644]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Aug 12 23:56:15.511340 sudo[1644]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 12 23:56:15.515985 sudo[1644]: pam_unix(sudo:session): session closed for user root Aug 12 23:56:15.523461 sudo[1643]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Aug 12 23:56:15.523808 sudo[1643]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 12 23:56:15.542093 systemd[1]: Starting audit-rules.service - Load Audit Rules... Aug 12 23:56:15.586973 augenrules[1666]: No rules Aug 12 23:56:15.589316 systemd[1]: audit-rules.service: Deactivated successfully. Aug 12 23:56:15.589573 systemd[1]: Finished audit-rules.service - Load Audit Rules. Aug 12 23:56:15.591364 sudo[1643]: pam_unix(sudo:session): session closed for user root Aug 12 23:56:15.596255 sshd[1642]: Connection closed by 139.178.68.195 port 44896 Aug 12 23:56:15.595307 sshd-session[1639]: pam_unix(sshd:session): session closed for user core Aug 12 23:56:15.605429 systemd[1]: sshd@5-137.184.234.76:22-139.178.68.195:44896.service: Deactivated successfully. Aug 12 23:56:15.607523 systemd[1]: session-6.scope: Deactivated successfully. Aug 12 23:56:15.608451 systemd-logind[1467]: Session 6 logged out. Waiting for processes to exit. Aug 12 23:56:15.615569 systemd[1]: Started sshd@6-137.184.234.76:22-139.178.68.195:44906.service - OpenSSH per-connection server daemon (139.178.68.195:44906). Aug 12 23:56:15.616614 systemd-logind[1467]: Removed session 6. 
Aug 12 23:56:15.680441 sshd[1674]: Accepted publickey for core from 139.178.68.195 port 44906 ssh2: RSA SHA256:Yd4cJaNOPrEdOKjK3Hl1fuqro0lLX1aY5TKeqt+Qp+4 Aug 12 23:56:15.682508 sshd-session[1674]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:56:15.689190 systemd-logind[1467]: New session 7 of user core. Aug 12 23:56:15.699353 systemd[1]: Started session-7.scope - Session 7 of User core. Aug 12 23:56:15.761650 sudo[1678]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Aug 12 23:56:15.762128 sudo[1678]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 12 23:56:16.270538 systemd[1]: Starting docker.service - Docker Application Container Engine... Aug 12 23:56:16.274243 (dockerd)[1695]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Aug 12 23:56:16.785071 dockerd[1695]: time="2025-08-12T23:56:16.784781888Z" level=info msg="Starting up" Aug 12 23:56:16.894012 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2105968867-merged.mount: Deactivated successfully. Aug 12 23:56:16.976236 dockerd[1695]: time="2025-08-12T23:56:16.975913478Z" level=info msg="Loading containers: start." Aug 12 23:56:17.182054 kernel: Initializing XFRM netlink socket Aug 12 23:56:17.251890 systemd-timesyncd[1356]: Contacted time server 68.183.107.237:123 (1.flatcar.pool.ntp.org). Aug 12 23:56:17.251979 systemd-timesyncd[1356]: Initial clock synchronization to Tue 2025-08-12 23:56:17.144678 UTC. Aug 12 23:56:17.298597 systemd-networkd[1370]: docker0: Link UP Aug 12 23:56:17.334355 dockerd[1695]: time="2025-08-12T23:56:17.334280505Z" level=info msg="Loading containers: done." 
Aug 12 23:56:17.356266 dockerd[1695]: time="2025-08-12T23:56:17.356169901Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Aug 12 23:56:17.356500 dockerd[1695]: time="2025-08-12T23:56:17.356338046Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Aug 12 23:56:17.356500 dockerd[1695]: time="2025-08-12T23:56:17.356482862Z" level=info msg="Daemon has completed initialization" Aug 12 23:56:17.394109 dockerd[1695]: time="2025-08-12T23:56:17.393233331Z" level=info msg="API listen on /run/docker.sock" Aug 12 23:56:17.393493 systemd[1]: Started docker.service - Docker Application Container Engine. Aug 12 23:56:17.886701 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3518556333-merged.mount: Deactivated successfully. Aug 12 23:56:18.296060 containerd[1494]: time="2025-08-12T23:56:18.295860669Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.11\"" Aug 12 23:56:18.892540 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2611199816.mount: Deactivated successfully. 
Aug 12 23:56:20.089225 containerd[1494]: time="2025-08-12T23:56:20.089038780Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:56:20.091053 containerd[1494]: time="2025-08-12T23:56:20.090924863Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.11: active requests=0, bytes read=28077759" Aug 12 23:56:20.091228 containerd[1494]: time="2025-08-12T23:56:20.091207161Z" level=info msg="ImageCreate event name:\"sha256:ea7fa3cfabed1b85e7de8e0a02356b6dcb7708442d6e4600d68abaebe1e9b1fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:56:20.094754 containerd[1494]: time="2025-08-12T23:56:20.094672823Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:a3d1c4440817725a1b503a7ccce94f3dce2b208ebf257b405dc2d97817df3dde\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:56:20.095931 containerd[1494]: time="2025-08-12T23:56:20.095710927Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.11\" with image id \"sha256:ea7fa3cfabed1b85e7de8e0a02356b6dcb7708442d6e4600d68abaebe1e9b1fc\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:a3d1c4440817725a1b503a7ccce94f3dce2b208ebf257b405dc2d97817df3dde\", size \"28074559\" in 1.799775851s" Aug 12 23:56:20.095931 containerd[1494]: time="2025-08-12T23:56:20.095750887Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.11\" returns image reference \"sha256:ea7fa3cfabed1b85e7de8e0a02356b6dcb7708442d6e4600d68abaebe1e9b1fc\"" Aug 12 23:56:20.096632 containerd[1494]: time="2025-08-12T23:56:20.096592998Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.11\"" Aug 12 23:56:21.492125 containerd[1494]: time="2025-08-12T23:56:21.491185645Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.11\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:56:21.493608 containerd[1494]: time="2025-08-12T23:56:21.493529533Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.11: active requests=0, bytes read=24713245" Aug 12 23:56:21.494841 containerd[1494]: time="2025-08-12T23:56:21.494785113Z" level=info msg="ImageCreate event name:\"sha256:c057eceea4b436b01f9ce394734cfb06f13b2a3688c3983270e99743370b6051\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:56:21.501088 containerd[1494]: time="2025-08-12T23:56:21.500540126Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:0f19de157f3d251f5ddeb6e9d026895bc55cb02592874b326fa345c57e5e2848\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:56:21.502271 containerd[1494]: time="2025-08-12T23:56:21.502100478Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.11\" with image id \"sha256:c057eceea4b436b01f9ce394734cfb06f13b2a3688c3983270e99743370b6051\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:0f19de157f3d251f5ddeb6e9d026895bc55cb02592874b326fa345c57e5e2848\", size \"26315079\" in 1.405299018s" Aug 12 23:56:21.502271 containerd[1494]: time="2025-08-12T23:56:21.502155770Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.11\" returns image reference \"sha256:c057eceea4b436b01f9ce394734cfb06f13b2a3688c3983270e99743370b6051\"" Aug 12 23:56:21.503134 containerd[1494]: time="2025-08-12T23:56:21.502856609Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.11\"" Aug 12 23:56:22.661427 containerd[1494]: time="2025-08-12T23:56:22.661243728Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:56:22.662727 containerd[1494]: 
time="2025-08-12T23:56:22.662417022Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.11: active requests=0, bytes read=18783700" Aug 12 23:56:22.664101 containerd[1494]: time="2025-08-12T23:56:22.663364136Z" level=info msg="ImageCreate event name:\"sha256:64e6a0b453108c87da0bb61473b35fd54078119a09edc56a4c8cb31602437c58\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:56:22.666783 containerd[1494]: time="2025-08-12T23:56:22.666735038Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:1a9b59b3bfa6c1f1911f6f865a795620c461d079e413061bb71981cadd67f39d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:56:22.668783 containerd[1494]: time="2025-08-12T23:56:22.668718959Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.11\" with image id \"sha256:64e6a0b453108c87da0bb61473b35fd54078119a09edc56a4c8cb31602437c58\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:1a9b59b3bfa6c1f1911f6f865a795620c461d079e413061bb71981cadd67f39d\", size \"20385552\" in 1.165820711s" Aug 12 23:56:22.668999 containerd[1494]: time="2025-08-12T23:56:22.668973171Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.11\" returns image reference \"sha256:64e6a0b453108c87da0bb61473b35fd54078119a09edc56a4c8cb31602437c58\"" Aug 12 23:56:22.669735 containerd[1494]: time="2025-08-12T23:56:22.669686979Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.11\"" Aug 12 23:56:23.749699 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3891067075.mount: Deactivated successfully. 
Aug 12 23:56:24.280229 containerd[1494]: time="2025-08-12T23:56:24.280127595Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:56:24.283649 containerd[1494]: time="2025-08-12T23:56:24.283487593Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.11: active requests=0, bytes read=30383612" Aug 12 23:56:24.286066 containerd[1494]: time="2025-08-12T23:56:24.285663942Z" level=info msg="ImageCreate event name:\"sha256:0cec28fd5c3c446ec52e2886ddea38bf7f7e17755aa5d0095d50d3df5914a8fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:56:24.289089 containerd[1494]: time="2025-08-12T23:56:24.289035169Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:a31da847792c5e7e92e91b78da1ad21d693e4b2b48d0e9f4610c8764dc2a5d79\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:56:24.289818 containerd[1494]: time="2025-08-12T23:56:24.289762815Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.11\" with image id \"sha256:0cec28fd5c3c446ec52e2886ddea38bf7f7e17755aa5d0095d50d3df5914a8fd\", repo tag \"registry.k8s.io/kube-proxy:v1.31.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:a31da847792c5e7e92e91b78da1ad21d693e4b2b48d0e9f4610c8764dc2a5d79\", size \"30382631\" in 1.61988449s" Aug 12 23:56:24.289818 containerd[1494]: time="2025-08-12T23:56:24.289818519Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.11\" returns image reference \"sha256:0cec28fd5c3c446ec52e2886ddea38bf7f7e17755aa5d0095d50d3df5914a8fd\"" Aug 12 23:56:24.290830 containerd[1494]: time="2025-08-12T23:56:24.290360056Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Aug 12 23:56:24.291968 systemd-resolved[1334]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. 
Aug 12 23:56:24.550306 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Aug 12 23:56:24.557404 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 12 23:56:24.728742 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 12 23:56:24.740148 (kubelet)[1969]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 12 23:56:24.835507 kubelet[1969]: E0812 23:56:24.833750 1969 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 12 23:56:24.836161 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1154877083.mount: Deactivated successfully. Aug 12 23:56:24.842627 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 12 23:56:24.842862 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 12 23:56:24.844293 systemd[1]: kubelet.service: Consumed 213ms CPU time, 110.9M memory peak. 
Aug 12 23:56:25.909157 containerd[1494]: time="2025-08-12T23:56:25.909076843Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:56:25.911037 containerd[1494]: time="2025-08-12T23:56:25.910856158Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Aug 12 23:56:25.911567 containerd[1494]: time="2025-08-12T23:56:25.911418196Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:56:25.918797 containerd[1494]: time="2025-08-12T23:56:25.918628382Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:56:25.923737 containerd[1494]: time="2025-08-12T23:56:25.923149160Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.632748615s" Aug 12 23:56:25.923737 containerd[1494]: time="2025-08-12T23:56:25.923218769Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Aug 12 23:56:25.925804 containerd[1494]: time="2025-08-12T23:56:25.925520139Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Aug 12 23:56:26.400221 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3289106028.mount: Deactivated successfully. 
Aug 12 23:56:26.405066 containerd[1494]: time="2025-08-12T23:56:26.404399125Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:56:26.405823 containerd[1494]: time="2025-08-12T23:56:26.405449344Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Aug 12 23:56:26.405823 containerd[1494]: time="2025-08-12T23:56:26.405569347Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:56:26.407807 containerd[1494]: time="2025-08-12T23:56:26.407744447Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:56:26.408916 containerd[1494]: time="2025-08-12T23:56:26.408628389Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 483.069301ms" Aug 12 23:56:26.408916 containerd[1494]: time="2025-08-12T23:56:26.408668606Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Aug 12 23:56:26.409193 containerd[1494]: time="2025-08-12T23:56:26.409127133Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Aug 12 23:56:26.893415 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1299765700.mount: Deactivated successfully. Aug 12 23:56:27.385301 systemd-resolved[1334]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2. 
Aug 12 23:56:28.654276 containerd[1494]: time="2025-08-12T23:56:28.654198207Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:56:28.655954 containerd[1494]: time="2025-08-12T23:56:28.655886785Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780013" Aug 12 23:56:28.656856 containerd[1494]: time="2025-08-12T23:56:28.656521610Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:56:28.694052 containerd[1494]: time="2025-08-12T23:56:28.692365369Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:56:28.694389 containerd[1494]: time="2025-08-12T23:56:28.694353130Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 2.285192057s" Aug 12 23:56:28.694485 containerd[1494]: time="2025-08-12T23:56:28.694464978Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Aug 12 23:56:31.117660 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 12 23:56:31.117862 systemd[1]: kubelet.service: Consumed 213ms CPU time, 110.9M memory peak. Aug 12 23:56:31.125399 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 12 23:56:31.165634 systemd[1]: Reload requested from client PID 2115 ('systemctl') (unit session-7.scope)... 
Aug 12 23:56:31.165837 systemd[1]: Reloading... Aug 12 23:56:31.343074 zram_generator::config[2160]: No configuration found. Aug 12 23:56:31.526971 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 12 23:56:31.712913 systemd[1]: Reloading finished in 546 ms. Aug 12 23:56:31.781945 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 12 23:56:31.787800 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Aug 12 23:56:31.791445 systemd[1]: kubelet.service: Deactivated successfully. Aug 12 23:56:31.791693 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 12 23:56:31.791751 systemd[1]: kubelet.service: Consumed 124ms CPU time, 98.2M memory peak. Aug 12 23:56:31.797472 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 12 23:56:31.928666 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 12 23:56:31.938818 (kubelet)[2215]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 12 23:56:31.999759 kubelet[2215]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 12 23:56:31.999759 kubelet[2215]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Aug 12 23:56:31.999759 kubelet[2215]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Aug 12 23:56:32.000286 kubelet[2215]: I0812 23:56:31.999768 2215 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 12 23:56:32.503051 kubelet[2215]: I0812 23:56:32.502962 2215 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Aug 12 23:56:32.503051 kubelet[2215]: I0812 23:56:32.503009 2215 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 12 23:56:32.503411 kubelet[2215]: I0812 23:56:32.503384 2215 server.go:934] "Client rotation is on, will bootstrap in background" Aug 12 23:56:32.528376 kubelet[2215]: E0812 23:56:32.528314 2215 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://137.184.234.76:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 137.184.234.76:6443: connect: connection refused" logger="UnhandledError" Aug 12 23:56:32.531809 kubelet[2215]: I0812 23:56:32.530312 2215 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 12 23:56:32.541984 kubelet[2215]: E0812 23:56:32.541943 2215 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Aug 12 23:56:32.542232 kubelet[2215]: I0812 23:56:32.542217 2215 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Aug 12 23:56:32.547411 kubelet[2215]: I0812 23:56:32.547373 2215 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 12 23:56:32.548324 kubelet[2215]: I0812 23:56:32.548289 2215 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Aug 12 23:56:32.548758 kubelet[2215]: I0812 23:56:32.548711 2215 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 12 23:56:32.549111 kubelet[2215]: I0812 23:56:32.548844 2215 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230.2.2-9-8f36bdb456","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","Topolog
yManagerPolicyOptions":null,"CgroupVersion":2} Aug 12 23:56:32.549306 kubelet[2215]: I0812 23:56:32.549283 2215 topology_manager.go:138] "Creating topology manager with none policy" Aug 12 23:56:32.549365 kubelet[2215]: I0812 23:56:32.549357 2215 container_manager_linux.go:300] "Creating device plugin manager" Aug 12 23:56:32.549538 kubelet[2215]: I0812 23:56:32.549527 2215 state_mem.go:36] "Initialized new in-memory state store" Aug 12 23:56:32.552177 kubelet[2215]: I0812 23:56:32.552145 2215 kubelet.go:408] "Attempting to sync node with API server" Aug 12 23:56:32.552347 kubelet[2215]: I0812 23:56:32.552335 2215 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 12 23:56:32.552437 kubelet[2215]: I0812 23:56:32.552429 2215 kubelet.go:314] "Adding apiserver pod source" Aug 12 23:56:32.552600 kubelet[2215]: I0812 23:56:32.552588 2215 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 12 23:56:32.556220 kubelet[2215]: W0812 23:56:32.556143 2215 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://137.184.234.76:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.2.2-9-8f36bdb456&limit=500&resourceVersion=0": dial tcp 137.184.234.76:6443: connect: connection refused Aug 12 23:56:32.556387 kubelet[2215]: E0812 23:56:32.556229 2215 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://137.184.234.76:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.2.2-9-8f36bdb456&limit=500&resourceVersion=0\": dial tcp 137.184.234.76:6443: connect: connection refused" logger="UnhandledError" Aug 12 23:56:32.556387 kubelet[2215]: I0812 23:56:32.556365 2215 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Aug 12 23:56:32.559884 kubelet[2215]: I0812 23:56:32.559837 2215 kubelet.go:837] "Not starting 
ClusterTrustBundle informer because we are in static kubelet mode" Aug 12 23:56:32.560603 kubelet[2215]: W0812 23:56:32.560565 2215 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Aug 12 23:56:32.562284 kubelet[2215]: W0812 23:56:32.562238 2215 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://137.184.234.76:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 137.184.234.76:6443: connect: connection refused Aug 12 23:56:32.562430 kubelet[2215]: E0812 23:56:32.562415 2215 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://137.184.234.76:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 137.184.234.76:6443: connect: connection refused" logger="UnhandledError" Aug 12 23:56:32.562647 kubelet[2215]: I0812 23:56:32.562616 2215 server.go:1274] "Started kubelet" Aug 12 23:56:32.562906 kubelet[2215]: I0812 23:56:32.562862 2215 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Aug 12 23:56:32.565829 kubelet[2215]: I0812 23:56:32.565775 2215 server.go:449] "Adding debug handlers to kubelet server" Aug 12 23:56:32.568756 kubelet[2215]: I0812 23:56:32.567993 2215 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 12 23:56:32.568756 kubelet[2215]: I0812 23:56:32.568355 2215 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 12 23:56:32.570320 kubelet[2215]: I0812 23:56:32.570135 2215 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 12 23:56:32.570714 kubelet[2215]: E0812 23:56:32.568675 2215 event.go:368] "Unable to write event (may retry after sleeping)" err="Post 
\"https://137.184.234.76:6443/api/v1/namespaces/default/events\": dial tcp 137.184.234.76:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4230.2.2-9-8f36bdb456.185b2a4d993afea2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230.2.2-9-8f36bdb456,UID:ci-4230.2.2-9-8f36bdb456,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230.2.2-9-8f36bdb456,},FirstTimestamp:2025-08-12 23:56:32.562568866 +0000 UTC m=+0.619132665,LastTimestamp:2025-08-12 23:56:32.562568866 +0000 UTC m=+0.619132665,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230.2.2-9-8f36bdb456,}" Aug 12 23:56:32.570969 kubelet[2215]: I0812 23:56:32.570780 2215 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 12 23:56:32.579333 kubelet[2215]: I0812 23:56:32.579092 2215 volume_manager.go:289] "Starting Kubelet Volume Manager" Aug 12 23:56:32.579594 kubelet[2215]: E0812 23:56:32.579430 2215 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.2.2-9-8f36bdb456\" not found" Aug 12 23:56:32.581195 kubelet[2215]: I0812 23:56:32.580344 2215 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Aug 12 23:56:32.581195 kubelet[2215]: I0812 23:56:32.580629 2215 reconciler.go:26] "Reconciler: start to sync state" Aug 12 23:56:32.582304 kubelet[2215]: W0812 23:56:32.582246 2215 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://137.184.234.76:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 137.184.234.76:6443: connect: connection refused Aug 12 23:56:32.582488 kubelet[2215]: E0812 23:56:32.582311 2215 
reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://137.184.234.76:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 137.184.234.76:6443: connect: connection refused" logger="UnhandledError" Aug 12 23:56:32.585883 kubelet[2215]: E0812 23:56:32.583470 2215 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://137.184.234.76:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.2.2-9-8f36bdb456?timeout=10s\": dial tcp 137.184.234.76:6443: connect: connection refused" interval="200ms" Aug 12 23:56:32.589493 kubelet[2215]: I0812 23:56:32.589458 2215 factory.go:221] Registration of the containerd container factory successfully Aug 12 23:56:32.589493 kubelet[2215]: I0812 23:56:32.589481 2215 factory.go:221] Registration of the systemd container factory successfully Aug 12 23:56:32.589695 kubelet[2215]: I0812 23:56:32.589563 2215 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 12 23:56:32.604577 kubelet[2215]: I0812 23:56:32.604483 2215 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 12 23:56:32.606304 kubelet[2215]: I0812 23:56:32.606267 2215 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Aug 12 23:56:32.606304 kubelet[2215]: I0812 23:56:32.606302 2215 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 12 23:56:32.606464 kubelet[2215]: I0812 23:56:32.606330 2215 kubelet.go:2321] "Starting kubelet main sync loop" Aug 12 23:56:32.606464 kubelet[2215]: E0812 23:56:32.606391 2215 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 12 23:56:32.616204 kubelet[2215]: W0812 23:56:32.616118 2215 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://137.184.234.76:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 137.184.234.76:6443: connect: connection refused Aug 12 23:56:32.616478 kubelet[2215]: E0812 23:56:32.616216 2215 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://137.184.234.76:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 137.184.234.76:6443: connect: connection refused" logger="UnhandledError" Aug 12 23:56:32.619699 kubelet[2215]: I0812 23:56:32.619459 2215 cpu_manager.go:214] "Starting CPU manager" policy="none" Aug 12 23:56:32.619699 kubelet[2215]: I0812 23:56:32.619484 2215 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Aug 12 23:56:32.619699 kubelet[2215]: I0812 23:56:32.619506 2215 state_mem.go:36] "Initialized new in-memory state store" Aug 12 23:56:32.621189 kubelet[2215]: I0812 23:56:32.621145 2215 policy_none.go:49] "None policy: Start" Aug 12 23:56:32.622515 kubelet[2215]: I0812 23:56:32.622457 2215 memory_manager.go:170] "Starting memorymanager" policy="None" Aug 12 23:56:32.622515 kubelet[2215]: I0812 23:56:32.622517 2215 state_mem.go:35] "Initializing new in-memory state store" Aug 12 23:56:32.630629 systemd[1]: Created slice kubepods.slice - 
libcontainer container kubepods.slice. Aug 12 23:56:32.645145 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Aug 12 23:56:32.650617 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Aug 12 23:56:32.660567 kubelet[2215]: I0812 23:56:32.660534 2215 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 12 23:56:32.660888 kubelet[2215]: I0812 23:56:32.660744 2215 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 12 23:56:32.660888 kubelet[2215]: I0812 23:56:32.660759 2215 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 12 23:56:32.661403 kubelet[2215]: I0812 23:56:32.661385 2215 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 12 23:56:32.665496 kubelet[2215]: E0812 23:56:32.665469 2215 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4230.2.2-9-8f36bdb456\" not found" Aug 12 23:56:32.717989 systemd[1]: Created slice kubepods-burstable-podbe5e76c0629c907fb54eb7ea3b65bd3b.slice - libcontainer container kubepods-burstable-podbe5e76c0629c907fb54eb7ea3b65bd3b.slice. Aug 12 23:56:32.742106 systemd[1]: Created slice kubepods-burstable-podda74f60e70e194e172a4709368a3085e.slice - libcontainer container kubepods-burstable-podda74f60e70e194e172a4709368a3085e.slice. Aug 12 23:56:32.746706 systemd[1]: Created slice kubepods-burstable-pod5ca5dc8a7322b8f6a36602233072f709.slice - libcontainer container kubepods-burstable-pod5ca5dc8a7322b8f6a36602233072f709.slice. 
Aug 12 23:56:32.762589 kubelet[2215]: I0812 23:56:32.762450 2215 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230.2.2-9-8f36bdb456" Aug 12 23:56:32.763056 kubelet[2215]: E0812 23:56:32.762959 2215 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://137.184.234.76:6443/api/v1/nodes\": dial tcp 137.184.234.76:6443: connect: connection refused" node="ci-4230.2.2-9-8f36bdb456" Aug 12 23:56:32.784153 kubelet[2215]: E0812 23:56:32.783966 2215 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://137.184.234.76:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.2.2-9-8f36bdb456?timeout=10s\": dial tcp 137.184.234.76:6443: connect: connection refused" interval="400ms" Aug 12 23:56:32.881758 kubelet[2215]: I0812 23:56:32.881679 2215 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/be5e76c0629c907fb54eb7ea3b65bd3b-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230.2.2-9-8f36bdb456\" (UID: \"be5e76c0629c907fb54eb7ea3b65bd3b\") " pod="kube-system/kube-apiserver-ci-4230.2.2-9-8f36bdb456" Aug 12 23:56:32.881758 kubelet[2215]: I0812 23:56:32.881747 2215 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/da74f60e70e194e172a4709368a3085e-ca-certs\") pod \"kube-controller-manager-ci-4230.2.2-9-8f36bdb456\" (UID: \"da74f60e70e194e172a4709368a3085e\") " pod="kube-system/kube-controller-manager-ci-4230.2.2-9-8f36bdb456" Aug 12 23:56:32.881758 kubelet[2215]: I0812 23:56:32.881773 2215 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/da74f60e70e194e172a4709368a3085e-flexvolume-dir\") pod \"kube-controller-manager-ci-4230.2.2-9-8f36bdb456\" (UID: 
\"da74f60e70e194e172a4709368a3085e\") " pod="kube-system/kube-controller-manager-ci-4230.2.2-9-8f36bdb456" Aug 12 23:56:32.882002 kubelet[2215]: I0812 23:56:32.881794 2215 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/da74f60e70e194e172a4709368a3085e-kubeconfig\") pod \"kube-controller-manager-ci-4230.2.2-9-8f36bdb456\" (UID: \"da74f60e70e194e172a4709368a3085e\") " pod="kube-system/kube-controller-manager-ci-4230.2.2-9-8f36bdb456" Aug 12 23:56:32.882002 kubelet[2215]: I0812 23:56:32.881815 2215 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/be5e76c0629c907fb54eb7ea3b65bd3b-k8s-certs\") pod \"kube-apiserver-ci-4230.2.2-9-8f36bdb456\" (UID: \"be5e76c0629c907fb54eb7ea3b65bd3b\") " pod="kube-system/kube-apiserver-ci-4230.2.2-9-8f36bdb456" Aug 12 23:56:32.882002 kubelet[2215]: I0812 23:56:32.881840 2215 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/da74f60e70e194e172a4709368a3085e-k8s-certs\") pod \"kube-controller-manager-ci-4230.2.2-9-8f36bdb456\" (UID: \"da74f60e70e194e172a4709368a3085e\") " pod="kube-system/kube-controller-manager-ci-4230.2.2-9-8f36bdb456" Aug 12 23:56:32.882002 kubelet[2215]: I0812 23:56:32.881865 2215 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/da74f60e70e194e172a4709368a3085e-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230.2.2-9-8f36bdb456\" (UID: \"da74f60e70e194e172a4709368a3085e\") " pod="kube-system/kube-controller-manager-ci-4230.2.2-9-8f36bdb456" Aug 12 23:56:32.882002 kubelet[2215]: I0812 23:56:32.881891 2215 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5ca5dc8a7322b8f6a36602233072f709-kubeconfig\") pod \"kube-scheduler-ci-4230.2.2-9-8f36bdb456\" (UID: \"5ca5dc8a7322b8f6a36602233072f709\") " pod="kube-system/kube-scheduler-ci-4230.2.2-9-8f36bdb456" Aug 12 23:56:32.882266 kubelet[2215]: I0812 23:56:32.881911 2215 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/be5e76c0629c907fb54eb7ea3b65bd3b-ca-certs\") pod \"kube-apiserver-ci-4230.2.2-9-8f36bdb456\" (UID: \"be5e76c0629c907fb54eb7ea3b65bd3b\") " pod="kube-system/kube-apiserver-ci-4230.2.2-9-8f36bdb456" Aug 12 23:56:32.965565 kubelet[2215]: I0812 23:56:32.965211 2215 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230.2.2-9-8f36bdb456" Aug 12 23:56:32.965734 kubelet[2215]: E0812 23:56:32.965660 2215 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://137.184.234.76:6443/api/v1/nodes\": dial tcp 137.184.234.76:6443: connect: connection refused" node="ci-4230.2.2-9-8f36bdb456" Aug 12 23:56:33.039082 kubelet[2215]: E0812 23:56:33.038866 2215 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 12 23:56:33.042445 containerd[1494]: time="2025-08-12T23:56:33.040481989Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230.2.2-9-8f36bdb456,Uid:be5e76c0629c907fb54eb7ea3b65bd3b,Namespace:kube-system,Attempt:0,}" Aug 12 23:56:33.047352 kubelet[2215]: E0812 23:56:33.046440 2215 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 12 23:56:33.047627 containerd[1494]: time="2025-08-12T23:56:33.047173632Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-ci-4230.2.2-9-8f36bdb456,Uid:da74f60e70e194e172a4709368a3085e,Namespace:kube-system,Attempt:0,}" Aug 12 23:56:33.049769 systemd-resolved[1334]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.2. Aug 12 23:56:33.050622 kubelet[2215]: E0812 23:56:33.050573 2215 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 12 23:56:33.052191 containerd[1494]: time="2025-08-12T23:56:33.052143084Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230.2.2-9-8f36bdb456,Uid:5ca5dc8a7322b8f6a36602233072f709,Namespace:kube-system,Attempt:0,}" Aug 12 23:56:33.184735 kubelet[2215]: E0812 23:56:33.184690 2215 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://137.184.234.76:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.2.2-9-8f36bdb456?timeout=10s\": dial tcp 137.184.234.76:6443: connect: connection refused" interval="800ms" Aug 12 23:56:33.368139 kubelet[2215]: I0812 23:56:33.367817 2215 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230.2.2-9-8f36bdb456" Aug 12 23:56:33.368334 kubelet[2215]: E0812 23:56:33.368245 2215 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://137.184.234.76:6443/api/v1/nodes\": dial tcp 137.184.234.76:6443: connect: connection refused" node="ci-4230.2.2-9-8f36bdb456" Aug 12 23:56:33.433541 kubelet[2215]: W0812 23:56:33.433489 2215 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://137.184.234.76:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 137.184.234.76:6443: connect: connection refused Aug 12 23:56:33.433699 kubelet[2215]: E0812 23:56:33.433560 2215 reflector.go:158] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://137.184.234.76:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 137.184.234.76:6443: connect: connection refused" logger="UnhandledError" Aug 12 23:56:33.501471 kubelet[2215]: E0812 23:56:33.501342 2215 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://137.184.234.76:6443/api/v1/namespaces/default/events\": dial tcp 137.184.234.76:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4230.2.2-9-8f36bdb456.185b2a4d993afea2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230.2.2-9-8f36bdb456,UID:ci-4230.2.2-9-8f36bdb456,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230.2.2-9-8f36bdb456,},FirstTimestamp:2025-08-12 23:56:32.562568866 +0000 UTC m=+0.619132665,LastTimestamp:2025-08-12 23:56:32.562568866 +0000 UTC m=+0.619132665,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230.2.2-9-8f36bdb456,}" Aug 12 23:56:33.525564 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1607651293.mount: Deactivated successfully. 
Aug 12 23:56:33.531070 containerd[1494]: time="2025-08-12T23:56:33.530315412Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 12 23:56:33.532117 containerd[1494]: time="2025-08-12T23:56:33.532006244Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Aug 12 23:56:33.533684 containerd[1494]: time="2025-08-12T23:56:33.533647183Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 12 23:56:33.535528 containerd[1494]: time="2025-08-12T23:56:33.535472228Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 12 23:56:33.537956 containerd[1494]: time="2025-08-12T23:56:33.537885121Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Aug 12 23:56:33.539760 containerd[1494]: time="2025-08-12T23:56:33.539713786Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 12 23:56:33.541049 containerd[1494]: time="2025-08-12T23:56:33.540657140Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Aug 12 23:56:33.545090 containerd[1494]: time="2025-08-12T23:56:33.545007312Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 12 23:56:33.546242 
containerd[1494]: time="2025-08-12T23:56:33.546144583Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 498.376024ms" Aug 12 23:56:33.550409 containerd[1494]: time="2025-08-12T23:56:33.550357692Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 502.91992ms" Aug 12 23:56:33.552994 containerd[1494]: time="2025-08-12T23:56:33.552942688Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 500.498165ms" Aug 12 23:56:33.645875 kubelet[2215]: W0812 23:56:33.644538 2215 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://137.184.234.76:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 137.184.234.76:6443: connect: connection refused Aug 12 23:56:33.645875 kubelet[2215]: E0812 23:56:33.644605 2215 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://137.184.234.76:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 137.184.234.76:6443: connect: connection refused" logger="UnhandledError" Aug 12 23:56:33.744727 containerd[1494]: time="2025-08-12T23:56:33.743655950Z" 
level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 12 23:56:33.744727 containerd[1494]: time="2025-08-12T23:56:33.743720040Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 12 23:56:33.744727 containerd[1494]: time="2025-08-12T23:56:33.743734962Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 12 23:56:33.744727 containerd[1494]: time="2025-08-12T23:56:33.743845533Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 12 23:56:33.745721 containerd[1494]: time="2025-08-12T23:56:33.743215852Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 12 23:56:33.747072 containerd[1494]: time="2025-08-12T23:56:33.745282793Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 12 23:56:33.747072 containerd[1494]: time="2025-08-12T23:56:33.745337844Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 12 23:56:33.747072 containerd[1494]: time="2025-08-12T23:56:33.745520135Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 12 23:56:33.754051 containerd[1494]: time="2025-08-12T23:56:33.751816701Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 12 23:56:33.754051 containerd[1494]: time="2025-08-12T23:56:33.752308822Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 12 23:56:33.754051 containerd[1494]: time="2025-08-12T23:56:33.752475641Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 12 23:56:33.754051 containerd[1494]: time="2025-08-12T23:56:33.752809144Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 12 23:56:33.781363 systemd[1]: Started cri-containerd-edb36bb0d5aaecbe761db8d04dc6a89b7cab2e664c2d9bfafd373cb32fe2e93c.scope - libcontainer container edb36bb0d5aaecbe761db8d04dc6a89b7cab2e664c2d9bfafd373cb32fe2e93c. Aug 12 23:56:33.798316 systemd[1]: Started cri-containerd-4fa611e7de3efa5db8789c42c620c3ff532c23c795fff7e3aae982aeac882fea.scope - libcontainer container 4fa611e7de3efa5db8789c42c620c3ff532c23c795fff7e3aae982aeac882fea. Aug 12 23:56:33.799974 systemd[1]: Started cri-containerd-564c4396e417df588886ff1ba3e76d2f9c41eeea2501eedf26cfda913cc12568.scope - libcontainer container 564c4396e417df588886ff1ba3e76d2f9c41eeea2501eedf26cfda913cc12568. 
Aug 12 23:56:33.873254 containerd[1494]: time="2025-08-12T23:56:33.873208097Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230.2.2-9-8f36bdb456,Uid:be5e76c0629c907fb54eb7ea3b65bd3b,Namespace:kube-system,Attempt:0,} returns sandbox id \"564c4396e417df588886ff1ba3e76d2f9c41eeea2501eedf26cfda913cc12568\"" Aug 12 23:56:33.880824 kubelet[2215]: E0812 23:56:33.880783 2215 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 12 23:56:33.891044 containerd[1494]: time="2025-08-12T23:56:33.890970666Z" level=info msg="CreateContainer within sandbox \"564c4396e417df588886ff1ba3e76d2f9c41eeea2501eedf26cfda913cc12568\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Aug 12 23:56:33.900331 containerd[1494]: time="2025-08-12T23:56:33.900208658Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230.2.2-9-8f36bdb456,Uid:5ca5dc8a7322b8f6a36602233072f709,Namespace:kube-system,Attempt:0,} returns sandbox id \"edb36bb0d5aaecbe761db8d04dc6a89b7cab2e664c2d9bfafd373cb32fe2e93c\"" Aug 12 23:56:33.901003 kubelet[2215]: E0812 23:56:33.900971 2215 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 12 23:56:33.904467 containerd[1494]: time="2025-08-12T23:56:33.904351861Z" level=info msg="CreateContainer within sandbox \"edb36bb0d5aaecbe761db8d04dc6a89b7cab2e664c2d9bfafd373cb32fe2e93c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Aug 12 23:56:33.911139 containerd[1494]: time="2025-08-12T23:56:33.911062170Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230.2.2-9-8f36bdb456,Uid:da74f60e70e194e172a4709368a3085e,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"4fa611e7de3efa5db8789c42c620c3ff532c23c795fff7e3aae982aeac882fea\"" Aug 12 23:56:33.912063 kubelet[2215]: W0812 23:56:33.911711 2215 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://137.184.234.76:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.2.2-9-8f36bdb456&limit=500&resourceVersion=0": dial tcp 137.184.234.76:6443: connect: connection refused Aug 12 23:56:33.912063 kubelet[2215]: E0812 23:56:33.911778 2215 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://137.184.234.76:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.2.2-9-8f36bdb456&limit=500&resourceVersion=0\": dial tcp 137.184.234.76:6443: connect: connection refused" logger="UnhandledError" Aug 12 23:56:33.914069 kubelet[2215]: E0812 23:56:33.913901 2215 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 12 23:56:33.915935 containerd[1494]: time="2025-08-12T23:56:33.915538899Z" level=info msg="CreateContainer within sandbox \"564c4396e417df588886ff1ba3e76d2f9c41eeea2501eedf26cfda913cc12568\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"ebb7c3d78ac8bc0e63921a39e4a4ec0beeca16b280cfa0e2465b41cea694e5ea\"" Aug 12 23:56:33.916710 containerd[1494]: time="2025-08-12T23:56:33.916634407Z" level=info msg="StartContainer for \"ebb7c3d78ac8bc0e63921a39e4a4ec0beeca16b280cfa0e2465b41cea694e5ea\"" Aug 12 23:56:33.920295 containerd[1494]: time="2025-08-12T23:56:33.918962444Z" level=info msg="CreateContainer within sandbox \"4fa611e7de3efa5db8789c42c620c3ff532c23c795fff7e3aae982aeac882fea\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Aug 12 23:56:33.926057 containerd[1494]: time="2025-08-12T23:56:33.925989380Z" level=info msg="CreateContainer within sandbox 
\"edb36bb0d5aaecbe761db8d04dc6a89b7cab2e664c2d9bfafd373cb32fe2e93c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"3639964f90cd16628d36538fd1697ae6be48b6b5bc8af39a08425df3ffe16f2f\"" Aug 12 23:56:33.936849 containerd[1494]: time="2025-08-12T23:56:33.936799436Z" level=info msg="StartContainer for \"3639964f90cd16628d36538fd1697ae6be48b6b5bc8af39a08425df3ffe16f2f\"" Aug 12 23:56:33.947996 containerd[1494]: time="2025-08-12T23:56:33.947932924Z" level=info msg="CreateContainer within sandbox \"4fa611e7de3efa5db8789c42c620c3ff532c23c795fff7e3aae982aeac882fea\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"8e8784fade7eba7b8e82d01c3b20ba006a7041d7df2242782e9ec57c1819ff4d\"" Aug 12 23:56:33.948864 containerd[1494]: time="2025-08-12T23:56:33.948828246Z" level=info msg="StartContainer for \"8e8784fade7eba7b8e82d01c3b20ba006a7041d7df2242782e9ec57c1819ff4d\"" Aug 12 23:56:33.982324 systemd[1]: Started cri-containerd-ebb7c3d78ac8bc0e63921a39e4a4ec0beeca16b280cfa0e2465b41cea694e5ea.scope - libcontainer container ebb7c3d78ac8bc0e63921a39e4a4ec0beeca16b280cfa0e2465b41cea694e5ea. Aug 12 23:56:33.986940 kubelet[2215]: E0812 23:56:33.986757 2215 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://137.184.234.76:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.2.2-9-8f36bdb456?timeout=10s\": dial tcp 137.184.234.76:6443: connect: connection refused" interval="1.6s" Aug 12 23:56:34.009329 systemd[1]: Started cri-containerd-3639964f90cd16628d36538fd1697ae6be48b6b5bc8af39a08425df3ffe16f2f.scope - libcontainer container 3639964f90cd16628d36538fd1697ae6be48b6b5bc8af39a08425df3ffe16f2f. Aug 12 23:56:34.010974 systemd[1]: Started cri-containerd-8e8784fade7eba7b8e82d01c3b20ba006a7041d7df2242782e9ec57c1819ff4d.scope - libcontainer container 8e8784fade7eba7b8e82d01c3b20ba006a7041d7df2242782e9ec57c1819ff4d. 
Aug 12 23:56:34.077493 containerd[1494]: time="2025-08-12T23:56:34.076880442Z" level=info msg="StartContainer for \"ebb7c3d78ac8bc0e63921a39e4a4ec0beeca16b280cfa0e2465b41cea694e5ea\" returns successfully" Aug 12 23:56:34.100844 containerd[1494]: time="2025-08-12T23:56:34.100154485Z" level=info msg="StartContainer for \"8e8784fade7eba7b8e82d01c3b20ba006a7041d7df2242782e9ec57c1819ff4d\" returns successfully" Aug 12 23:56:34.118453 containerd[1494]: time="2025-08-12T23:56:34.118383403Z" level=info msg="StartContainer for \"3639964f90cd16628d36538fd1697ae6be48b6b5bc8af39a08425df3ffe16f2f\" returns successfully" Aug 12 23:56:34.148091 kubelet[2215]: W0812 23:56:34.147553 2215 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://137.184.234.76:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 137.184.234.76:6443: connect: connection refused Aug 12 23:56:34.148091 kubelet[2215]: E0812 23:56:34.147644 2215 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://137.184.234.76:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 137.184.234.76:6443: connect: connection refused" logger="UnhandledError" Aug 12 23:56:34.173838 kubelet[2215]: I0812 23:56:34.169929 2215 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230.2.2-9-8f36bdb456" Aug 12 23:56:34.173838 kubelet[2215]: E0812 23:56:34.170355 2215 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://137.184.234.76:6443/api/v1/nodes\": dial tcp 137.184.234.76:6443: connect: connection refused" node="ci-4230.2.2-9-8f36bdb456" Aug 12 23:56:34.629084 kubelet[2215]: E0812 23:56:34.628885 2215 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line 
is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 12 23:56:34.634237 kubelet[2215]: E0812 23:56:34.633456 2215 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 12 23:56:34.635498 kubelet[2215]: E0812 23:56:34.635465 2215 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 12 23:56:35.636860 kubelet[2215]: E0812 23:56:35.636815 2215 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 12 23:56:35.771659 kubelet[2215]: I0812 23:56:35.771619 2215 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230.2.2-9-8f36bdb456" Aug 12 23:56:36.227219 kubelet[2215]: I0812 23:56:36.227121 2215 kubelet_node_status.go:75] "Successfully registered node" node="ci-4230.2.2-9-8f36bdb456" Aug 12 23:56:36.227219 kubelet[2215]: E0812 23:56:36.227179 2215 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-4230.2.2-9-8f36bdb456\": node \"ci-4230.2.2-9-8f36bdb456\" not found" Aug 12 23:56:36.308538 kubelet[2215]: E0812 23:56:36.308488 2215 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-node-lease\" not found" interval="3.2s" Aug 12 23:56:36.563819 kubelet[2215]: I0812 23:56:36.563727 2215 apiserver.go:52] "Watching apiserver" Aug 12 23:56:36.581377 kubelet[2215]: I0812 23:56:36.581291 2215 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Aug 12 23:56:38.393753 systemd[1]: Reload requested from client PID 2489 ('systemctl') (unit session-7.scope)... Aug 12 23:56:38.393777 systemd[1]: Reloading... 
Aug 12 23:56:38.513107 zram_generator::config[2542]: No configuration found. Aug 12 23:56:38.649667 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 12 23:56:38.782639 systemd[1]: Reloading finished in 388 ms. Aug 12 23:56:38.817996 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Aug 12 23:56:38.834827 systemd[1]: kubelet.service: Deactivated successfully. Aug 12 23:56:38.835287 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 12 23:56:38.835387 systemd[1]: kubelet.service: Consumed 1.049s CPU time, 126M memory peak. Aug 12 23:56:38.841601 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 12 23:56:39.023531 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 12 23:56:39.040557 (kubelet)[2584]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 12 23:56:39.122510 kubelet[2584]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 12 23:56:39.127864 kubelet[2584]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Aug 12 23:56:39.127864 kubelet[2584]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Aug 12 23:56:39.128198 kubelet[2584]: I0812 23:56:39.128146 2584 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 12 23:56:39.139718 kubelet[2584]: I0812 23:56:39.139664 2584 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Aug 12 23:56:39.140092 kubelet[2584]: I0812 23:56:39.139936 2584 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 12 23:56:39.142997 kubelet[2584]: I0812 23:56:39.142570 2584 server.go:934] "Client rotation is on, will bootstrap in background" Aug 12 23:56:39.146812 kubelet[2584]: I0812 23:56:39.145034 2584 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Aug 12 23:56:39.149671 kubelet[2584]: I0812 23:56:39.149621 2584 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 12 23:56:39.156319 kubelet[2584]: E0812 23:56:39.156242 2584 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Aug 12 23:56:39.156569 kubelet[2584]: I0812 23:56:39.156553 2584 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Aug 12 23:56:39.160873 kubelet[2584]: I0812 23:56:39.160817 2584 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 12 23:56:39.161596 kubelet[2584]: I0812 23:56:39.161008 2584 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Aug 12 23:56:39.161596 kubelet[2584]: I0812 23:56:39.161256 2584 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 12 23:56:39.161945 kubelet[2584]: I0812 23:56:39.161283 2584 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230.2.2-9-8f36bdb456","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","Topolog
yManagerPolicyOptions":null,"CgroupVersion":2} Aug 12 23:56:39.162120 kubelet[2584]: I0812 23:56:39.162110 2584 topology_manager.go:138] "Creating topology manager with none policy" Aug 12 23:56:39.162173 kubelet[2584]: I0812 23:56:39.162167 2584 container_manager_linux.go:300] "Creating device plugin manager" Aug 12 23:56:39.162248 kubelet[2584]: I0812 23:56:39.162241 2584 state_mem.go:36] "Initialized new in-memory state store" Aug 12 23:56:39.162415 kubelet[2584]: I0812 23:56:39.162405 2584 kubelet.go:408] "Attempting to sync node with API server" Aug 12 23:56:39.162471 kubelet[2584]: I0812 23:56:39.162463 2584 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 12 23:56:39.162555 kubelet[2584]: I0812 23:56:39.162548 2584 kubelet.go:314] "Adding apiserver pod source" Aug 12 23:56:39.162604 kubelet[2584]: I0812 23:56:39.162597 2584 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 12 23:56:39.169146 kubelet[2584]: I0812 23:56:39.169109 2584 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Aug 12 23:56:39.172442 kubelet[2584]: I0812 23:56:39.169646 2584 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 12 23:56:39.187864 kubelet[2584]: I0812 23:56:39.187283 2584 server.go:1274] "Started kubelet" Aug 12 23:56:39.191404 kubelet[2584]: I0812 23:56:39.191344 2584 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Aug 12 23:56:39.192634 kubelet[2584]: I0812 23:56:39.192475 2584 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 12 23:56:39.194504 kubelet[2584]: I0812 23:56:39.193825 2584 server.go:449] "Adding debug handlers to kubelet server" Aug 12 23:56:39.198511 kubelet[2584]: I0812 23:56:39.197955 2584 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 12 23:56:39.198511 kubelet[2584]: I0812 23:56:39.198198 2584 
server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 12 23:56:39.209761 kubelet[2584]: I0812 23:56:39.209631 2584 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 12 23:56:39.212943 kubelet[2584]: I0812 23:56:39.210101 2584 volume_manager.go:289] "Starting Kubelet Volume Manager" Aug 12 23:56:39.213910 kubelet[2584]: I0812 23:56:39.210120 2584 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Aug 12 23:56:39.214363 kubelet[2584]: I0812 23:56:39.214263 2584 reconciler.go:26] "Reconciler: start to sync state" Aug 12 23:56:39.214563 kubelet[2584]: E0812 23:56:39.212276 2584 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 12 23:56:39.214941 kubelet[2584]: E0812 23:56:39.210264 2584 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.2.2-9-8f36bdb456\" not found" Aug 12 23:56:39.217890 kubelet[2584]: I0812 23:56:39.217062 2584 factory.go:221] Registration of the systemd container factory successfully Aug 12 23:56:39.218083 kubelet[2584]: I0812 23:56:39.218040 2584 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 12 23:56:39.223569 kubelet[2584]: I0812 23:56:39.223443 2584 factory.go:221] Registration of the containerd container factory successfully Aug 12 23:56:39.234007 kubelet[2584]: I0812 23:56:39.233911 2584 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 12 23:56:39.242095 kubelet[2584]: I0812 23:56:39.241979 2584 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Aug 12 23:56:39.242403 kubelet[2584]: I0812 23:56:39.242382 2584 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 12 23:56:39.242475 kubelet[2584]: I0812 23:56:39.242410 2584 kubelet.go:2321] "Starting kubelet main sync loop" Aug 12 23:56:39.242475 kubelet[2584]: E0812 23:56:39.242460 2584 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 12 23:56:39.315498 kubelet[2584]: I0812 23:56:39.315464 2584 cpu_manager.go:214] "Starting CPU manager" policy="none" Aug 12 23:56:39.316061 kubelet[2584]: I0812 23:56:39.315843 2584 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Aug 12 23:56:39.316061 kubelet[2584]: I0812 23:56:39.315923 2584 state_mem.go:36] "Initialized new in-memory state store" Aug 12 23:56:39.317096 kubelet[2584]: I0812 23:56:39.316366 2584 state_mem.go:88] "Updated default CPUSet" cpuSet="" Aug 12 23:56:39.317279 kubelet[2584]: I0812 23:56:39.316391 2584 state_mem.go:96] "Updated CPUSet assignments" assignments={} Aug 12 23:56:39.317279 kubelet[2584]: I0812 23:56:39.317204 2584 policy_none.go:49] "None policy: Start" Aug 12 23:56:39.319799 kubelet[2584]: I0812 23:56:39.319302 2584 memory_manager.go:170] "Starting memorymanager" policy="None" Aug 12 23:56:39.319799 kubelet[2584]: I0812 23:56:39.319400 2584 state_mem.go:35] "Initializing new in-memory state store" Aug 12 23:56:39.319799 kubelet[2584]: I0812 23:56:39.319653 2584 state_mem.go:75] "Updated machine memory state" Aug 12 23:56:39.329432 kubelet[2584]: I0812 23:56:39.329395 2584 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 12 23:56:39.333224 kubelet[2584]: I0812 23:56:39.331506 2584 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 12 23:56:39.333224 kubelet[2584]: I0812 23:56:39.331543 2584 container_log_manager.go:189] 
"Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 12 23:56:39.333224 kubelet[2584]: I0812 23:56:39.331958 2584 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 12 23:56:39.362984 kubelet[2584]: W0812 23:56:39.362939 2584 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Aug 12 23:56:39.369579 kubelet[2584]: W0812 23:56:39.369170 2584 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Aug 12 23:56:39.369579 kubelet[2584]: W0812 23:56:39.369527 2584 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Aug 12 23:56:39.414659 sudo[2617]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Aug 12 23:56:39.415793 sudo[2617]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Aug 12 23:56:39.417782 kubelet[2584]: I0812 23:56:39.416801 2584 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/da74f60e70e194e172a4709368a3085e-ca-certs\") pod \"kube-controller-manager-ci-4230.2.2-9-8f36bdb456\" (UID: \"da74f60e70e194e172a4709368a3085e\") " pod="kube-system/kube-controller-manager-ci-4230.2.2-9-8f36bdb456" Aug 12 23:56:39.417782 kubelet[2584]: I0812 23:56:39.416838 2584 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/da74f60e70e194e172a4709368a3085e-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230.2.2-9-8f36bdb456\" (UID: \"da74f60e70e194e172a4709368a3085e\") " 
pod="kube-system/kube-controller-manager-ci-4230.2.2-9-8f36bdb456" Aug 12 23:56:39.417782 kubelet[2584]: I0812 23:56:39.416886 2584 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5ca5dc8a7322b8f6a36602233072f709-kubeconfig\") pod \"kube-scheduler-ci-4230.2.2-9-8f36bdb456\" (UID: \"5ca5dc8a7322b8f6a36602233072f709\") " pod="kube-system/kube-scheduler-ci-4230.2.2-9-8f36bdb456" Aug 12 23:56:39.417782 kubelet[2584]: I0812 23:56:39.416953 2584 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/be5e76c0629c907fb54eb7ea3b65bd3b-ca-certs\") pod \"kube-apiserver-ci-4230.2.2-9-8f36bdb456\" (UID: \"be5e76c0629c907fb54eb7ea3b65bd3b\") " pod="kube-system/kube-apiserver-ci-4230.2.2-9-8f36bdb456" Aug 12 23:56:39.417782 kubelet[2584]: I0812 23:56:39.416980 2584 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/be5e76c0629c907fb54eb7ea3b65bd3b-k8s-certs\") pod \"kube-apiserver-ci-4230.2.2-9-8f36bdb456\" (UID: \"be5e76c0629c907fb54eb7ea3b65bd3b\") " pod="kube-system/kube-apiserver-ci-4230.2.2-9-8f36bdb456" Aug 12 23:56:39.418212 kubelet[2584]: I0812 23:56:39.417015 2584 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/da74f60e70e194e172a4709368a3085e-flexvolume-dir\") pod \"kube-controller-manager-ci-4230.2.2-9-8f36bdb456\" (UID: \"da74f60e70e194e172a4709368a3085e\") " pod="kube-system/kube-controller-manager-ci-4230.2.2-9-8f36bdb456" Aug 12 23:56:39.418212 kubelet[2584]: I0812 23:56:39.417085 2584 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/da74f60e70e194e172a4709368a3085e-k8s-certs\") 
pod \"kube-controller-manager-ci-4230.2.2-9-8f36bdb456\" (UID: \"da74f60e70e194e172a4709368a3085e\") " pod="kube-system/kube-controller-manager-ci-4230.2.2-9-8f36bdb456" Aug 12 23:56:39.418212 kubelet[2584]: I0812 23:56:39.417110 2584 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/da74f60e70e194e172a4709368a3085e-kubeconfig\") pod \"kube-controller-manager-ci-4230.2.2-9-8f36bdb456\" (UID: \"da74f60e70e194e172a4709368a3085e\") " pod="kube-system/kube-controller-manager-ci-4230.2.2-9-8f36bdb456" Aug 12 23:56:39.418212 kubelet[2584]: I0812 23:56:39.417144 2584 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/be5e76c0629c907fb54eb7ea3b65bd3b-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230.2.2-9-8f36bdb456\" (UID: \"be5e76c0629c907fb54eb7ea3b65bd3b\") " pod="kube-system/kube-apiserver-ci-4230.2.2-9-8f36bdb456" Aug 12 23:56:39.440271 kubelet[2584]: I0812 23:56:39.440230 2584 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230.2.2-9-8f36bdb456" Aug 12 23:56:39.456534 kubelet[2584]: I0812 23:56:39.456488 2584 kubelet_node_status.go:111] "Node was previously registered" node="ci-4230.2.2-9-8f36bdb456" Aug 12 23:56:39.456778 kubelet[2584]: I0812 23:56:39.456589 2584 kubelet_node_status.go:75] "Successfully registered node" node="ci-4230.2.2-9-8f36bdb456" Aug 12 23:56:39.663624 kubelet[2584]: E0812 23:56:39.663485 2584 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 12 23:56:39.670959 kubelet[2584]: E0812 23:56:39.670266 2584 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 
67.207.67.3" Aug 12 23:56:39.670959 kubelet[2584]: E0812 23:56:39.670463 2584 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 12 23:56:40.060161 sudo[2617]: pam_unix(sudo:session): session closed for user root Aug 12 23:56:40.178663 kubelet[2584]: I0812 23:56:40.178281 2584 apiserver.go:52] "Watching apiserver" Aug 12 23:56:40.214405 kubelet[2584]: I0812 23:56:40.214297 2584 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Aug 12 23:56:40.275064 kubelet[2584]: E0812 23:56:40.274381 2584 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 12 23:56:40.294066 kubelet[2584]: W0812 23:56:40.294015 2584 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Aug 12 23:56:40.294381 kubelet[2584]: E0812 23:56:40.294361 2584 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4230.2.2-9-8f36bdb456\" already exists" pod="kube-system/kube-controller-manager-ci-4230.2.2-9-8f36bdb456" Aug 12 23:56:40.294890 kubelet[2584]: E0812 23:56:40.294858 2584 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 12 23:56:40.300192 kubelet[2584]: W0812 23:56:40.299577 2584 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Aug 12 23:56:40.300192 kubelet[2584]: E0812 23:56:40.299674 2584 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4230.2.2-9-8f36bdb456\" 
already exists" pod="kube-system/kube-apiserver-ci-4230.2.2-9-8f36bdb456" Aug 12 23:56:40.301639 kubelet[2584]: E0812 23:56:40.301570 2584 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 12 23:56:40.346199 kubelet[2584]: I0812 23:56:40.345838 2584 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4230.2.2-9-8f36bdb456" podStartSLOduration=1.345816135 podStartE2EDuration="1.345816135s" podCreationTimestamp="2025-08-12 23:56:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-12 23:56:40.328788913 +0000 UTC m=+1.282465621" watchObservedRunningTime="2025-08-12 23:56:40.345816135 +0000 UTC m=+1.299492841" Aug 12 23:56:40.364006 kubelet[2584]: I0812 23:56:40.363542 2584 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4230.2.2-9-8f36bdb456" podStartSLOduration=1.363496956 podStartE2EDuration="1.363496956s" podCreationTimestamp="2025-08-12 23:56:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-12 23:56:40.348371677 +0000 UTC m=+1.302048393" watchObservedRunningTime="2025-08-12 23:56:40.363496956 +0000 UTC m=+1.317173663" Aug 12 23:56:41.277601 kubelet[2584]: E0812 23:56:41.276934 2584 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 12 23:56:41.278550 kubelet[2584]: E0812 23:56:41.278427 2584 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 12 
23:56:41.872660 sudo[1678]: pam_unix(sudo:session): session closed for user root Aug 12 23:56:41.876776 sshd[1677]: Connection closed by 139.178.68.195 port 44906 Aug 12 23:56:41.878712 sshd-session[1674]: pam_unix(sshd:session): session closed for user core Aug 12 23:56:41.883678 systemd-logind[1467]: Session 7 logged out. Waiting for processes to exit. Aug 12 23:56:41.885607 systemd[1]: sshd@6-137.184.234.76:22-139.178.68.195:44906.service: Deactivated successfully. Aug 12 23:56:41.890726 systemd[1]: session-7.scope: Deactivated successfully. Aug 12 23:56:41.891193 systemd[1]: session-7.scope: Consumed 4.905s CPU time, 216.3M memory peak. Aug 12 23:56:41.895500 systemd-logind[1467]: Removed session 7. Aug 12 23:56:42.279285 kubelet[2584]: E0812 23:56:42.279136 2584 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 12 23:56:42.515287 kubelet[2584]: E0812 23:56:42.514891 2584 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 12 23:56:43.042875 kubelet[2584]: E0812 23:56:43.042833 2584 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 12 23:56:43.710001 kubelet[2584]: I0812 23:56:43.709828 2584 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Aug 12 23:56:43.710450 containerd[1494]: time="2025-08-12T23:56:43.710200419Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Aug 12 23:56:43.710962 kubelet[2584]: I0812 23:56:43.710935 2584 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Aug 12 23:56:44.715229 kubelet[2584]: I0812 23:56:44.713436 2584 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4230.2.2-9-8f36bdb456" podStartSLOduration=5.713413448 podStartE2EDuration="5.713413448s" podCreationTimestamp="2025-08-12 23:56:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-12 23:56:40.365597429 +0000 UTC m=+1.319274138" watchObservedRunningTime="2025-08-12 23:56:44.713413448 +0000 UTC m=+5.667090157" Aug 12 23:56:44.728400 systemd[1]: Created slice kubepods-besteffort-pod63f866e5_9074_4b55_8dad_af6abcb56a04.slice - libcontainer container kubepods-besteffort-pod63f866e5_9074_4b55_8dad_af6abcb56a04.slice. Aug 12 23:56:44.757729 kubelet[2584]: I0812 23:56:44.756219 2584 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/63f866e5-9074-4b55-8dad-af6abcb56a04-kube-proxy\") pod \"kube-proxy-jhmnw\" (UID: \"63f866e5-9074-4b55-8dad-af6abcb56a04\") " pod="kube-system/kube-proxy-jhmnw" Aug 12 23:56:44.757729 kubelet[2584]: I0812 23:56:44.756300 2584 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmskc\" (UniqueName: \"kubernetes.io/projected/63f866e5-9074-4b55-8dad-af6abcb56a04-kube-api-access-mmskc\") pod \"kube-proxy-jhmnw\" (UID: \"63f866e5-9074-4b55-8dad-af6abcb56a04\") " pod="kube-system/kube-proxy-jhmnw" Aug 12 23:56:44.757729 kubelet[2584]: I0812 23:56:44.756528 2584 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/50c5fd17-a29b-4a6f-b010-2a19bd801007-bpf-maps\") pod \"cilium-2crx4\" (UID: 
\"50c5fd17-a29b-4a6f-b010-2a19bd801007\") " pod="kube-system/cilium-2crx4" Aug 12 23:56:44.757729 kubelet[2584]: I0812 23:56:44.756548 2584 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/50c5fd17-a29b-4a6f-b010-2a19bd801007-lib-modules\") pod \"cilium-2crx4\" (UID: \"50c5fd17-a29b-4a6f-b010-2a19bd801007\") " pod="kube-system/cilium-2crx4" Aug 12 23:56:44.759105 systemd[1]: Created slice kubepods-burstable-pod50c5fd17_a29b_4a6f_b010_2a19bd801007.slice - libcontainer container kubepods-burstable-pod50c5fd17_a29b_4a6f_b010_2a19bd801007.slice. Aug 12 23:56:44.760055 kubelet[2584]: I0812 23:56:44.759199 2584 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/50c5fd17-a29b-4a6f-b010-2a19bd801007-cilium-config-path\") pod \"cilium-2crx4\" (UID: \"50c5fd17-a29b-4a6f-b010-2a19bd801007\") " pod="kube-system/cilium-2crx4" Aug 12 23:56:44.760055 kubelet[2584]: I0812 23:56:44.759289 2584 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/50c5fd17-a29b-4a6f-b010-2a19bd801007-host-proc-sys-net\") pod \"cilium-2crx4\" (UID: \"50c5fd17-a29b-4a6f-b010-2a19bd801007\") " pod="kube-system/cilium-2crx4" Aug 12 23:56:44.760055 kubelet[2584]: I0812 23:56:44.759306 2584 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/50c5fd17-a29b-4a6f-b010-2a19bd801007-hubble-tls\") pod \"cilium-2crx4\" (UID: \"50c5fd17-a29b-4a6f-b010-2a19bd801007\") " pod="kube-system/cilium-2crx4" Aug 12 23:56:44.760055 kubelet[2584]: I0812 23:56:44.759345 2584 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/50c5fd17-a29b-4a6f-b010-2a19bd801007-cilium-run\") pod \"cilium-2crx4\" (UID: \"50c5fd17-a29b-4a6f-b010-2a19bd801007\") " pod="kube-system/cilium-2crx4" Aug 12 23:56:44.760055 kubelet[2584]: I0812 23:56:44.759361 2584 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/50c5fd17-a29b-4a6f-b010-2a19bd801007-xtables-lock\") pod \"cilium-2crx4\" (UID: \"50c5fd17-a29b-4a6f-b010-2a19bd801007\") " pod="kube-system/cilium-2crx4" Aug 12 23:56:44.760055 kubelet[2584]: I0812 23:56:44.759376 2584 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/50c5fd17-a29b-4a6f-b010-2a19bd801007-clustermesh-secrets\") pod \"cilium-2crx4\" (UID: \"50c5fd17-a29b-4a6f-b010-2a19bd801007\") " pod="kube-system/cilium-2crx4" Aug 12 23:56:44.760369 kubelet[2584]: I0812 23:56:44.759391 2584 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ft4lc\" (UniqueName: \"kubernetes.io/projected/50c5fd17-a29b-4a6f-b010-2a19bd801007-kube-api-access-ft4lc\") pod \"cilium-2crx4\" (UID: \"50c5fd17-a29b-4a6f-b010-2a19bd801007\") " pod="kube-system/cilium-2crx4" Aug 12 23:56:44.760369 kubelet[2584]: I0812 23:56:44.759408 2584 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/50c5fd17-a29b-4a6f-b010-2a19bd801007-cilium-cgroup\") pod \"cilium-2crx4\" (UID: \"50c5fd17-a29b-4a6f-b010-2a19bd801007\") " pod="kube-system/cilium-2crx4" Aug 12 23:56:44.760369 kubelet[2584]: I0812 23:56:44.759423 2584 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/50c5fd17-a29b-4a6f-b010-2a19bd801007-cni-path\") pod \"cilium-2crx4\" (UID: 
\"50c5fd17-a29b-4a6f-b010-2a19bd801007\") " pod="kube-system/cilium-2crx4" Aug 12 23:56:44.760369 kubelet[2584]: I0812 23:56:44.759438 2584 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/50c5fd17-a29b-4a6f-b010-2a19bd801007-etc-cni-netd\") pod \"cilium-2crx4\" (UID: \"50c5fd17-a29b-4a6f-b010-2a19bd801007\") " pod="kube-system/cilium-2crx4" Aug 12 23:56:44.760369 kubelet[2584]: I0812 23:56:44.759461 2584 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/50c5fd17-a29b-4a6f-b010-2a19bd801007-host-proc-sys-kernel\") pod \"cilium-2crx4\" (UID: \"50c5fd17-a29b-4a6f-b010-2a19bd801007\") " pod="kube-system/cilium-2crx4" Aug 12 23:56:44.760369 kubelet[2584]: I0812 23:56:44.759477 2584 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/50c5fd17-a29b-4a6f-b010-2a19bd801007-hostproc\") pod \"cilium-2crx4\" (UID: \"50c5fd17-a29b-4a6f-b010-2a19bd801007\") " pod="kube-system/cilium-2crx4" Aug 12 23:56:44.760552 kubelet[2584]: I0812 23:56:44.759496 2584 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/63f866e5-9074-4b55-8dad-af6abcb56a04-xtables-lock\") pod \"kube-proxy-jhmnw\" (UID: \"63f866e5-9074-4b55-8dad-af6abcb56a04\") " pod="kube-system/kube-proxy-jhmnw" Aug 12 23:56:44.760552 kubelet[2584]: I0812 23:56:44.759510 2584 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/63f866e5-9074-4b55-8dad-af6abcb56a04-lib-modules\") pod \"kube-proxy-jhmnw\" (UID: \"63f866e5-9074-4b55-8dad-af6abcb56a04\") " pod="kube-system/kube-proxy-jhmnw" Aug 12 23:56:44.844646 systemd[1]: Created 
slice kubepods-besteffort-pod1c87706e_66de_43e0_a390_87da9fa3e36d.slice - libcontainer container kubepods-besteffort-pod1c87706e_66de_43e0_a390_87da9fa3e36d.slice. Aug 12 23:56:44.860683 kubelet[2584]: I0812 23:56:44.860617 2584 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8t98\" (UniqueName: \"kubernetes.io/projected/1c87706e-66de-43e0-a390-87da9fa3e36d-kube-api-access-f8t98\") pod \"cilium-operator-5d85765b45-86zql\" (UID: \"1c87706e-66de-43e0-a390-87da9fa3e36d\") " pod="kube-system/cilium-operator-5d85765b45-86zql" Aug 12 23:56:44.860911 kubelet[2584]: I0812 23:56:44.860799 2584 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1c87706e-66de-43e0-a390-87da9fa3e36d-cilium-config-path\") pod \"cilium-operator-5d85765b45-86zql\" (UID: \"1c87706e-66de-43e0-a390-87da9fa3e36d\") " pod="kube-system/cilium-operator-5d85765b45-86zql" Aug 12 23:56:45.040453 kubelet[2584]: E0812 23:56:45.040296 2584 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 12 23:56:45.041749 containerd[1494]: time="2025-08-12T23:56:45.041365426Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jhmnw,Uid:63f866e5-9074-4b55-8dad-af6abcb56a04,Namespace:kube-system,Attempt:0,}" Aug 12 23:56:45.066301 kubelet[2584]: E0812 23:56:45.066259 2584 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 12 23:56:45.067310 containerd[1494]: time="2025-08-12T23:56:45.066874664Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2crx4,Uid:50c5fd17-a29b-4a6f-b010-2a19bd801007,Namespace:kube-system,Attempt:0,}" Aug 12 23:56:45.073084 
containerd[1494]: time="2025-08-12T23:56:45.072929729Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 12 23:56:45.073331 containerd[1494]: time="2025-08-12T23:56:45.073006446Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 12 23:56:45.073331 containerd[1494]: time="2025-08-12T23:56:45.073033593Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 12 23:56:45.073331 containerd[1494]: time="2025-08-12T23:56:45.073165672Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 12 23:56:45.106440 systemd[1]: Started cri-containerd-9ecfb6546cd350d3f1710b03473c91c0538e093dba4f709564153dfe565e3193.scope - libcontainer container 9ecfb6546cd350d3f1710b03473c91c0538e093dba4f709564153dfe565e3193. Aug 12 23:56:45.118848 containerd[1494]: time="2025-08-12T23:56:45.118598834Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 12 23:56:45.118848 containerd[1494]: time="2025-08-12T23:56:45.118710170Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 12 23:56:45.118848 containerd[1494]: time="2025-08-12T23:56:45.118741231Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 12 23:56:45.119442 containerd[1494]: time="2025-08-12T23:56:45.118890522Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 12 23:56:45.153852 kubelet[2584]: E0812 23:56:45.152246 2584 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 12 23:56:45.154127 containerd[1494]: time="2025-08-12T23:56:45.153530306Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-86zql,Uid:1c87706e-66de-43e0-a390-87da9fa3e36d,Namespace:kube-system,Attempt:0,}" Aug 12 23:56:45.158337 systemd[1]: Started cri-containerd-09eca810e340f53ecda4b28fac44112c69c5170401b1e6e1ca8c0795e037bf0e.scope - libcontainer container 09eca810e340f53ecda4b28fac44112c69c5170401b1e6e1ca8c0795e037bf0e. Aug 12 23:56:45.180644 containerd[1494]: time="2025-08-12T23:56:45.180585397Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jhmnw,Uid:63f866e5-9074-4b55-8dad-af6abcb56a04,Namespace:kube-system,Attempt:0,} returns sandbox id \"9ecfb6546cd350d3f1710b03473c91c0538e093dba4f709564153dfe565e3193\"" Aug 12 23:56:45.186092 kubelet[2584]: E0812 23:56:45.186051 2584 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 12 23:56:45.191280 containerd[1494]: time="2025-08-12T23:56:45.190964521Z" level=info msg="CreateContainer within sandbox \"9ecfb6546cd350d3f1710b03473c91c0538e093dba4f709564153dfe565e3193\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Aug 12 23:56:45.217988 containerd[1494]: time="2025-08-12T23:56:45.217877935Z" level=info msg="CreateContainer within sandbox \"9ecfb6546cd350d3f1710b03473c91c0538e093dba4f709564153dfe565e3193\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"78f1d586bf624f727b8234523bcdac366b7871d807d387eca194aac29165e320\"" Aug 12 23:56:45.221106 containerd[1494]: 
time="2025-08-12T23:56:45.219098619Z" level=info msg="StartContainer for \"78f1d586bf624f727b8234523bcdac366b7871d807d387eca194aac29165e320\"" Aug 12 23:56:45.227630 containerd[1494]: time="2025-08-12T23:56:45.227569540Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2crx4,Uid:50c5fd17-a29b-4a6f-b010-2a19bd801007,Namespace:kube-system,Attempt:0,} returns sandbox id \"09eca810e340f53ecda4b28fac44112c69c5170401b1e6e1ca8c0795e037bf0e\"" Aug 12 23:56:45.229547 kubelet[2584]: E0812 23:56:45.229505 2584 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 12 23:56:45.233273 containerd[1494]: time="2025-08-12T23:56:45.232808143Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Aug 12 23:56:45.240636 containerd[1494]: time="2025-08-12T23:56:45.240415696Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 12 23:56:45.240636 containerd[1494]: time="2025-08-12T23:56:45.240585713Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 12 23:56:45.240941 containerd[1494]: time="2025-08-12T23:56:45.240903216Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 12 23:56:45.241533 containerd[1494]: time="2025-08-12T23:56:45.241471814Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 12 23:56:45.284460 systemd[1]: Started cri-containerd-38206d25010d74774ad9e16bb4b268693485e51629606a80656c151eefa1122c.scope - libcontainer container 38206d25010d74774ad9e16bb4b268693485e51629606a80656c151eefa1122c. 
Aug 12 23:56:45.288087 systemd[1]: Started cri-containerd-78f1d586bf624f727b8234523bcdac366b7871d807d387eca194aac29165e320.scope - libcontainer container 78f1d586bf624f727b8234523bcdac366b7871d807d387eca194aac29165e320. Aug 12 23:56:45.345274 containerd[1494]: time="2025-08-12T23:56:45.345223066Z" level=info msg="StartContainer for \"78f1d586bf624f727b8234523bcdac366b7871d807d387eca194aac29165e320\" returns successfully" Aug 12 23:56:45.372579 containerd[1494]: time="2025-08-12T23:56:45.372080162Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-86zql,Uid:1c87706e-66de-43e0-a390-87da9fa3e36d,Namespace:kube-system,Attempt:0,} returns sandbox id \"38206d25010d74774ad9e16bb4b268693485e51629606a80656c151eefa1122c\"" Aug 12 23:56:45.374256 kubelet[2584]: E0812 23:56:45.374127 2584 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 12 23:56:46.305455 kubelet[2584]: E0812 23:56:46.305323 2584 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 12 23:56:46.322169 kubelet[2584]: I0812 23:56:46.321647 2584 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-jhmnw" podStartSLOduration=2.321515815 podStartE2EDuration="2.321515815s" podCreationTimestamp="2025-08-12 23:56:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-12 23:56:46.320489849 +0000 UTC m=+7.274166555" watchObservedRunningTime="2025-08-12 23:56:46.321515815 +0000 UTC m=+7.275192501" Aug 12 23:56:47.313368 kubelet[2584]: E0812 23:56:47.313296 2584 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 12 23:56:49.973958 kubelet[2584]: E0812 23:56:49.973876 2584 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 12 23:56:50.329701 kubelet[2584]: E0812 23:56:50.329120 2584 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 12 23:56:51.967610 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2939554514.mount: Deactivated successfully. Aug 12 23:56:52.529600 kubelet[2584]: E0812 23:56:52.529557 2584 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 12 23:56:53.055310 kubelet[2584]: E0812 23:56:53.055265 2584 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 12 23:56:54.420887 containerd[1494]: time="2025-08-12T23:56:54.420820522Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:56:54.428891 containerd[1494]: time="2025-08-12T23:56:54.428815625Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Aug 12 23:56:54.435360 containerd[1494]: time="2025-08-12T23:56:54.435097512Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:56:54.439246 
containerd[1494]: time="2025-08-12T23:56:54.439009819Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 9.206140626s" Aug 12 23:56:54.439246 containerd[1494]: time="2025-08-12T23:56:54.439097925Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Aug 12 23:56:54.442605 containerd[1494]: time="2025-08-12T23:56:54.442536187Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Aug 12 23:56:54.445638 containerd[1494]: time="2025-08-12T23:56:54.445593154Z" level=info msg="CreateContainer within sandbox \"09eca810e340f53ecda4b28fac44112c69c5170401b1e6e1ca8c0795e037bf0e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Aug 12 23:56:54.555177 containerd[1494]: time="2025-08-12T23:56:54.554923036Z" level=info msg="CreateContainer within sandbox \"09eca810e340f53ecda4b28fac44112c69c5170401b1e6e1ca8c0795e037bf0e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"b82bfdd6db0dadd02d70142fe9abb9c1b48aa6429f9dbea7a0fe2499dc73a7fb\"" Aug 12 23:56:54.557630 containerd[1494]: time="2025-08-12T23:56:54.557243163Z" level=info msg="StartContainer for \"b82bfdd6db0dadd02d70142fe9abb9c1b48aa6429f9dbea7a0fe2499dc73a7fb\"" Aug 12 23:56:54.681417 systemd[1]: run-containerd-runc-k8s.io-b82bfdd6db0dadd02d70142fe9abb9c1b48aa6429f9dbea7a0fe2499dc73a7fb-runc.zG9Okw.mount: Deactivated successfully. 
Aug 12 23:56:54.694778 systemd[1]: Started cri-containerd-b82bfdd6db0dadd02d70142fe9abb9c1b48aa6429f9dbea7a0fe2499dc73a7fb.scope - libcontainer container b82bfdd6db0dadd02d70142fe9abb9c1b48aa6429f9dbea7a0fe2499dc73a7fb. Aug 12 23:56:54.727525 containerd[1494]: time="2025-08-12T23:56:54.727479076Z" level=info msg="StartContainer for \"b82bfdd6db0dadd02d70142fe9abb9c1b48aa6429f9dbea7a0fe2499dc73a7fb\" returns successfully" Aug 12 23:56:54.744245 systemd[1]: cri-containerd-b82bfdd6db0dadd02d70142fe9abb9c1b48aa6429f9dbea7a0fe2499dc73a7fb.scope: Deactivated successfully. Aug 12 23:56:54.745348 systemd[1]: cri-containerd-b82bfdd6db0dadd02d70142fe9abb9c1b48aa6429f9dbea7a0fe2499dc73a7fb.scope: Consumed 25ms CPU time, 6.3M memory peak, 4K read from disk, 2.6M written to disk. Aug 12 23:56:54.834435 containerd[1494]: time="2025-08-12T23:56:54.810555842Z" level=info msg="shim disconnected" id=b82bfdd6db0dadd02d70142fe9abb9c1b48aa6429f9dbea7a0fe2499dc73a7fb namespace=k8s.io Aug 12 23:56:54.834435 containerd[1494]: time="2025-08-12T23:56:54.834198926Z" level=warning msg="cleaning up after shim disconnected" id=b82bfdd6db0dadd02d70142fe9abb9c1b48aa6429f9dbea7a0fe2499dc73a7fb namespace=k8s.io Aug 12 23:56:54.834435 containerd[1494]: time="2025-08-12T23:56:54.834221012Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 12 23:56:54.850825 containerd[1494]: time="2025-08-12T23:56:54.850463230Z" level=warning msg="cleanup warnings time=\"2025-08-12T23:56:54Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Aug 12 23:56:55.343881 kubelet[2584]: E0812 23:56:55.343846 2584 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 12 23:56:55.350323 containerd[1494]: time="2025-08-12T23:56:55.348166876Z" level=info 
msg="CreateContainer within sandbox \"09eca810e340f53ecda4b28fac44112c69c5170401b1e6e1ca8c0795e037bf0e\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Aug 12 23:56:55.384618 containerd[1494]: time="2025-08-12T23:56:55.384560040Z" level=info msg="CreateContainer within sandbox \"09eca810e340f53ecda4b28fac44112c69c5170401b1e6e1ca8c0795e037bf0e\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"c60543eef062c006031aa3d5e2fb336089441f3ed0f8c00c70bf2e3ba6d8295b\"" Aug 12 23:56:55.386277 containerd[1494]: time="2025-08-12T23:56:55.385287474Z" level=info msg="StartContainer for \"c60543eef062c006031aa3d5e2fb336089441f3ed0f8c00c70bf2e3ba6d8295b\"" Aug 12 23:56:55.421286 systemd[1]: Started cri-containerd-c60543eef062c006031aa3d5e2fb336089441f3ed0f8c00c70bf2e3ba6d8295b.scope - libcontainer container c60543eef062c006031aa3d5e2fb336089441f3ed0f8c00c70bf2e3ba6d8295b. Aug 12 23:56:55.459351 containerd[1494]: time="2025-08-12T23:56:55.459256061Z" level=info msg="StartContainer for \"c60543eef062c006031aa3d5e2fb336089441f3ed0f8c00c70bf2e3ba6d8295b\" returns successfully" Aug 12 23:56:55.477980 systemd[1]: systemd-sysctl.service: Deactivated successfully. Aug 12 23:56:55.478616 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Aug 12 23:56:55.479584 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Aug 12 23:56:55.485522 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 12 23:56:55.485746 systemd[1]: cri-containerd-c60543eef062c006031aa3d5e2fb336089441f3ed0f8c00c70bf2e3ba6d8295b.scope: Deactivated successfully. Aug 12 23:56:55.524759 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 12 23:56:55.530841 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b82bfdd6db0dadd02d70142fe9abb9c1b48aa6429f9dbea7a0fe2499dc73a7fb-rootfs.mount: Deactivated successfully. 
Aug 12 23:56:55.542728 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c60543eef062c006031aa3d5e2fb336089441f3ed0f8c00c70bf2e3ba6d8295b-rootfs.mount: Deactivated successfully.
Aug 12 23:56:55.546503 containerd[1494]: time="2025-08-12T23:56:55.546425886Z" level=info msg="shim disconnected" id=c60543eef062c006031aa3d5e2fb336089441f3ed0f8c00c70bf2e3ba6d8295b namespace=k8s.io
Aug 12 23:56:55.547038 containerd[1494]: time="2025-08-12T23:56:55.546818736Z" level=warning msg="cleaning up after shim disconnected" id=c60543eef062c006031aa3d5e2fb336089441f3ed0f8c00c70bf2e3ba6d8295b namespace=k8s.io
Aug 12 23:56:55.547038 containerd[1494]: time="2025-08-12T23:56:55.546845732Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 12 23:56:55.854956 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1431510762.mount: Deactivated successfully.
Aug 12 23:56:56.352409 kubelet[2584]: E0812 23:56:56.351910 2584 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Aug 12 23:56:56.364842 containerd[1494]: time="2025-08-12T23:56:56.364016554Z" level=info msg="CreateContainer within sandbox \"09eca810e340f53ecda4b28fac44112c69c5170401b1e6e1ca8c0795e037bf0e\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Aug 12 23:56:56.432305 containerd[1494]: time="2025-08-12T23:56:56.432185574Z" level=info msg="CreateContainer within sandbox \"09eca810e340f53ecda4b28fac44112c69c5170401b1e6e1ca8c0795e037bf0e\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"f13882758f251fe196918fa814ff421fd4428447193bd9bd72a3db7b9ad550ba\""
Aug 12 23:56:56.434113 containerd[1494]: time="2025-08-12T23:56:56.433590616Z" level=info msg="StartContainer for \"f13882758f251fe196918fa814ff421fd4428447193bd9bd72a3db7b9ad550ba\""
Aug 12 23:56:56.501329 systemd[1]: Started cri-containerd-f13882758f251fe196918fa814ff421fd4428447193bd9bd72a3db7b9ad550ba.scope - libcontainer container f13882758f251fe196918fa814ff421fd4428447193bd9bd72a3db7b9ad550ba.
Aug 12 23:56:56.571697 containerd[1494]: time="2025-08-12T23:56:56.570927214Z" level=info msg="StartContainer for \"f13882758f251fe196918fa814ff421fd4428447193bd9bd72a3db7b9ad550ba\" returns successfully"
Aug 12 23:56:56.581309 systemd[1]: cri-containerd-f13882758f251fe196918fa814ff421fd4428447193bd9bd72a3db7b9ad550ba.scope: Deactivated successfully.
Aug 12 23:56:56.583518 update_engine[1469]: I20250812 23:56:56.582885 1469 update_attempter.cc:509] Updating boot flags...
Aug 12 23:56:56.689585 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 45 scanned by (udev-worker) (3165)
Aug 12 23:56:56.703593 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f13882758f251fe196918fa814ff421fd4428447193bd9bd72a3db7b9ad550ba-rootfs.mount: Deactivated successfully.
Aug 12 23:56:56.708078 containerd[1494]: time="2025-08-12T23:56:56.707689704Z" level=info msg="shim disconnected" id=f13882758f251fe196918fa814ff421fd4428447193bd9bd72a3db7b9ad550ba namespace=k8s.io
Aug 12 23:56:56.708078 containerd[1494]: time="2025-08-12T23:56:56.707839243Z" level=warning msg="cleaning up after shim disconnected" id=f13882758f251fe196918fa814ff421fd4428447193bd9bd72a3db7b9ad550ba namespace=k8s.io
Aug 12 23:56:56.708078 containerd[1494]: time="2025-08-12T23:56:56.707851621Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 12 23:56:56.838357 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 45 scanned by (udev-worker) (3165)
Aug 12 23:56:56.973118 containerd[1494]: time="2025-08-12T23:56:56.972034425Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 23:56:56.973706 containerd[1494]: time="2025-08-12T23:56:56.973645764Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
Aug 12 23:56:56.974614 containerd[1494]: time="2025-08-12T23:56:56.974550447Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 23:56:56.976526 containerd[1494]: time="2025-08-12T23:56:56.976486871Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.533810254s"
Aug 12 23:56:56.976739 containerd[1494]: time="2025-08-12T23:56:56.976633332Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Aug 12 23:56:56.983376 containerd[1494]: time="2025-08-12T23:56:56.982417636Z" level=info msg="CreateContainer within sandbox \"38206d25010d74774ad9e16bb4b268693485e51629606a80656c151eefa1122c\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Aug 12 23:57:57.000557 containerd[1494]: time="2025-08-12T23:56:57.000236661Z" level=info msg="CreateContainer within sandbox \"38206d25010d74774ad9e16bb4b268693485e51629606a80656c151eefa1122c\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"3c94959b1fa7613366d1c70f6a875a7973e2dd8f64afac9e7313462e987ec048\""
Aug 12 23:56:57.001224 containerd[1494]: time="2025-08-12T23:56:57.001091837Z" level=info msg="StartContainer for \"3c94959b1fa7613366d1c70f6a875a7973e2dd8f64afac9e7313462e987ec048\""
Aug 12 23:56:57.042465 systemd[1]: Started cri-containerd-3c94959b1fa7613366d1c70f6a875a7973e2dd8f64afac9e7313462e987ec048.scope - libcontainer container 3c94959b1fa7613366d1c70f6a875a7973e2dd8f64afac9e7313462e987ec048.
Aug 12 23:56:57.072427 containerd[1494]: time="2025-08-12T23:56:57.072366892Z" level=info msg="StartContainer for \"3c94959b1fa7613366d1c70f6a875a7973e2dd8f64afac9e7313462e987ec048\" returns successfully"
Aug 12 23:56:57.359037 kubelet[2584]: E0812 23:56:57.358671 2584 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Aug 12 23:56:57.364565 kubelet[2584]: E0812 23:56:57.364354 2584 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Aug 12 23:56:57.368656 containerd[1494]: time="2025-08-12T23:56:57.368370295Z" level=info msg="CreateContainer within sandbox \"09eca810e340f53ecda4b28fac44112c69c5170401b1e6e1ca8c0795e037bf0e\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Aug 12 23:56:57.381604 containerd[1494]: time="2025-08-12T23:56:57.381295160Z" level=info msg="CreateContainer within sandbox \"09eca810e340f53ecda4b28fac44112c69c5170401b1e6e1ca8c0795e037bf0e\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"413b711de6e8295511ed27d09f166c4885b5a7ad38889a3b23f5fd576a82592d\""
Aug 12 23:56:57.382377 containerd[1494]: time="2025-08-12T23:56:57.382253521Z" level=info msg="StartContainer for \"413b711de6e8295511ed27d09f166c4885b5a7ad38889a3b23f5fd576a82592d\""
Aug 12 23:56:57.450526 systemd[1]: Started cri-containerd-413b711de6e8295511ed27d09f166c4885b5a7ad38889a3b23f5fd576a82592d.scope - libcontainer container 413b711de6e8295511ed27d09f166c4885b5a7ad38889a3b23f5fd576a82592d.
Aug 12 23:56:57.469011 kubelet[2584]: I0812 23:56:57.468911 2584 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-86zql" podStartSLOduration=1.867108531 podStartE2EDuration="13.468881542s" podCreationTimestamp="2025-08-12 23:56:44 +0000 UTC" firstStartedPulling="2025-08-12 23:56:45.376453352 +0000 UTC m=+6.330130052" lastFinishedPulling="2025-08-12 23:56:56.978226362 +0000 UTC m=+17.931903063" observedRunningTime="2025-08-12 23:56:57.419807296 +0000 UTC m=+18.373484004" watchObservedRunningTime="2025-08-12 23:56:57.468881542 +0000 UTC m=+18.422558251"
Aug 12 23:56:57.512191 systemd[1]: cri-containerd-413b711de6e8295511ed27d09f166c4885b5a7ad38889a3b23f5fd576a82592d.scope: Deactivated successfully.
Aug 12 23:56:57.515391 containerd[1494]: time="2025-08-12T23:56:57.515249597Z" level=info msg="StartContainer for \"413b711de6e8295511ed27d09f166c4885b5a7ad38889a3b23f5fd576a82592d\" returns successfully"
Aug 12 23:56:57.552836 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-413b711de6e8295511ed27d09f166c4885b5a7ad38889a3b23f5fd576a82592d-rootfs.mount: Deactivated successfully.
Aug 12 23:56:57.554167 containerd[1494]: time="2025-08-12T23:56:57.553899052Z" level=info msg="shim disconnected" id=413b711de6e8295511ed27d09f166c4885b5a7ad38889a3b23f5fd576a82592d namespace=k8s.io
Aug 12 23:56:57.554167 containerd[1494]: time="2025-08-12T23:56:57.553974572Z" level=warning msg="cleaning up after shim disconnected" id=413b711de6e8295511ed27d09f166c4885b5a7ad38889a3b23f5fd576a82592d namespace=k8s.io
Aug 12 23:56:57.554167 containerd[1494]: time="2025-08-12T23:56:57.553990168Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 12 23:56:58.369703 kubelet[2584]: E0812 23:56:58.369661 2584 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Aug 12 23:56:58.371214 kubelet[2584]: E0812 23:56:58.370403 2584 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Aug 12 23:56:58.373230 containerd[1494]: time="2025-08-12T23:56:58.373179529Z" level=info msg="CreateContainer within sandbox \"09eca810e340f53ecda4b28fac44112c69c5170401b1e6e1ca8c0795e037bf0e\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Aug 12 23:56:58.404417 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4283675705.mount: Deactivated successfully.
Aug 12 23:56:58.409822 containerd[1494]: time="2025-08-12T23:56:58.409653255Z" level=info msg="CreateContainer within sandbox \"09eca810e340f53ecda4b28fac44112c69c5170401b1e6e1ca8c0795e037bf0e\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"d2821a359a352fc6e61a7beb4e5bbcc7ee59fd8a9c90e141afb4defd9abe61ea\""
Aug 12 23:56:58.411510 containerd[1494]: time="2025-08-12T23:56:58.410318864Z" level=info msg="StartContainer for \"d2821a359a352fc6e61a7beb4e5bbcc7ee59fd8a9c90e141afb4defd9abe61ea\""
Aug 12 23:56:58.467340 systemd[1]: Started cri-containerd-d2821a359a352fc6e61a7beb4e5bbcc7ee59fd8a9c90e141afb4defd9abe61ea.scope - libcontainer container d2821a359a352fc6e61a7beb4e5bbcc7ee59fd8a9c90e141afb4defd9abe61ea.
Aug 12 23:56:58.505542 containerd[1494]: time="2025-08-12T23:56:58.505434899Z" level=info msg="StartContainer for \"d2821a359a352fc6e61a7beb4e5bbcc7ee59fd8a9c90e141afb4defd9abe61ea\" returns successfully"
Aug 12 23:56:58.701947 kubelet[2584]: I0812 23:56:58.701753 2584 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Aug 12 23:56:58.750325 kubelet[2584]: W0812 23:56:58.750274 2584 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ci-4230.2.2-9-8f36bdb456" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4230.2.2-9-8f36bdb456' and this object
Aug 12 23:56:58.751272 kubelet[2584]: E0812 23:56:58.751218 2584 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:ci-4230.2.2-9-8f36bdb456\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4230.2.2-9-8f36bdb456' and this object" logger="UnhandledError"
Aug 12 23:56:58.754605 systemd[1]: Created slice kubepods-burstable-podfdbba348_fcfe_4ce2_99e7_5728bc9a2c50.slice - libcontainer container kubepods-burstable-podfdbba348_fcfe_4ce2_99e7_5728bc9a2c50.slice.
Aug 12 23:56:58.766358 systemd[1]: Created slice kubepods-burstable-pod7cef01cd_639a_42bf_bd98_9fd75cbf0a58.slice - libcontainer container kubepods-burstable-pod7cef01cd_639a_42bf_bd98_9fd75cbf0a58.slice.
Aug 12 23:56:58.785916 kubelet[2584]: I0812 23:56:58.785784 2584 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fdbba348-fcfe-4ce2-99e7-5728bc9a2c50-config-volume\") pod \"coredns-7c65d6cfc9-4gqmj\" (UID: \"fdbba348-fcfe-4ce2-99e7-5728bc9a2c50\") " pod="kube-system/coredns-7c65d6cfc9-4gqmj"
Aug 12 23:56:58.785916 kubelet[2584]: I0812 23:56:58.785843 2584 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpmh9\" (UniqueName: \"kubernetes.io/projected/7cef01cd-639a-42bf-bd98-9fd75cbf0a58-kube-api-access-xpmh9\") pod \"coredns-7c65d6cfc9-xzdjs\" (UID: \"7cef01cd-639a-42bf-bd98-9fd75cbf0a58\") " pod="kube-system/coredns-7c65d6cfc9-xzdjs"
Aug 12 23:56:58.785916 kubelet[2584]: I0812 23:56:58.785866 2584 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k22hl\" (UniqueName: \"kubernetes.io/projected/fdbba348-fcfe-4ce2-99e7-5728bc9a2c50-kube-api-access-k22hl\") pod \"coredns-7c65d6cfc9-4gqmj\" (UID: \"fdbba348-fcfe-4ce2-99e7-5728bc9a2c50\") " pod="kube-system/coredns-7c65d6cfc9-4gqmj"
Aug 12 23:56:58.785916 kubelet[2584]: I0812 23:56:58.785884 2584 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7cef01cd-639a-42bf-bd98-9fd75cbf0a58-config-volume\") pod \"coredns-7c65d6cfc9-xzdjs\" (UID: \"7cef01cd-639a-42bf-bd98-9fd75cbf0a58\") " pod="kube-system/coredns-7c65d6cfc9-xzdjs"
Aug 12 23:56:59.376571 kubelet[2584]: E0812 23:56:59.375165 2584 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Aug 12 23:56:59.891533 kubelet[2584]: E0812 23:56:59.891087 2584 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition
Aug 12 23:56:59.891533 kubelet[2584]: E0812 23:56:59.891230 2584 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fdbba348-fcfe-4ce2-99e7-5728bc9a2c50-config-volume podName:fdbba348-fcfe-4ce2-99e7-5728bc9a2c50 nodeName:}" failed. No retries permitted until 2025-08-12 23:57:00.391203532 +0000 UTC m=+21.344880219 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/fdbba348-fcfe-4ce2-99e7-5728bc9a2c50-config-volume") pod "coredns-7c65d6cfc9-4gqmj" (UID: "fdbba348-fcfe-4ce2-99e7-5728bc9a2c50") : failed to sync configmap cache: timed out waiting for the condition
Aug 12 23:56:59.891533 kubelet[2584]: E0812 23:56:59.891355 2584 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition
Aug 12 23:56:59.891533 kubelet[2584]: E0812 23:56:59.891437 2584 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7cef01cd-639a-42bf-bd98-9fd75cbf0a58-config-volume podName:7cef01cd-639a-42bf-bd98-9fd75cbf0a58 nodeName:}" failed. No retries permitted until 2025-08-12 23:57:00.391419704 +0000 UTC m=+21.345096389 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/7cef01cd-639a-42bf-bd98-9fd75cbf0a58-config-volume") pod "coredns-7c65d6cfc9-xzdjs" (UID: "7cef01cd-639a-42bf-bd98-9fd75cbf0a58") : failed to sync configmap cache: timed out waiting for the condition
Aug 12 23:57:00.377732 kubelet[2584]: E0812 23:57:00.377673 2584 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Aug 12 23:57:00.561369 kubelet[2584]: E0812 23:57:00.561303 2584 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Aug 12 23:57:00.562995 containerd[1494]: time="2025-08-12T23:57:00.562935073Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-4gqmj,Uid:fdbba348-fcfe-4ce2-99e7-5728bc9a2c50,Namespace:kube-system,Attempt:0,}"
Aug 12 23:57:00.580079 kubelet[2584]: E0812 23:57:00.573681 2584 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Aug 12 23:57:00.580259 containerd[1494]: time="2025-08-12T23:57:00.575751894Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-xzdjs,Uid:7cef01cd-639a-42bf-bd98-9fd75cbf0a58,Namespace:kube-system,Attempt:0,}"
Aug 12 23:57:00.907670 systemd-networkd[1370]: cilium_host: Link UP
Aug 12 23:57:00.907853 systemd-networkd[1370]: cilium_net: Link UP
Aug 12 23:57:00.910401 systemd-networkd[1370]: cilium_net: Gained carrier
Aug 12 23:57:00.910741 systemd-networkd[1370]: cilium_host: Gained carrier
Aug 12 23:57:00.910922 systemd-networkd[1370]: cilium_net: Gained IPv6LL
Aug 12 23:57:00.911954 systemd-networkd[1370]: cilium_host: Gained IPv6LL
Aug 12 23:57:01.068640 systemd-networkd[1370]: cilium_vxlan: Link UP
Aug 12 23:57:01.068651 systemd-networkd[1370]: cilium_vxlan: Gained carrier
Aug 12 23:57:01.380121 kubelet[2584]: E0812 23:57:01.380071 2584 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Aug 12 23:57:01.556067 kernel: NET: Registered PF_ALG protocol family
Aug 12 23:57:02.633727 systemd-networkd[1370]: lxc_health: Link UP
Aug 12 23:57:02.642523 systemd-networkd[1370]: lxc_health: Gained carrier
Aug 12 23:57:02.713248 systemd-networkd[1370]: cilium_vxlan: Gained IPv6LL
Aug 12 23:57:03.070836 kubelet[2584]: E0812 23:57:03.070636 2584 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Aug 12 23:57:03.097982 kubelet[2584]: I0812 23:57:03.097387 2584 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-2crx4" podStartSLOduration=9.887481654 podStartE2EDuration="19.097364787s" podCreationTimestamp="2025-08-12 23:56:44 +0000 UTC" firstStartedPulling="2025-08-12 23:56:45.232031885 +0000 UTC m=+6.185708583" lastFinishedPulling="2025-08-12 23:56:54.44191499 +0000 UTC m=+15.395591716" observedRunningTime="2025-08-12 23:56:59.399852331 +0000 UTC m=+20.353529041" watchObservedRunningTime="2025-08-12 23:57:03.097364787 +0000 UTC m=+24.051041472"
Aug 12 23:57:03.173441 kernel: eth0: renamed from tmpad567
Aug 12 23:57:03.177463 systemd-networkd[1370]: lxc10f9b20fec91: Link UP
Aug 12 23:57:03.179681 systemd-networkd[1370]: lxc10f9b20fec91: Gained carrier
Aug 12 23:57:03.201376 systemd-networkd[1370]: lxcff8209bd43d4: Link UP
Aug 12 23:57:03.206571 kernel: eth0: renamed from tmp5bfd0
Aug 12 23:57:03.220930 systemd-networkd[1370]: lxcff8209bd43d4: Gained carrier
Aug 12 23:57:03.386237 kubelet[2584]: E0812 23:57:03.385814 2584 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Aug 12 23:57:04.378138 systemd-networkd[1370]: lxc_health: Gained IPv6LL
Aug 12 23:57:04.387806 kubelet[2584]: E0812 23:57:04.387766 2584 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Aug 12 23:57:04.505310 systemd-networkd[1370]: lxc10f9b20fec91: Gained IPv6LL
Aug 12 23:57:05.017203 systemd-networkd[1370]: lxcff8209bd43d4: Gained IPv6LL
Aug 12 23:57:08.322209 containerd[1494]: time="2025-08-12T23:57:08.322066520Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 12 23:57:08.323111 containerd[1494]: time="2025-08-12T23:57:08.322837269Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 12 23:57:08.323546 containerd[1494]: time="2025-08-12T23:57:08.323463342Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 12 23:57:08.325331 containerd[1494]: time="2025-08-12T23:57:08.325238266Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 12 23:57:08.331318 containerd[1494]: time="2025-08-12T23:57:08.329120063Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 12 23:57:08.331318 containerd[1494]: time="2025-08-12T23:57:08.331258343Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 12 23:57:08.331318 containerd[1494]: time="2025-08-12T23:57:08.331280619Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 12 23:57:08.333808 containerd[1494]: time="2025-08-12T23:57:08.331392103Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 12 23:57:08.369584 systemd[1]: Started cri-containerd-ad56722e68c93f16a127ba271839cf4a623b0c5d072cf2fe05cff78ced836302.scope - libcontainer container ad56722e68c93f16a127ba271839cf4a623b0c5d072cf2fe05cff78ced836302.
Aug 12 23:57:08.394313 systemd[1]: Started cri-containerd-5bfd03f0b9386db3aea8aa3e59d0c14a4f3fbb041c21209d977562f08dca4291.scope - libcontainer container 5bfd03f0b9386db3aea8aa3e59d0c14a4f3fbb041c21209d977562f08dca4291.
Aug 12 23:57:08.502369 containerd[1494]: time="2025-08-12T23:57:08.502309535Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-4gqmj,Uid:fdbba348-fcfe-4ce2-99e7-5728bc9a2c50,Namespace:kube-system,Attempt:0,} returns sandbox id \"ad56722e68c93f16a127ba271839cf4a623b0c5d072cf2fe05cff78ced836302\""
Aug 12 23:57:08.505226 kubelet[2584]: E0812 23:57:08.503565 2584 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Aug 12 23:57:08.511460 containerd[1494]: time="2025-08-12T23:57:08.510855391Z" level=info msg="CreateContainer within sandbox \"ad56722e68c93f16a127ba271839cf4a623b0c5d072cf2fe05cff78ced836302\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Aug 12 23:57:08.515432 containerd[1494]: time="2025-08-12T23:57:08.515211645Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-xzdjs,Uid:7cef01cd-639a-42bf-bd98-9fd75cbf0a58,Namespace:kube-system,Attempt:0,} returns sandbox id \"5bfd03f0b9386db3aea8aa3e59d0c14a4f3fbb041c21209d977562f08dca4291\""
Aug 12 23:57:08.517940 kubelet[2584]: E0812 23:57:08.517877 2584 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Aug 12 23:57:08.521471 containerd[1494]: time="2025-08-12T23:57:08.521414343Z" level=info msg="CreateContainer within sandbox \"5bfd03f0b9386db3aea8aa3e59d0c14a4f3fbb041c21209d977562f08dca4291\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Aug 12 23:57:08.576386 containerd[1494]: time="2025-08-12T23:57:08.576169970Z" level=info msg="CreateContainer within sandbox \"ad56722e68c93f16a127ba271839cf4a623b0c5d072cf2fe05cff78ced836302\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d48be11c202503f78e2439e092c9a01cc2eaefd2416fd9ccf1f6881c580df624\""
Aug 12 23:57:08.578339 containerd[1494]: time="2025-08-12T23:57:08.578013626Z" level=info msg="StartContainer for \"d48be11c202503f78e2439e092c9a01cc2eaefd2416fd9ccf1f6881c580df624\""
Aug 12 23:57:08.584869 containerd[1494]: time="2025-08-12T23:57:08.584599873Z" level=info msg="CreateContainer within sandbox \"5bfd03f0b9386db3aea8aa3e59d0c14a4f3fbb041c21209d977562f08dca4291\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"eec789f45962186513ad85f0c7e0b9734ee06ef1ce98bc70eda4224aa0e67033\""
Aug 12 23:57:08.585923 containerd[1494]: time="2025-08-12T23:57:08.585683349Z" level=info msg="StartContainer for \"eec789f45962186513ad85f0c7e0b9734ee06ef1ce98bc70eda4224aa0e67033\""
Aug 12 23:57:08.639285 systemd[1]: Started cri-containerd-eec789f45962186513ad85f0c7e0b9734ee06ef1ce98bc70eda4224aa0e67033.scope - libcontainer container eec789f45962186513ad85f0c7e0b9734ee06ef1ce98bc70eda4224aa0e67033.
Aug 12 23:57:08.650294 systemd[1]: Started cri-containerd-d48be11c202503f78e2439e092c9a01cc2eaefd2416fd9ccf1f6881c580df624.scope - libcontainer container d48be11c202503f78e2439e092c9a01cc2eaefd2416fd9ccf1f6881c580df624.
Aug 12 23:57:08.712069 containerd[1494]: time="2025-08-12T23:57:08.711827011Z" level=info msg="StartContainer for \"d48be11c202503f78e2439e092c9a01cc2eaefd2416fd9ccf1f6881c580df624\" returns successfully"
Aug 12 23:57:08.712069 containerd[1494]: time="2025-08-12T23:57:08.711855528Z" level=info msg="StartContainer for \"eec789f45962186513ad85f0c7e0b9734ee06ef1ce98bc70eda4224aa0e67033\" returns successfully"
Aug 12 23:57:09.341849 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3012120991.mount: Deactivated successfully.
Aug 12 23:57:09.403892 kubelet[2584]: E0812 23:57:09.403853 2584 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Aug 12 23:57:09.407717 kubelet[2584]: E0812 23:57:09.407673 2584 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Aug 12 23:57:09.453792 kubelet[2584]: I0812 23:57:09.452792 2584 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-xzdjs" podStartSLOduration=25.4527704 podStartE2EDuration="25.4527704s" podCreationTimestamp="2025-08-12 23:56:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-12 23:57:09.428931355 +0000 UTC m=+30.382608061" watchObservedRunningTime="2025-08-12 23:57:09.4527704 +0000 UTC m=+30.406447159"
Aug 12 23:57:09.492067 kubelet[2584]: I0812 23:57:09.491506 2584 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-4gqmj" podStartSLOduration=25.491483909 podStartE2EDuration="25.491483909s" podCreationTimestamp="2025-08-12 23:56:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-12 23:57:09.455310679 +0000 UTC m=+30.408987388" watchObservedRunningTime="2025-08-12 23:57:09.491483909 +0000 UTC m=+30.445160615"
Aug 12 23:57:10.410825 kubelet[2584]: E0812 23:57:10.410666 2584 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Aug 12 23:57:10.412188 kubelet[2584]: E0812 23:57:10.412008 2584 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Aug 12 23:57:11.412255 kubelet[2584]: E0812 23:57:11.412108 2584 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Aug 12 23:57:11.412255 kubelet[2584]: E0812 23:57:11.412169 2584 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Aug 12 23:57:17.245760 systemd[1]: Started sshd@7-137.184.234.76:22-139.178.68.195:54220.service - OpenSSH per-connection server daemon (139.178.68.195:54220).
Aug 12 23:57:17.345695 sshd[3984]: Accepted publickey for core from 139.178.68.195 port 54220 ssh2: RSA SHA256:Yd4cJaNOPrEdOKjK3Hl1fuqro0lLX1aY5TKeqt+Qp+4
Aug 12 23:57:17.347966 sshd-session[3984]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 12 23:57:17.354388 systemd-logind[1467]: New session 8 of user core.
Aug 12 23:57:17.358240 systemd[1]: Started session-8.scope - Session 8 of User core.
Aug 12 23:57:17.951091 sshd[3986]: Connection closed by 139.178.68.195 port 54220
Aug 12 23:57:17.951880 sshd-session[3984]: pam_unix(sshd:session): session closed for user core
Aug 12 23:57:17.955133 systemd[1]: sshd@7-137.184.234.76:22-139.178.68.195:54220.service: Deactivated successfully.
Aug 12 23:57:17.957770 systemd[1]: session-8.scope: Deactivated successfully.
Aug 12 23:57:17.960771 systemd-logind[1467]: Session 8 logged out. Waiting for processes to exit.
Aug 12 23:57:17.961918 systemd-logind[1467]: Removed session 8.
Aug 12 23:57:22.977630 systemd[1]: Started sshd@8-137.184.234.76:22-139.178.68.195:38136.service - OpenSSH per-connection server daemon (139.178.68.195:38136).
Aug 12 23:57:23.032307 sshd[3999]: Accepted publickey for core from 139.178.68.195 port 38136 ssh2: RSA SHA256:Yd4cJaNOPrEdOKjK3Hl1fuqro0lLX1aY5TKeqt+Qp+4
Aug 12 23:57:23.034208 sshd-session[3999]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 12 23:57:23.041270 systemd-logind[1467]: New session 9 of user core.
Aug 12 23:57:23.048409 systemd[1]: Started session-9.scope - Session 9 of User core.
Aug 12 23:57:23.235484 sshd[4001]: Connection closed by 139.178.68.195 port 38136
Aug 12 23:57:23.235317 sshd-session[3999]: pam_unix(sshd:session): session closed for user core
Aug 12 23:57:23.240943 systemd[1]: sshd@8-137.184.234.76:22-139.178.68.195:38136.service: Deactivated successfully.
Aug 12 23:57:23.243665 systemd[1]: session-9.scope: Deactivated successfully.
Aug 12 23:57:23.245509 systemd-logind[1467]: Session 9 logged out. Waiting for processes to exit.
Aug 12 23:57:23.248599 systemd-logind[1467]: Removed session 9.
Aug 12 23:57:28.254570 systemd[1]: Started sshd@9-137.184.234.76:22-139.178.68.195:38138.service - OpenSSH per-connection server daemon (139.178.68.195:38138).
Aug 12 23:57:28.306509 sshd[4014]: Accepted publickey for core from 139.178.68.195 port 38138 ssh2: RSA SHA256:Yd4cJaNOPrEdOKjK3Hl1fuqro0lLX1aY5TKeqt+Qp+4
Aug 12 23:57:28.308254 sshd-session[4014]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 12 23:57:28.315761 systemd-logind[1467]: New session 10 of user core.
Aug 12 23:57:28.321339 systemd[1]: Started session-10.scope - Session 10 of User core.
Aug 12 23:57:28.492643 sshd[4016]: Connection closed by 139.178.68.195 port 38138
Aug 12 23:57:28.493587 sshd-session[4014]: pam_unix(sshd:session): session closed for user core
Aug 12 23:57:28.501104 systemd[1]: sshd@9-137.184.234.76:22-139.178.68.195:38138.service: Deactivated successfully.
Aug 12 23:57:28.508964 systemd[1]: session-10.scope: Deactivated successfully.
Aug 12 23:57:28.510130 systemd-logind[1467]: Session 10 logged out. Waiting for processes to exit.
Aug 12 23:57:28.511445 systemd-logind[1467]: Removed session 10.
Aug 12 23:57:33.514542 systemd[1]: Started sshd@10-137.184.234.76:22-139.178.68.195:40664.service - OpenSSH per-connection server daemon (139.178.68.195:40664).
Aug 12 23:57:33.576711 sshd[4029]: Accepted publickey for core from 139.178.68.195 port 40664 ssh2: RSA SHA256:Yd4cJaNOPrEdOKjK3Hl1fuqro0lLX1aY5TKeqt+Qp+4
Aug 12 23:57:33.578332 sshd-session[4029]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 12 23:57:33.584493 systemd-logind[1467]: New session 11 of user core.
Aug 12 23:57:33.591352 systemd[1]: Started session-11.scope - Session 11 of User core.
Aug 12 23:57:33.773214 sshd[4031]: Connection closed by 139.178.68.195 port 40664
Aug 12 23:57:33.774051 sshd-session[4029]: pam_unix(sshd:session): session closed for user core
Aug 12 23:57:33.790755 systemd[1]: sshd@10-137.184.234.76:22-139.178.68.195:40664.service: Deactivated successfully.
Aug 12 23:57:33.794015 systemd[1]: session-11.scope: Deactivated successfully.
Aug 12 23:57:33.797465 systemd-logind[1467]: Session 11 logged out. Waiting for processes to exit.
Aug 12 23:57:33.804545 systemd[1]: Started sshd@11-137.184.234.76:22-139.178.68.195:40670.service - OpenSSH per-connection server daemon (139.178.68.195:40670).
Aug 12 23:57:33.806767 systemd-logind[1467]: Removed session 11.
Aug 12 23:57:33.865160 sshd[4045]: Accepted publickey for core from 139.178.68.195 port 40670 ssh2: RSA SHA256:Yd4cJaNOPrEdOKjK3Hl1fuqro0lLX1aY5TKeqt+Qp+4
Aug 12 23:57:33.867179 sshd-session[4045]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 12 23:57:33.874347 systemd-logind[1467]: New session 12 of user core.
Aug 12 23:57:33.885324 systemd[1]: Started session-12.scope - Session 12 of User core.
Aug 12 23:57:34.084328 sshd[4048]: Connection closed by 139.178.68.195 port 40670
Aug 12 23:57:34.086217 sshd-session[4045]: pam_unix(sshd:session): session closed for user core
Aug 12 23:57:34.101894 systemd[1]: sshd@11-137.184.234.76:22-139.178.68.195:40670.service: Deactivated successfully.
Aug 12 23:57:34.106296 systemd[1]: session-12.scope: Deactivated successfully.
Aug 12 23:57:34.109868 systemd-logind[1467]: Session 12 logged out. Waiting for processes to exit.
Aug 12 23:57:34.120514 systemd[1]: Started sshd@12-137.184.234.76:22-139.178.68.195:40676.service - OpenSSH per-connection server daemon (139.178.68.195:40676).
Aug 12 23:57:34.125109 systemd-logind[1467]: Removed session 12.
Aug 12 23:57:34.181806 sshd[4057]: Accepted publickey for core from 139.178.68.195 port 40676 ssh2: RSA SHA256:Yd4cJaNOPrEdOKjK3Hl1fuqro0lLX1aY5TKeqt+Qp+4
Aug 12 23:57:34.184100 sshd-session[4057]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 12 23:57:34.191314 systemd-logind[1467]: New session 13 of user core.
Aug 12 23:57:34.199375 systemd[1]: Started session-13.scope - Session 13 of User core.
Aug 12 23:57:34.350426 sshd[4060]: Connection closed by 139.178.68.195 port 40676
Aug 12 23:57:34.351565 sshd-session[4057]: pam_unix(sshd:session): session closed for user core
Aug 12 23:57:34.355467 systemd-logind[1467]: Session 13 logged out. Waiting for processes to exit.
Aug 12 23:57:34.356399 systemd[1]: sshd@12-137.184.234.76:22-139.178.68.195:40676.service: Deactivated successfully.
Aug 12 23:57:34.359492 systemd[1]: session-13.scope: Deactivated successfully.
Aug 12 23:57:34.361826 systemd-logind[1467]: Removed session 13.
Aug 12 23:57:39.372876 systemd[1]: Started sshd@13-137.184.234.76:22-139.178.68.195:40692.service - OpenSSH per-connection server daemon (139.178.68.195:40692).
Aug 12 23:57:39.423846 sshd[4074]: Accepted publickey for core from 139.178.68.195 port 40692 ssh2: RSA SHA256:Yd4cJaNOPrEdOKjK3Hl1fuqro0lLX1aY5TKeqt+Qp+4
Aug 12 23:57:39.425589 sshd-session[4074]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 12 23:57:39.431798 systemd-logind[1467]: New session 14 of user core.
Aug 12 23:57:39.440296 systemd[1]: Started session-14.scope - Session 14 of User core.
Aug 12 23:57:39.593187 sshd[4076]: Connection closed by 139.178.68.195 port 40692
Aug 12 23:57:39.593869 sshd-session[4074]: pam_unix(sshd:session): session closed for user core
Aug 12 23:57:39.598479 systemd[1]: sshd@13-137.184.234.76:22-139.178.68.195:40692.service: Deactivated successfully.
Aug 12 23:57:39.601475 systemd[1]: session-14.scope: Deactivated successfully.
Aug 12 23:57:39.602727 systemd-logind[1467]: Session 14 logged out. Waiting for processes to exit.
Aug 12 23:57:39.605011 systemd-logind[1467]: Removed session 14.
Aug 12 23:57:44.619575 systemd[1]: Started sshd@14-137.184.234.76:22-139.178.68.195:35102.service - OpenSSH per-connection server daemon (139.178.68.195:35102).
Aug 12 23:57:44.675591 sshd[4088]: Accepted publickey for core from 139.178.68.195 port 35102 ssh2: RSA SHA256:Yd4cJaNOPrEdOKjK3Hl1fuqro0lLX1aY5TKeqt+Qp+4
Aug 12 23:57:44.677384 sshd-session[4088]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 12 23:57:44.683300 systemd-logind[1467]: New session 15 of user core.
Aug 12 23:57:44.690345 systemd[1]: Started session-15.scope - Session 15 of User core.
Aug 12 23:57:44.830658 sshd[4090]: Connection closed by 139.178.68.195 port 35102
Aug 12 23:57:44.831613 sshd-session[4088]: pam_unix(sshd:session): session closed for user core
Aug 12 23:57:44.842657 systemd[1]: sshd@14-137.184.234.76:22-139.178.68.195:35102.service: Deactivated successfully.
Aug 12 23:57:44.845141 systemd[1]: session-15.scope: Deactivated successfully.
Aug 12 23:57:44.847995 systemd-logind[1467]: Session 15 logged out. Waiting for processes to exit.
Aug 12 23:57:44.855890 systemd[1]: Started sshd@15-137.184.234.76:22-139.178.68.195:35118.service - OpenSSH per-connection server daemon (139.178.68.195:35118).
Aug 12 23:57:44.857751 systemd-logind[1467]: Removed session 15.
Aug 12 23:57:44.916893 sshd[4101]: Accepted publickey for core from 139.178.68.195 port 35118 ssh2: RSA SHA256:Yd4cJaNOPrEdOKjK3Hl1fuqro0lLX1aY5TKeqt+Qp+4
Aug 12 23:57:44.919041 sshd-session[4101]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 12 23:57:44.926092 systemd-logind[1467]: New session 16 of user core.
Aug 12 23:57:44.936434 systemd[1]: Started session-16.scope - Session 16 of User core.
Aug 12 23:57:45.280685 sshd[4104]: Connection closed by 139.178.68.195 port 35118
Aug 12 23:57:45.282141 sshd-session[4101]: pam_unix(sshd:session): session closed for user core
Aug 12 23:57:45.293067 systemd[1]: sshd@15-137.184.234.76:22-139.178.68.195:35118.service: Deactivated successfully.
Aug 12 23:57:45.297206 systemd[1]: session-16.scope: Deactivated successfully.
Aug 12 23:57:45.299715 systemd-logind[1467]: Session 16 logged out. Waiting for processes to exit.
Aug 12 23:57:45.310513 systemd[1]: Started sshd@16-137.184.234.76:22-139.178.68.195:35128.service - OpenSSH per-connection server daemon (139.178.68.195:35128).
Aug 12 23:57:45.313728 systemd-logind[1467]: Removed session 16.
Aug 12 23:57:45.380566 sshd[4112]: Accepted publickey for core from 139.178.68.195 port 35128 ssh2: RSA SHA256:Yd4cJaNOPrEdOKjK3Hl1fuqro0lLX1aY5TKeqt+Qp+4
Aug 12 23:57:45.384420 sshd-session[4112]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 12 23:57:45.392444 systemd-logind[1467]: New session 17 of user core.
Aug 12 23:57:45.403470 systemd[1]: Started session-17.scope - Session 17 of User core.
Aug 12 23:57:46.944886 sshd[4115]: Connection closed by 139.178.68.195 port 35128
Aug 12 23:57:46.945845 sshd-session[4112]: pam_unix(sshd:session): session closed for user core
Aug 12 23:57:46.976891 systemd[1]: sshd@16-137.184.234.76:22-139.178.68.195:35128.service: Deactivated successfully.
Aug 12 23:57:46.980634 systemd[1]: session-17.scope: Deactivated successfully.
Aug 12 23:57:46.981627 systemd[1]: session-17.scope: Consumed 605ms CPU time, 66.2M memory peak.
Aug 12 23:57:46.983909 systemd-logind[1467]: Session 17 logged out. Waiting for processes to exit.
Aug 12 23:57:46.992859 systemd[1]: Started sshd@17-137.184.234.76:22-139.178.68.195:35134.service - OpenSSH per-connection server daemon (139.178.68.195:35134).
Aug 12 23:57:46.996848 systemd-logind[1467]: Removed session 17.
Aug 12 23:57:47.056047 sshd[4131]: Accepted publickey for core from 139.178.68.195 port 35134 ssh2: RSA SHA256:Yd4cJaNOPrEdOKjK3Hl1fuqro0lLX1aY5TKeqt+Qp+4
Aug 12 23:57:47.058142 sshd-session[4131]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 12 23:57:47.065122 systemd-logind[1467]: New session 18 of user core.
Aug 12 23:57:47.068306 systemd[1]: Started session-18.scope - Session 18 of User core.
Aug 12 23:57:47.400566 sshd[4136]: Connection closed by 139.178.68.195 port 35134
Aug 12 23:57:47.401154 sshd-session[4131]: pam_unix(sshd:session): session closed for user core
Aug 12 23:57:47.417133 systemd[1]: sshd@17-137.184.234.76:22-139.178.68.195:35134.service: Deactivated successfully.
Aug 12 23:57:47.420711 systemd[1]: session-18.scope: Deactivated successfully.
Aug 12 23:57:47.423879 systemd-logind[1467]: Session 18 logged out. Waiting for processes to exit.
Aug 12 23:57:47.431568 systemd[1]: Started sshd@18-137.184.234.76:22-139.178.68.195:35138.service - OpenSSH per-connection server daemon (139.178.68.195:35138).
Aug 12 23:57:47.437356 systemd-logind[1467]: Removed session 18.
Aug 12 23:57:47.497501 sshd[4145]: Accepted publickey for core from 139.178.68.195 port 35138 ssh2: RSA SHA256:Yd4cJaNOPrEdOKjK3Hl1fuqro0lLX1aY5TKeqt+Qp+4
Aug 12 23:57:47.499740 sshd-session[4145]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 12 23:57:47.506923 systemd-logind[1467]: New session 19 of user core.
Aug 12 23:57:47.514289 systemd[1]: Started session-19.scope - Session 19 of User core.
Aug 12 23:57:47.679586 sshd[4148]: Connection closed by 139.178.68.195 port 35138
Aug 12 23:57:47.680921 sshd-session[4145]: pam_unix(sshd:session): session closed for user core
Aug 12 23:57:47.687898 systemd[1]: sshd@18-137.184.234.76:22-139.178.68.195:35138.service: Deactivated successfully.
Aug 12 23:57:47.691702 systemd[1]: session-19.scope: Deactivated successfully.
Aug 12 23:57:47.693475 systemd-logind[1467]: Session 19 logged out. Waiting for processes to exit.
Aug 12 23:57:47.695431 systemd-logind[1467]: Removed session 19.
Aug 12 23:57:52.700411 systemd[1]: Started sshd@19-137.184.234.76:22-139.178.68.195:42784.service - OpenSSH per-connection server daemon (139.178.68.195:42784).
Aug 12 23:57:52.762276 sshd[4163]: Accepted publickey for core from 139.178.68.195 port 42784 ssh2: RSA SHA256:Yd4cJaNOPrEdOKjK3Hl1fuqro0lLX1aY5TKeqt+Qp+4
Aug 12 23:57:52.764289 sshd-session[4163]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 12 23:57:52.770675 systemd-logind[1467]: New session 20 of user core.
Aug 12 23:57:52.776364 systemd[1]: Started session-20.scope - Session 20 of User core.
Aug 12 23:57:52.921030 sshd[4165]: Connection closed by 139.178.68.195 port 42784
Aug 12 23:57:52.921900 sshd-session[4163]: pam_unix(sshd:session): session closed for user core
Aug 12 23:57:52.927044 systemd[1]: sshd@19-137.184.234.76:22-139.178.68.195:42784.service: Deactivated successfully.
Aug 12 23:57:52.930074 systemd[1]: session-20.scope: Deactivated successfully.
Aug 12 23:57:52.931169 systemd-logind[1467]: Session 20 logged out. Waiting for processes to exit.
Aug 12 23:57:52.932389 systemd-logind[1467]: Removed session 20.
Aug 12 23:57:57.942390 systemd[1]: Started sshd@20-137.184.234.76:22-139.178.68.195:42798.service - OpenSSH per-connection server daemon (139.178.68.195:42798).
Aug 12 23:57:58.001718 sshd[4177]: Accepted publickey for core from 139.178.68.195 port 42798 ssh2: RSA SHA256:Yd4cJaNOPrEdOKjK3Hl1fuqro0lLX1aY5TKeqt+Qp+4
Aug 12 23:57:58.004773 sshd-session[4177]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 12 23:57:58.011066 systemd-logind[1467]: New session 21 of user core.
Aug 12 23:57:58.024340 systemd[1]: Started session-21.scope - Session 21 of User core.
Aug 12 23:57:58.182302 sshd[4179]: Connection closed by 139.178.68.195 port 42798
Aug 12 23:57:58.182997 sshd-session[4177]: pam_unix(sshd:session): session closed for user core
Aug 12 23:57:58.188248 systemd[1]: sshd@20-137.184.234.76:22-139.178.68.195:42798.service: Deactivated successfully.
Aug 12 23:57:58.190973 systemd[1]: session-21.scope: Deactivated successfully.
Aug 12 23:57:58.192007 systemd-logind[1467]: Session 21 logged out. Waiting for processes to exit.
Aug 12 23:57:58.193589 systemd-logind[1467]: Removed session 21.
Aug 12 23:58:03.188504 systemd[1]: Started sshd@21-137.184.234.76:22-139.178.68.195:37198.service - OpenSSH per-connection server daemon (139.178.68.195:37198).
Aug 12 23:58:03.249575 sshd[4190]: Accepted publickey for core from 139.178.68.195 port 37198 ssh2: RSA SHA256:Yd4cJaNOPrEdOKjK3Hl1fuqro0lLX1aY5TKeqt+Qp+4
Aug 12 23:58:03.251662 sshd-session[4190]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 12 23:58:03.259230 systemd-logind[1467]: New session 22 of user core.
Aug 12 23:58:03.269435 systemd[1]: Started session-22.scope - Session 22 of User core.
Aug 12 23:58:03.408816 sshd[4192]: Connection closed by 139.178.68.195 port 37198
Aug 12 23:58:03.409283 sshd-session[4190]: pam_unix(sshd:session): session closed for user core
Aug 12 23:58:03.413697 systemd-logind[1467]: Session 22 logged out. Waiting for processes to exit.
Aug 12 23:58:03.413922 systemd[1]: sshd@21-137.184.234.76:22-139.178.68.195:37198.service: Deactivated successfully.
Aug 12 23:58:03.416399 systemd[1]: session-22.scope: Deactivated successfully.
Aug 12 23:58:03.418704 systemd-logind[1467]: Removed session 22.
Aug 12 23:58:07.244485 kubelet[2584]: E0812 23:58:07.243688 2584 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Aug 12 23:58:08.435655 systemd[1]: Started sshd@22-137.184.234.76:22-139.178.68.195:37212.service - OpenSSH per-connection server daemon (139.178.68.195:37212).
Aug 12 23:58:08.487595 sshd[4204]: Accepted publickey for core from 139.178.68.195 port 37212 ssh2: RSA SHA256:Yd4cJaNOPrEdOKjK3Hl1fuqro0lLX1aY5TKeqt+Qp+4
Aug 12 23:58:08.489287 sshd-session[4204]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 12 23:58:08.495624 systemd-logind[1467]: New session 23 of user core.
Aug 12 23:58:08.505352 systemd[1]: Started session-23.scope - Session 23 of User core.
Aug 12 23:58:08.659521 sshd[4206]: Connection closed by 139.178.68.195 port 37212
Aug 12 23:58:08.660335 sshd-session[4204]: pam_unix(sshd:session): session closed for user core
Aug 12 23:58:08.673518 systemd[1]: sshd@22-137.184.234.76:22-139.178.68.195:37212.service: Deactivated successfully.
Aug 12 23:58:08.676504 systemd[1]: session-23.scope: Deactivated successfully.
Aug 12 23:58:08.678635 systemd-logind[1467]: Session 23 logged out. Waiting for processes to exit.
Aug 12 23:58:08.684632 systemd[1]: Started sshd@23-137.184.234.76:22-139.178.68.195:37224.service - OpenSSH per-connection server daemon (139.178.68.195:37224).
Aug 12 23:58:08.686614 systemd-logind[1467]: Removed session 23.
Aug 12 23:58:08.744270 sshd[4217]: Accepted publickey for core from 139.178.68.195 port 37224 ssh2: RSA SHA256:Yd4cJaNOPrEdOKjK3Hl1fuqro0lLX1aY5TKeqt+Qp+4
Aug 12 23:58:08.746337 sshd-session[4217]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 12 23:58:08.753469 systemd-logind[1467]: New session 24 of user core.
Aug 12 23:58:08.762317 systemd[1]: Started session-24.scope - Session 24 of User core.
Aug 12 23:58:10.477054 containerd[1494]: time="2025-08-12T23:58:10.474809050Z" level=info msg="StopContainer for \"3c94959b1fa7613366d1c70f6a875a7973e2dd8f64afac9e7313462e987ec048\" with timeout 30 (s)"
Aug 12 23:58:10.479200 systemd[1]: run-containerd-runc-k8s.io-d2821a359a352fc6e61a7beb4e5bbcc7ee59fd8a9c90e141afb4defd9abe61ea-runc.yQb048.mount: Deactivated successfully.
Aug 12 23:58:10.479566 containerd[1494]: time="2025-08-12T23:58:10.479230225Z" level=info msg="Stop container \"3c94959b1fa7613366d1c70f6a875a7973e2dd8f64afac9e7313462e987ec048\" with signal terminated"
Aug 12 23:58:10.502973 containerd[1494]: time="2025-08-12T23:58:10.502909705Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Aug 12 23:58:10.504340 systemd[1]: cri-containerd-3c94959b1fa7613366d1c70f6a875a7973e2dd8f64afac9e7313462e987ec048.scope: Deactivated successfully.
Aug 12 23:58:10.515580 containerd[1494]: time="2025-08-12T23:58:10.515503656Z" level=info msg="StopContainer for \"d2821a359a352fc6e61a7beb4e5bbcc7ee59fd8a9c90e141afb4defd9abe61ea\" with timeout 2 (s)"
Aug 12 23:58:10.516356 containerd[1494]: time="2025-08-12T23:58:10.516318756Z" level=info msg="Stop container \"d2821a359a352fc6e61a7beb4e5bbcc7ee59fd8a9c90e141afb4defd9abe61ea\" with signal terminated"
Aug 12 23:58:10.531470 systemd-networkd[1370]: lxc_health: Link DOWN
Aug 12 23:58:10.531483 systemd-networkd[1370]: lxc_health: Lost carrier
Aug 12 23:58:10.557959 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3c94959b1fa7613366d1c70f6a875a7973e2dd8f64afac9e7313462e987ec048-rootfs.mount: Deactivated successfully.
Aug 12 23:58:10.561460 systemd[1]: cri-containerd-d2821a359a352fc6e61a7beb4e5bbcc7ee59fd8a9c90e141afb4defd9abe61ea.scope: Deactivated successfully.
Aug 12 23:58:10.561822 systemd[1]: cri-containerd-d2821a359a352fc6e61a7beb4e5bbcc7ee59fd8a9c90e141afb4defd9abe61ea.scope: Consumed 9.143s CPU time, 190.2M memory peak, 70.1M read from disk, 13.3M written to disk.
Aug 12 23:58:10.572672 containerd[1494]: time="2025-08-12T23:58:10.572422616Z" level=info msg="shim disconnected" id=3c94959b1fa7613366d1c70f6a875a7973e2dd8f64afac9e7313462e987ec048 namespace=k8s.io
Aug 12 23:58:10.572672 containerd[1494]: time="2025-08-12T23:58:10.572486924Z" level=warning msg="cleaning up after shim disconnected" id=3c94959b1fa7613366d1c70f6a875a7973e2dd8f64afac9e7313462e987ec048 namespace=k8s.io
Aug 12 23:58:10.572672 containerd[1494]: time="2025-08-12T23:58:10.572495122Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 12 23:58:10.599388 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d2821a359a352fc6e61a7beb4e5bbcc7ee59fd8a9c90e141afb4defd9abe61ea-rootfs.mount: Deactivated successfully.
Aug 12 23:58:10.608941 containerd[1494]: time="2025-08-12T23:58:10.608651686Z" level=info msg="shim disconnected" id=d2821a359a352fc6e61a7beb4e5bbcc7ee59fd8a9c90e141afb4defd9abe61ea namespace=k8s.io
Aug 12 23:58:10.609273 containerd[1494]: time="2025-08-12T23:58:10.608933963Z" level=warning msg="cleaning up after shim disconnected" id=d2821a359a352fc6e61a7beb4e5bbcc7ee59fd8a9c90e141afb4defd9abe61ea namespace=k8s.io
Aug 12 23:58:10.609273 containerd[1494]: time="2025-08-12T23:58:10.609082394Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 12 23:58:10.610005 containerd[1494]: time="2025-08-12T23:58:10.609832901Z" level=info msg="StopContainer for \"3c94959b1fa7613366d1c70f6a875a7973e2dd8f64afac9e7313462e987ec048\" returns successfully"
Aug 12 23:58:10.612573 containerd[1494]: time="2025-08-12T23:58:10.612232544Z" level=info msg="StopPodSandbox for \"38206d25010d74774ad9e16bb4b268693485e51629606a80656c151eefa1122c\""
Aug 12 23:58:10.621688 containerd[1494]: time="2025-08-12T23:58:10.621428713Z" level=info msg="Container to stop \"3c94959b1fa7613366d1c70f6a875a7973e2dd8f64afac9e7313462e987ec048\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 12 23:58:10.627826 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-38206d25010d74774ad9e16bb4b268693485e51629606a80656c151eefa1122c-shm.mount: Deactivated successfully.
Aug 12 23:58:10.651533 systemd[1]: cri-containerd-38206d25010d74774ad9e16bb4b268693485e51629606a80656c151eefa1122c.scope: Deactivated successfully.
Aug 12 23:58:10.658639 containerd[1494]: time="2025-08-12T23:58:10.658575274Z" level=info msg="StopContainer for \"d2821a359a352fc6e61a7beb4e5bbcc7ee59fd8a9c90e141afb4defd9abe61ea\" returns successfully"
Aug 12 23:58:10.659454 containerd[1494]: time="2025-08-12T23:58:10.659387193Z" level=info msg="StopPodSandbox for \"09eca810e340f53ecda4b28fac44112c69c5170401b1e6e1ca8c0795e037bf0e\""
Aug 12 23:58:10.659454 containerd[1494]: time="2025-08-12T23:58:10.659433994Z" level=info msg="Container to stop \"f13882758f251fe196918fa814ff421fd4428447193bd9bd72a3db7b9ad550ba\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 12 23:58:10.659637 containerd[1494]: time="2025-08-12T23:58:10.659470427Z" level=info msg="Container to stop \"b82bfdd6db0dadd02d70142fe9abb9c1b48aa6429f9dbea7a0fe2499dc73a7fb\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 12 23:58:10.659637 containerd[1494]: time="2025-08-12T23:58:10.659480757Z" level=info msg="Container to stop \"c60543eef062c006031aa3d5e2fb336089441f3ed0f8c00c70bf2e3ba6d8295b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 12 23:58:10.659637 containerd[1494]: time="2025-08-12T23:58:10.659489221Z" level=info msg="Container to stop \"413b711de6e8295511ed27d09f166c4885b5a7ad38889a3b23f5fd576a82592d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 12 23:58:10.659637 containerd[1494]: time="2025-08-12T23:58:10.659498126Z" level=info msg="Container to stop \"d2821a359a352fc6e61a7beb4e5bbcc7ee59fd8a9c90e141afb4defd9abe61ea\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 12 23:58:10.669769 systemd[1]: cri-containerd-09eca810e340f53ecda4b28fac44112c69c5170401b1e6e1ca8c0795e037bf0e.scope: Deactivated successfully.
Aug 12 23:58:10.710252 containerd[1494]: time="2025-08-12T23:58:10.710153943Z" level=info msg="shim disconnected" id=38206d25010d74774ad9e16bb4b268693485e51629606a80656c151eefa1122c namespace=k8s.io
Aug 12 23:58:10.710252 containerd[1494]: time="2025-08-12T23:58:10.710237604Z" level=warning msg="cleaning up after shim disconnected" id=38206d25010d74774ad9e16bb4b268693485e51629606a80656c151eefa1122c namespace=k8s.io
Aug 12 23:58:10.710252 containerd[1494]: time="2025-08-12T23:58:10.710251238Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 12 23:58:10.712876 containerd[1494]: time="2025-08-12T23:58:10.712798456Z" level=info msg="shim disconnected" id=09eca810e340f53ecda4b28fac44112c69c5170401b1e6e1ca8c0795e037bf0e namespace=k8s.io
Aug 12 23:58:10.713130 containerd[1494]: time="2025-08-12T23:58:10.713105820Z" level=warning msg="cleaning up after shim disconnected" id=09eca810e340f53ecda4b28fac44112c69c5170401b1e6e1ca8c0795e037bf0e namespace=k8s.io
Aug 12 23:58:10.713265 containerd[1494]: time="2025-08-12T23:58:10.713242638Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 12 23:58:10.734078 containerd[1494]: time="2025-08-12T23:58:10.733395356Z" level=info msg="TearDown network for sandbox \"38206d25010d74774ad9e16bb4b268693485e51629606a80656c151eefa1122c\" successfully"
Aug 12 23:58:10.734247 containerd[1494]: time="2025-08-12T23:58:10.734223238Z" level=info msg="StopPodSandbox for \"38206d25010d74774ad9e16bb4b268693485e51629606a80656c151eefa1122c\" returns successfully"
Aug 12 23:58:10.748430 containerd[1494]: time="2025-08-12T23:58:10.748185438Z" level=info msg="TearDown network for sandbox \"09eca810e340f53ecda4b28fac44112c69c5170401b1e6e1ca8c0795e037bf0e\" successfully"
Aug 12 23:58:10.748430 containerd[1494]: time="2025-08-12T23:58:10.748255092Z" level=info msg="StopPodSandbox for \"09eca810e340f53ecda4b28fac44112c69c5170401b1e6e1ca8c0795e037bf0e\" returns successfully"
Aug 12 23:58:10.797519 kubelet[2584]: I0812 23:58:10.796716 2584 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/50c5fd17-a29b-4a6f-b010-2a19bd801007-cilium-cgroup\") pod \"50c5fd17-a29b-4a6f-b010-2a19bd801007\" (UID: \"50c5fd17-a29b-4a6f-b010-2a19bd801007\") "
Aug 12 23:58:10.797519 kubelet[2584]: I0812 23:58:10.796780 2584 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/50c5fd17-a29b-4a6f-b010-2a19bd801007-host-proc-sys-kernel\") pod \"50c5fd17-a29b-4a6f-b010-2a19bd801007\" (UID: \"50c5fd17-a29b-4a6f-b010-2a19bd801007\") "
Aug 12 23:58:10.797519 kubelet[2584]: I0812 23:58:10.796813 2584 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f8t98\" (UniqueName: \"kubernetes.io/projected/1c87706e-66de-43e0-a390-87da9fa3e36d-kube-api-access-f8t98\") pod \"1c87706e-66de-43e0-a390-87da9fa3e36d\" (UID: \"1c87706e-66de-43e0-a390-87da9fa3e36d\") "
Aug 12 23:58:10.797519 kubelet[2584]: I0812 23:58:10.796841 2584 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/50c5fd17-a29b-4a6f-b010-2a19bd801007-etc-cni-netd\") pod \"50c5fd17-a29b-4a6f-b010-2a19bd801007\" (UID: \"50c5fd17-a29b-4a6f-b010-2a19bd801007\") "
Aug 12 23:58:10.797519 kubelet[2584]: I0812 23:58:10.796841 2584 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/50c5fd17-a29b-4a6f-b010-2a19bd801007-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "50c5fd17-a29b-4a6f-b010-2a19bd801007" (UID: "50c5fd17-a29b-4a6f-b010-2a19bd801007"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 12 23:58:10.797519 kubelet[2584]: I0812 23:58:10.796871 2584 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1c87706e-66de-43e0-a390-87da9fa3e36d-cilium-config-path\") pod \"1c87706e-66de-43e0-a390-87da9fa3e36d\" (UID: \"1c87706e-66de-43e0-a390-87da9fa3e36d\") "
Aug 12 23:58:10.798372 kubelet[2584]: I0812 23:58:10.796896 2584 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ft4lc\" (UniqueName: \"kubernetes.io/projected/50c5fd17-a29b-4a6f-b010-2a19bd801007-kube-api-access-ft4lc\") pod \"50c5fd17-a29b-4a6f-b010-2a19bd801007\" (UID: \"50c5fd17-a29b-4a6f-b010-2a19bd801007\") "
Aug 12 23:58:10.798372 kubelet[2584]: I0812 23:58:10.796919 2584 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/50c5fd17-a29b-4a6f-b010-2a19bd801007-hubble-tls\") pod \"50c5fd17-a29b-4a6f-b010-2a19bd801007\" (UID: \"50c5fd17-a29b-4a6f-b010-2a19bd801007\") "
Aug 12 23:58:10.798372 kubelet[2584]: I0812 23:58:10.796944 2584 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/50c5fd17-a29b-4a6f-b010-2a19bd801007-clustermesh-secrets\") pod \"50c5fd17-a29b-4a6f-b010-2a19bd801007\" (UID: \"50c5fd17-a29b-4a6f-b010-2a19bd801007\") "
Aug 12 23:58:10.798372 kubelet[2584]: I0812 23:58:10.796961 2584 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/50c5fd17-a29b-4a6f-b010-2a19bd801007-bpf-maps\") pod \"50c5fd17-a29b-4a6f-b010-2a19bd801007\" (UID: \"50c5fd17-a29b-4a6f-b010-2a19bd801007\") "
Aug 12 23:58:10.798372 kubelet[2584]: I0812 23:58:10.796978 2584 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/50c5fd17-a29b-4a6f-b010-2a19bd801007-host-proc-sys-net\") pod \"50c5fd17-a29b-4a6f-b010-2a19bd801007\" (UID: \"50c5fd17-a29b-4a6f-b010-2a19bd801007\") "
Aug 12 23:58:10.798372 kubelet[2584]: I0812 23:58:10.796992 2584 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/50c5fd17-a29b-4a6f-b010-2a19bd801007-cni-path\") pod \"50c5fd17-a29b-4a6f-b010-2a19bd801007\" (UID: \"50c5fd17-a29b-4a6f-b010-2a19bd801007\") "
Aug 12 23:58:10.798611 kubelet[2584]: I0812 23:58:10.797012 2584 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/50c5fd17-a29b-4a6f-b010-2a19bd801007-cilium-config-path\") pod \"50c5fd17-a29b-4a6f-b010-2a19bd801007\" (UID: \"50c5fd17-a29b-4a6f-b010-2a19bd801007\") "
Aug 12 23:58:10.798611 kubelet[2584]: I0812 23:58:10.797044 2584 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/50c5fd17-a29b-4a6f-b010-2a19bd801007-lib-modules\") pod \"50c5fd17-a29b-4a6f-b010-2a19bd801007\" (UID: \"50c5fd17-a29b-4a6f-b010-2a19bd801007\") "
Aug 12 23:58:10.798611 kubelet[2584]: I0812 23:58:10.797060 2584 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/50c5fd17-a29b-4a6f-b010-2a19bd801007-cilium-run\") pod \"50c5fd17-a29b-4a6f-b010-2a19bd801007\" (UID: \"50c5fd17-a29b-4a6f-b010-2a19bd801007\") "
Aug 12 23:58:10.798611 kubelet[2584]: I0812 23:58:10.797075 2584 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/50c5fd17-a29b-4a6f-b010-2a19bd801007-xtables-lock\") pod \"50c5fd17-a29b-4a6f-b010-2a19bd801007\" (UID: \"50c5fd17-a29b-4a6f-b010-2a19bd801007\") "
Aug 12 23:58:10.798611 kubelet[2584]: I0812 23:58:10.797092 2584 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/50c5fd17-a29b-4a6f-b010-2a19bd801007-hostproc\") pod \"50c5fd17-a29b-4a6f-b010-2a19bd801007\" (UID: \"50c5fd17-a29b-4a6f-b010-2a19bd801007\") "
Aug 12 23:58:10.798611 kubelet[2584]: I0812 23:58:10.797141 2584 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/50c5fd17-a29b-4a6f-b010-2a19bd801007-cilium-cgroup\") on node \"ci-4230.2.2-9-8f36bdb456\" DevicePath \"\""
Aug 12 23:58:10.798843 kubelet[2584]: I0812 23:58:10.797193 2584 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/50c5fd17-a29b-4a6f-b010-2a19bd801007-hostproc" (OuterVolumeSpecName: "hostproc") pod "50c5fd17-a29b-4a6f-b010-2a19bd801007" (UID: "50c5fd17-a29b-4a6f-b010-2a19bd801007"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 12 23:58:10.798843 kubelet[2584]: I0812 23:58:10.797225 2584 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/50c5fd17-a29b-4a6f-b010-2a19bd801007-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "50c5fd17-a29b-4a6f-b010-2a19bd801007" (UID: "50c5fd17-a29b-4a6f-b010-2a19bd801007"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 12 23:58:10.805196 kubelet[2584]: I0812 23:58:10.804325 2584 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/50c5fd17-a29b-4a6f-b010-2a19bd801007-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "50c5fd17-a29b-4a6f-b010-2a19bd801007" (UID: "50c5fd17-a29b-4a6f-b010-2a19bd801007"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 12 23:58:10.805383 kubelet[2584]: I0812 23:58:10.805292 2584 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50c5fd17-a29b-4a6f-b010-2a19bd801007-kube-api-access-ft4lc" (OuterVolumeSpecName: "kube-api-access-ft4lc") pod "50c5fd17-a29b-4a6f-b010-2a19bd801007" (UID: "50c5fd17-a29b-4a6f-b010-2a19bd801007"). InnerVolumeSpecName "kube-api-access-ft4lc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Aug 12 23:58:10.811056 kubelet[2584]: I0812 23:58:10.810209 2584 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/50c5fd17-a29b-4a6f-b010-2a19bd801007-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "50c5fd17-a29b-4a6f-b010-2a19bd801007" (UID: "50c5fd17-a29b-4a6f-b010-2a19bd801007"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 12 23:58:10.811056 kubelet[2584]: I0812 23:58:10.810289 2584 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/50c5fd17-a29b-4a6f-b010-2a19bd801007-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "50c5fd17-a29b-4a6f-b010-2a19bd801007" (UID: "50c5fd17-a29b-4a6f-b010-2a19bd801007"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 12 23:58:10.811056 kubelet[2584]: I0812 23:58:10.810316 2584 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/50c5fd17-a29b-4a6f-b010-2a19bd801007-cni-path" (OuterVolumeSpecName: "cni-path") pod "50c5fd17-a29b-4a6f-b010-2a19bd801007" (UID: "50c5fd17-a29b-4a6f-b010-2a19bd801007"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 12 23:58:10.811346 kubelet[2584]: I0812 23:58:10.811194 2584 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/50c5fd17-a29b-4a6f-b010-2a19bd801007-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "50c5fd17-a29b-4a6f-b010-2a19bd801007" (UID: "50c5fd17-a29b-4a6f-b010-2a19bd801007"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 12 23:58:10.811346 kubelet[2584]: I0812 23:58:10.811258 2584 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/50c5fd17-a29b-4a6f-b010-2a19bd801007-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "50c5fd17-a29b-4a6f-b010-2a19bd801007" (UID: "50c5fd17-a29b-4a6f-b010-2a19bd801007"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 12 23:58:10.811491 kubelet[2584]: I0812 23:58:10.811396 2584 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/50c5fd17-a29b-4a6f-b010-2a19bd801007-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "50c5fd17-a29b-4a6f-b010-2a19bd801007" (UID: "50c5fd17-a29b-4a6f-b010-2a19bd801007"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 12 23:58:10.811741 kubelet[2584]: I0812 23:58:10.811711 2584 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c87706e-66de-43e0-a390-87da9fa3e36d-kube-api-access-f8t98" (OuterVolumeSpecName: "kube-api-access-f8t98") pod "1c87706e-66de-43e0-a390-87da9fa3e36d" (UID: "1c87706e-66de-43e0-a390-87da9fa3e36d"). InnerVolumeSpecName "kube-api-access-f8t98". PluginName "kubernetes.io/projected", VolumeGidValue ""
Aug 12 23:58:10.812371 kubelet[2584]: I0812 23:58:10.812341 2584 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c87706e-66de-43e0-a390-87da9fa3e36d-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "1c87706e-66de-43e0-a390-87da9fa3e36d" (UID: "1c87706e-66de-43e0-a390-87da9fa3e36d"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Aug 12 23:58:10.813412 kubelet[2584]: I0812 23:58:10.813372 2584 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50c5fd17-a29b-4a6f-b010-2a19bd801007-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "50c5fd17-a29b-4a6f-b010-2a19bd801007" (UID: "50c5fd17-a29b-4a6f-b010-2a19bd801007"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Aug 12 23:58:10.814746 kubelet[2584]: I0812 23:58:10.814681 2584 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50c5fd17-a29b-4a6f-b010-2a19bd801007-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "50c5fd17-a29b-4a6f-b010-2a19bd801007" (UID: "50c5fd17-a29b-4a6f-b010-2a19bd801007"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Aug 12 23:58:10.815598 kubelet[2584]: I0812 23:58:10.815559 2584 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50c5fd17-a29b-4a6f-b010-2a19bd801007-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "50c5fd17-a29b-4a6f-b010-2a19bd801007" (UID: "50c5fd17-a29b-4a6f-b010-2a19bd801007"). InnerVolumeSpecName "cilium-config-path".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 12 23:58:10.898282 kubelet[2584]: I0812 23:58:10.898222 2584 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/50c5fd17-a29b-4a6f-b010-2a19bd801007-hostproc\") on node \"ci-4230.2.2-9-8f36bdb456\" DevicePath \"\"" Aug 12 23:58:10.898550 kubelet[2584]: I0812 23:58:10.898524 2584 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/50c5fd17-a29b-4a6f-b010-2a19bd801007-cilium-config-path\") on node \"ci-4230.2.2-9-8f36bdb456\" DevicePath \"\"" Aug 12 23:58:10.898662 kubelet[2584]: I0812 23:58:10.898636 2584 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/50c5fd17-a29b-4a6f-b010-2a19bd801007-lib-modules\") on node \"ci-4230.2.2-9-8f36bdb456\" DevicePath \"\"" Aug 12 23:58:10.898969 kubelet[2584]: I0812 23:58:10.898765 2584 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/50c5fd17-a29b-4a6f-b010-2a19bd801007-cilium-run\") on node \"ci-4230.2.2-9-8f36bdb456\" DevicePath \"\"" Aug 12 23:58:10.898969 kubelet[2584]: I0812 23:58:10.898798 2584 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/50c5fd17-a29b-4a6f-b010-2a19bd801007-xtables-lock\") on node \"ci-4230.2.2-9-8f36bdb456\" DevicePath \"\"" Aug 12 23:58:10.898969 kubelet[2584]: I0812 23:58:10.898812 2584 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/50c5fd17-a29b-4a6f-b010-2a19bd801007-host-proc-sys-kernel\") on node \"ci-4230.2.2-9-8f36bdb456\" DevicePath \"\"" Aug 12 23:58:10.898969 kubelet[2584]: I0812 23:58:10.898827 2584 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f8t98\" (UniqueName: 
\"kubernetes.io/projected/1c87706e-66de-43e0-a390-87da9fa3e36d-kube-api-access-f8t98\") on node \"ci-4230.2.2-9-8f36bdb456\" DevicePath \"\"" Aug 12 23:58:10.898969 kubelet[2584]: I0812 23:58:10.898841 2584 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/50c5fd17-a29b-4a6f-b010-2a19bd801007-etc-cni-netd\") on node \"ci-4230.2.2-9-8f36bdb456\" DevicePath \"\"" Aug 12 23:58:10.898969 kubelet[2584]: I0812 23:58:10.898854 2584 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1c87706e-66de-43e0-a390-87da9fa3e36d-cilium-config-path\") on node \"ci-4230.2.2-9-8f36bdb456\" DevicePath \"\"" Aug 12 23:58:10.898969 kubelet[2584]: I0812 23:58:10.898867 2584 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ft4lc\" (UniqueName: \"kubernetes.io/projected/50c5fd17-a29b-4a6f-b010-2a19bd801007-kube-api-access-ft4lc\") on node \"ci-4230.2.2-9-8f36bdb456\" DevicePath \"\"" Aug 12 23:58:10.898969 kubelet[2584]: I0812 23:58:10.898887 2584 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/50c5fd17-a29b-4a6f-b010-2a19bd801007-hubble-tls\") on node \"ci-4230.2.2-9-8f36bdb456\" DevicePath \"\"" Aug 12 23:58:10.899410 kubelet[2584]: I0812 23:58:10.898901 2584 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/50c5fd17-a29b-4a6f-b010-2a19bd801007-clustermesh-secrets\") on node \"ci-4230.2.2-9-8f36bdb456\" DevicePath \"\"" Aug 12 23:58:10.899410 kubelet[2584]: I0812 23:58:10.898915 2584 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/50c5fd17-a29b-4a6f-b010-2a19bd801007-bpf-maps\") on node \"ci-4230.2.2-9-8f36bdb456\" DevicePath \"\"" Aug 12 23:58:10.899410 kubelet[2584]: I0812 23:58:10.898930 2584 reconciler_common.go:293] "Volume detached for volume 
\"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/50c5fd17-a29b-4a6f-b010-2a19bd801007-host-proc-sys-net\") on node \"ci-4230.2.2-9-8f36bdb456\" DevicePath \"\"" Aug 12 23:58:10.899410 kubelet[2584]: I0812 23:58:10.898943 2584 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/50c5fd17-a29b-4a6f-b010-2a19bd801007-cni-path\") on node \"ci-4230.2.2-9-8f36bdb456\" DevicePath \"\"" Aug 12 23:58:11.253370 systemd[1]: Removed slice kubepods-besteffort-pod1c87706e_66de_43e0_a390_87da9fa3e36d.slice - libcontainer container kubepods-besteffort-pod1c87706e_66de_43e0_a390_87da9fa3e36d.slice. Aug 12 23:58:11.257805 systemd[1]: Removed slice kubepods-burstable-pod50c5fd17_a29b_4a6f_b010_2a19bd801007.slice - libcontainer container kubepods-burstable-pod50c5fd17_a29b_4a6f_b010_2a19bd801007.slice. Aug 12 23:58:11.258166 systemd[1]: kubepods-burstable-pod50c5fd17_a29b_4a6f_b010_2a19bd801007.slice: Consumed 9.251s CPU time, 190.5M memory peak, 70.1M read from disk, 15.9M written to disk. Aug 12 23:58:11.469816 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-38206d25010d74774ad9e16bb4b268693485e51629606a80656c151eefa1122c-rootfs.mount: Deactivated successfully. Aug 12 23:58:11.469972 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-09eca810e340f53ecda4b28fac44112c69c5170401b1e6e1ca8c0795e037bf0e-rootfs.mount: Deactivated successfully. Aug 12 23:58:11.470078 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-09eca810e340f53ecda4b28fac44112c69c5170401b1e6e1ca8c0795e037bf0e-shm.mount: Deactivated successfully. Aug 12 23:58:11.470189 systemd[1]: var-lib-kubelet-pods-1c87706e\x2d66de\x2d43e0\x2da390\x2d87da9fa3e36d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2df8t98.mount: Deactivated successfully. 
Aug 12 23:58:11.470286 systemd[1]: var-lib-kubelet-pods-50c5fd17\x2da29b\x2d4a6f\x2db010\x2d2a19bd801007-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dft4lc.mount: Deactivated successfully. Aug 12 23:58:11.470388 systemd[1]: var-lib-kubelet-pods-50c5fd17\x2da29b\x2d4a6f\x2db010\x2d2a19bd801007-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Aug 12 23:58:11.470486 systemd[1]: var-lib-kubelet-pods-50c5fd17\x2da29b\x2d4a6f\x2db010\x2d2a19bd801007-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Aug 12 23:58:11.552302 kubelet[2584]: I0812 23:58:11.551965 2584 scope.go:117] "RemoveContainer" containerID="d2821a359a352fc6e61a7beb4e5bbcc7ee59fd8a9c90e141afb4defd9abe61ea" Aug 12 23:58:11.569294 containerd[1494]: time="2025-08-12T23:58:11.569228275Z" level=info msg="RemoveContainer for \"d2821a359a352fc6e61a7beb4e5bbcc7ee59fd8a9c90e141afb4defd9abe61ea\"" Aug 12 23:58:11.573109 containerd[1494]: time="2025-08-12T23:58:11.573067741Z" level=info msg="RemoveContainer for \"d2821a359a352fc6e61a7beb4e5bbcc7ee59fd8a9c90e141afb4defd9abe61ea\" returns successfully" Aug 12 23:58:11.573688 kubelet[2584]: I0812 23:58:11.573569 2584 scope.go:117] "RemoveContainer" containerID="413b711de6e8295511ed27d09f166c4885b5a7ad38889a3b23f5fd576a82592d" Aug 12 23:58:11.578740 containerd[1494]: time="2025-08-12T23:58:11.578703704Z" level=info msg="RemoveContainer for \"413b711de6e8295511ed27d09f166c4885b5a7ad38889a3b23f5fd576a82592d\"" Aug 12 23:58:11.583777 containerd[1494]: time="2025-08-12T23:58:11.583565696Z" level=info msg="RemoveContainer for \"413b711de6e8295511ed27d09f166c4885b5a7ad38889a3b23f5fd576a82592d\" returns successfully" Aug 12 23:58:11.584301 kubelet[2584]: I0812 23:58:11.583969 2584 scope.go:117] "RemoveContainer" containerID="f13882758f251fe196918fa814ff421fd4428447193bd9bd72a3db7b9ad550ba" Aug 12 23:58:11.588171 containerd[1494]: time="2025-08-12T23:58:11.588079289Z" level=info 
msg="RemoveContainer for \"f13882758f251fe196918fa814ff421fd4428447193bd9bd72a3db7b9ad550ba\"" Aug 12 23:58:11.591923 containerd[1494]: time="2025-08-12T23:58:11.591864728Z" level=info msg="RemoveContainer for \"f13882758f251fe196918fa814ff421fd4428447193bd9bd72a3db7b9ad550ba\" returns successfully" Aug 12 23:58:11.592585 kubelet[2584]: I0812 23:58:11.592197 2584 scope.go:117] "RemoveContainer" containerID="c60543eef062c006031aa3d5e2fb336089441f3ed0f8c00c70bf2e3ba6d8295b" Aug 12 23:58:11.593572 containerd[1494]: time="2025-08-12T23:58:11.593528504Z" level=info msg="RemoveContainer for \"c60543eef062c006031aa3d5e2fb336089441f3ed0f8c00c70bf2e3ba6d8295b\"" Aug 12 23:58:11.597852 containerd[1494]: time="2025-08-12T23:58:11.597128618Z" level=info msg="RemoveContainer for \"c60543eef062c006031aa3d5e2fb336089441f3ed0f8c00c70bf2e3ba6d8295b\" returns successfully" Aug 12 23:58:11.598357 kubelet[2584]: I0812 23:58:11.598292 2584 scope.go:117] "RemoveContainer" containerID="b82bfdd6db0dadd02d70142fe9abb9c1b48aa6429f9dbea7a0fe2499dc73a7fb" Aug 12 23:58:11.602217 containerd[1494]: time="2025-08-12T23:58:11.602167467Z" level=info msg="RemoveContainer for \"b82bfdd6db0dadd02d70142fe9abb9c1b48aa6429f9dbea7a0fe2499dc73a7fb\"" Aug 12 23:58:11.605009 containerd[1494]: time="2025-08-12T23:58:11.604945065Z" level=info msg="RemoveContainer for \"b82bfdd6db0dadd02d70142fe9abb9c1b48aa6429f9dbea7a0fe2499dc73a7fb\" returns successfully" Aug 12 23:58:11.605410 kubelet[2584]: I0812 23:58:11.605379 2584 scope.go:117] "RemoveContainer" containerID="d2821a359a352fc6e61a7beb4e5bbcc7ee59fd8a9c90e141afb4defd9abe61ea" Aug 12 23:58:11.605950 containerd[1494]: time="2025-08-12T23:58:11.605656029Z" level=error msg="ContainerStatus for \"d2821a359a352fc6e61a7beb4e5bbcc7ee59fd8a9c90e141afb4defd9abe61ea\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d2821a359a352fc6e61a7beb4e5bbcc7ee59fd8a9c90e141afb4defd9abe61ea\": not found" Aug 12 23:58:11.606997 
kubelet[2584]: E0812 23:58:11.606132 2584 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d2821a359a352fc6e61a7beb4e5bbcc7ee59fd8a9c90e141afb4defd9abe61ea\": not found" containerID="d2821a359a352fc6e61a7beb4e5bbcc7ee59fd8a9c90e141afb4defd9abe61ea" Aug 12 23:58:11.608416 kubelet[2584]: I0812 23:58:11.606188 2584 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d2821a359a352fc6e61a7beb4e5bbcc7ee59fd8a9c90e141afb4defd9abe61ea"} err="failed to get container status \"d2821a359a352fc6e61a7beb4e5bbcc7ee59fd8a9c90e141afb4defd9abe61ea\": rpc error: code = NotFound desc = an error occurred when try to find container \"d2821a359a352fc6e61a7beb4e5bbcc7ee59fd8a9c90e141afb4defd9abe61ea\": not found" Aug 12 23:58:11.608416 kubelet[2584]: I0812 23:58:11.607204 2584 scope.go:117] "RemoveContainer" containerID="413b711de6e8295511ed27d09f166c4885b5a7ad38889a3b23f5fd576a82592d" Aug 12 23:58:11.608549 containerd[1494]: time="2025-08-12T23:58:11.607563543Z" level=error msg="ContainerStatus for \"413b711de6e8295511ed27d09f166c4885b5a7ad38889a3b23f5fd576a82592d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"413b711de6e8295511ed27d09f166c4885b5a7ad38889a3b23f5fd576a82592d\": not found" Aug 12 23:58:11.608585 kubelet[2584]: E0812 23:58:11.608560 2584 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"413b711de6e8295511ed27d09f166c4885b5a7ad38889a3b23f5fd576a82592d\": not found" containerID="413b711de6e8295511ed27d09f166c4885b5a7ad38889a3b23f5fd576a82592d" Aug 12 23:58:11.608617 kubelet[2584]: I0812 23:58:11.608592 2584 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"413b711de6e8295511ed27d09f166c4885b5a7ad38889a3b23f5fd576a82592d"} err="failed to get 
container status \"413b711de6e8295511ed27d09f166c4885b5a7ad38889a3b23f5fd576a82592d\": rpc error: code = NotFound desc = an error occurred when try to find container \"413b711de6e8295511ed27d09f166c4885b5a7ad38889a3b23f5fd576a82592d\": not found" Aug 12 23:58:11.608655 kubelet[2584]: I0812 23:58:11.608617 2584 scope.go:117] "RemoveContainer" containerID="f13882758f251fe196918fa814ff421fd4428447193bd9bd72a3db7b9ad550ba" Aug 12 23:58:11.610084 containerd[1494]: time="2025-08-12T23:58:11.608841667Z" level=error msg="ContainerStatus for \"f13882758f251fe196918fa814ff421fd4428447193bd9bd72a3db7b9ad550ba\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f13882758f251fe196918fa814ff421fd4428447193bd9bd72a3db7b9ad550ba\": not found" Aug 12 23:58:11.610084 containerd[1494]: time="2025-08-12T23:58:11.609299849Z" level=error msg="ContainerStatus for \"c60543eef062c006031aa3d5e2fb336089441f3ed0f8c00c70bf2e3ba6d8295b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c60543eef062c006031aa3d5e2fb336089441f3ed0f8c00c70bf2e3ba6d8295b\": not found" Aug 12 23:58:11.610084 containerd[1494]: time="2025-08-12T23:58:11.609670663Z" level=error msg="ContainerStatus for \"b82bfdd6db0dadd02d70142fe9abb9c1b48aa6429f9dbea7a0fe2499dc73a7fb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b82bfdd6db0dadd02d70142fe9abb9c1b48aa6429f9dbea7a0fe2499dc73a7fb\": not found" Aug 12 23:58:11.610293 kubelet[2584]: E0812 23:58:11.608989 2584 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f13882758f251fe196918fa814ff421fd4428447193bd9bd72a3db7b9ad550ba\": not found" containerID="f13882758f251fe196918fa814ff421fd4428447193bd9bd72a3db7b9ad550ba" Aug 12 23:58:11.610293 kubelet[2584]: I0812 23:58:11.609046 2584 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"f13882758f251fe196918fa814ff421fd4428447193bd9bd72a3db7b9ad550ba"} err="failed to get container status \"f13882758f251fe196918fa814ff421fd4428447193bd9bd72a3db7b9ad550ba\": rpc error: code = NotFound desc = an error occurred when try to find container \"f13882758f251fe196918fa814ff421fd4428447193bd9bd72a3db7b9ad550ba\": not found" Aug 12 23:58:11.610293 kubelet[2584]: I0812 23:58:11.609068 2584 scope.go:117] "RemoveContainer" containerID="c60543eef062c006031aa3d5e2fb336089441f3ed0f8c00c70bf2e3ba6d8295b" Aug 12 23:58:11.610293 kubelet[2584]: E0812 23:58:11.609472 2584 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c60543eef062c006031aa3d5e2fb336089441f3ed0f8c00c70bf2e3ba6d8295b\": not found" containerID="c60543eef062c006031aa3d5e2fb336089441f3ed0f8c00c70bf2e3ba6d8295b" Aug 12 23:58:11.610293 kubelet[2584]: I0812 23:58:11.609491 2584 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c60543eef062c006031aa3d5e2fb336089441f3ed0f8c00c70bf2e3ba6d8295b"} err="failed to get container status \"c60543eef062c006031aa3d5e2fb336089441f3ed0f8c00c70bf2e3ba6d8295b\": rpc error: code = NotFound desc = an error occurred when try to find container \"c60543eef062c006031aa3d5e2fb336089441f3ed0f8c00c70bf2e3ba6d8295b\": not found" Aug 12 23:58:11.610293 kubelet[2584]: I0812 23:58:11.609506 2584 scope.go:117] "RemoveContainer" containerID="b82bfdd6db0dadd02d70142fe9abb9c1b48aa6429f9dbea7a0fe2499dc73a7fb" Aug 12 23:58:11.610493 kubelet[2584]: E0812 23:58:11.609794 2584 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b82bfdd6db0dadd02d70142fe9abb9c1b48aa6429f9dbea7a0fe2499dc73a7fb\": not found" containerID="b82bfdd6db0dadd02d70142fe9abb9c1b48aa6429f9dbea7a0fe2499dc73a7fb" Aug 12 23:58:11.610493 kubelet[2584]: I0812 
23:58:11.609812 2584 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b82bfdd6db0dadd02d70142fe9abb9c1b48aa6429f9dbea7a0fe2499dc73a7fb"} err="failed to get container status \"b82bfdd6db0dadd02d70142fe9abb9c1b48aa6429f9dbea7a0fe2499dc73a7fb\": rpc error: code = NotFound desc = an error occurred when try to find container \"b82bfdd6db0dadd02d70142fe9abb9c1b48aa6429f9dbea7a0fe2499dc73a7fb\": not found" Aug 12 23:58:11.610493 kubelet[2584]: I0812 23:58:11.609825 2584 scope.go:117] "RemoveContainer" containerID="3c94959b1fa7613366d1c70f6a875a7973e2dd8f64afac9e7313462e987ec048" Aug 12 23:58:11.611795 containerd[1494]: time="2025-08-12T23:58:11.611002372Z" level=info msg="RemoveContainer for \"3c94959b1fa7613366d1c70f6a875a7973e2dd8f64afac9e7313462e987ec048\"" Aug 12 23:58:11.615043 containerd[1494]: time="2025-08-12T23:58:11.614954884Z" level=info msg="RemoveContainer for \"3c94959b1fa7613366d1c70f6a875a7973e2dd8f64afac9e7313462e987ec048\" returns successfully" Aug 12 23:58:11.615425 kubelet[2584]: I0812 23:58:11.615391 2584 scope.go:117] "RemoveContainer" containerID="3c94959b1fa7613366d1c70f6a875a7973e2dd8f64afac9e7313462e987ec048" Aug 12 23:58:11.615887 containerd[1494]: time="2025-08-12T23:58:11.615792484Z" level=error msg="ContainerStatus for \"3c94959b1fa7613366d1c70f6a875a7973e2dd8f64afac9e7313462e987ec048\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3c94959b1fa7613366d1c70f6a875a7973e2dd8f64afac9e7313462e987ec048\": not found" Aug 12 23:58:11.617330 kubelet[2584]: E0812 23:58:11.616117 2584 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3c94959b1fa7613366d1c70f6a875a7973e2dd8f64afac9e7313462e987ec048\": not found" containerID="3c94959b1fa7613366d1c70f6a875a7973e2dd8f64afac9e7313462e987ec048" Aug 12 23:58:11.617330 kubelet[2584]: I0812 23:58:11.616146 2584 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3c94959b1fa7613366d1c70f6a875a7973e2dd8f64afac9e7313462e987ec048"} err="failed to get container status \"3c94959b1fa7613366d1c70f6a875a7973e2dd8f64afac9e7313462e987ec048\": rpc error: code = NotFound desc = an error occurred when try to find container \"3c94959b1fa7613366d1c70f6a875a7973e2dd8f64afac9e7313462e987ec048\": not found" Aug 12 23:58:12.243172 kubelet[2584]: E0812 23:58:12.243008 2584 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 12 23:58:12.397125 sshd[4220]: Connection closed by 139.178.68.195 port 37224 Aug 12 23:58:12.398400 sshd-session[4217]: pam_unix(sshd:session): session closed for user core Aug 12 23:58:12.412675 systemd[1]: sshd@23-137.184.234.76:22-139.178.68.195:37224.service: Deactivated successfully. Aug 12 23:58:12.415311 systemd[1]: session-24.scope: Deactivated successfully. Aug 12 23:58:12.416726 systemd-logind[1467]: Session 24 logged out. Waiting for processes to exit. Aug 12 23:58:12.424572 systemd[1]: Started sshd@24-137.184.234.76:22-139.178.68.195:59908.service - OpenSSH per-connection server daemon (139.178.68.195:59908). Aug 12 23:58:12.425826 systemd-logind[1467]: Removed session 24. Aug 12 23:58:12.477837 sshd[4377]: Accepted publickey for core from 139.178.68.195 port 59908 ssh2: RSA SHA256:Yd4cJaNOPrEdOKjK3Hl1fuqro0lLX1aY5TKeqt+Qp+4 Aug 12 23:58:12.479820 sshd-session[4377]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:58:12.485394 systemd-logind[1467]: New session 25 of user core. Aug 12 23:58:12.491306 systemd[1]: Started session-25.scope - Session 25 of User core. 
Aug 12 23:58:13.128276 sshd[4380]: Connection closed by 139.178.68.195 port 59908 Aug 12 23:58:13.130178 sshd-session[4377]: pam_unix(sshd:session): session closed for user core Aug 12 23:58:13.147460 systemd[1]: sshd@24-137.184.234.76:22-139.178.68.195:59908.service: Deactivated successfully. Aug 12 23:58:13.151706 systemd[1]: session-25.scope: Deactivated successfully. Aug 12 23:58:13.155575 systemd-logind[1467]: Session 25 logged out. Waiting for processes to exit. Aug 12 23:58:13.165203 systemd[1]: Started sshd@25-137.184.234.76:22-139.178.68.195:59916.service - OpenSSH per-connection server daemon (139.178.68.195:59916). Aug 12 23:58:13.170537 systemd-logind[1467]: Removed session 25. Aug 12 23:58:13.220941 sshd[4390]: Accepted publickey for core from 139.178.68.195 port 59916 ssh2: RSA SHA256:Yd4cJaNOPrEdOKjK3Hl1fuqro0lLX1aY5TKeqt+Qp+4 Aug 12 23:58:13.222335 sshd-session[4390]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:58:13.231301 systemd-logind[1467]: New session 26 of user core. 
Aug 12 23:58:13.234504 kubelet[2584]: E0812 23:58:13.232948 2584 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="50c5fd17-a29b-4a6f-b010-2a19bd801007" containerName="apply-sysctl-overwrites" Aug 12 23:58:13.234504 kubelet[2584]: E0812 23:58:13.232986 2584 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="50c5fd17-a29b-4a6f-b010-2a19bd801007" containerName="mount-bpf-fs" Aug 12 23:58:13.234504 kubelet[2584]: E0812 23:58:13.232993 2584 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1c87706e-66de-43e0-a390-87da9fa3e36d" containerName="cilium-operator" Aug 12 23:58:13.234504 kubelet[2584]: E0812 23:58:13.233001 2584 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="50c5fd17-a29b-4a6f-b010-2a19bd801007" containerName="clean-cilium-state" Aug 12 23:58:13.234504 kubelet[2584]: E0812 23:58:13.233009 2584 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="50c5fd17-a29b-4a6f-b010-2a19bd801007" containerName="mount-cgroup" Aug 12 23:58:13.234504 kubelet[2584]: E0812 23:58:13.233015 2584 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="50c5fd17-a29b-4a6f-b010-2a19bd801007" containerName="cilium-agent" Aug 12 23:58:13.234504 kubelet[2584]: I0812 23:58:13.233054 2584 memory_manager.go:354] "RemoveStaleState removing state" podUID="50c5fd17-a29b-4a6f-b010-2a19bd801007" containerName="cilium-agent" Aug 12 23:58:13.234504 kubelet[2584]: I0812 23:58:13.233063 2584 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c87706e-66de-43e0-a390-87da9fa3e36d" containerName="cilium-operator" Aug 12 23:58:13.239395 systemd[1]: Started session-26.scope - Session 26 of User core. 
Aug 12 23:58:13.252762 kubelet[2584]: I0812 23:58:13.251291 2584 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c87706e-66de-43e0-a390-87da9fa3e36d" path="/var/lib/kubelet/pods/1c87706e-66de-43e0-a390-87da9fa3e36d/volumes" Aug 12 23:58:13.260796 kubelet[2584]: I0812 23:58:13.260728 2584 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50c5fd17-a29b-4a6f-b010-2a19bd801007" path="/var/lib/kubelet/pods/50c5fd17-a29b-4a6f-b010-2a19bd801007/volumes" Aug 12 23:58:13.263407 systemd[1]: Created slice kubepods-burstable-pod9513a573_c69c_4eb1_8ae5_dd718ec290f1.slice - libcontainer container kubepods-burstable-pod9513a573_c69c_4eb1_8ae5_dd718ec290f1.slice. Aug 12 23:58:13.281174 kubelet[2584]: W0812 23:58:13.280678 2584 reflector.go:561] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-4230.2.2-9-8f36bdb456" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4230.2.2-9-8f36bdb456' and this object Aug 12 23:58:13.281636 kubelet[2584]: E0812 23:58:13.281394 2584 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-clustermesh\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:ci-4230.2.2-9-8f36bdb456\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4230.2.2-9-8f36bdb456' and this object" logger="UnhandledError" Aug 12 23:58:13.281636 kubelet[2584]: W0812 23:58:13.281527 2584 reflector.go:561] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:ci-4230.2.2-9-8f36bdb456" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4230.2.2-9-8f36bdb456' and this object Aug 12 
23:58:13.281636 kubelet[2584]: E0812 23:58:13.281548 2584 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-ipsec-keys\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-ipsec-keys\" is forbidden: User \"system:node:ci-4230.2.2-9-8f36bdb456\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4230.2.2-9-8f36bdb456' and this object" logger="UnhandledError" Aug 12 23:58:13.281636 kubelet[2584]: W0812 23:58:13.281596 2584 reflector.go:561] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-4230.2.2-9-8f36bdb456" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4230.2.2-9-8f36bdb456' and this object Aug 12 23:58:13.281800 kubelet[2584]: E0812 23:58:13.281607 2584 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"hubble-server-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"hubble-server-certs\" is forbidden: User \"system:node:ci-4230.2.2-9-8f36bdb456\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4230.2.2-9-8f36bdb456' and this object" logger="UnhandledError" Aug 12 23:58:13.281964 kubelet[2584]: W0812 23:58:13.281916 2584 reflector.go:561] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-4230.2.2-9-8f36bdb456" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4230.2.2-9-8f36bdb456' and this object Aug 12 23:58:13.281964 kubelet[2584]: E0812 23:58:13.281936 2584 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cilium-config\" 
is forbidden: User \"system:node:ci-4230.2.2-9-8f36bdb456\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4230.2.2-9-8f36bdb456' and this object" logger="UnhandledError" Aug 12 23:58:13.317218 sshd[4393]: Connection closed by 139.178.68.195 port 59916 Aug 12 23:58:13.318179 kubelet[2584]: I0812 23:58:13.317706 2584 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9513a573-c69c-4eb1-8ae5-dd718ec290f1-lib-modules\") pod \"cilium-n5r5q\" (UID: \"9513a573-c69c-4eb1-8ae5-dd718ec290f1\") " pod="kube-system/cilium-n5r5q" Aug 12 23:58:13.318179 kubelet[2584]: I0812 23:58:13.317803 2584 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9513a573-c69c-4eb1-8ae5-dd718ec290f1-host-proc-sys-net\") pod \"cilium-n5r5q\" (UID: \"9513a573-c69c-4eb1-8ae5-dd718ec290f1\") " pod="kube-system/cilium-n5r5q" Aug 12 23:58:13.318179 kubelet[2584]: I0812 23:58:13.317839 2584 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9513a573-c69c-4eb1-8ae5-dd718ec290f1-cni-path\") pod \"cilium-n5r5q\" (UID: \"9513a573-c69c-4eb1-8ae5-dd718ec290f1\") " pod="kube-system/cilium-n5r5q" Aug 12 23:58:13.318179 kubelet[2584]: I0812 23:58:13.317853 2584 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9513a573-c69c-4eb1-8ae5-dd718ec290f1-xtables-lock\") pod \"cilium-n5r5q\" (UID: \"9513a573-c69c-4eb1-8ae5-dd718ec290f1\") " pod="kube-system/cilium-n5r5q" Aug 12 23:58:13.318179 kubelet[2584]: I0812 23:58:13.317876 2584 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" 
(UniqueName: \"kubernetes.io/host-path/9513a573-c69c-4eb1-8ae5-dd718ec290f1-cilium-run\") pod \"cilium-n5r5q\" (UID: \"9513a573-c69c-4eb1-8ae5-dd718ec290f1\") " pod="kube-system/cilium-n5r5q" Aug 12 23:58:13.318179 kubelet[2584]: I0812 23:58:13.317923 2584 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9513a573-c69c-4eb1-8ae5-dd718ec290f1-hubble-tls\") pod \"cilium-n5r5q\" (UID: \"9513a573-c69c-4eb1-8ae5-dd718ec290f1\") " pod="kube-system/cilium-n5r5q" Aug 12 23:58:13.318400 kubelet[2584]: I0812 23:58:13.317946 2584 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rwfn\" (UniqueName: \"kubernetes.io/projected/9513a573-c69c-4eb1-8ae5-dd718ec290f1-kube-api-access-2rwfn\") pod \"cilium-n5r5q\" (UID: \"9513a573-c69c-4eb1-8ae5-dd718ec290f1\") " pod="kube-system/cilium-n5r5q" Aug 12 23:58:13.318400 kubelet[2584]: I0812 23:58:13.318015 2584 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9513a573-c69c-4eb1-8ae5-dd718ec290f1-bpf-maps\") pod \"cilium-n5r5q\" (UID: \"9513a573-c69c-4eb1-8ae5-dd718ec290f1\") " pod="kube-system/cilium-n5r5q" Aug 12 23:58:13.318400 kubelet[2584]: I0812 23:58:13.318060 2584 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9513a573-c69c-4eb1-8ae5-dd718ec290f1-hostproc\") pod \"cilium-n5r5q\" (UID: \"9513a573-c69c-4eb1-8ae5-dd718ec290f1\") " pod="kube-system/cilium-n5r5q" Aug 12 23:58:13.318400 kubelet[2584]: I0812 23:58:13.318075 2584 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9513a573-c69c-4eb1-8ae5-dd718ec290f1-host-proc-sys-kernel\") pod \"cilium-n5r5q\" (UID: 
\"9513a573-c69c-4eb1-8ae5-dd718ec290f1\") " pod="kube-system/cilium-n5r5q" Aug 12 23:58:13.318400 kubelet[2584]: I0812 23:58:13.318092 2584 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9513a573-c69c-4eb1-8ae5-dd718ec290f1-cilium-cgroup\") pod \"cilium-n5r5q\" (UID: \"9513a573-c69c-4eb1-8ae5-dd718ec290f1\") " pod="kube-system/cilium-n5r5q" Aug 12 23:58:13.318400 kubelet[2584]: I0812 23:58:13.318133 2584 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9513a573-c69c-4eb1-8ae5-dd718ec290f1-cilium-config-path\") pod \"cilium-n5r5q\" (UID: \"9513a573-c69c-4eb1-8ae5-dd718ec290f1\") " pod="kube-system/cilium-n5r5q" Aug 12 23:58:13.318581 kubelet[2584]: I0812 23:58:13.318150 2584 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9513a573-c69c-4eb1-8ae5-dd718ec290f1-clustermesh-secrets\") pod \"cilium-n5r5q\" (UID: \"9513a573-c69c-4eb1-8ae5-dd718ec290f1\") " pod="kube-system/cilium-n5r5q" Aug 12 23:58:13.318852 sshd-session[4390]: pam_unix(sshd:session): session closed for user core Aug 12 23:58:13.319569 kubelet[2584]: I0812 23:58:13.319145 2584 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9513a573-c69c-4eb1-8ae5-dd718ec290f1-etc-cni-netd\") pod \"cilium-n5r5q\" (UID: \"9513a573-c69c-4eb1-8ae5-dd718ec290f1\") " pod="kube-system/cilium-n5r5q" Aug 12 23:58:13.319569 kubelet[2584]: I0812 23:58:13.319203 2584 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/9513a573-c69c-4eb1-8ae5-dd718ec290f1-cilium-ipsec-secrets\") pod \"cilium-n5r5q\" (UID: 
\"9513a573-c69c-4eb1-8ae5-dd718ec290f1\") " pod="kube-system/cilium-n5r5q" Aug 12 23:58:13.333276 systemd[1]: sshd@25-137.184.234.76:22-139.178.68.195:59916.service: Deactivated successfully. Aug 12 23:58:13.336399 systemd[1]: session-26.scope: Deactivated successfully. Aug 12 23:58:13.342281 systemd-logind[1467]: Session 26 logged out. Waiting for processes to exit. Aug 12 23:58:13.353385 systemd[1]: Started sshd@26-137.184.234.76:22-139.178.68.195:59920.service - OpenSSH per-connection server daemon (139.178.68.195:59920). Aug 12 23:58:13.360366 systemd-logind[1467]: Removed session 26. Aug 12 23:58:13.431691 sshd[4399]: Accepted publickey for core from 139.178.68.195 port 59920 ssh2: RSA SHA256:Yd4cJaNOPrEdOKjK3Hl1fuqro0lLX1aY5TKeqt+Qp+4 Aug 12 23:58:13.435043 sshd-session[4399]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:58:13.445896 systemd-logind[1467]: New session 27 of user core. Aug 12 23:58:13.450250 systemd[1]: Started session-27.scope - Session 27 of User core. Aug 12 23:58:14.243195 kubelet[2584]: E0812 23:58:14.243121 2584 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 12 23:58:14.370537 kubelet[2584]: E0812 23:58:14.370457 2584 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Aug 12 23:58:14.422089 kubelet[2584]: E0812 23:58:14.421984 2584 secret.go:189] Couldn't get secret kube-system/cilium-ipsec-keys: failed to sync secret cache: timed out waiting for the condition Aug 12 23:58:14.422260 kubelet[2584]: E0812 23:58:14.422158 2584 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9513a573-c69c-4eb1-8ae5-dd718ec290f1-cilium-ipsec-secrets podName:9513a573-c69c-4eb1-8ae5-dd718ec290f1 nodeName:}" failed. 
No retries permitted until 2025-08-12 23:58:14.922131479 +0000 UTC m=+95.875808164 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-ipsec-secrets" (UniqueName: "kubernetes.io/secret/9513a573-c69c-4eb1-8ae5-dd718ec290f1-cilium-ipsec-secrets") pod "cilium-n5r5q" (UID: "9513a573-c69c-4eb1-8ae5-dd718ec290f1") : failed to sync secret cache: timed out waiting for the condition Aug 12 23:58:15.071653 kubelet[2584]: E0812 23:58:15.071174 2584 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 12 23:58:15.072385 containerd[1494]: time="2025-08-12T23:58:15.071829787Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-n5r5q,Uid:9513a573-c69c-4eb1-8ae5-dd718ec290f1,Namespace:kube-system,Attempt:0,}" Aug 12 23:58:15.103903 containerd[1494]: time="2025-08-12T23:58:15.103764153Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 12 23:58:15.103903 containerd[1494]: time="2025-08-12T23:58:15.103837622Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 12 23:58:15.103903 containerd[1494]: time="2025-08-12T23:58:15.103857010Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 12 23:58:15.104171 containerd[1494]: time="2025-08-12T23:58:15.103948342Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 12 23:58:15.137009 systemd[1]: run-containerd-runc-k8s.io-d8ebf967bed0c8920d849c971065ca138faa86c36dc44c44c33ae776276374dd-runc.xoYFrF.mount: Deactivated successfully. 
Aug 12 23:58:15.151357 systemd[1]: Started cri-containerd-d8ebf967bed0c8920d849c971065ca138faa86c36dc44c44c33ae776276374dd.scope - libcontainer container d8ebf967bed0c8920d849c971065ca138faa86c36dc44c44c33ae776276374dd. Aug 12 23:58:15.184418 containerd[1494]: time="2025-08-12T23:58:15.184378068Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-n5r5q,Uid:9513a573-c69c-4eb1-8ae5-dd718ec290f1,Namespace:kube-system,Attempt:0,} returns sandbox id \"d8ebf967bed0c8920d849c971065ca138faa86c36dc44c44c33ae776276374dd\"" Aug 12 23:58:15.185718 kubelet[2584]: E0812 23:58:15.185689 2584 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 12 23:58:15.189690 containerd[1494]: time="2025-08-12T23:58:15.189647384Z" level=info msg="CreateContainer within sandbox \"d8ebf967bed0c8920d849c971065ca138faa86c36dc44c44c33ae776276374dd\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Aug 12 23:58:15.206115 containerd[1494]: time="2025-08-12T23:58:15.205671550Z" level=info msg="CreateContainer within sandbox \"d8ebf967bed0c8920d849c971065ca138faa86c36dc44c44c33ae776276374dd\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"a291cd29c01c9b48c82e031af0b0a064911d03045c9b27955c43b4486ef2d989\"" Aug 12 23:58:15.206504 containerd[1494]: time="2025-08-12T23:58:15.206474298Z" level=info msg="StartContainer for \"a291cd29c01c9b48c82e031af0b0a064911d03045c9b27955c43b4486ef2d989\"" Aug 12 23:58:15.237393 systemd[1]: Started cri-containerd-a291cd29c01c9b48c82e031af0b0a064911d03045c9b27955c43b4486ef2d989.scope - libcontainer container a291cd29c01c9b48c82e031af0b0a064911d03045c9b27955c43b4486ef2d989. 
Aug 12 23:58:15.282085 containerd[1494]: time="2025-08-12T23:58:15.281797402Z" level=info msg="StartContainer for \"a291cd29c01c9b48c82e031af0b0a064911d03045c9b27955c43b4486ef2d989\" returns successfully" Aug 12 23:58:15.297733 systemd[1]: cri-containerd-a291cd29c01c9b48c82e031af0b0a064911d03045c9b27955c43b4486ef2d989.scope: Deactivated successfully. Aug 12 23:58:15.326989 containerd[1494]: time="2025-08-12T23:58:15.326480575Z" level=info msg="shim disconnected" id=a291cd29c01c9b48c82e031af0b0a064911d03045c9b27955c43b4486ef2d989 namespace=k8s.io Aug 12 23:58:15.326989 containerd[1494]: time="2025-08-12T23:58:15.326539984Z" level=warning msg="cleaning up after shim disconnected" id=a291cd29c01c9b48c82e031af0b0a064911d03045c9b27955c43b4486ef2d989 namespace=k8s.io Aug 12 23:58:15.326989 containerd[1494]: time="2025-08-12T23:58:15.326549062Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 12 23:58:15.576808 kubelet[2584]: E0812 23:58:15.575257 2584 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 12 23:58:15.580728 containerd[1494]: time="2025-08-12T23:58:15.580353785Z" level=info msg="CreateContainer within sandbox \"d8ebf967bed0c8920d849c971065ca138faa86c36dc44c44c33ae776276374dd\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Aug 12 23:58:15.595804 containerd[1494]: time="2025-08-12T23:58:15.595655720Z" level=info msg="CreateContainer within sandbox \"d8ebf967bed0c8920d849c971065ca138faa86c36dc44c44c33ae776276374dd\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"6451bd90a46a4c3dda5917004fccffbac4cad980bca05ca3e9533ee8a1a99a02\"" Aug 12 23:58:15.596439 containerd[1494]: time="2025-08-12T23:58:15.596408242Z" level=info msg="StartContainer for \"6451bd90a46a4c3dda5917004fccffbac4cad980bca05ca3e9533ee8a1a99a02\"" Aug 12 23:58:15.639988 systemd[1]: 
Started cri-containerd-6451bd90a46a4c3dda5917004fccffbac4cad980bca05ca3e9533ee8a1a99a02.scope - libcontainer container 6451bd90a46a4c3dda5917004fccffbac4cad980bca05ca3e9533ee8a1a99a02. Aug 12 23:58:15.676640 containerd[1494]: time="2025-08-12T23:58:15.676593143Z" level=info msg="StartContainer for \"6451bd90a46a4c3dda5917004fccffbac4cad980bca05ca3e9533ee8a1a99a02\" returns successfully" Aug 12 23:58:15.686710 systemd[1]: cri-containerd-6451bd90a46a4c3dda5917004fccffbac4cad980bca05ca3e9533ee8a1a99a02.scope: Deactivated successfully. Aug 12 23:58:15.714279 containerd[1494]: time="2025-08-12T23:58:15.713894514Z" level=info msg="shim disconnected" id=6451bd90a46a4c3dda5917004fccffbac4cad980bca05ca3e9533ee8a1a99a02 namespace=k8s.io Aug 12 23:58:15.714279 containerd[1494]: time="2025-08-12T23:58:15.713950955Z" level=warning msg="cleaning up after shim disconnected" id=6451bd90a46a4c3dda5917004fccffbac4cad980bca05ca3e9533ee8a1a99a02 namespace=k8s.io Aug 12 23:58:15.714279 containerd[1494]: time="2025-08-12T23:58:15.713959158Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 12 23:58:16.585111 kubelet[2584]: E0812 23:58:16.583611 2584 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 12 23:58:16.592696 containerd[1494]: time="2025-08-12T23:58:16.591837757Z" level=info msg="CreateContainer within sandbox \"d8ebf967bed0c8920d849c971065ca138faa86c36dc44c44c33ae776276374dd\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Aug 12 23:58:16.621493 containerd[1494]: time="2025-08-12T23:58:16.620550618Z" level=info msg="CreateContainer within sandbox \"d8ebf967bed0c8920d849c971065ca138faa86c36dc44c44c33ae776276374dd\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"6bff69e0122c87d83e5b1dda3f5c2b7e1f9431435bc9a2d848230a983f7530d0\"" Aug 12 23:58:16.623297 containerd[1494]: 
time="2025-08-12T23:58:16.623237124Z" level=info msg="StartContainer for \"6bff69e0122c87d83e5b1dda3f5c2b7e1f9431435bc9a2d848230a983f7530d0\"" Aug 12 23:58:16.678373 systemd[1]: Started cri-containerd-6bff69e0122c87d83e5b1dda3f5c2b7e1f9431435bc9a2d848230a983f7530d0.scope - libcontainer container 6bff69e0122c87d83e5b1dda3f5c2b7e1f9431435bc9a2d848230a983f7530d0. Aug 12 23:58:16.718409 containerd[1494]: time="2025-08-12T23:58:16.718244873Z" level=info msg="StartContainer for \"6bff69e0122c87d83e5b1dda3f5c2b7e1f9431435bc9a2d848230a983f7530d0\" returns successfully" Aug 12 23:58:16.728891 systemd[1]: cri-containerd-6bff69e0122c87d83e5b1dda3f5c2b7e1f9431435bc9a2d848230a983f7530d0.scope: Deactivated successfully. Aug 12 23:58:16.761326 containerd[1494]: time="2025-08-12T23:58:16.761251738Z" level=info msg="shim disconnected" id=6bff69e0122c87d83e5b1dda3f5c2b7e1f9431435bc9a2d848230a983f7530d0 namespace=k8s.io Aug 12 23:58:16.761326 containerd[1494]: time="2025-08-12T23:58:16.761310528Z" level=warning msg="cleaning up after shim disconnected" id=6bff69e0122c87d83e5b1dda3f5c2b7e1f9431435bc9a2d848230a983f7530d0 namespace=k8s.io Aug 12 23:58:16.761326 containerd[1494]: time="2025-08-12T23:58:16.761319407Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 12 23:58:16.762316 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6bff69e0122c87d83e5b1dda3f5c2b7e1f9431435bc9a2d848230a983f7530d0-rootfs.mount: Deactivated successfully. 
Aug 12 23:58:17.591145 kubelet[2584]: E0812 23:58:17.590608 2584 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 12 23:58:17.597546 containerd[1494]: time="2025-08-12T23:58:17.596270025Z" level=info msg="CreateContainer within sandbox \"d8ebf967bed0c8920d849c971065ca138faa86c36dc44c44c33ae776276374dd\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Aug 12 23:58:17.619383 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2243256485.mount: Deactivated successfully. Aug 12 23:58:17.623623 containerd[1494]: time="2025-08-12T23:58:17.623389410Z" level=info msg="CreateContainer within sandbox \"d8ebf967bed0c8920d849c971065ca138faa86c36dc44c44c33ae776276374dd\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"e37886e7ad56d9db5144f30075993e9fe704dcdb5f512059e50ac0bb1a27b91e\"" Aug 12 23:58:17.628605 containerd[1494]: time="2025-08-12T23:58:17.625754286Z" level=info msg="StartContainer for \"e37886e7ad56d9db5144f30075993e9fe704dcdb5f512059e50ac0bb1a27b91e\"" Aug 12 23:58:17.692381 systemd[1]: Started cri-containerd-e37886e7ad56d9db5144f30075993e9fe704dcdb5f512059e50ac0bb1a27b91e.scope - libcontainer container e37886e7ad56d9db5144f30075993e9fe704dcdb5f512059e50ac0bb1a27b91e. Aug 12 23:58:17.742594 systemd[1]: cri-containerd-e37886e7ad56d9db5144f30075993e9fe704dcdb5f512059e50ac0bb1a27b91e.scope: Deactivated successfully. Aug 12 23:58:17.745487 containerd[1494]: time="2025-08-12T23:58:17.744829019Z" level=info msg="StartContainer for \"e37886e7ad56d9db5144f30075993e9fe704dcdb5f512059e50ac0bb1a27b91e\" returns successfully" Aug 12 23:58:17.773515 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e37886e7ad56d9db5144f30075993e9fe704dcdb5f512059e50ac0bb1a27b91e-rootfs.mount: Deactivated successfully. 
Aug 12 23:58:17.793166 containerd[1494]: time="2025-08-12T23:58:17.793062982Z" level=info msg="shim disconnected" id=e37886e7ad56d9db5144f30075993e9fe704dcdb5f512059e50ac0bb1a27b91e namespace=k8s.io Aug 12 23:58:17.793166 containerd[1494]: time="2025-08-12T23:58:17.793156062Z" level=warning msg="cleaning up after shim disconnected" id=e37886e7ad56d9db5144f30075993e9fe704dcdb5f512059e50ac0bb1a27b91e namespace=k8s.io Aug 12 23:58:17.793166 containerd[1494]: time="2025-08-12T23:58:17.793168969Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 12 23:58:17.812661 containerd[1494]: time="2025-08-12T23:58:17.812202206Z" level=warning msg="cleanup warnings time=\"2025-08-12T23:58:17Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Aug 12 23:58:18.244150 kubelet[2584]: E0812 23:58:18.243759 2584 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 12 23:58:18.594317 kubelet[2584]: E0812 23:58:18.594263 2584 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 12 23:58:18.598612 containerd[1494]: time="2025-08-12T23:58:18.598414482Z" level=info msg="CreateContainer within sandbox \"d8ebf967bed0c8920d849c971065ca138faa86c36dc44c44c33ae776276374dd\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Aug 12 23:58:18.629523 containerd[1494]: time="2025-08-12T23:58:18.629355224Z" level=info msg="CreateContainer within sandbox \"d8ebf967bed0c8920d849c971065ca138faa86c36dc44c44c33ae776276374dd\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"9bce5d410f810acd0de2da13f361a2bf452a56f64cd32cc2cf89cc15eec9589d\"" Aug 12 23:58:18.630190 
containerd[1494]: time="2025-08-12T23:58:18.630142248Z" level=info msg="StartContainer for \"9bce5d410f810acd0de2da13f361a2bf452a56f64cd32cc2cf89cc15eec9589d\"" Aug 12 23:58:18.632966 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1581613014.mount: Deactivated successfully. Aug 12 23:58:18.689360 systemd[1]: Started cri-containerd-9bce5d410f810acd0de2da13f361a2bf452a56f64cd32cc2cf89cc15eec9589d.scope - libcontainer container 9bce5d410f810acd0de2da13f361a2bf452a56f64cd32cc2cf89cc15eec9589d. Aug 12 23:58:18.723875 containerd[1494]: time="2025-08-12T23:58:18.723733591Z" level=info msg="StartContainer for \"9bce5d410f810acd0de2da13f361a2bf452a56f64cd32cc2cf89cc15eec9589d\" returns successfully" Aug 12 23:58:19.348241 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Aug 12 23:58:19.601418 kubelet[2584]: E0812 23:58:19.600803 2584 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 12 23:58:19.631859 kubelet[2584]: I0812 23:58:19.631367 2584 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-n5r5q" podStartSLOduration=6.631339604 podStartE2EDuration="6.631339604s" podCreationTimestamp="2025-08-12 23:58:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-12 23:58:19.631074582 +0000 UTC m=+100.584751285" watchObservedRunningTime="2025-08-12 23:58:19.631339604 +0000 UTC m=+100.585016316" Aug 12 23:58:19.895735 systemd[1]: run-containerd-runc-k8s.io-9bce5d410f810acd0de2da13f361a2bf452a56f64cd32cc2cf89cc15eec9589d-runc.SxkDRA.mount: Deactivated successfully. 
Aug 12 23:58:21.074131 kubelet[2584]: E0812 23:58:21.074085 2584 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 12 23:58:21.244062 kubelet[2584]: E0812 23:58:21.243428 2584 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 12 23:58:22.745550 systemd-networkd[1370]: lxc_health: Link UP Aug 12 23:58:22.745840 systemd-networkd[1370]: lxc_health: Gained carrier Aug 12 23:58:23.073875 kubelet[2584]: E0812 23:58:23.073835 2584 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 12 23:58:23.612047 kubelet[2584]: E0812 23:58:23.611119 2584 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 12 23:58:24.249262 systemd-networkd[1370]: lxc_health: Gained IPv6LL Aug 12 23:58:24.614050 kubelet[2584]: E0812 23:58:24.613571 2584 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 12 23:58:28.927064 systemd[1]: run-containerd-runc-k8s.io-9bce5d410f810acd0de2da13f361a2bf452a56f64cd32cc2cf89cc15eec9589d-runc.hGHa9j.mount: Deactivated successfully. Aug 12 23:58:29.004616 sshd[4403]: Connection closed by 139.178.68.195 port 59920 Aug 12 23:58:29.006653 sshd-session[4399]: pam_unix(sshd:session): session closed for user core Aug 12 23:58:29.010899 systemd[1]: sshd@26-137.184.234.76:22-139.178.68.195:59920.service: Deactivated successfully. 
Aug 12 23:58:29.013835 systemd[1]: session-27.scope: Deactivated successfully. Aug 12 23:58:29.016533 systemd-logind[1467]: Session 27 logged out. Waiting for processes to exit. Aug 12 23:58:29.018292 systemd-logind[1467]: Removed session 27.