May 15 14:58:51.954089 kernel: Linux version 6.12.20-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Thu May 15 10:42:41 -00 2025
May 15 14:58:51.954144 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=48287e633374b880fa618bd42bee102ae77c50831859c6cedd6ca9e1aec3dd5c
May 15 14:58:51.954156 kernel: BIOS-provided physical RAM map:
May 15 14:58:51.954164 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
May 15 14:58:51.954171 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
May 15 14:58:51.954178 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
May 15 14:58:51.954185 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable
May 15 14:58:51.954197 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved
May 15 14:58:51.954206 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
May 15 14:58:51.954213 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
May 15 14:58:51.954219 kernel: NX (Execute Disable) protection: active
May 15 14:58:51.954226 kernel: APIC: Static calls initialized
May 15 14:58:51.954233 kernel: SMBIOS 2.8 present.
May 15 14:58:51.954240 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
May 15 14:58:51.954251 kernel: DMI: Memory slots populated: 1/1
May 15 14:58:51.954259 kernel: Hypervisor detected: KVM
May 15 14:58:51.954270 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
May 15 14:58:51.954278 kernel: kvm-clock: using sched offset of 5030991135 cycles
May 15 14:58:51.954286 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
May 15 14:58:51.954294 kernel: tsc: Detected 1999.999 MHz processor
May 15 14:58:51.954302 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 15 14:58:51.954310 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 15 14:58:51.954317 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
May 15 14:58:51.954329 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
May 15 14:58:51.954355 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 15 14:58:51.954368 kernel: ACPI: Early table checksum verification disabled
May 15 14:58:51.954382 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS )
May 15 14:58:51.954393 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 15 14:58:51.954405 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 15 14:58:51.954416 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 15 14:58:51.954426 kernel: ACPI: FACS 0x000000007FFE0000 000040
May 15 14:58:51.954437 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 15 14:58:51.954453 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 15 14:58:51.954465 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 15 14:58:51.954478 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 15 14:58:51.954488 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
May 15 14:58:51.954495 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769]
May 15 14:58:51.954502 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
May 15 14:58:51.954510 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d]
May 15 14:58:51.954518 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895]
May 15 14:58:51.954533 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d]
May 15 14:58:51.954540 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985]
May 15 14:58:51.954549 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
May 15 14:58:51.954562 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
May 15 14:58:51.954575 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00001000-0x7ffdafff]
May 15 14:58:51.954592 kernel: NODE_DATA(0) allocated [mem 0x7ffd3dc0-0x7ffdafff]
May 15 14:58:51.954605 kernel: Zone ranges:
May 15 14:58:51.954619 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 15 14:58:51.954630 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff]
May 15 14:58:51.954637 kernel: Normal empty
May 15 14:58:51.954645 kernel: Device empty
May 15 14:58:51.954653 kernel: Movable zone start for each node
May 15 14:58:51.954661 kernel: Early memory node ranges
May 15 14:58:51.954670 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
May 15 14:58:51.954678 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff]
May 15 14:58:51.954688 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff]
May 15 14:58:51.954697 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 15 14:58:51.954705 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
May 15 14:58:51.954713 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges
May 15 14:58:51.954720 kernel: ACPI: PM-Timer IO Port: 0x608
May 15 14:58:51.954728 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
May 15 14:58:51.954741 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
May 15 14:58:51.954749 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
May 15 14:58:51.954761 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
May 15 14:58:51.954772 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 15 14:58:51.954784 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
May 15 14:58:51.954792 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
May 15 14:58:51.954800 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 15 14:58:51.954808 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
May 15 14:58:51.954816 kernel: TSC deadline timer available
May 15 14:58:51.954824 kernel: CPU topo: Max. logical packages: 1
May 15 14:58:51.954831 kernel: CPU topo: Max. logical dies: 1
May 15 14:58:51.954839 kernel: CPU topo: Max. dies per package: 1
May 15 14:58:51.954846 kernel: CPU topo: Max. threads per core: 1
May 15 14:58:51.954856 kernel: CPU topo: Num. cores per package: 2
May 15 14:58:51.954864 kernel: CPU topo: Num. threads per package: 2
May 15 14:58:51.954871 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
May 15 14:58:51.954879 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
May 15 14:58:51.954886 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
May 15 14:58:51.954894 kernel: Booting paravirtualized kernel on KVM
May 15 14:58:51.954902 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 15 14:58:51.954910 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
May 15 14:58:51.954918 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
May 15 14:58:51.954928 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
May 15 14:58:51.954935 kernel: pcpu-alloc: [0] 0 1
May 15 14:58:51.954943 kernel: kvm-guest: PV spinlocks disabled, no host support
May 15 14:58:51.954952 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=48287e633374b880fa618bd42bee102ae77c50831859c6cedd6ca9e1aec3dd5c
May 15 14:58:51.954960 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 15 14:58:51.954968 kernel: random: crng init done
May 15 14:58:51.954975 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 15 14:58:51.954983 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
May 15 14:58:51.954993 kernel: Fallback order for Node 0: 0
May 15 14:58:51.955002 kernel: Built 1 zonelists, mobility grouping on. Total pages: 524153
May 15 14:58:51.955009 kernel: Policy zone: DMA32
May 15 14:58:51.955017 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 15 14:58:51.955024 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
May 15 14:58:51.955032 kernel: Kernel/User page tables isolation: enabled
May 15 14:58:51.955039 kernel: ftrace: allocating 40065 entries in 157 pages
May 15 14:58:51.955047 kernel: ftrace: allocated 157 pages with 5 groups
May 15 14:58:51.955054 kernel: Dynamic Preempt: voluntary
May 15 14:58:51.955064 kernel: rcu: Preemptible hierarchical RCU implementation.
May 15 14:58:51.955074 kernel: rcu: RCU event tracing is enabled.
May 15 14:58:51.955082 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
May 15 14:58:51.955089 kernel: Trampoline variant of Tasks RCU enabled.
May 15 14:58:51.957153 kernel: Rude variant of Tasks RCU enabled.
May 15 14:58:51.957200 kernel: Tracing variant of Tasks RCU enabled.
May 15 14:58:51.957214 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 15 14:58:51.957227 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
May 15 14:58:51.957240 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 15 14:58:51.957267 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 15 14:58:51.957276 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 15 14:58:51.957288 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
May 15 14:58:51.957305 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 15 14:58:51.957316 kernel: Console: colour VGA+ 80x25
May 15 14:58:51.957328 kernel: printk: legacy console [tty0] enabled
May 15 14:58:51.957339 kernel: printk: legacy console [ttyS0] enabled
May 15 14:58:51.957351 kernel: ACPI: Core revision 20240827
May 15 14:58:51.957362 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
May 15 14:58:51.957381 kernel: APIC: Switch to symmetric I/O mode setup
May 15 14:58:51.957389 kernel: x2apic enabled
May 15 14:58:51.957398 kernel: APIC: Switched APIC routing to: physical x2apic
May 15 14:58:51.957408 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
May 15 14:58:51.957422 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x39a85afc727, max_idle_ns: 881590685098 ns
May 15 14:58:51.957431 kernel: Calibrating delay loop (skipped) preset value.. 3999.99 BogoMIPS (lpj=1999999)
May 15 14:58:51.957439 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
May 15 14:58:51.957448 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
May 15 14:58:51.957456 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 15 14:58:51.957467 kernel: Spectre V2 : Mitigation: Retpolines
May 15 14:58:51.957475 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
May 15 14:58:51.957484 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
May 15 14:58:51.957492 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
May 15 14:58:51.957501 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
May 15 14:58:51.957509 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
May 15 14:58:51.957518 kernel: MDS: Mitigation: Clear CPU buffers
May 15 14:58:51.957529 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
May 15 14:58:51.957537 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
May 15 14:58:51.957546 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
May 15 14:58:51.957554 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
May 15 14:58:51.957563 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
May 15 14:58:51.957571 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
May 15 14:58:51.957579 kernel: Freeing SMP alternatives memory: 32K
May 15 14:58:51.957588 kernel: pid_max: default: 32768 minimum: 301
May 15 14:58:51.957596 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
May 15 14:58:51.957607 kernel: landlock: Up and running.
May 15 14:58:51.957615 kernel: SELinux: Initializing.
May 15 14:58:51.957624 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
May 15 14:58:51.957632 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
May 15 14:58:51.957641 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
May 15 14:58:51.957650 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
May 15 14:58:51.957658 kernel: signal: max sigframe size: 1776
May 15 14:58:51.957667 kernel: rcu: Hierarchical SRCU implementation.
May 15 14:58:51.957675 kernel: rcu: Max phase no-delay instances is 400.
May 15 14:58:51.957686 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
May 15 14:58:51.957695 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
May 15 14:58:51.957703 kernel: smp: Bringing up secondary CPUs ...
May 15 14:58:51.957711 kernel: smpboot: x86: Booting SMP configuration:
May 15 14:58:51.957724 kernel: .... node #0, CPUs: #1
May 15 14:58:51.957732 kernel: smp: Brought up 1 node, 2 CPUs
May 15 14:58:51.957741 kernel: smpboot: Total of 2 processors activated (7999.99 BogoMIPS)
May 15 14:58:51.957750 kernel: Memory: 1966908K/2096612K available (14336K kernel code, 2438K rwdata, 9944K rodata, 54416K init, 2544K bss, 125140K reserved, 0K cma-reserved)
May 15 14:58:51.957759 kernel: devtmpfs: initialized
May 15 14:58:51.957771 kernel: x86/mm: Memory block size: 128MB
May 15 14:58:51.957779 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 15 14:58:51.957788 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
May 15 14:58:51.957796 kernel: pinctrl core: initialized pinctrl subsystem
May 15 14:58:51.957805 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 15 14:58:51.957813 kernel: audit: initializing netlink subsys (disabled)
May 15 14:58:51.957822 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 15 14:58:51.957830 kernel: thermal_sys: Registered thermal governor 'user_space'
May 15 14:58:51.957838 kernel: audit: type=2000 audit(1747321128.160:1): state=initialized audit_enabled=0 res=1
May 15 14:58:51.957850 kernel: cpuidle: using governor menu
May 15 14:58:51.957858 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 15 14:58:51.957867 kernel: dca service started, version 1.12.1
May 15 14:58:51.957875 kernel: PCI: Using configuration type 1 for base access
May 15 14:58:51.957884 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 15 14:58:51.957892 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 15 14:58:51.957901 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
May 15 14:58:51.957909 kernel: ACPI: Added _OSI(Module Device)
May 15 14:58:51.957917 kernel: ACPI: Added _OSI(Processor Device)
May 15 14:58:51.957928 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 15 14:58:51.957937 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 15 14:58:51.957945 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 15 14:58:51.957954 kernel: ACPI: Interpreter enabled
May 15 14:58:51.957962 kernel: ACPI: PM: (supports S0 S5)
May 15 14:58:51.957970 kernel: ACPI: Using IOAPIC for interrupt routing
May 15 14:58:51.957979 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 15 14:58:51.957987 kernel: PCI: Using E820 reservations for host bridge windows
May 15 14:58:51.957995 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
May 15 14:58:51.958006 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 15 14:58:51.958327 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
May 15 14:58:51.958486 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
May 15 14:58:51.958576 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
May 15 14:58:51.958587 kernel: acpiphp: Slot [3] registered
May 15 14:58:51.958596 kernel: acpiphp: Slot [4] registered
May 15 14:58:51.958604 kernel: acpiphp: Slot [5] registered
May 15 14:58:51.958618 kernel: acpiphp: Slot [6] registered
May 15 14:58:51.958626 kernel: acpiphp: Slot [7] registered
May 15 14:58:51.958634 kernel: acpiphp: Slot [8] registered
May 15 14:58:51.958643 kernel: acpiphp: Slot [9] registered
May 15 14:58:51.958651 kernel: acpiphp: Slot [10] registered
May 15 14:58:51.958659 kernel: acpiphp: Slot [11] registered
May 15 14:58:51.958667 kernel: acpiphp: Slot [12] registered
May 15 14:58:51.958676 kernel: acpiphp: Slot [13] registered
May 15 14:58:51.958684 kernel: acpiphp: Slot [14] registered
May 15 14:58:51.958695 kernel: acpiphp: Slot [15] registered
May 15 14:58:51.958704 kernel: acpiphp: Slot [16] registered
May 15 14:58:51.958712 kernel: acpiphp: Slot [17] registered
May 15 14:58:51.958720 kernel: acpiphp: Slot [18] registered
May 15 14:58:51.958728 kernel: acpiphp: Slot [19] registered
May 15 14:58:51.958736 kernel: acpiphp: Slot [20] registered
May 15 14:58:51.958744 kernel: acpiphp: Slot [21] registered
May 15 14:58:51.958753 kernel: acpiphp: Slot [22] registered
May 15 14:58:51.958761 kernel: acpiphp: Slot [23] registered
May 15 14:58:51.958769 kernel: acpiphp: Slot [24] registered
May 15 14:58:51.958780 kernel: acpiphp: Slot [25] registered
May 15 14:58:51.958789 kernel: acpiphp: Slot [26] registered
May 15 14:58:51.958797 kernel: acpiphp: Slot [27] registered
May 15 14:58:51.958806 kernel: acpiphp: Slot [28] registered
May 15 14:58:51.958814 kernel: acpiphp: Slot [29] registered
May 15 14:58:51.958822 kernel: acpiphp: Slot [30] registered
May 15 14:58:51.958830 kernel: acpiphp: Slot [31] registered
May 15 14:58:51.958839 kernel: PCI host bridge to bus 0000:00
May 15 14:58:51.958944 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
May 15 14:58:51.959035 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
May 15 14:58:51.959502 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
May 15 14:58:51.959596 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
May 15 14:58:51.959675 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
May 15 14:58:51.959752 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 15 14:58:51.959882 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
May 15 14:58:51.959997 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
May 15 14:58:51.960153 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
May 15 14:58:51.960244 kernel: pci 0000:00:01.1: BAR 4 [io 0xc1e0-0xc1ef]
May 15 14:58:51.960332 kernel: pci 0000:00:01.1: BAR 0 [io 0x01f0-0x01f7]: legacy IDE quirk
May 15 14:58:51.960419 kernel: pci 0000:00:01.1: BAR 1 [io 0x03f6]: legacy IDE quirk
May 15 14:58:51.960506 kernel: pci 0000:00:01.1: BAR 2 [io 0x0170-0x0177]: legacy IDE quirk
May 15 14:58:51.960592 kernel: pci 0000:00:01.1: BAR 3 [io 0x0376]: legacy IDE quirk
May 15 14:58:51.960698 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
May 15 14:58:51.960786 kernel: pci 0000:00:01.2: BAR 4 [io 0xc180-0xc19f]
May 15 14:58:51.960887 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
May 15 14:58:51.960975 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
May 15 14:58:51.961066 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
May 15 14:58:51.963302 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
May 15 14:58:51.963431 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
May 15 14:58:51.963559 kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
May 15 14:58:51.963659 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfebf0000-0xfebf0fff]
May 15 14:58:51.963750 kernel: pci 0000:00:02.0: ROM [mem 0xfebe0000-0xfebeffff pref]
May 15 14:58:51.963839 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
May 15 14:58:51.963968 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
May 15 14:58:51.964062 kernel: pci 0000:00:03.0: BAR 0 [io 0xc1a0-0xc1bf]
May 15 14:58:51.964203 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebf1000-0xfebf1fff]
May 15 14:58:51.964334 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
May 15 14:58:51.964439 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
May 15 14:58:51.964552 kernel: pci 0000:00:04.0: BAR 0 [io 0xc1c0-0xc1df]
May 15 14:58:51.964654 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebf2000-0xfebf2fff]
May 15 14:58:51.964744 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
May 15 14:58:51.964867 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000 conventional PCI endpoint
May 15 14:58:51.964976 kernel: pci 0000:00:05.0: BAR 0 [io 0xc100-0xc13f]
May 15 14:58:51.965144 kernel: pci 0000:00:05.0: BAR 1 [mem 0xfebf3000-0xfebf3fff]
May 15 14:58:51.965279 kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
May 15 14:58:51.966278 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
May 15 14:58:51.966442 kernel: pci 0000:00:06.0: BAR 0 [io 0xc000-0xc07f]
May 15 14:58:51.966584 kernel: pci 0000:00:06.0: BAR 1 [mem 0xfebf4000-0xfebf4fff]
May 15 14:58:51.966722 kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
May 15 14:58:51.966926 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
May 15 14:58:51.967062 kernel: pci 0000:00:07.0: BAR 0 [io 0xc080-0xc0ff]
May 15 14:58:51.968301 kernel: pci 0000:00:07.0: BAR 1 [mem 0xfebf5000-0xfebf5fff]
May 15 14:58:51.968415 kernel: pci 0000:00:07.0: BAR 4 [mem 0xfe814000-0xfe817fff 64bit pref]
May 15 14:58:51.968526 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
May 15 14:58:51.968656 kernel: pci 0000:00:08.0: BAR 0 [io 0xc140-0xc17f]
May 15 14:58:51.968792 kernel: pci 0000:00:08.0: BAR 4 [mem 0xfe818000-0xfe81bfff 64bit pref]
May 15 14:58:51.968810 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
May 15 14:58:51.968826 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
May 15 14:58:51.968840 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
May 15 14:58:51.968854 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
May 15 14:58:51.968869 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
May 15 14:58:51.968884 kernel: iommu: Default domain type: Translated
May 15 14:58:51.968894 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 15 14:58:51.968910 kernel: PCI: Using ACPI for IRQ routing
May 15 14:58:51.968924 kernel: PCI: pci_cache_line_size set to 64 bytes
May 15 14:58:51.968939 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
May 15 14:58:51.968954 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff]
May 15 14:58:51.971140 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
May 15 14:58:51.971316 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
May 15 14:58:51.971413 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
May 15 14:58:51.971429 kernel: vgaarb: loaded
May 15 14:58:51.971444 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
May 15 14:58:51.971465 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
May 15 14:58:51.971479 kernel: clocksource: Switched to clocksource kvm-clock
May 15 14:58:51.971493 kernel: VFS: Disk quotas dquot_6.6.0
May 15 14:58:51.971507 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 15 14:58:51.971517 kernel: pnp: PnP ACPI init
May 15 14:58:51.971525 kernel: pnp: PnP ACPI: found 4 devices
May 15 14:58:51.971534 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 15 14:58:51.971543 kernel: NET: Registered PF_INET protocol family
May 15 14:58:51.971551 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 15 14:58:51.971583 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
May 15 14:58:51.971592 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 15 14:58:51.971601 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
May 15 14:58:51.971610 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
May 15 14:58:51.971618 kernel: TCP: Hash tables configured (established 16384 bind 16384)
May 15 14:58:51.971627 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
May 15 14:58:51.971635 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
May 15 14:58:51.971643 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 15 14:58:51.971655 kernel: NET: Registered PF_XDP protocol family
May 15 14:58:51.971765 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
May 15 14:58:51.971847 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
May 15 14:58:51.971952 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
May 15 14:58:51.972073 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
May 15 14:58:51.972171 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
May 15 14:58:51.972266 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
May 15 14:58:51.972359 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
May 15 14:58:51.972378 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
May 15 14:58:51.972471 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x720 took 27553 usecs
May 15 14:58:51.972482 kernel: PCI: CLS 0 bytes, default 64
May 15 14:58:51.972491 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
May 15 14:58:51.972501 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x39a85afc727, max_idle_ns: 881590685098 ns
May 15 14:58:51.972509 kernel: Initialise system trusted keyrings
May 15 14:58:51.972518 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
May 15 14:58:51.972526 kernel: Key type asymmetric registered
May 15 14:58:51.972535 kernel: Asymmetric key parser 'x509' registered
May 15 14:58:51.972546 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
May 15 14:58:51.972555 kernel: io scheduler mq-deadline registered
May 15 14:58:51.972564 kernel: io scheduler kyber registered
May 15 14:58:51.972572 kernel: io scheduler bfq registered
May 15 14:58:51.972581 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 15 14:58:51.972589 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
May 15 14:58:51.972597 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
May 15 14:58:51.972606 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
May 15 14:58:51.972614 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 15 14:58:51.972625 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 15 14:58:51.972634 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
May 15 14:58:51.972643 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
May 15 14:58:51.972651 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
May 15 14:58:51.972765 kernel: rtc_cmos 00:03: RTC can wake from S4
May 15 14:58:51.972778 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
May 15 14:58:51.972859 kernel: rtc_cmos 00:03: registered as rtc0
May 15 14:58:51.972940 kernel: rtc_cmos 00:03: setting system clock to 2025-05-15T14:58:51 UTC (1747321131)
May 15 14:58:51.973024 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
May 15 14:58:51.973035 kernel: intel_pstate: CPU model not supported
May 15 14:58:51.973043 kernel: NET: Registered PF_INET6 protocol family
May 15 14:58:51.973052 kernel: Segment Routing with IPv6
May 15 14:58:51.973060 kernel: In-situ OAM (IOAM) with IPv6
May 15 14:58:51.973069 kernel: NET: Registered PF_PACKET protocol family
May 15 14:58:51.973078 kernel: Key type dns_resolver registered
May 15 14:58:51.973086 kernel: IPI shorthand broadcast: enabled
May 15 14:58:51.973094 kernel: sched_clock: Marking stable (3911007002, 188018432)->(4129337284, -30311850)
May 15 14:58:51.978038 kernel: registered taskstats version 1
May 15 14:58:51.978051 kernel: Loading compiled-in X.509 certificates
May 15 14:58:51.978062 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.20-flatcar: 05e05785144663be6df1db78301487421c4773b6'
May 15 14:58:51.978072 kernel: Demotion targets for Node 0: null
May 15 14:58:51.978081 kernel: Key type .fscrypt registered
May 15 14:58:51.978089 kernel: Key type fscrypt-provisioning registered
May 15 14:58:51.978142 kernel: ima: No TPM chip found, activating TPM-bypass!
May 15 14:58:51.978153 kernel: ima: Allocated hash algorithm: sha1
May 15 14:58:51.978164 kernel: ima: No architecture policies found
May 15 14:58:51.978173 kernel: clk: Disabling unused clocks
May 15 14:58:51.978182 kernel: Warning: unable to open an initial console.
May 15 14:58:51.978191 kernel: Freeing unused kernel image (initmem) memory: 54416K
May 15 14:58:51.978200 kernel: Write protecting the kernel read-only data: 24576k
May 15 14:58:51.978209 kernel: Freeing unused kernel image (rodata/data gap) memory: 296K
May 15 14:58:51.978217 kernel: Run /init as init process
May 15 14:58:51.978226 kernel: with arguments:
May 15 14:58:51.978235 kernel: /init
May 15 14:58:51.978246 kernel: with environment:
May 15 14:58:51.978254 kernel: HOME=/
May 15 14:58:51.978262 kernel: TERM=linux
May 15 14:58:51.978271 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 15 14:58:51.978282 systemd[1]: Successfully made /usr/ read-only.
May 15 14:58:51.978296 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 15 14:58:51.978307 systemd[1]: Detected virtualization kvm.
May 15 14:58:51.978316 systemd[1]: Detected architecture x86-64.
May 15 14:58:51.978327 systemd[1]: Running in initrd.
May 15 14:58:51.978353 systemd[1]: No hostname configured, using default hostname.
May 15 14:58:51.978363 systemd[1]: Hostname set to .
May 15 14:58:51.978372 systemd[1]: Initializing machine ID from VM UUID.
May 15 14:58:51.978381 systemd[1]: Queued start job for default target initrd.target.
May 15 14:58:51.978390 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 15 14:58:51.978400 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 15 14:58:51.978410 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 15 14:58:51.978423 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 15 14:58:51.978433 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 15 14:58:51.978445 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 15 14:58:51.978456 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 15 14:58:51.978468 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 15 14:58:51.978478 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 15 14:58:51.978487 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 15 14:58:51.978496 systemd[1]: Reached target paths.target - Path Units.
May 15 14:58:51.978505 systemd[1]: Reached target slices.target - Slice Units.
May 15 14:58:51.978514 systemd[1]: Reached target swap.target - Swaps.
May 15 14:58:51.978523 systemd[1]: Reached target timers.target - Timer Units.
May 15 14:58:51.978532 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 15 14:58:51.978544 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 15 14:58:51.978553 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 15 14:58:51.978563 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
May 15 14:58:51.978572 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 15 14:58:51.978581 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 15 14:58:51.978590 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 15 14:58:51.978600 systemd[1]: Reached target sockets.target - Socket Units.
May 15 14:58:51.978609 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 15 14:58:51.978618 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 15 14:58:51.978630 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 15 14:58:51.978640 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
May 15 14:58:51.978650 systemd[1]: Starting systemd-fsck-usr.service...
May 15 14:58:51.978659 systemd[1]: Starting systemd-journald.service - Journal Service...
May 15 14:58:51.978668 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 15 14:58:51.978678 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 15 14:58:51.978741 systemd-journald[212]: Collecting audit messages is disabled.
May 15 14:58:51.978768 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 15 14:58:51.978779 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 15 14:58:51.978811 systemd[1]: Finished systemd-fsck-usr.service.
May 15 14:58:51.978821 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 15 14:58:51.978832 systemd-journald[212]: Journal started
May 15 14:58:51.978854 systemd-journald[212]: Runtime Journal (/run/log/journal/0cc1ad7be659421ea5007298915d617c) is 4.9M, max 39.5M, 34.6M free.
May 15 14:58:51.981155 systemd[1]: Started systemd-journald.service - Journal Service.
May 15 14:58:51.994283 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 15 14:58:52.006036 systemd-modules-load[213]: Inserted module 'overlay'
May 15 14:58:52.006953 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 15 14:58:52.010455 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 15 14:58:52.032200 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 15 14:58:52.075534 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 15 14:58:52.075573 kernel: Bridge firewalling registered
May 15 14:58:52.033999 systemd-tmpfiles[226]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
May 15 14:58:52.054567 systemd-modules-load[213]: Inserted module 'br_netfilter'
May 15 14:58:52.075989 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 15 14:58:52.077289 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 15 14:58:52.078986 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 15 14:58:52.083386 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 15 14:58:52.086255 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 15 14:58:52.110780 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 15 14:58:52.114280 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 15 14:58:52.132172 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 15 14:58:52.136321 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 15 14:58:52.170784 dracut-cmdline[252]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=48287e633374b880fa618bd42bee102ae77c50831859c6cedd6ca9e1aec3dd5c
May 15 14:58:52.178586 systemd-resolved[244]: Positive Trust Anchors:
May 15 14:58:52.178606 systemd-resolved[244]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 15 14:58:52.178655 systemd-resolved[244]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 15 14:58:52.188774 systemd-resolved[244]: Defaulting to hostname 'linux'.
May 15 14:58:52.192741 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 15 14:58:52.194402 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 15 14:58:52.277147 kernel: SCSI subsystem initialized
May 15 14:58:52.288141 kernel: Loading iSCSI transport class v2.0-870.
May 15 14:58:52.301140 kernel: iscsi: registered transport (tcp)
May 15 14:58:52.327413 kernel: iscsi: registered transport (qla4xxx)
May 15 14:58:52.327540 kernel: QLogic iSCSI HBA Driver
May 15 14:58:52.355572 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 15 14:58:52.387999 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 15 14:58:52.389684 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 15 14:58:52.462427 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 15 14:58:52.465218 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 15 14:58:52.524178 kernel: raid6: avx2x4 gen() 30440 MB/s
May 15 14:58:52.541148 kernel: raid6: avx2x2 gen() 30020 MB/s
May 15 14:58:52.558442 kernel: raid6: avx2x1 gen() 23049 MB/s
May 15 14:58:52.558560 kernel: raid6: using algorithm avx2x4 gen() 30440 MB/s
May 15 14:58:52.576382 kernel: raid6: .... xor() 8970 MB/s, rmw enabled
May 15 14:58:52.576478 kernel: raid6: using avx2x2 recovery algorithm
May 15 14:58:52.604171 kernel: xor: automatically using best checksumming function avx
May 15 14:58:52.771177 kernel: Btrfs loaded, zoned=no, fsverity=no
May 15 14:58:52.780378 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 15 14:58:52.782970 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 15 14:58:52.818568 systemd-udevd[460]: Using default interface naming scheme 'v255'.
May 15 14:58:52.824734 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 15 14:58:52.828489 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 15 14:58:52.856439 dracut-pre-trigger[464]: rd.md=0: removing MD RAID activation
May 15 14:58:52.889111 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 15 14:58:52.892240 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 15 14:58:52.959707 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 15 14:58:52.962015 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 15 14:58:53.042146 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues
May 15 14:58:53.088483 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
May 15 14:58:53.088639 kernel: cryptd: max_cpu_qlen set to 1000
May 15 14:58:53.088652 kernel: virtio_scsi virtio3: 2/0/0 default/read/poll queues
May 15 14:58:53.128201 kernel: scsi host0: Virtio SCSI HBA
May 15 14:58:53.128434 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 15 14:58:53.128454 kernel: GPT:9289727 != 125829119
May 15 14:58:53.128477 kernel: GPT:Alternate GPT header not at the end of the disk.
May 15 14:58:53.128496 kernel: GPT:9289727 != 125829119
May 15 14:58:53.128513 kernel: GPT: Use GNU Parted to correct GPT errors.
May 15 14:58:53.128531 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 15 14:58:53.128550 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues
May 15 14:58:53.128711 kernel: virtio_blk virtio5: [vdb] 976 512-byte logical blocks (500 kB/488 KiB)
May 15 14:58:53.128854 kernel: AES CTR mode by8 optimization enabled
May 15 14:58:53.137142 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 15 14:58:53.143625 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
May 15 14:58:53.137336 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 15 14:58:53.143321 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 15 14:58:53.149683 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 15 14:58:53.155994 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
May 15 14:58:53.185139 kernel: libata version 3.00 loaded.
May 15 14:58:53.189302 kernel: ata_piix 0000:00:01.1: version 2.13
May 15 14:58:53.216540 kernel: scsi host1: ata_piix
May 15 14:58:53.216767 kernel: scsi host2: ata_piix
May 15 14:58:53.216931 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14 lpm-pol 0
May 15 14:58:53.216949 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15 lpm-pol 0
May 15 14:58:53.246144 kernel: ACPI: bus type USB registered
May 15 14:58:53.246225 kernel: usbcore: registered new interface driver usbfs
May 15 14:58:53.246257 kernel: usbcore: registered new interface driver hub
May 15 14:58:53.246274 kernel: usbcore: registered new device driver usb
May 15 14:58:53.288964 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
May 15 14:58:53.306046 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
May 15 14:58:53.306778 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
May 15 14:58:53.314264 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 15 14:58:53.324975 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 15 14:58:53.336317 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
May 15 14:58:53.339303 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 15 14:58:53.372835 disk-uuid[608]: Primary Header is updated.
May 15 14:58:53.372835 disk-uuid[608]: Secondary Entries is updated.
May 15 14:58:53.372835 disk-uuid[608]: Secondary Header is updated.
May 15 14:58:53.378173 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 15 14:58:53.386163 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 15 14:58:53.431155 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
May 15 14:58:53.457487 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
May 15 14:58:53.457660 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
May 15 14:58:53.457784 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180
May 15 14:58:53.457896 kernel: hub 1-0:1.0: USB hub found
May 15 14:58:53.458054 kernel: hub 1-0:1.0: 2 ports detected
May 15 14:58:53.530722 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 15 14:58:53.564151 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 15 14:58:53.565611 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 15 14:58:53.566954 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 15 14:58:53.569023 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 15 14:58:53.604262 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 15 14:58:54.387595 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 15 14:58:54.388409 disk-uuid[609]: The operation has completed successfully.
May 15 14:58:54.456599 systemd[1]: disk-uuid.service: Deactivated successfully.
May 15 14:58:54.456767 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 15 14:58:54.483028 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 15 14:58:54.496951 sh[633]: Success
May 15 14:58:54.520389 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 15 14:58:54.520474 kernel: device-mapper: uevent: version 1.0.3
May 15 14:58:54.521331 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
May 15 14:58:54.537148 kernel: device-mapper: verity: sha256 using shash "sha256-avx2"
May 15 14:58:54.600858 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 15 14:58:54.613276 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 15 14:58:54.616908 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 15 14:58:54.645166 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay'
May 15 14:58:54.648166 kernel: BTRFS: device fsid 2d504097-db49-4d66-a0d5-eeb665b21004 devid 1 transid 41 /dev/mapper/usr (253:0) scanned by mount (645)
May 15 14:58:54.651879 kernel: BTRFS info (device dm-0): first mount of filesystem 2d504097-db49-4d66-a0d5-eeb665b21004
May 15 14:58:54.651994 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
May 15 14:58:54.652015 kernel: BTRFS info (device dm-0): using free-space-tree
May 15 14:58:54.661481 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 15 14:58:54.662958 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
May 15 14:58:54.664284 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 15 14:58:54.666295 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 15 14:58:54.667914 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 15 14:58:54.703139 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (679)
May 15 14:58:54.706417 kernel: BTRFS info (device vda6): first mount of filesystem afd0c70c-d15e-448c-8325-f96e3c3ed3a5
May 15 14:58:54.706533 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 15 14:58:54.708852 kernel: BTRFS info (device vda6): using free-space-tree
May 15 14:58:54.720170 kernel: BTRFS info (device vda6): last unmount of filesystem afd0c70c-d15e-448c-8325-f96e3c3ed3a5
May 15 14:58:54.723547 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 15 14:58:54.726770 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 15 14:58:54.825417 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 15 14:58:54.831387 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 15 14:58:54.916218 systemd-networkd[815]: lo: Link UP
May 15 14:58:54.916229 systemd-networkd[815]: lo: Gained carrier
May 15 14:58:54.918769 systemd-networkd[815]: Enumeration completed
May 15 14:58:54.919730 systemd-networkd[815]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
May 15 14:58:54.919735 systemd-networkd[815]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network.
May 15 14:58:54.921867 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 15 14:58:54.922617 systemd[1]: Reached target network.target - Network.
May 15 14:58:54.924879 systemd-networkd[815]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 15 14:58:54.925312 systemd-networkd[815]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
May 15 14:58:54.932820 systemd-networkd[815]: eth0: Link UP
May 15 14:58:54.932832 systemd-networkd[815]: eth0: Gained carrier
May 15 14:58:54.932855 systemd-networkd[815]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
May 15 14:58:54.939777 systemd-networkd[815]: eth1: Link UP
May 15 14:58:54.939789 systemd-networkd[815]: eth1: Gained carrier
May 15 14:58:54.939811 systemd-networkd[815]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 15 14:58:54.962221 systemd-networkd[815]: eth0: DHCPv4 address 137.184.120.255/20, gateway 137.184.112.1 acquired from 169.254.169.253
May 15 14:58:54.969259 systemd-networkd[815]: eth1: DHCPv4 address 10.124.0.35/20 acquired from 169.254.169.253
May 15 14:58:54.977542 ignition[729]: Ignition 2.21.0
May 15 14:58:54.978412 ignition[729]: Stage: fetch-offline
May 15 14:58:54.978920 ignition[729]: no configs at "/usr/lib/ignition/base.d"
May 15 14:58:54.978931 ignition[729]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
May 15 14:58:54.979043 ignition[729]: parsed url from cmdline: ""
May 15 14:58:54.979047 ignition[729]: no config URL provided
May 15 14:58:54.979053 ignition[729]: reading system config file "/usr/lib/ignition/user.ign"
May 15 14:58:54.982456 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 15 14:58:54.979060 ignition[729]: no config at "/usr/lib/ignition/user.ign"
May 15 14:58:54.979067 ignition[729]: failed to fetch config: resource requires networking
May 15 14:58:54.979261 ignition[729]: Ignition finished successfully
May 15 14:58:54.987569 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
May 15 14:58:55.021538 ignition[826]: Ignition 2.21.0
May 15 14:58:55.021556 ignition[826]: Stage: fetch
May 15 14:58:55.021758 ignition[826]: no configs at "/usr/lib/ignition/base.d"
May 15 14:58:55.021769 ignition[826]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
May 15 14:58:55.021867 ignition[826]: parsed url from cmdline: ""
May 15 14:58:55.021872 ignition[826]: no config URL provided
May 15 14:58:55.021878 ignition[826]: reading system config file "/usr/lib/ignition/user.ign"
May 15 14:58:55.021886 ignition[826]: no config at "/usr/lib/ignition/user.ign"
May 15 14:58:55.021936 ignition[826]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1
May 15 14:58:55.041526 ignition[826]: GET result: OK
May 15 14:58:55.041745 ignition[826]: parsing config with SHA512: f5daeeebc679475c272c30a758dda27e1c46e67e132788124a00a00d02b5f20d58f3da02e784472f3497bf44ad3ad9685bff7b27ed6367d7cfb2a480beedeecd
May 15 14:58:55.048593 unknown[826]: fetched base config from "system"
May 15 14:58:55.048608 unknown[826]: fetched base config from "system"
May 15 14:58:55.049082 ignition[826]: fetch: fetch complete
May 15 14:58:55.048615 unknown[826]: fetched user config from "digitalocean"
May 15 14:58:55.049090 ignition[826]: fetch: fetch passed
May 15 14:58:55.049170 ignition[826]: Ignition finished successfully
May 15 14:58:55.051845 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
May 15 14:58:55.059421 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 15 14:58:55.093183 ignition[833]: Ignition 2.21.0
May 15 14:58:55.093204 ignition[833]: Stage: kargs
May 15 14:58:55.093416 ignition[833]: no configs at "/usr/lib/ignition/base.d"
May 15 14:58:55.093427 ignition[833]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
May 15 14:58:55.097720 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 15 14:58:55.094721 ignition[833]: kargs: kargs passed
May 15 14:58:55.094818 ignition[833]: Ignition finished successfully
May 15 14:58:55.101362 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 15 14:58:55.136962 ignition[840]: Ignition 2.21.0
May 15 14:58:55.136980 ignition[840]: Stage: disks
May 15 14:58:55.137163 ignition[840]: no configs at "/usr/lib/ignition/base.d"
May 15 14:58:55.137173 ignition[840]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
May 15 14:58:55.140028 ignition[840]: disks: disks passed
May 15 14:58:55.141026 ignition[840]: Ignition finished successfully
May 15 14:58:55.142618 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 15 14:58:55.143806 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 15 14:58:55.144497 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 15 14:58:55.145816 systemd[1]: Reached target local-fs.target - Local File Systems.
May 15 14:58:55.147263 systemd[1]: Reached target sysinit.target - System Initialization.
May 15 14:58:55.148333 systemd[1]: Reached target basic.target - Basic System.
May 15 14:58:55.150885 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 15 14:58:55.182711 systemd-fsck[849]: ROOT: clean, 15/553520 files, 52789/553472 blocks
May 15 14:58:55.185790 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 15 14:58:55.189317 systemd[1]: Mounting sysroot.mount - /sysroot...
May 15 14:58:55.323136 kernel: EXT4-fs (vda9): mounted filesystem f7dea4bd-2644-4592-b85b-330f322c4d2b r/w with ordered data mode. Quota mode: none.
May 15 14:58:55.323507 systemd[1]: Mounted sysroot.mount - /sysroot.
May 15 14:58:55.325305 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 15 14:58:55.328470 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 15 14:58:55.330950 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 15 14:58:55.335395 systemd[1]: Starting flatcar-afterburn-network.service - Flatcar Afterburn network service...
May 15 14:58:55.346875 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
May 15 14:58:55.351272 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 15 14:58:55.352664 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 15 14:58:55.363212 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (857)
May 15 14:58:55.367175 kernel: BTRFS info (device vda6): first mount of filesystem afd0c70c-d15e-448c-8325-f96e3c3ed3a5
May 15 14:58:55.367214 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 15 14:58:55.383463 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 15 14:58:55.383512 kernel: BTRFS info (device vda6): using free-space-tree
May 15 14:58:55.375996 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 15 14:58:55.386233 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 15 14:58:55.438054 coreos-metadata[860]: May 15 14:58:55.437 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
May 15 14:58:55.447948 coreos-metadata[859]: May 15 14:58:55.447 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
May 15 14:58:55.449443 coreos-metadata[860]: May 15 14:58:55.449 INFO Fetch successful
May 15 14:58:55.451896 initrd-setup-root[887]: cut: /sysroot/etc/passwd: No such file or directory
May 15 14:58:55.454872 coreos-metadata[860]: May 15 14:58:55.454 INFO wrote hostname ci-4334.0.0-a-cad88baf47 to /sysroot/etc/hostname
May 15 14:58:55.456512 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
May 15 14:58:55.459209 coreos-metadata[859]: May 15 14:58:55.459 INFO Fetch successful
May 15 14:58:55.463349 initrd-setup-root[895]: cut: /sysroot/etc/group: No such file or directory
May 15 14:58:55.466518 systemd[1]: flatcar-afterburn-network.service: Deactivated successfully.
May 15 14:58:55.468041 systemd[1]: Finished flatcar-afterburn-network.service - Flatcar Afterburn network service.
May 15 14:58:55.470779 initrd-setup-root[903]: cut: /sysroot/etc/shadow: No such file or directory
May 15 14:58:55.476230 initrd-setup-root[910]: cut: /sysroot/etc/gshadow: No such file or directory
May 15 14:58:55.596578 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 15 14:58:55.600157 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 15 14:58:55.602532 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 15 14:58:55.620170 kernel: BTRFS info (device vda6): last unmount of filesystem afd0c70c-d15e-448c-8325-f96e3c3ed3a5
May 15 14:58:55.646280 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 15 14:58:55.647084 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 15 14:58:55.658965 ignition[979]: INFO : Ignition 2.21.0
May 15 14:58:55.660883 ignition[979]: INFO : Stage: mount
May 15 14:58:55.660883 ignition[979]: INFO : no configs at "/usr/lib/ignition/base.d"
May 15 14:58:55.660883 ignition[979]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
May 15 14:58:55.664238 ignition[979]: INFO : mount: mount passed
May 15 14:58:55.664238 ignition[979]: INFO : Ignition finished successfully
May 15 14:58:55.666601 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 15 14:58:55.669244 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 15 14:58:55.693868 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 15 14:58:55.715288 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (990)
May 15 14:58:55.715366 kernel: BTRFS info (device vda6): first mount of filesystem afd0c70c-d15e-448c-8325-f96e3c3ed3a5
May 15 14:58:55.717265 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 15 14:58:55.719387 kernel: BTRFS info (device vda6): using free-space-tree
May 15 14:58:55.724404 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 15 14:58:55.759418 ignition[1007]: INFO : Ignition 2.21.0
May 15 14:58:55.759418 ignition[1007]: INFO : Stage: files
May 15 14:58:55.762391 ignition[1007]: INFO : no configs at "/usr/lib/ignition/base.d"
May 15 14:58:55.762391 ignition[1007]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
May 15 14:58:55.762391 ignition[1007]: DEBUG : files: compiled without relabeling support, skipping
May 15 14:58:55.764803 ignition[1007]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 15 14:58:55.764803 ignition[1007]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 15 14:58:55.769550 ignition[1007]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 15 14:58:55.770609 ignition[1007]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 15 14:58:55.770609 ignition[1007]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 15 14:58:55.770498 unknown[1007]: wrote ssh authorized keys file for user: core
May 15 14:58:55.773395 ignition[1007]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
May 15 14:58:55.774577 ignition[1007]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
May 15 14:58:55.822171 ignition[1007]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 15 14:58:55.949928 ignition[1007]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
May 15 14:58:55.949928 ignition[1007]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 15 14:58:55.952350 ignition[1007]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
May 15 14:58:56.058423 systemd-networkd[815]: eth1: Gained IPv6LL
May 15 14:58:56.420784 ignition[1007]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
May 15 14:58:56.504842 ignition[1007]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 15 14:58:56.504842 ignition[1007]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
May 15 14:58:56.513510 ignition[1007]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
May 15 14:58:56.513510 ignition[1007]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
May 15 14:58:56.513510 ignition[1007]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 15 14:58:56.513510 ignition[1007]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 15 14:58:56.513510 ignition[1007]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 15 14:58:56.513510 ignition[1007]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 15 14:58:56.513510 ignition[1007]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 15 14:58:56.513510 ignition[1007]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 15 14:58:56.513510 ignition[1007]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 15 14:58:56.513510 ignition[1007]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
May 15 14:58:56.513510 ignition[1007]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
May 15 14:58:56.513510 ignition[1007]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
May 15 14:58:56.513510 ignition[1007]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1
May 15 14:58:56.570618 systemd-networkd[815]: eth0: Gained IPv6LL
May 15 14:58:56.815284 ignition[1007]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
May 15 14:58:57.096492 ignition[1007]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
May 15 14:58:57.097775 ignition[1007]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
May 15 14:58:57.098640 ignition[1007]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 15 14:58:57.101635 ignition[1007]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 15 14:58:57.101635 ignition[1007]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
May 15 14:58:57.101635 ignition[1007]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
May 15 14:58:57.104370 ignition[1007]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
May 15 14:58:57.104370 ignition[1007]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
May 15 14:58:57.104370 ignition[1007]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 15 14:58:57.104370 ignition[1007]: INFO : files: files passed
May 15 14:58:57.104370 ignition[1007]: INFO : Ignition finished successfully
May 15 14:58:57.104447 systemd[1]: Finished ignition-files.service - Ignition (files).
May 15 14:58:57.108250 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 15 14:58:57.113291 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 15 14:58:57.128093 systemd[1]: ignition-quench.service: Deactivated successfully.
May 15 14:58:57.128272 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 15 14:58:57.138212 initrd-setup-root-after-ignition[1037]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 15 14:58:57.138212 initrd-setup-root-after-ignition[1037]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 15 14:58:57.142673 initrd-setup-root-after-ignition[1041]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 15 14:58:57.143738 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 15 14:58:57.145843 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 15 14:58:57.147847 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 15 14:58:57.222813 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 15 14:58:57.223011 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 15 14:58:57.225604 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 15 14:58:57.226413 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 15 14:58:57.227916 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 15 14:58:57.229312 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 15 14:58:57.263030 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 15 14:58:57.266366 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 15 14:58:57.292116 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 15 14:58:57.293720 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 15 14:58:57.294469 systemd[1]: Stopped target timers.target - Timer Units.
May 15 14:58:57.296054 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 15 14:58:57.296237 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 15 14:58:57.297604 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 15 14:58:57.298352 systemd[1]: Stopped target basic.target - Basic System.
May 15 14:58:57.299548 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 15 14:58:57.300743 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 15 14:58:57.302013 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 15 14:58:57.303545 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
May 15 14:58:57.304889 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 15 14:58:57.306249 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 15 14:58:57.307694 systemd[1]: Stopped target sysinit.target - System Initialization.
May 15 14:58:57.308861 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 15 14:58:57.310213 systemd[1]: Stopped target swap.target - Swaps.
May 15 14:58:57.311241 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 15 14:58:57.311403 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 15 14:58:57.312865 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 15 14:58:57.313663 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 15 14:58:57.314989 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 15 14:58:57.315283 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 15 14:58:57.316282 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 15 14:58:57.316469 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 15 14:58:57.318475 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 15 14:58:57.318764 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 15 14:58:57.320194 systemd[1]: ignition-files.service: Deactivated successfully.
May 15 14:58:57.320386 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 15 14:58:57.321553 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
May 15 14:58:57.321743 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
May 15 14:58:57.325380 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 15 14:58:57.329478 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 15 14:58:57.330226 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 15 14:58:57.332368 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 15 14:58:57.337946 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 15 14:58:57.340380 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 15 14:58:57.350807 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 15 14:58:57.350947 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 15 14:58:57.374865 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 15 14:58:57.407561 ignition[1061]: INFO : Ignition 2.21.0
May 15 14:58:57.407561 ignition[1061]: INFO : Stage: umount
May 15 14:58:57.407561 ignition[1061]: INFO : no configs at "/usr/lib/ignition/base.d"
May 15 14:58:57.407561 ignition[1061]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
May 15 14:58:57.407561 ignition[1061]: INFO : umount: umount passed
May 15 14:58:57.407561 ignition[1061]: INFO : Ignition finished successfully
May 15 14:58:57.410027 systemd[1]: ignition-mount.service: Deactivated successfully.
May 15 14:58:57.410216 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 15 14:58:57.459565 systemd[1]: ignition-disks.service: Deactivated successfully.
May 15 14:58:57.459684 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 15 14:58:57.461024 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 15 14:58:57.461136 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 15 14:58:57.462178 systemd[1]: ignition-fetch.service: Deactivated successfully.
May 15 14:58:57.462240 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
May 15 14:58:57.485994 systemd[1]: Stopped target network.target - Network.
May 15 14:58:57.488642 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 15 14:58:57.488776 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 15 14:58:57.489721 systemd[1]: Stopped target paths.target - Path Units.
May 15 14:58:57.490816 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 15 14:58:57.494398 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 15 14:58:57.495194 systemd[1]: Stopped target slices.target - Slice Units.
May 15 14:58:57.496752 systemd[1]: Stopped target sockets.target - Socket Units.
May 15 14:58:57.497964 systemd[1]: iscsid.socket: Deactivated successfully.
May 15 14:58:57.498050 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 15 14:58:57.499334 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 15 14:58:57.499403 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 15 14:58:57.500659 systemd[1]: ignition-setup.service: Deactivated successfully.
May 15 14:58:57.500777 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 15 14:58:57.501906 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 15 14:58:57.501985 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 15 14:58:57.503373 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 15 14:58:57.504501 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 15 14:58:57.506705 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 15 14:58:57.506871 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 15 14:58:57.511697 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 15 14:58:57.511932 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 15 14:58:57.514472 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 15 14:58:57.514663 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 15 14:58:57.521885 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
May 15 14:58:57.522482 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 15 14:58:57.522679 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 15 14:58:57.525903 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
May 15 14:58:57.527919 systemd[1]: Stopped target network-pre.target - Preparation for Network.
May 15 14:58:57.528880 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 15 14:58:57.528952 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 15 14:58:57.533304 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 15 14:58:57.534046 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 15 14:58:57.535577 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 15 14:58:57.536364 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 15 14:58:57.536423 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 15 14:58:57.539024 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 15 14:58:57.539816 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 15 14:58:57.541322 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 15 14:58:57.541415 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 15 14:58:57.543177 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 15 14:58:57.549015 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 15 14:58:57.549339 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
May 15 14:58:57.560066 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 15 14:58:57.565464 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 15 14:58:57.566855 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 15 14:58:57.566907 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 15 14:58:57.569007 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 15 14:58:57.569049 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 15 14:58:57.570262 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 15 14:58:57.570379 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 15 14:58:57.572214 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 15 14:58:57.572291 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 15 14:58:57.573548 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 15 14:58:57.573606 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 15 14:58:57.576260 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 15 14:58:57.578155 systemd[1]: systemd-network-generator.service: Deactivated successfully.
May 15 14:58:57.578260 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
May 15 14:58:57.581739 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 15 14:58:57.581836 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 15 14:58:57.583165 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
May 15 14:58:57.583249 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 15 14:58:57.586678 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 15 14:58:57.586741 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 15 14:58:57.588049 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 15 14:58:57.588150 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 15 14:58:57.594152 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
May 15 14:58:57.594226 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully.
May 15 14:58:57.594262 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
May 15 14:58:57.594394 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
May 15 14:58:57.594784 systemd[1]: network-cleanup.service: Deactivated successfully.
May 15 14:58:57.594943 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 15 14:58:57.601412 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 15 14:58:57.601548 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 15 14:58:57.607521 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 15 14:58:57.609442 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 15 14:58:57.632790 systemd[1]: Switching root.
May 15 14:58:57.709902 systemd-journald[212]: Journal stopped
May 15 14:58:58.912543 systemd-journald[212]: Received SIGTERM from PID 1 (systemd).
May 15 14:58:58.912624 kernel: SELinux: policy capability network_peer_controls=1
May 15 14:58:58.912662 kernel: SELinux: policy capability open_perms=1
May 15 14:58:58.912674 kernel: SELinux: policy capability extended_socket_class=1
May 15 14:58:58.912685 kernel: SELinux: policy capability always_check_network=0
May 15 14:58:58.912704 kernel: SELinux: policy capability cgroup_seclabel=1
May 15 14:58:58.912720 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 15 14:58:58.912731 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 15 14:58:58.912742 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 15 14:58:58.912753 kernel: SELinux: policy capability userspace_initial_context=0
May 15 14:58:58.912767 kernel: audit: type=1403 audit(1747321137.839:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 15 14:58:58.912780 systemd[1]: Successfully loaded SELinux policy in 45.440ms.
May 15 14:58:58.912799 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 13.883ms.
May 15 14:58:58.912813 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 15 14:58:58.912826 systemd[1]: Detected virtualization kvm.
May 15 14:58:58.912842 systemd[1]: Detected architecture x86-64.
May 15 14:58:58.912853 systemd[1]: Detected first boot.
May 15 14:58:58.912864 systemd[1]: Hostname set to .
May 15 14:58:58.912883 systemd[1]: Initializing machine ID from VM UUID.
May 15 14:58:58.912894 zram_generator::config[1105]: No configuration found.
May 15 14:58:58.912908 kernel: Guest personality initialized and is inactive
May 15 14:58:58.912919 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
May 15 14:58:58.912929 kernel: Initialized host personality
May 15 14:58:58.912940 kernel: NET: Registered PF_VSOCK protocol family
May 15 14:58:58.912951 systemd[1]: Populated /etc with preset unit settings.
May 15 14:58:58.912968 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
May 15 14:58:58.912981 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 15 14:58:58.912998 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 15 14:58:58.913011 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 15 14:58:58.913024 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 15 14:58:58.913036 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 15 14:58:58.913047 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 15 14:58:58.913059 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 15 14:58:58.913071 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 15 14:58:58.913089 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 15 14:58:58.914190 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 15 14:58:58.914209 systemd[1]: Created slice user.slice - User and Session Slice.
May 15 14:58:58.914223 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 15 14:58:58.914235 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 15 14:58:58.914278 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 15 14:58:58.914292 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 15 14:58:58.914308 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 15 14:58:58.914320 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 15 14:58:58.914332 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
May 15 14:58:58.914344 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 15 14:58:58.914356 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 15 14:58:58.914381 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 15 14:58:58.914392 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 15 14:58:58.914404 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 15 14:58:58.914416 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 15 14:58:58.914430 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 15 14:58:58.914443 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 15 14:58:58.914454 systemd[1]: Reached target slices.target - Slice Units.
May 15 14:58:58.914472 systemd[1]: Reached target swap.target - Swaps.
May 15 14:58:58.914488 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 15 14:58:58.914506 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 15 14:58:58.914525 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
May 15 14:58:58.914544 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 15 14:58:58.914575 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 15 14:58:58.914589 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 15 14:58:58.914611 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 15 14:58:58.914629 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 15 14:58:58.914647 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 15 14:58:58.914664 systemd[1]: Mounting media.mount - External Media Directory...
May 15 14:58:58.914682 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 15 14:58:58.914700 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 15 14:58:58.914720 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 15 14:58:58.914739 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 15 14:58:58.914761 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 15 14:58:58.914781 systemd[1]: Reached target machines.target - Containers.
May 15 14:58:58.914802 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 15 14:58:58.914820 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 15 14:58:58.914838 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 15 14:58:58.914856 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 15 14:58:58.914873 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 15 14:58:58.914891 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 15 14:58:58.914909 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 15 14:58:58.914929 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 15 14:58:58.914947 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 15 14:58:58.914965 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 15 14:58:58.914984 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 15 14:58:58.915004 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 15 14:58:58.915022 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 15 14:58:58.915042 systemd[1]: Stopped systemd-fsck-usr.service.
May 15 14:58:58.915061 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 15 14:58:58.915073 systemd[1]: Starting systemd-journald.service - Journal Service...
May 15 14:58:58.915085 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 15 14:58:58.915115 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 15 14:58:58.916197 kernel: loop: module loaded
May 15 14:58:58.916220 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 15 14:58:58.916234 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
May 15 14:58:58.916252 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 15 14:58:58.916265 systemd[1]: verity-setup.service: Deactivated successfully.
May 15 14:58:58.916277 systemd[1]: Stopped verity-setup.service.
May 15 14:58:58.916288 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 15 14:58:58.916304 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 15 14:58:58.916315 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 15 14:58:58.916329 systemd[1]: Mounted media.mount - External Media Directory.
May 15 14:58:58.916341 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 15 14:58:58.916353 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 15 14:58:58.916369 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 15 14:58:58.916381 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 15 14:58:58.916393 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 15 14:58:58.916409 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 15 14:58:58.916424 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 15 14:58:58.916435 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 15 14:58:58.916448 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 15 14:58:58.916460 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 15 14:58:58.916471 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 15 14:58:58.916483 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 15 14:58:58.916496 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 15 14:58:58.916508 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 15 14:58:58.916521 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 15 14:58:58.916543 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 15 14:58:58.916562 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 15 14:58:58.916626 systemd-journald[1182]: Collecting audit messages is disabled.
May 15 14:58:58.916672 systemd-journald[1182]: Journal started
May 15 14:58:58.916709 systemd-journald[1182]: Runtime Journal (/run/log/journal/0cc1ad7be659421ea5007298915d617c) is 4.9M, max 39.5M, 34.6M free.
May 15 14:58:58.504236 systemd[1]: Queued start job for default target multi-user.target.
May 15 14:58:58.530027 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
May 15 14:58:58.530665 systemd[1]: systemd-journald.service: Deactivated successfully.
May 15 14:58:58.927187 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 15 14:58:58.934154 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 15 14:58:58.939167 systemd[1]: Started systemd-journald.service - Journal Service.
May 15 14:58:58.942236 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 15 14:58:58.943207 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 15 14:58:58.963139 kernel: ACPI: bus type drm_connector registered
May 15 14:58:58.970505 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 15 14:58:58.974343 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 15 14:58:58.980331 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 15 14:58:58.980399 systemd[1]: Reached target local-fs.target - Local File Systems.
May 15 14:58:58.984290 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
May 15 14:58:58.989214 kernel: fuse: init (API version 7.41)
May 15 14:58:58.993220 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 15 14:58:58.994706 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 15 14:58:58.996364 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 15 14:58:58.998663 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 15 14:58:59.000227 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 15 14:58:59.005262 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 15 14:58:59.013312 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 15 14:58:59.015132 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 15 14:58:59.015420 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 15 14:58:59.017186 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
May 15 14:58:59.018656 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 15 14:58:59.076879 systemd-journald[1182]: Time spent on flushing to /var/log/journal/0cc1ad7be659421ea5007298915d617c is 22.754ms for 1014 entries.
May 15 14:58:59.076879 systemd-journald[1182]: System Journal (/var/log/journal/0cc1ad7be659421ea5007298915d617c) is 8M, max 195.6M, 187.6M free.
May 15 14:58:59.117077 systemd-journald[1182]: Received client request to flush runtime journal.
May 15 14:58:59.117198 kernel: loop0: detected capacity change from 0 to 218376
May 15 14:58:59.078640 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 15 14:58:59.092576 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 15 14:58:59.119934 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 15 14:58:59.125210 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 15 14:58:59.128657 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 15 14:58:59.133986 systemd-tmpfiles[1200]: ACLs are not supported, ignoring.
May 15 14:58:59.134631 systemd-tmpfiles[1200]: ACLs are not supported, ignoring.
May 15 14:58:59.142477 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
May 15 14:58:59.153518 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 15 14:58:59.162626 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 15 14:58:59.167725 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 15 14:58:59.195091 kernel: loop1: detected capacity change from 0 to 146240
May 15 14:58:59.206923 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
May 15 14:58:59.242355 kernel: loop2: detected capacity change from 0 to 8
May 15 14:58:59.258480 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 15 14:58:59.268818 kernel: loop3: detected capacity change from 0 to 113872
May 15 14:58:59.267893 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 15 14:58:59.314381 kernel: loop4: detected capacity change from 0 to 218376
May 15 14:58:59.339401 kernel: loop5: detected capacity change from 0 to 146240
May 15 14:58:59.399308 kernel: loop6: detected capacity change from 0 to 8
May 15 14:58:59.400986 systemd-tmpfiles[1252]: ACLs are not supported, ignoring.
May 15 14:58:59.401018 systemd-tmpfiles[1252]: ACLs are not supported, ignoring.
May 15 14:58:59.410183 kernel: loop7: detected capacity change from 0 to 113872
May 15 14:58:59.424036 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 15 14:58:59.444420 (sd-merge)[1254]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'.
May 15 14:58:59.445340 (sd-merge)[1254]: Merged extensions into '/usr'.
May 15 14:58:59.462565 systemd[1]: Reload requested from client PID 1232 ('systemd-sysext') (unit systemd-sysext.service)...
May 15 14:58:59.462606 systemd[1]: Reloading...
May 15 14:58:59.612176 zram_generator::config[1279]: No configuration found.
May 15 14:58:59.975342 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 15 14:58:59.978487 ldconfig[1225]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 15 14:59:00.101587 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 15 14:59:00.101901 systemd[1]: Reloading finished in 638 ms.
May 15 14:59:00.139851 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 15 14:59:00.141491 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 15 14:59:00.151457 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 15 14:59:00.162398 systemd[1]: Starting ensure-sysext.service...
May 15 14:59:00.170631 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 15 14:59:00.187210 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 15 14:59:00.227405 systemd[1]: Reload requested from client PID 1327 ('systemctl') (unit ensure-sysext.service)...
May 15 14:59:00.227439 systemd[1]: Reloading...
May 15 14:59:00.272641 systemd-tmpfiles[1328]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
May 15 14:59:00.273716 systemd-tmpfiles[1328]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
May 15 14:59:00.275548 systemd-tmpfiles[1328]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 15 14:59:00.275977 systemd-tmpfiles[1328]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 15 14:59:00.278551 systemd-tmpfiles[1328]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 15 14:59:00.279324 systemd-tmpfiles[1328]: ACLs are not supported, ignoring.
May 15 14:59:00.279574 systemd-tmpfiles[1328]: ACLs are not supported, ignoring.
May 15 14:59:00.291472 systemd-tmpfiles[1328]: Detected autofs mount point /boot during canonicalization of boot.
May 15 14:59:00.291720 systemd-tmpfiles[1328]: Skipping /boot
May 15 14:59:00.362169 systemd-tmpfiles[1328]: Detected autofs mount point /boot during canonicalization of boot.
May 15 14:59:00.362192 systemd-tmpfiles[1328]: Skipping /boot
May 15 14:59:00.379171 zram_generator::config[1352]: No configuration found.
May 15 14:59:00.620789 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 15 14:59:00.780429 systemd[1]: Reloading finished in 552 ms.
May 15 14:59:00.795897 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 15 14:59:00.812160 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 15 14:59:00.829301 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 15 14:59:00.835566 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 15 14:59:00.842191 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 15 14:59:00.849951 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 15 14:59:00.855280 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 15 14:59:00.862697 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 15 14:59:00.870432 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 15 14:59:00.870658 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 15 14:59:00.874684 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 15 14:59:00.885990 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 15 14:59:00.892844 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 15 14:59:00.896193 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 15 14:59:00.896450 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 15 14:59:00.896612 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 15 14:59:00.903477 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 15 14:59:00.903799 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 15 14:59:00.904010 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 15 14:59:00.904188 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 15 14:59:00.910942 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 15 14:59:00.913587 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 15 14:59:00.914934 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 15 14:59:00.915366 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 15 14:59:00.929560 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 15 14:59:00.930627 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 15 14:59:00.948705 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 15 14:59:00.960595 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 15 14:59:00.969208 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 15 14:59:00.969414 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 15 14:59:00.969624 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 15 14:59:00.972257 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 15 14:59:00.975057 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 15 14:59:00.975453 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 15 14:59:00.977787 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 15 14:59:00.978037 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 15 14:59:01.002320 systemd[1]: Finished ensure-sysext.service.
May 15 14:59:01.004975 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 15 14:59:01.007091 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 15 14:59:01.014444 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 15 14:59:01.016807 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 15 14:59:01.017047 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 15 14:59:01.025056 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 15 14:59:01.025237 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 15 14:59:01.032393 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
May 15 14:59:01.034294 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 15 14:59:01.034952 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 15 14:59:01.044035 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 15 14:59:01.062605 systemd-udevd[1406]: Using default interface naming scheme 'v255'.
May 15 14:59:01.067658 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 15 14:59:01.091432 augenrules[1447]: No rules
May 15 14:59:01.092443 systemd[1]: audit-rules.service: Deactivated successfully.
May 15 14:59:01.095614 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 15 14:59:01.101367 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 15 14:59:01.124388 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 15 14:59:01.131992 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 15 14:59:01.253747 systemd-resolved[1404]: Positive Trust Anchors:
May 15 14:59:01.253768 systemd-resolved[1404]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 15 14:59:01.253806 systemd-resolved[1404]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 15 14:59:01.256285 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
May 15 14:59:01.258506 systemd[1]: Reached target time-set.target - System Time Set.
May 15 14:59:01.262205 systemd-resolved[1404]: Using system hostname 'ci-4334.0.0-a-cad88baf47'.
May 15 14:59:01.264767 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 15 14:59:01.266199 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 15 14:59:01.267481 systemd[1]: Reached target sysinit.target - System Initialization.
May 15 14:59:01.268842 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
May 15 14:59:01.269921 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
May 15 14:59:01.271501 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
May 15 14:59:01.272473 systemd[1]: Started logrotate.timer - Daily rotation of log files.
May 15 14:59:01.273946 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
May 15 14:59:01.274910 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
May 15 14:59:01.276090 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 15 14:59:01.276404 systemd[1]: Reached target paths.target - Path Units.
May 15 14:59:01.277451 systemd[1]: Reached target timers.target - Timer Units.
May 15 14:59:01.281167 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
May 15 14:59:01.287163 systemd[1]: Starting docker.socket - Docker Socket for the API...
May 15 14:59:01.298183 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
May 15 14:59:01.300059 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
May 15 14:59:01.301712 systemd[1]: Reached target ssh-access.target - SSH Access Available.
May 15 14:59:01.311855 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
May 15 14:59:01.314370 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
May 15 14:59:01.317603 systemd[1]: Listening on docker.socket - Docker Socket for the API.
May 15 14:59:01.323639 systemd[1]: Reached target sockets.target - Socket Units.
May 15 14:59:01.324301 systemd[1]: Reached target basic.target - Basic System.
May 15 14:59:01.325254 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
May 15 14:59:01.325563 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
May 15 14:59:01.329483 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
May 15 14:59:01.335562 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
May 15 14:59:01.340801 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 15 14:59:01.345420 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
May 15 14:59:01.353347 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
May 15 14:59:01.355337 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
May 15 14:59:01.360420 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
May 15 14:59:01.370623 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
May 15 14:59:01.376715 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
May 15 14:59:01.392895 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
May 15 14:59:01.402798 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
May 15 14:59:01.418644 systemd[1]: Starting systemd-logind.service - User Login Management...
May 15 14:59:01.420660 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 15 14:59:01.422007 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
May 15 14:59:01.430822 jq[1488]: false
May 15 14:59:01.431093 systemd[1]: Starting update-engine.service - Update Engine...
May 15 14:59:01.449157 google_oslogin_nss_cache[1490]: oslogin_cache_refresh[1490]: Refreshing passwd entry cache
May 15 14:59:01.440791 oslogin_cache_refresh[1490]: Refreshing passwd entry cache
May 15 14:59:01.451642 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
May 15 14:59:01.455727 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 15 14:59:01.457255 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 15 14:59:01.457603 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
May 15 14:59:01.472905 systemd[1]: Condition check resulted in dev-disk-by\x2dlabel-config\x2d2.device - /dev/disk/by-label/config-2 being skipped.
May 15 14:59:01.478579 google_oslogin_nss_cache[1490]: oslogin_cache_refresh[1490]: Failure getting users, quitting
May 15 14:59:01.478579 google_oslogin_nss_cache[1490]: oslogin_cache_refresh[1490]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
May 15 14:59:01.478579 google_oslogin_nss_cache[1490]: oslogin_cache_refresh[1490]: Refreshing group entry cache
May 15 14:59:01.476416 oslogin_cache_refresh[1490]: Failure getting users, quitting
May 15 14:59:01.476447 oslogin_cache_refresh[1490]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
May 15 14:59:01.476529 oslogin_cache_refresh[1490]: Refreshing group entry cache
May 15 14:59:01.491677 systemd[1]: Mounting media-configdrive.mount - /media/configdrive...
May 15 14:59:01.493603 oslogin_cache_refresh[1490]: Failure getting groups, quitting
May 15 14:59:01.498044 google_oslogin_nss_cache[1490]: oslogin_cache_refresh[1490]: Failure getting groups, quitting
May 15 14:59:01.498044 google_oslogin_nss_cache[1490]: oslogin_cache_refresh[1490]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
May 15 14:59:01.492605 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
May 15 14:59:01.493621 oslogin_cache_refresh[1490]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
May 15 14:59:01.497050 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
May 15 14:59:01.498638 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
May 15 14:59:01.542149 kernel: ISO 9660 Extensions: RRIP_1991A
May 15 14:59:01.547980 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 15 14:59:01.548433 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
May 15 14:59:01.577690 jq[1500]: true
May 15 14:59:01.587146 update_engine[1498]: I20250515 14:59:01.579803 1498 main.cc:92] Flatcar Update Engine starting
May 15 14:59:01.604898 systemd[1]: Mounted media-configdrive.mount - /media/configdrive.
May 15 14:59:01.625507 extend-filesystems[1489]: Found loop4
May 15 14:59:01.629390 extend-filesystems[1489]: Found loop5
May 15 14:59:01.629390 extend-filesystems[1489]: Found loop6
May 15 14:59:01.629390 extend-filesystems[1489]: Found loop7
May 15 14:59:01.629390 extend-filesystems[1489]: Found vda
May 15 14:59:01.629390 extend-filesystems[1489]: Found vda1
May 15 14:59:01.629390 extend-filesystems[1489]: Found vda2
May 15 14:59:01.629390 extend-filesystems[1489]: Found vda3
May 15 14:59:01.629390 extend-filesystems[1489]: Found usr
May 15 14:59:01.629390 extend-filesystems[1489]: Found vda4
May 15 14:59:01.629390 extend-filesystems[1489]: Found vda6
May 15 14:59:01.629390 extend-filesystems[1489]: Found vda7
May 15 14:59:01.629390 extend-filesystems[1489]: Found vda9
May 15 14:59:01.629390 extend-filesystems[1489]: Found vdb
May 15 14:59:01.658730 dbus-daemon[1486]: [system] SELinux support is enabled
May 15 14:59:01.680793 coreos-metadata[1485]: May 15 14:59:01.638 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
May 15 14:59:01.680793 coreos-metadata[1485]: May 15 14:59:01.641 INFO Failed to fetch: error sending request for url (http://169.254.169.254/metadata/v1.json)
May 15 14:59:01.681050 tar[1504]: linux-amd64/LICENSE
May 15 14:59:01.681050 tar[1504]: linux-amd64/helm
May 15 14:59:01.638214 systemd[1]: extend-filesystems.service: Deactivated successfully.
May 15 14:59:01.681391 update_engine[1498]: I20250515 14:59:01.678872 1498 update_check_scheduler.cc:74] Next update check in 2m41s
May 15 14:59:01.681420 jq[1519]: true
May 15 14:59:01.638764 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
May 15 14:59:01.659070 systemd[1]: Started dbus.service - D-Bus System Message Bus.
May 15 14:59:01.664649 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 15 14:59:01.664697 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
May 15 14:59:01.665609 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 15 14:59:01.665781 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean).
May 15 14:59:01.665804 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
May 15 14:59:01.676030 systemd[1]: motdgen.service: Deactivated successfully.
May 15 14:59:01.676463 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
May 15 14:59:01.677878 systemd[1]: Started update-engine.service - Update Engine.
May 15 14:59:01.693646 systemd[1]: Started locksmithd.service - Cluster reboot manager.
May 15 14:59:01.799396 bash[1543]: Updated "/home/core/.ssh/authorized_keys"
May 15 14:59:01.800664 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
May 15 14:59:01.806346 systemd[1]: Starting sshkeys.service...
May 15 14:59:01.843212 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
May 15 14:59:01.854754 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
May 15 14:59:01.878136 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 15 14:59:01.901154 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
May 15 14:59:01.904199 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 15 14:59:01.954254 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 15 14:59:02.120076 coreos-metadata[1549]: May 15 14:59:02.119 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
May 15 14:59:02.124361 coreos-metadata[1549]: May 15 14:59:02.124 INFO Failed to fetch: error sending request for url (http://169.254.169.254/metadata/v1.json)
May 15 14:59:02.143170 kernel: mousedev: PS/2 mouse device common for all mice
May 15 14:59:02.225534 sshd_keygen[1522]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
May 15 14:59:02.236040 locksmithd[1526]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
May 15 14:59:02.242589 systemd-logind[1497]: New seat seat0.
May 15 14:59:02.246573 systemd[1]: Started systemd-logind.service - User Login Management.
May 15 14:59:02.264393 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
May 15 14:59:02.263938 systemd-networkd[1458]: lo: Link UP
May 15 14:59:02.263952 systemd-networkd[1458]: lo: Gained carrier
May 15 14:59:02.267273 systemd-networkd[1458]: Enumeration completed
May 15 14:59:02.267457 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 15 14:59:02.268674 systemd-networkd[1458]: eth0: Configuring with /run/systemd/network/10-ba:a5:fd:62:04:d4.network.
May 15 14:59:02.269719 systemd[1]: Reached target network.target - Network.
May 15 14:59:02.273473 systemd[1]: Starting containerd.service - containerd container runtime...
May 15 14:59:02.274788 systemd-networkd[1458]: eth1: Configuring with /run/systemd/network/10-02:7b:32:f3:6d:c5.network.
May 15 14:59:02.278164 systemd-networkd[1458]: eth0: Link UP
May 15 14:59:02.278477 systemd-networkd[1458]: eth0: Gained carrier
May 15 14:59:02.280201 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
May 15 14:59:02.290004 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 15 14:59:02.290880 systemd-networkd[1458]: eth1: Link UP
May 15 14:59:02.292448 systemd-networkd[1458]: eth1: Gained carrier
May 15 14:59:02.296988 systemd-timesyncd[1436]: Network configuration changed, trying to establish connection.
May 15 14:59:02.300243 systemd-timesyncd[1436]: Network configuration changed, trying to establish connection.
May 15 14:59:02.311840 kernel: ACPI: button: Power Button [PWRF]
May 15 14:59:02.332843 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
May 15 14:59:02.342147 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
May 15 14:59:02.344608 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
May 15 14:59:02.364306 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
May 15 14:59:02.369737 systemd[1]: Starting issuegen.service - Generate /run/issue...
May 15 14:59:02.394714 (ntainerd)[1582]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
May 15 14:59:02.420584 systemd[1]: issuegen.service: Deactivated successfully.
May 15 14:59:02.421856 systemd[1]: Finished issuegen.service - Generate /run/issue.
May 15 14:59:02.427228 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
May 15 14:59:02.496475 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
May 15 14:59:02.507761 systemd[1]: Started getty@tty1.service - Getty on tty1.
May 15 14:59:02.514762 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
May 15 14:59:02.516047 systemd[1]: Reached target getty.target - Login Prompts.
May 15 14:59:02.641194 coreos-metadata[1485]: May 15 14:59:02.641 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #2
May 15 14:59:02.663580 coreos-metadata[1485]: May 15 14:59:02.663 INFO Fetch successful
May 15 14:59:02.814608 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
May 15 14:59:02.816213 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
May 15 14:59:02.907894 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
May 15 14:59:02.974675 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
May 15 14:59:03.016600 systemd-logind[1497]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
May 15 14:59:03.018921 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 15 14:59:03.049673 kernel: Console: switching to colour dummy device 80x25
May 15 14:59:03.053262 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
May 15 14:59:03.053428 kernel: [drm] features: -context_init
May 15 14:59:03.059140 kernel: [drm] number of scanouts: 1
May 15 14:59:03.059268 kernel: [drm] number of cap sets: 0
May 15 14:59:03.069141 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
May 15 14:59:03.124691 coreos-metadata[1549]: May 15 14:59:03.124 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #2
May 15 14:59:03.144901 coreos-metadata[1549]: May 15 14:59:03.141 INFO Fetch successful
May 15 14:59:03.158280 unknown[1549]: wrote ssh authorized keys file for user: core
May 15 14:59:03.194643 containerd[1582]: time="2025-05-15T14:59:03Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
May 15 14:59:03.199180 containerd[1582]: time="2025-05-15T14:59:03.199089445Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4
May 15 14:59:03.207017 update-ssh-keys[1623]: Updated "/home/core/.ssh/authorized_keys"
May 15 14:59:03.208529 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
May 15 14:59:03.215229 systemd[1]: Finished sshkeys.service.
May 15 14:59:03.230626 systemd-logind[1497]: Watching system buttons on /dev/input/event2 (Power Button)
May 15 14:59:03.238695 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 15 14:59:03.239849 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 15 14:59:03.254309 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
May 15 14:59:03.275727 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 15 14:59:03.280345 containerd[1582]: time="2025-05-15T14:59:03.276807530Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="17µs"
May 15 14:59:03.280345 containerd[1582]: time="2025-05-15T14:59:03.276880066Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
May 15 14:59:03.280345 containerd[1582]: time="2025-05-15T14:59:03.276914885Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
May 15 14:59:03.282139 containerd[1582]: time="2025-05-15T14:59:03.281762488Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
May 15 14:59:03.282139 containerd[1582]: time="2025-05-15T14:59:03.281815826Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
May 15 14:59:03.282139 containerd[1582]: time="2025-05-15T14:59:03.281848763Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
May 15 14:59:03.282139 containerd[1582]: time="2025-05-15T14:59:03.281934682Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
May 15 14:59:03.282139 containerd[1582]: time="2025-05-15T14:59:03.281958687Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
May 15 14:59:03.285536 containerd[1582]: time="2025-05-15T14:59:03.285320411Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
May 15 14:59:03.285536 containerd[1582]: time="2025-05-15T14:59:03.285364105Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
May 15 14:59:03.285536 containerd[1582]: time="2025-05-15T14:59:03.285420844Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
May 15 14:59:03.285536 containerd[1582]: time="2025-05-15T14:59:03.285440440Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
May 15 14:59:03.286127 containerd[1582]: time="2025-05-15T14:59:03.285697282Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
May 15 14:59:03.286127 containerd[1582]: time="2025-05-15T14:59:03.286028511Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
May 15 14:59:03.286127 containerd[1582]: time="2025-05-15T14:59:03.286080781Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
May 15 14:59:03.286287 containerd[1582]: time="2025-05-15T14:59:03.286144820Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
May 15 14:59:03.289154 containerd[1582]: time="2025-05-15T14:59:03.287603471Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
May 15 14:59:03.289154 containerd[1582]: time="2025-05-15T14:59:03.288016528Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
May 15 14:59:03.289154 containerd[1582]: time="2025-05-15T14:59:03.288198651Z" level=info msg="metadata content store policy set" policy=shared
May 15 14:59:03.301831 containerd[1582]: time="2025-05-15T14:59:03.301734342Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
May 15 14:59:03.302014 containerd[1582]: time="2025-05-15T14:59:03.301876986Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
May 15 14:59:03.302014 containerd[1582]: time="2025-05-15T14:59:03.301909748Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
May 15 14:59:03.302014 containerd[1582]: time="2025-05-15T14:59:03.301931086Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
May 15 14:59:03.302014 containerd[1582]: time="2025-05-15T14:59:03.301954493Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
May 15 14:59:03.302014 containerd[1582]: time="2025-05-15T14:59:03.301972721Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
May 15 14:59:03.302014 containerd[1582]: time="2025-05-15T14:59:03.301992289Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
May 15 14:59:03.302014 containerd[1582]: time="2025-05-15T14:59:03.302011073Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
May 15 14:59:03.302306 containerd[1582]: time="2025-05-15T14:59:03.302030946Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
May 15 14:59:03.302306 containerd[1582]: time="2025-05-15T14:59:03.302122267Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
May 15 14:59:03.302306 containerd[1582]: time="2025-05-15T14:59:03.302173508Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
May 15 14:59:03.302306 containerd[1582]: time="2025-05-15T14:59:03.302274738Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
May 15 14:59:03.303620 containerd[1582]: time="2025-05-15T14:59:03.303534881Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
May 15 14:59:03.303620 containerd[1582]: time="2025-05-15T14:59:03.303610245Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
May 15 14:59:03.303799 containerd[1582]: time="2025-05-15T14:59:03.303652956Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
May 15 14:59:03.303799 containerd[1582]: time="2025-05-15T14:59:03.303672610Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
May 15 14:59:03.303799 containerd[1582]: time="2025-05-15T14:59:03.303689668Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
May 15 14:59:03.336853 containerd[1582]: time="2025-05-15T14:59:03.335807090Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
May 15 14:59:03.336853 containerd[1582]: time="2025-05-15T14:59:03.335912640Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
May 15 14:59:03.336853 containerd[1582]: time="2025-05-15T14:59:03.335934349Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
May 15 14:59:03.336853 containerd[1582]: time="2025-05-15T14:59:03.335975434Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
May 15 14:59:03.336853 containerd[1582]: time="2025-05-15T14:59:03.335995447Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
May 15 14:59:03.336853 containerd[1582]: time="2025-05-15T14:59:03.336017763Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
May 15 14:59:03.336853 containerd[1582]:
time="2025-05-15T14:59:03.336522955Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" May 15 14:59:03.336853 containerd[1582]: time="2025-05-15T14:59:03.336560383Z" level=info msg="Start snapshots syncer" May 15 14:59:03.336853 containerd[1582]: time="2025-05-15T14:59:03.336613372Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 May 15 14:59:03.336299 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 15 14:59:03.338719 containerd[1582]: time="2025-05-15T14:59:03.338607714Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP
\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" May 15 14:59:03.342632 containerd[1582]: time="2025-05-15T14:59:03.339528823Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 May 15 14:59:03.344329 containerd[1582]: time="2025-05-15T14:59:03.344178322Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 May 15 14:59:03.345136 containerd[1582]: time="2025-05-15T14:59:03.345029389Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 May 15 14:59:03.349368 containerd[1582]: time="2025-05-15T14:59:03.347811721Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 May 15 14:59:03.349368 containerd[1582]: time="2025-05-15T14:59:03.347973244Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 May 15 14:59:03.349368 containerd[1582]: time="2025-05-15T14:59:03.348653766Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 May 15 14:59:03.350315 containerd[1582]: time="2025-05-15T14:59:03.350007876Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 May 15 14:59:03.350315 containerd[1582]: time="2025-05-15T14:59:03.350149058Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 May 15 14:59:03.350315 containerd[1582]: time="2025-05-15T14:59:03.350248538Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local 
type=io.containerd.transfer.v1 May 15 14:59:03.350915 containerd[1582]: time="2025-05-15T14:59:03.350734667Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 May 15 14:59:03.350915 containerd[1582]: time="2025-05-15T14:59:03.350822963Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 May 15 14:59:03.350915 containerd[1582]: time="2025-05-15T14:59:03.350868219Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 May 15 14:59:03.351330 containerd[1582]: time="2025-05-15T14:59:03.351301312Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 15 14:59:03.351495 containerd[1582]: time="2025-05-15T14:59:03.351462300Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 15 14:59:03.351626 containerd[1582]: time="2025-05-15T14:59:03.351596584Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 15 14:59:03.351759 containerd[1582]: time="2025-05-15T14:59:03.351730875Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 15 14:59:03.351902 containerd[1582]: time="2025-05-15T14:59:03.351871906Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 May 15 14:59:03.352035 containerd[1582]: time="2025-05-15T14:59:03.352007050Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 May 15 14:59:03.353135 containerd[1582]: time="2025-05-15T14:59:03.352721724Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 May 15 14:59:03.353135 containerd[1582]: 
time="2025-05-15T14:59:03.352773945Z" level=info msg="runtime interface created" May 15 14:59:03.353135 containerd[1582]: time="2025-05-15T14:59:03.352784670Z" level=info msg="created NRI interface" May 15 14:59:03.353135 containerd[1582]: time="2025-05-15T14:59:03.352799927Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 May 15 14:59:03.353135 containerd[1582]: time="2025-05-15T14:59:03.352828469Z" level=info msg="Connect containerd service" May 15 14:59:03.353135 containerd[1582]: time="2025-05-15T14:59:03.352894590Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 15 14:59:03.355330 systemd-networkd[1458]: eth0: Gained IPv6LL May 15 14:59:03.357411 systemd-timesyncd[1436]: Network configuration changed, trying to establish connection. May 15 14:59:03.360834 containerd[1582]: time="2025-05-15T14:59:03.358475014Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 15 14:59:03.366692 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 15 14:59:03.367836 systemd[1]: Reached target network-online.target - Network is Online. May 15 14:59:03.372067 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 14:59:03.375668 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 15 14:59:03.535272 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
May 15 14:59:03.596768 kernel: EDAC MC: Ver: 3.0.0
May 15 14:59:03.743323 containerd[1582]: time="2025-05-15T14:59:03.742896564Z" level=info msg="Start subscribing containerd event"
May 15 14:59:03.743323 containerd[1582]: time="2025-05-15T14:59:03.742969815Z" level=info msg="Start recovering state"
May 15 14:59:03.743323 containerd[1582]: time="2025-05-15T14:59:03.743272572Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
May 15 14:59:03.743507 containerd[1582]: time="2025-05-15T14:59:03.743373831Z" level=info msg=serving... address=/run/containerd/containerd.sock
May 15 14:59:03.745817 containerd[1582]: time="2025-05-15T14:59:03.743651390Z" level=info msg="Start event monitor"
May 15 14:59:03.745817 containerd[1582]: time="2025-05-15T14:59:03.743678201Z" level=info msg="Start cni network conf syncer for default"
May 15 14:59:03.745817 containerd[1582]: time="2025-05-15T14:59:03.743686698Z" level=info msg="Start streaming server"
May 15 14:59:03.745817 containerd[1582]: time="2025-05-15T14:59:03.743696435Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
May 15 14:59:03.745817 containerd[1582]: time="2025-05-15T14:59:03.743708151Z" level=info msg="runtime interface starting up..."
May 15 14:59:03.745817 containerd[1582]: time="2025-05-15T14:59:03.743714690Z" level=info msg="starting plugins..."
May 15 14:59:03.745817 containerd[1582]: time="2025-05-15T14:59:03.743728409Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
May 15 14:59:03.745817 containerd[1582]: time="2025-05-15T14:59:03.744299827Z" level=info msg="containerd successfully booted in 0.550337s"
May 15 14:59:03.744546 systemd[1]: Started containerd.service - containerd container runtime.
May 15 14:59:03.875924 tar[1504]: linux-amd64/README.md
May 15 14:59:03.904814 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
May 15 14:59:04.250524 systemd-networkd[1458]: eth1: Gained IPv6LL
May 15 14:59:04.251146 systemd-timesyncd[1436]: Network configuration changed, trying to establish connection.
May 15 14:59:04.766000 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 15 14:59:04.767739 systemd[1]: Reached target multi-user.target - Multi-User System.
May 15 14:59:04.768611 systemd[1]: Startup finished in 4.026s (kernel) + 6.166s (initrd) + 6.971s (userspace) = 17.164s.
May 15 14:59:04.774877 (kubelet)[1668]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 15 14:59:05.500079 kubelet[1668]: E0515 14:59:05.500004 1668 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 15 14:59:05.503867 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 15 14:59:05.504044 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 15 14:59:05.504564 systemd[1]: kubelet.service: Consumed 1.550s CPU time, 253.5M memory peak.
May 15 14:59:05.614416 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
May 15 14:59:05.618038 systemd[1]: Started sshd@0-137.184.120.255:22-139.178.68.195:55638.service - OpenSSH per-connection server daemon (139.178.68.195:55638).
May 15 14:59:05.731408 sshd[1680]: Accepted publickey for core from 139.178.68.195 port 55638 ssh2: RSA SHA256:CR2QFGI8Wi38j7m0fVendNlhmaPvJh+gYMXcH5yQYrY
May 15 14:59:05.734033 sshd-session[1680]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 14:59:05.743833 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
May 15 14:59:05.745514 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
May 15 14:59:05.757657 systemd-logind[1497]: New session 1 of user core.
May 15 14:59:05.776086 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
May 15 14:59:05.781142 systemd[1]: Starting user@500.service - User Manager for UID 500...
May 15 14:59:05.801921 (systemd)[1684]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
May 15 14:59:05.805627 systemd-logind[1497]: New session c1 of user core.
May 15 14:59:05.980306 systemd[1684]: Queued start job for default target default.target.
May 15 14:59:05.991769 systemd[1684]: Created slice app.slice - User Application Slice.
May 15 14:59:05.991818 systemd[1684]: Reached target paths.target - Paths.
May 15 14:59:05.991876 systemd[1684]: Reached target timers.target - Timers.
May 15 14:59:05.993665 systemd[1684]: Starting dbus.socket - D-Bus User Message Bus Socket...
May 15 14:59:06.008507 systemd[1684]: Listening on dbus.socket - D-Bus User Message Bus Socket.
May 15 14:59:06.008874 systemd[1684]: Reached target sockets.target - Sockets.
May 15 14:59:06.009167 systemd[1684]: Reached target basic.target - Basic System.
May 15 14:59:06.009347 systemd[1684]: Reached target default.target - Main User Target.
May 15 14:59:06.009393 systemd[1]: Started user@500.service - User Manager for UID 500.
May 15 14:59:06.009570 systemd[1684]: Startup finished in 193ms.
May 15 14:59:06.018877 systemd[1]: Started session-1.scope - Session 1 of User core.
May 15 14:59:06.091466 systemd[1]: Started sshd@1-137.184.120.255:22-139.178.68.195:55642.service - OpenSSH per-connection server daemon (139.178.68.195:55642).
May 15 14:59:06.164508 sshd[1695]: Accepted publickey for core from 139.178.68.195 port 55642 ssh2: RSA SHA256:CR2QFGI8Wi38j7m0fVendNlhmaPvJh+gYMXcH5yQYrY
May 15 14:59:06.167172 sshd-session[1695]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 14:59:06.177536 systemd-logind[1497]: New session 2 of user core.
May 15 14:59:06.192529 systemd[1]: Started session-2.scope - Session 2 of User core.
May 15 14:59:06.262820 sshd[1697]: Connection closed by 139.178.68.195 port 55642
May 15 14:59:06.263647 sshd-session[1695]: pam_unix(sshd:session): session closed for user core
May 15 14:59:06.277199 systemd[1]: sshd@1-137.184.120.255:22-139.178.68.195:55642.service: Deactivated successfully.
May 15 14:59:06.280422 systemd[1]: session-2.scope: Deactivated successfully.
May 15 14:59:06.282055 systemd-logind[1497]: Session 2 logged out. Waiting for processes to exit.
May 15 14:59:06.286963 systemd[1]: Started sshd@2-137.184.120.255:22-139.178.68.195:55658.service - OpenSSH per-connection server daemon (139.178.68.195:55658).
May 15 14:59:06.288621 systemd-logind[1497]: Removed session 2.
May 15 14:59:06.354014 sshd[1703]: Accepted publickey for core from 139.178.68.195 port 55658 ssh2: RSA SHA256:CR2QFGI8Wi38j7m0fVendNlhmaPvJh+gYMXcH5yQYrY
May 15 14:59:06.356319 sshd-session[1703]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 14:59:06.365649 systemd-logind[1497]: New session 3 of user core.
May 15 14:59:06.381961 systemd[1]: Started session-3.scope - Session 3 of User core.
May 15 14:59:06.442907 sshd[1705]: Connection closed by 139.178.68.195 port 55658
May 15 14:59:06.443781 sshd-session[1703]: pam_unix(sshd:session): session closed for user core
May 15 14:59:06.459457 systemd[1]: sshd@2-137.184.120.255:22-139.178.68.195:55658.service: Deactivated successfully.
May 15 14:59:06.462128 systemd[1]: session-3.scope: Deactivated successfully.
May 15 14:59:06.463476 systemd-logind[1497]: Session 3 logged out. Waiting for processes to exit.
May 15 14:59:06.469523 systemd[1]: Started sshd@3-137.184.120.255:22-139.178.68.195:55668.service - OpenSSH per-connection server daemon (139.178.68.195:55668).
May 15 14:59:06.471558 systemd-logind[1497]: Removed session 3.
May 15 14:59:06.539310 sshd[1711]: Accepted publickey for core from 139.178.68.195 port 55668 ssh2: RSA SHA256:CR2QFGI8Wi38j7m0fVendNlhmaPvJh+gYMXcH5yQYrY
May 15 14:59:06.541761 sshd-session[1711]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 14:59:06.548498 systemd-logind[1497]: New session 4 of user core.
May 15 14:59:06.567526 systemd[1]: Started session-4.scope - Session 4 of User core.
May 15 14:59:06.633930 sshd[1713]: Connection closed by 139.178.68.195 port 55668
May 15 14:59:06.634804 sshd-session[1711]: pam_unix(sshd:session): session closed for user core
May 15 14:59:06.653001 systemd[1]: sshd@3-137.184.120.255:22-139.178.68.195:55668.service: Deactivated successfully.
May 15 14:59:06.656062 systemd[1]: session-4.scope: Deactivated successfully.
May 15 14:59:06.658643 systemd-logind[1497]: Session 4 logged out. Waiting for processes to exit.
May 15 14:59:06.661192 systemd-logind[1497]: Removed session 4.
May 15 14:59:06.663389 systemd[1]: Started sshd@4-137.184.120.255:22-139.178.68.195:55674.service - OpenSSH per-connection server daemon (139.178.68.195:55674).
May 15 14:59:06.723422 sshd[1719]: Accepted publickey for core from 139.178.68.195 port 55674 ssh2: RSA SHA256:CR2QFGI8Wi38j7m0fVendNlhmaPvJh+gYMXcH5yQYrY
May 15 14:59:06.725454 sshd-session[1719]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 14:59:06.733038 systemd-logind[1497]: New session 5 of user core.
May 15 14:59:06.739447 systemd[1]: Started session-5.scope - Session 5 of User core.
May 15 14:59:06.816989 sudo[1722]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
May 15 14:59:06.817452 sudo[1722]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 15 14:59:06.835534 sudo[1722]: pam_unix(sudo:session): session closed for user root
May 15 14:59:06.838864 sshd[1721]: Connection closed by 139.178.68.195 port 55674
May 15 14:59:06.839459 sshd-session[1719]: pam_unix(sshd:session): session closed for user core
May 15 14:59:06.857223 systemd[1]: sshd@4-137.184.120.255:22-139.178.68.195:55674.service: Deactivated successfully.
May 15 14:59:06.860945 systemd[1]: session-5.scope: Deactivated successfully.
May 15 14:59:06.864002 systemd-logind[1497]: Session 5 logged out. Waiting for processes to exit.
May 15 14:59:06.866861 systemd-logind[1497]: Removed session 5.
May 15 14:59:06.869754 systemd[1]: Started sshd@5-137.184.120.255:22-139.178.68.195:55680.service - OpenSSH per-connection server daemon (139.178.68.195:55680).
May 15 14:59:06.937578 sshd[1728]: Accepted publickey for core from 139.178.68.195 port 55680 ssh2: RSA SHA256:CR2QFGI8Wi38j7m0fVendNlhmaPvJh+gYMXcH5yQYrY
May 15 14:59:06.939407 sshd-session[1728]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 14:59:06.946385 systemd-logind[1497]: New session 6 of user core.
May 15 14:59:06.955552 systemd[1]: Started session-6.scope - Session 6 of User core.
May 15 14:59:07.020427 sudo[1732]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
May 15 14:59:07.021411 sudo[1732]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 15 14:59:07.029229 sudo[1732]: pam_unix(sudo:session): session closed for user root
May 15 14:59:07.037246 sudo[1731]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
May 15 14:59:07.037615 sudo[1731]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 15 14:59:07.051595 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 15 14:59:07.111922 augenrules[1754]: No rules
May 15 14:59:07.114039 systemd[1]: audit-rules.service: Deactivated successfully.
May 15 14:59:07.114496 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 15 14:59:07.116091 sudo[1731]: pam_unix(sudo:session): session closed for user root
May 15 14:59:07.119275 sshd[1730]: Connection closed by 139.178.68.195 port 55680
May 15 14:59:07.120032 sshd-session[1728]: pam_unix(sshd:session): session closed for user core
May 15 14:59:07.131643 systemd[1]: sshd@5-137.184.120.255:22-139.178.68.195:55680.service: Deactivated successfully.
May 15 14:59:07.134395 systemd[1]: session-6.scope: Deactivated successfully.
May 15 14:59:07.135834 systemd-logind[1497]: Session 6 logged out. Waiting for processes to exit.
May 15 14:59:07.140486 systemd[1]: Started sshd@6-137.184.120.255:22-139.178.68.195:55682.service - OpenSSH per-connection server daemon (139.178.68.195:55682).
May 15 14:59:07.142042 systemd-logind[1497]: Removed session 6.
May 15 14:59:07.207679 sshd[1763]: Accepted publickey for core from 139.178.68.195 port 55682 ssh2: RSA SHA256:CR2QFGI8Wi38j7m0fVendNlhmaPvJh+gYMXcH5yQYrY
May 15 14:59:07.209533 sshd-session[1763]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 14:59:07.216886 systemd-logind[1497]: New session 7 of user core.
May 15 14:59:07.226595 systemd[1]: Started session-7.scope - Session 7 of User core.
May 15 14:59:07.288312 sudo[1766]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
May 15 14:59:07.289251 sudo[1766]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 15 14:59:07.862997 systemd[1]: Starting docker.service - Docker Application Container Engine...
May 15 14:59:07.874924 (dockerd)[1785]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
May 15 14:59:08.307131 dockerd[1785]: time="2025-05-15T14:59:08.307013904Z" level=info msg="Starting up"
May 15 14:59:08.309838 dockerd[1785]: time="2025-05-15T14:59:08.309788377Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
May 15 14:59:08.361710 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2442519750-merged.mount: Deactivated successfully.
May 15 14:59:08.506037 systemd[1]: var-lib-docker-metacopy\x2dcheck3401643233-merged.mount: Deactivated successfully.
May 15 14:59:08.538360 dockerd[1785]: time="2025-05-15T14:59:08.537873121Z" level=info msg="Loading containers: start."
May 15 14:59:08.556156 kernel: Initializing XFRM netlink socket
May 15 14:59:08.916698 systemd-timesyncd[1436]: Network configuration changed, trying to establish connection.
May 15 14:59:08.919037 systemd-timesyncd[1436]: Network configuration changed, trying to establish connection.
May 15 14:59:08.932972 systemd-timesyncd[1436]: Network configuration changed, trying to establish connection.
May 15 14:59:08.989218 systemd-networkd[1458]: docker0: Link UP
May 15 14:59:08.990531 systemd-timesyncd[1436]: Network configuration changed, trying to establish connection.
May 15 14:59:08.996733 dockerd[1785]: time="2025-05-15T14:59:08.996656011Z" level=info msg="Loading containers: done."
May 15 14:59:09.027305 dockerd[1785]: time="2025-05-15T14:59:09.026843876Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
May 15 14:59:09.027305 dockerd[1785]: time="2025-05-15T14:59:09.026958036Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1
May 15 14:59:09.027305 dockerd[1785]: time="2025-05-15T14:59:09.027185031Z" level=info msg="Initializing buildkit"
May 15 14:59:09.077592 dockerd[1785]: time="2025-05-15T14:59:09.077511363Z" level=info msg="Completed buildkit initialization"
May 15 14:59:09.091354 dockerd[1785]: time="2025-05-15T14:59:09.091286507Z" level=info msg="Daemon has completed initialization"
May 15 14:59:09.092374 dockerd[1785]: time="2025-05-15T14:59:09.092225422Z" level=info msg="API listen on /run/docker.sock"
May 15 14:59:09.091801 systemd[1]: Started docker.service - Docker Application Container Engine.
May 15 14:59:09.357755 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2395167160-merged.mount: Deactivated successfully.
May 15 14:59:10.148396 containerd[1582]: time="2025-05-15T14:59:10.148259159Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\""
May 15 14:59:10.788086 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount170694219.mount: Deactivated successfully.
May 15 14:59:12.935909 containerd[1582]: time="2025-05-15T14:59:12.935828593Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 14:59:12.937533 containerd[1582]: time="2025-05-15T14:59:12.937326039Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.4: active requests=0, bytes read=28682879"
May 15 14:59:12.938790 containerd[1582]: time="2025-05-15T14:59:12.938730434Z" level=info msg="ImageCreate event name:\"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 14:59:12.944392 containerd[1582]: time="2025-05-15T14:59:12.944230870Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 14:59:12.948013 containerd[1582]: time="2025-05-15T14:59:12.947449919Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.4\" with image id \"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\", size \"28679679\" in 2.799134599s"
May 15 14:59:12.948013 containerd[1582]: time="2025-05-15T14:59:12.947513139Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\" returns image reference \"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\""
May 15 14:59:12.948602 containerd[1582]: time="2025-05-15T14:59:12.948574235Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\""
May 15 14:59:15.115498 containerd[1582]: time="2025-05-15T14:59:15.114188944Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 14:59:15.115498 containerd[1582]: time="2025-05-15T14:59:15.115219148Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.4: active requests=0, bytes read=24779589"
May 15 14:59:15.116145 containerd[1582]: time="2025-05-15T14:59:15.116084593Z" level=info msg="ImageCreate event name:\"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 14:59:15.119020 containerd[1582]: time="2025-05-15T14:59:15.118952215Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 14:59:15.120975 containerd[1582]: time="2025-05-15T14:59:15.120878462Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.4\" with image id \"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\", size \"26267962\" in 2.172214179s"
May 15 14:59:15.120975 containerd[1582]: time="2025-05-15T14:59:15.120947513Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\" returns image reference \"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\""
May 15 14:59:15.121446 containerd[1582]: time="2025-05-15T14:59:15.121413449Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\""
May 15 14:59:15.627567 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
May 15 14:59:15.631832 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 15 14:59:15.897671 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 15 14:59:15.917077 (kubelet)[2063]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 15 14:59:16.057764 kubelet[2063]: E0515 14:59:16.057622 2063 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 15 14:59:16.071070 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 15 14:59:16.071368 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 15 14:59:16.073283 systemd[1]: kubelet.service: Consumed 283ms CPU time, 104.3M memory peak.
May 15 14:59:17.109744 containerd[1582]: time="2025-05-15T14:59:17.109643746Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 14:59:17.113140 containerd[1582]: time="2025-05-15T14:59:17.113024003Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.4: active requests=0, bytes read=19169938"
May 15 14:59:17.114900 containerd[1582]: time="2025-05-15T14:59:17.114807678Z" level=info msg="ImageCreate event name:\"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 14:59:17.122539 containerd[1582]: time="2025-05-15T14:59:17.120908379Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 14:59:17.123488 containerd[1582]: time="2025-05-15T14:59:17.123430952Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.4\" with image id \"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\", size \"20658329\" in 2.001969831s"
May 15 14:59:17.123660 containerd[1582]: time="2025-05-15T14:59:17.123643009Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\" returns image reference \"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\""
May 15 14:59:17.124930 containerd[1582]: time="2025-05-15T14:59:17.124878234Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\""
May 15 14:59:17.128364 systemd-resolved[1404]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3.
May 15 14:59:18.371673 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount387794988.mount: Deactivated successfully.
May 15 14:59:19.200281 containerd[1582]: time="2025-05-15T14:59:19.199172791Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 14:59:19.201068 containerd[1582]: time="2025-05-15T14:59:19.200995924Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.4: active requests=0, bytes read=30917856"
May 15 14:59:19.201765 containerd[1582]: time="2025-05-15T14:59:19.201707692Z" level=info msg="ImageCreate event name:\"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 14:59:19.205063 containerd[1582]: time="2025-05-15T14:59:19.204996589Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 14:59:19.206199 containerd[1582]: time="2025-05-15T14:59:19.205984690Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.4\" with image id \"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\", repo tag \"registry.k8s.io/kube-proxy:v1.32.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\", size \"30916875\" in 2.081034102s"
May 15 14:59:19.206394 containerd[1582]: time="2025-05-15T14:59:19.206369637Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\" returns image reference \"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\""
May 15 14:59:19.207767 containerd[1582]: time="2025-05-15T14:59:19.207702738Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
May 15 14:59:19.789966 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount488628859.mount: Deactivated successfully.
May 15 14:59:20.186494 systemd-resolved[1404]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2.
May 15 14:59:21.157160 containerd[1582]: time="2025-05-15T14:59:21.156034252Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 14:59:21.159162 containerd[1582]: time="2025-05-15T14:59:21.157269678Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241"
May 15 14:59:21.159162 containerd[1582]: time="2025-05-15T14:59:21.158281369Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 14:59:21.161592 containerd[1582]: time="2025-05-15T14:59:21.161519105Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 14:59:21.163768 containerd[1582]:
time="2025-05-15T14:59:21.163682729Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.955928961s" May 15 14:59:21.163768 containerd[1582]: time="2025-05-15T14:59:21.163764317Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" May 15 14:59:21.164526 containerd[1582]: time="2025-05-15T14:59:21.164495139Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 15 14:59:21.672242 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3048040571.mount: Deactivated successfully. May 15 14:59:21.680134 containerd[1582]: time="2025-05-15T14:59:21.679867684Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 15 14:59:21.681452 containerd[1582]: time="2025-05-15T14:59:21.681379811Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" May 15 14:59:21.682667 containerd[1582]: time="2025-05-15T14:59:21.682602724Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 15 14:59:21.684762 containerd[1582]: time="2025-05-15T14:59:21.684677261Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} 
labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 15 14:59:21.686118 containerd[1582]: time="2025-05-15T14:59:21.685609021Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 521.075473ms" May 15 14:59:21.686118 containerd[1582]: time="2025-05-15T14:59:21.685650212Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" May 15 14:59:21.686589 containerd[1582]: time="2025-05-15T14:59:21.686539030Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" May 15 14:59:22.224548 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3916384881.mount: Deactivated successfully. May 15 14:59:24.460240 containerd[1582]: time="2025-05-15T14:59:24.460062985Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 14:59:24.462385 containerd[1582]: time="2025-05-15T14:59:24.462326848Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551360" May 15 14:59:24.463166 containerd[1582]: time="2025-05-15T14:59:24.463043238Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 14:59:24.469134 containerd[1582]: time="2025-05-15T14:59:24.467625733Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 14:59:24.469892 containerd[1582]: time="2025-05-15T14:59:24.469834674Z" level=info 
msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.783251621s" May 15 14:59:24.470129 containerd[1582]: time="2025-05-15T14:59:24.470084554Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" May 15 14:59:26.127043 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 15 14:59:26.130502 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 14:59:26.342373 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 14:59:26.354723 (kubelet)[2222]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 15 14:59:26.422857 kubelet[2222]: E0515 14:59:26.422180 2222 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 15 14:59:26.426047 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 15 14:59:26.426839 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 15 14:59:26.427412 systemd[1]: kubelet.service: Consumed 202ms CPU time, 104.6M memory peak. May 15 14:59:28.827662 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 15 14:59:28.828257 systemd[1]: kubelet.service: Consumed 202ms CPU time, 104.6M memory peak. 
May 15 14:59:28.834573 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 14:59:28.886011 systemd[1]: Reload requested from client PID 2236 ('systemctl') (unit session-7.scope)... May 15 14:59:28.886043 systemd[1]: Reloading... May 15 14:59:29.068153 zram_generator::config[2282]: No configuration found. May 15 14:59:29.243237 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 14:59:29.403451 systemd[1]: Reloading finished in 516 ms. May 15 14:59:29.473825 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 15 14:59:29.474348 systemd[1]: kubelet.service: Failed with result 'signal'. May 15 14:59:29.474780 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 15 14:59:29.474841 systemd[1]: kubelet.service: Consumed 145ms CPU time, 91.7M memory peak. May 15 14:59:29.477246 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 14:59:29.687628 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 14:59:29.700302 (kubelet)[2333]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 15 14:59:29.766756 kubelet[2333]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 15 14:59:29.766756 kubelet[2333]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
May 15 14:59:29.766756 kubelet[2333]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 15 14:59:29.767562 kubelet[2333]: I0515 14:59:29.766797 2333 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 15 14:59:30.506459 kubelet[2333]: I0515 14:59:30.506379 2333 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" May 15 14:59:30.506459 kubelet[2333]: I0515 14:59:30.506431 2333 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 15 14:59:30.506909 kubelet[2333]: I0515 14:59:30.506888 2333 server.go:954] "Client rotation is on, will bootstrap in background" May 15 14:59:30.546983 kubelet[2333]: E0515 14:59:30.546887 2333 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://137.184.120.255:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 137.184.120.255:6443: connect: connection refused" logger="UnhandledError" May 15 14:59:30.548146 kubelet[2333]: I0515 14:59:30.548082 2333 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 15 14:59:30.564474 kubelet[2333]: I0515 14:59:30.564201 2333 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 15 14:59:30.573173 kubelet[2333]: I0515 14:59:30.573054 2333 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 15 14:59:30.577114 kubelet[2333]: I0515 14:59:30.577002 2333 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 15 14:59:30.577533 kubelet[2333]: I0515 14:59:30.577312 2333 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4334.0.0-a-cad88baf47","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 15 14:59:30.579942 kubelet[2333]: I0515 14:59:30.579531 2333 topology_manager.go:138] "Creating topology manager 
with none policy" May 15 14:59:30.579942 kubelet[2333]: I0515 14:59:30.579583 2333 container_manager_linux.go:304] "Creating device plugin manager" May 15 14:59:30.579942 kubelet[2333]: I0515 14:59:30.579774 2333 state_mem.go:36] "Initialized new in-memory state store" May 15 14:59:30.584260 kubelet[2333]: I0515 14:59:30.584085 2333 kubelet.go:446] "Attempting to sync node with API server" May 15 14:59:30.584260 kubelet[2333]: I0515 14:59:30.584159 2333 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 15 14:59:30.584260 kubelet[2333]: I0515 14:59:30.584192 2333 kubelet.go:352] "Adding apiserver pod source" May 15 14:59:30.584260 kubelet[2333]: I0515 14:59:30.584205 2333 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 15 14:59:30.589822 kubelet[2333]: W0515 14:59:30.589752 2333 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://137.184.120.255:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4334.0.0-a-cad88baf47&limit=500&resourceVersion=0": dial tcp 137.184.120.255:6443: connect: connection refused May 15 14:59:30.590153 kubelet[2333]: E0515 14:59:30.590093 2333 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://137.184.120.255:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4334.0.0-a-cad88baf47&limit=500&resourceVersion=0\": dial tcp 137.184.120.255:6443: connect: connection refused" logger="UnhandledError" May 15 14:59:30.590785 kubelet[2333]: W0515 14:59:30.590746 2333 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://137.184.120.255:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 137.184.120.255:6443: connect: connection refused May 15 14:59:30.590941 kubelet[2333]: E0515 14:59:30.590922 2333 reflector.go:166] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://137.184.120.255:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 137.184.120.255:6443: connect: connection refused" logger="UnhandledError" May 15 14:59:30.592266 kubelet[2333]: I0515 14:59:30.592239 2333 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 15 14:59:30.596126 kubelet[2333]: I0515 14:59:30.595807 2333 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 15 14:59:30.596126 kubelet[2333]: W0515 14:59:30.595883 2333 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 15 14:59:30.596712 kubelet[2333]: I0515 14:59:30.596686 2333 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 15 14:59:30.597016 kubelet[2333]: I0515 14:59:30.596983 2333 server.go:1287] "Started kubelet" May 15 14:59:30.598361 kubelet[2333]: I0515 14:59:30.597687 2333 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 15 14:59:30.599531 kubelet[2333]: I0515 14:59:30.599161 2333 server.go:490] "Adding debug handlers to kubelet server" May 15 14:59:30.601575 kubelet[2333]: I0515 14:59:30.601499 2333 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 15 14:59:30.602231 kubelet[2333]: I0515 14:59:30.602207 2333 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 15 14:59:30.603448 kubelet[2333]: I0515 14:59:30.603413 2333 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 15 14:59:30.604908 kubelet[2333]: E0515 14:59:30.603614 2333 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://137.184.120.255:6443/api/v1/namespaces/default/events\": 
dial tcp 137.184.120.255:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4334.0.0-a-cad88baf47.183fbb58d1f34333 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4334.0.0-a-cad88baf47,UID:ci-4334.0.0-a-cad88baf47,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4334.0.0-a-cad88baf47,},FirstTimestamp:2025-05-15 14:59:30.596938547 +0000 UTC m=+0.890105592,LastTimestamp:2025-05-15 14:59:30.596938547 +0000 UTC m=+0.890105592,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4334.0.0-a-cad88baf47,}" May 15 14:59:30.608795 kubelet[2333]: I0515 14:59:30.608529 2333 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 15 14:59:30.618569 kubelet[2333]: E0515 14:59:30.617625 2333 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4334.0.0-a-cad88baf47\" not found" May 15 14:59:30.618569 kubelet[2333]: I0515 14:59:30.617707 2333 volume_manager.go:297] "Starting Kubelet Volume Manager" May 15 14:59:30.618569 kubelet[2333]: I0515 14:59:30.618154 2333 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 15 14:59:30.618569 kubelet[2333]: I0515 14:59:30.618256 2333 reconciler.go:26] "Reconciler: start to sync state" May 15 14:59:30.618953 kubelet[2333]: W0515 14:59:30.618902 2333 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://137.184.120.255:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 137.184.120.255:6443: connect: connection refused May 15 14:59:30.619007 kubelet[2333]: E0515 14:59:30.618965 2333 reflector.go:166] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://137.184.120.255:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 137.184.120.255:6443: connect: connection refused" logger="UnhandledError" May 15 14:59:30.619291 kubelet[2333]: E0515 14:59:30.619263 2333 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://137.184.120.255:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4334.0.0-a-cad88baf47?timeout=10s\": dial tcp 137.184.120.255:6443: connect: connection refused" interval="200ms" May 15 14:59:30.626252 kubelet[2333]: I0515 14:59:30.625079 2333 factory.go:221] Registration of the systemd container factory successfully May 15 14:59:30.627509 kubelet[2333]: I0515 14:59:30.627464 2333 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 15 14:59:30.631412 kubelet[2333]: I0515 14:59:30.631359 2333 factory.go:221] Registration of the containerd container factory successfully May 15 14:59:30.632371 kubelet[2333]: E0515 14:59:30.632344 2333 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 15 14:59:30.653731 kubelet[2333]: I0515 14:59:30.653674 2333 cpu_manager.go:221] "Starting CPU manager" policy="none" May 15 14:59:30.653731 kubelet[2333]: I0515 14:59:30.653727 2333 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 15 14:59:30.653986 kubelet[2333]: I0515 14:59:30.653760 2333 state_mem.go:36] "Initialized new in-memory state store" May 15 14:59:30.654206 kubelet[2333]: I0515 14:59:30.653676 2333 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" May 15 14:59:30.658147 kubelet[2333]: I0515 14:59:30.658001 2333 policy_none.go:49] "None policy: Start" May 15 14:59:30.658147 kubelet[2333]: I0515 14:59:30.658048 2333 memory_manager.go:186] "Starting memorymanager" policy="None" May 15 14:59:30.658147 kubelet[2333]: I0515 14:59:30.658066 2333 state_mem.go:35] "Initializing new in-memory state store" May 15 14:59:30.659157 kubelet[2333]: I0515 14:59:30.658824 2333 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 15 14:59:30.659157 kubelet[2333]: I0515 14:59:30.658959 2333 status_manager.go:227] "Starting to sync pod status with apiserver" May 15 14:59:30.659157 kubelet[2333]: I0515 14:59:30.658985 2333 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." May 15 14:59:30.659157 kubelet[2333]: I0515 14:59:30.658992 2333 kubelet.go:2388] "Starting kubelet main sync loop" May 15 14:59:30.659157 kubelet[2333]: E0515 14:59:30.659059 2333 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 15 14:59:30.662698 kubelet[2333]: W0515 14:59:30.662637 2333 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://137.184.120.255:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 137.184.120.255:6443: connect: connection refused May 15 14:59:30.662802 kubelet[2333]: E0515 14:59:30.662709 2333 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://137.184.120.255:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 137.184.120.255:6443: connect: connection refused" logger="UnhandledError" May 15 14:59:30.668980 systemd[1]: Created slice kubepods.slice - libcontainer container 
kubepods.slice. May 15 14:59:30.681355 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 15 14:59:30.685634 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 15 14:59:30.705710 kubelet[2333]: I0515 14:59:30.705672 2333 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 15 14:59:30.706227 kubelet[2333]: I0515 14:59:30.706208 2333 eviction_manager.go:189] "Eviction manager: starting control loop" May 15 14:59:30.706343 kubelet[2333]: I0515 14:59:30.706305 2333 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 15 14:59:30.709201 kubelet[2333]: I0515 14:59:30.707619 2333 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 15 14:59:30.711215 kubelet[2333]: E0515 14:59:30.711181 2333 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 15 14:59:30.711352 kubelet[2333]: E0515 14:59:30.711242 2333 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4334.0.0-a-cad88baf47\" not found" May 15 14:59:30.781296 systemd[1]: Created slice kubepods-burstable-podf87b68056d8924a7683668f8bc7b848d.slice - libcontainer container kubepods-burstable-podf87b68056d8924a7683668f8bc7b848d.slice. May 15 14:59:30.794839 kubelet[2333]: E0515 14:59:30.794772 2333 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4334.0.0-a-cad88baf47\" not found" node="ci-4334.0.0-a-cad88baf47" May 15 14:59:30.805395 systemd[1]: Created slice kubepods-burstable-podd10c1e03f4a26bb758a9fa59ea9d2cad.slice - libcontainer container kubepods-burstable-podd10c1e03f4a26bb758a9fa59ea9d2cad.slice. 
May 15 14:59:30.809783 kubelet[2333]: I0515 14:59:30.809733 2333 kubelet_node_status.go:76] "Attempting to register node" node="ci-4334.0.0-a-cad88baf47" May 15 14:59:30.810723 kubelet[2333]: E0515 14:59:30.810506 2333 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://137.184.120.255:6443/api/v1/nodes\": dial tcp 137.184.120.255:6443: connect: connection refused" node="ci-4334.0.0-a-cad88baf47" May 15 14:59:30.811231 kubelet[2333]: E0515 14:59:30.811181 2333 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4334.0.0-a-cad88baf47\" not found" node="ci-4334.0.0-a-cad88baf47" May 15 14:59:30.815430 systemd[1]: Created slice kubepods-burstable-pod0ba589338d5ea44b1edaae15ae21d8f0.slice - libcontainer container kubepods-burstable-pod0ba589338d5ea44b1edaae15ae21d8f0.slice. May 15 14:59:30.818631 kubelet[2333]: E0515 14:59:30.818585 2333 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4334.0.0-a-cad88baf47\" not found" node="ci-4334.0.0-a-cad88baf47" May 15 14:59:30.820282 kubelet[2333]: I0515 14:59:30.819830 2333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0ba589338d5ea44b1edaae15ae21d8f0-flexvolume-dir\") pod \"kube-controller-manager-ci-4334.0.0-a-cad88baf47\" (UID: \"0ba589338d5ea44b1edaae15ae21d8f0\") " pod="kube-system/kube-controller-manager-ci-4334.0.0-a-cad88baf47" May 15 14:59:30.820282 kubelet[2333]: I0515 14:59:30.819880 2333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0ba589338d5ea44b1edaae15ae21d8f0-k8s-certs\") pod \"kube-controller-manager-ci-4334.0.0-a-cad88baf47\" (UID: \"0ba589338d5ea44b1edaae15ae21d8f0\") " pod="kube-system/kube-controller-manager-ci-4334.0.0-a-cad88baf47" May 15 
14:59:30.820282 kubelet[2333]: I0515 14:59:30.819906 2333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0ba589338d5ea44b1edaae15ae21d8f0-kubeconfig\") pod \"kube-controller-manager-ci-4334.0.0-a-cad88baf47\" (UID: \"0ba589338d5ea44b1edaae15ae21d8f0\") " pod="kube-system/kube-controller-manager-ci-4334.0.0-a-cad88baf47" May 15 14:59:30.820282 kubelet[2333]: I0515 14:59:30.819940 2333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0ba589338d5ea44b1edaae15ae21d8f0-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4334.0.0-a-cad88baf47\" (UID: \"0ba589338d5ea44b1edaae15ae21d8f0\") " pod="kube-system/kube-controller-manager-ci-4334.0.0-a-cad88baf47" May 15 14:59:30.820282 kubelet[2333]: I0515 14:59:30.819962 2333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d10c1e03f4a26bb758a9fa59ea9d2cad-kubeconfig\") pod \"kube-scheduler-ci-4334.0.0-a-cad88baf47\" (UID: \"d10c1e03f4a26bb758a9fa59ea9d2cad\") " pod="kube-system/kube-scheduler-ci-4334.0.0-a-cad88baf47" May 15 14:59:30.820643 kubelet[2333]: I0515 14:59:30.819982 2333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f87b68056d8924a7683668f8bc7b848d-k8s-certs\") pod \"kube-apiserver-ci-4334.0.0-a-cad88baf47\" (UID: \"f87b68056d8924a7683668f8bc7b848d\") " pod="kube-system/kube-apiserver-ci-4334.0.0-a-cad88baf47" May 15 14:59:30.820643 kubelet[2333]: I0515 14:59:30.820037 2333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f87b68056d8924a7683668f8bc7b848d-usr-share-ca-certificates\") 
pod \"kube-apiserver-ci-4334.0.0-a-cad88baf47\" (UID: \"f87b68056d8924a7683668f8bc7b848d\") " pod="kube-system/kube-apiserver-ci-4334.0.0-a-cad88baf47" May 15 14:59:30.820643 kubelet[2333]: I0515 14:59:30.820059 2333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0ba589338d5ea44b1edaae15ae21d8f0-ca-certs\") pod \"kube-controller-manager-ci-4334.0.0-a-cad88baf47\" (UID: \"0ba589338d5ea44b1edaae15ae21d8f0\") " pod="kube-system/kube-controller-manager-ci-4334.0.0-a-cad88baf47" May 15 14:59:30.820643 kubelet[2333]: I0515 14:59:30.820081 2333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f87b68056d8924a7683668f8bc7b848d-ca-certs\") pod \"kube-apiserver-ci-4334.0.0-a-cad88baf47\" (UID: \"f87b68056d8924a7683668f8bc7b848d\") " pod="kube-system/kube-apiserver-ci-4334.0.0-a-cad88baf47" May 15 14:59:30.820643 kubelet[2333]: E0515 14:59:30.820226 2333 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://137.184.120.255:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4334.0.0-a-cad88baf47?timeout=10s\": dial tcp 137.184.120.255:6443: connect: connection refused" interval="400ms" May 15 14:59:31.013146 kubelet[2333]: I0515 14:59:31.012837 2333 kubelet_node_status.go:76] "Attempting to register node" node="ci-4334.0.0-a-cad88baf47" May 15 14:59:31.013641 kubelet[2333]: E0515 14:59:31.013610 2333 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://137.184.120.255:6443/api/v1/nodes\": dial tcp 137.184.120.255:6443: connect: connection refused" node="ci-4334.0.0-a-cad88baf47" May 15 14:59:31.096223 kubelet[2333]: E0515 14:59:31.095672 2333 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 
67.207.67.3 67.207.67.2" May 15 14:59:31.097057 containerd[1582]: time="2025-05-15T14:59:31.096690221Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4334.0.0-a-cad88baf47,Uid:f87b68056d8924a7683668f8bc7b848d,Namespace:kube-system,Attempt:0,}" May 15 14:59:31.112544 kubelet[2333]: E0515 14:59:31.112473 2333 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 14:59:31.120082 kubelet[2333]: E0515 14:59:31.119705 2333 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 14:59:31.120967 containerd[1582]: time="2025-05-15T14:59:31.120907817Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4334.0.0-a-cad88baf47,Uid:d10c1e03f4a26bb758a9fa59ea9d2cad,Namespace:kube-system,Attempt:0,}" May 15 14:59:31.121506 containerd[1582]: time="2025-05-15T14:59:31.121263262Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4334.0.0-a-cad88baf47,Uid:0ba589338d5ea44b1edaae15ae21d8f0,Namespace:kube-system,Attempt:0,}" May 15 14:59:31.220880 kubelet[2333]: E0515 14:59:31.220778 2333 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://137.184.120.255:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4334.0.0-a-cad88baf47?timeout=10s\": dial tcp 137.184.120.255:6443: connect: connection refused" interval="800ms" May 15 14:59:31.278636 containerd[1582]: time="2025-05-15T14:59:31.277938098Z" level=info msg="connecting to shim 0b213d59d640eb443cb50c72785138b15a6cecb9313fe0d1d0657f66f6fe351e" address="unix:///run/containerd/s/cfbb0035d58391d4f79749090dcf3fb2e57495c4fcaef69eb8f43cdbf8f23f4c" namespace=k8s.io protocol=ttrpc version=3 May 15 14:59:31.283305 containerd[1582]: 
time="2025-05-15T14:59:31.283238163Z" level=info msg="connecting to shim 94d7a14bf4f027415dd70789b7292cdb0e712a941775b79d8053eca9db5254c0" address="unix:///run/containerd/s/466afd267228f3947e3da4b5962ed3f84580522969fcde68215c8a460c208d06" namespace=k8s.io protocol=ttrpc version=3 May 15 14:59:31.296963 containerd[1582]: time="2025-05-15T14:59:31.296872911Z" level=info msg="connecting to shim 0fe8c457f73d91fa69cfcfa54106342cb9a5844d9ad8374ea4aba3214d71fe60" address="unix:///run/containerd/s/5b74b5264bdcf402a433d9cd3e22a99c4ee572d3b7271837f5626309094a4dfa" namespace=k8s.io protocol=ttrpc version=3 May 15 14:59:31.413510 systemd[1]: Started cri-containerd-0b213d59d640eb443cb50c72785138b15a6cecb9313fe0d1d0657f66f6fe351e.scope - libcontainer container 0b213d59d640eb443cb50c72785138b15a6cecb9313fe0d1d0657f66f6fe351e. May 15 14:59:31.417631 kubelet[2333]: I0515 14:59:31.417517 2333 kubelet_node_status.go:76] "Attempting to register node" node="ci-4334.0.0-a-cad88baf47" May 15 14:59:31.418446 kubelet[2333]: E0515 14:59:31.418394 2333 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://137.184.120.255:6443/api/v1/nodes\": dial tcp 137.184.120.255:6443: connect: connection refused" node="ci-4334.0.0-a-cad88baf47" May 15 14:59:31.428287 systemd[1]: Started cri-containerd-0fe8c457f73d91fa69cfcfa54106342cb9a5844d9ad8374ea4aba3214d71fe60.scope - libcontainer container 0fe8c457f73d91fa69cfcfa54106342cb9a5844d9ad8374ea4aba3214d71fe60. May 15 14:59:31.430647 systemd[1]: Started cri-containerd-94d7a14bf4f027415dd70789b7292cdb0e712a941775b79d8053eca9db5254c0.scope - libcontainer container 94d7a14bf4f027415dd70789b7292cdb0e712a941775b79d8053eca9db5254c0. 
May 15 14:59:31.459773 kubelet[2333]: W0515 14:59:31.459362 2333 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://137.184.120.255:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 137.184.120.255:6443: connect: connection refused May 15 14:59:31.459773 kubelet[2333]: E0515 14:59:31.459484 2333 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://137.184.120.255:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 137.184.120.255:6443: connect: connection refused" logger="UnhandledError" May 15 14:59:31.517750 kubelet[2333]: W0515 14:59:31.517379 2333 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://137.184.120.255:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 137.184.120.255:6443: connect: connection refused May 15 14:59:31.517750 kubelet[2333]: E0515 14:59:31.517467 2333 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://137.184.120.255:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 137.184.120.255:6443: connect: connection refused" logger="UnhandledError" May 15 14:59:31.585157 containerd[1582]: time="2025-05-15T14:59:31.584454150Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4334.0.0-a-cad88baf47,Uid:0ba589338d5ea44b1edaae15ae21d8f0,Namespace:kube-system,Attempt:0,} returns sandbox id \"94d7a14bf4f027415dd70789b7292cdb0e712a941775b79d8053eca9db5254c0\"" May 15 14:59:31.588713 kubelet[2333]: E0515 14:59:31.588452 2333 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 14:59:31.595788 containerd[1582]: time="2025-05-15T14:59:31.595718654Z" level=info msg="CreateContainer within sandbox \"94d7a14bf4f027415dd70789b7292cdb0e712a941775b79d8053eca9db5254c0\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 15 14:59:31.599985 containerd[1582]: time="2025-05-15T14:59:31.599938507Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4334.0.0-a-cad88baf47,Uid:f87b68056d8924a7683668f8bc7b848d,Namespace:kube-system,Attempt:0,} returns sandbox id \"0b213d59d640eb443cb50c72785138b15a6cecb9313fe0d1d0657f66f6fe351e\"" May 15 14:59:31.602916 kubelet[2333]: E0515 14:59:31.602384 2333 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 14:59:31.611742 containerd[1582]: time="2025-05-15T14:59:31.611682954Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4334.0.0-a-cad88baf47,Uid:d10c1e03f4a26bb758a9fa59ea9d2cad,Namespace:kube-system,Attempt:0,} returns sandbox id \"0fe8c457f73d91fa69cfcfa54106342cb9a5844d9ad8374ea4aba3214d71fe60\"" May 15 14:59:31.612011 containerd[1582]: time="2025-05-15T14:59:31.611981327Z" level=info msg="CreateContainer within sandbox \"0b213d59d640eb443cb50c72785138b15a6cecb9313fe0d1d0657f66f6fe351e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 15 14:59:31.613340 kubelet[2333]: E0515 14:59:31.613274 2333 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 14:59:31.618967 containerd[1582]: time="2025-05-15T14:59:31.618386619Z" level=info msg="CreateContainer within sandbox \"0fe8c457f73d91fa69cfcfa54106342cb9a5844d9ad8374ea4aba3214d71fe60\" for container 
&ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 15 14:59:31.622778 containerd[1582]: time="2025-05-15T14:59:31.622728657Z" level=info msg="Container 283fcdcde1d4fe58e43905f2546b0584300b69e828180f61309e403822357263: CDI devices from CRI Config.CDIDevices: []" May 15 14:59:31.630460 containerd[1582]: time="2025-05-15T14:59:31.630387223Z" level=info msg="Container c9f39f32d1052037459f41c8b280136d9005661d8662558f7d44a7bdbfafc2ae: CDI devices from CRI Config.CDIDevices: []" May 15 14:59:31.638311 containerd[1582]: time="2025-05-15T14:59:31.638253688Z" level=info msg="Container 73ba95e74b743aef32d7f28b0c369b163ccc29ba23a45b2b5dd07d2d7ab9a938: CDI devices from CRI Config.CDIDevices: []" May 15 14:59:31.639644 containerd[1582]: time="2025-05-15T14:59:31.638641273Z" level=info msg="CreateContainer within sandbox \"94d7a14bf4f027415dd70789b7292cdb0e712a941775b79d8053eca9db5254c0\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"283fcdcde1d4fe58e43905f2546b0584300b69e828180f61309e403822357263\"" May 15 14:59:31.642930 containerd[1582]: time="2025-05-15T14:59:31.642769500Z" level=info msg="StartContainer for \"283fcdcde1d4fe58e43905f2546b0584300b69e828180f61309e403822357263\"" May 15 14:59:31.644927 containerd[1582]: time="2025-05-15T14:59:31.644814369Z" level=info msg="connecting to shim 283fcdcde1d4fe58e43905f2546b0584300b69e828180f61309e403822357263" address="unix:///run/containerd/s/466afd267228f3947e3da4b5962ed3f84580522969fcde68215c8a460c208d06" protocol=ttrpc version=3 May 15 14:59:31.650729 containerd[1582]: time="2025-05-15T14:59:31.650659409Z" level=info msg="CreateContainer within sandbox \"0b213d59d640eb443cb50c72785138b15a6cecb9313fe0d1d0657f66f6fe351e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"c9f39f32d1052037459f41c8b280136d9005661d8662558f7d44a7bdbfafc2ae\"" May 15 14:59:31.653183 containerd[1582]: time="2025-05-15T14:59:31.652653267Z" level=info msg="StartContainer for 
\"c9f39f32d1052037459f41c8b280136d9005661d8662558f7d44a7bdbfafc2ae\"" May 15 14:59:31.657109 containerd[1582]: time="2025-05-15T14:59:31.657048389Z" level=info msg="connecting to shim c9f39f32d1052037459f41c8b280136d9005661d8662558f7d44a7bdbfafc2ae" address="unix:///run/containerd/s/cfbb0035d58391d4f79749090dcf3fb2e57495c4fcaef69eb8f43cdbf8f23f4c" protocol=ttrpc version=3 May 15 14:59:31.664316 containerd[1582]: time="2025-05-15T14:59:31.662359869Z" level=info msg="CreateContainer within sandbox \"0fe8c457f73d91fa69cfcfa54106342cb9a5844d9ad8374ea4aba3214d71fe60\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"73ba95e74b743aef32d7f28b0c369b163ccc29ba23a45b2b5dd07d2d7ab9a938\"" May 15 14:59:31.666469 containerd[1582]: time="2025-05-15T14:59:31.666414220Z" level=info msg="StartContainer for \"73ba95e74b743aef32d7f28b0c369b163ccc29ba23a45b2b5dd07d2d7ab9a938\"" May 15 14:59:31.674810 containerd[1582]: time="2025-05-15T14:59:31.674538784Z" level=info msg="connecting to shim 73ba95e74b743aef32d7f28b0c369b163ccc29ba23a45b2b5dd07d2d7ab9a938" address="unix:///run/containerd/s/5b74b5264bdcf402a433d9cd3e22a99c4ee572d3b7271837f5626309094a4dfa" protocol=ttrpc version=3 May 15 14:59:31.709433 systemd[1]: Started cri-containerd-283fcdcde1d4fe58e43905f2546b0584300b69e828180f61309e403822357263.scope - libcontainer container 283fcdcde1d4fe58e43905f2546b0584300b69e828180f61309e403822357263. May 15 14:59:31.712535 systemd[1]: Started cri-containerd-c9f39f32d1052037459f41c8b280136d9005661d8662558f7d44a7bdbfafc2ae.scope - libcontainer container c9f39f32d1052037459f41c8b280136d9005661d8662558f7d44a7bdbfafc2ae. May 15 14:59:31.726614 systemd[1]: Started cri-containerd-73ba95e74b743aef32d7f28b0c369b163ccc29ba23a45b2b5dd07d2d7ab9a938.scope - libcontainer container 73ba95e74b743aef32d7f28b0c369b163ccc29ba23a45b2b5dd07d2d7ab9a938. 
May 15 14:59:31.841350 containerd[1582]: time="2025-05-15T14:59:31.841204774Z" level=info msg="StartContainer for \"283fcdcde1d4fe58e43905f2546b0584300b69e828180f61309e403822357263\" returns successfully" May 15 14:59:31.855270 containerd[1582]: time="2025-05-15T14:59:31.854739324Z" level=info msg="StartContainer for \"c9f39f32d1052037459f41c8b280136d9005661d8662558f7d44a7bdbfafc2ae\" returns successfully" May 15 14:59:31.895248 containerd[1582]: time="2025-05-15T14:59:31.895153082Z" level=info msg="StartContainer for \"73ba95e74b743aef32d7f28b0c369b163ccc29ba23a45b2b5dd07d2d7ab9a938\" returns successfully" May 15 14:59:31.977787 kubelet[2333]: W0515 14:59:31.977535 2333 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://137.184.120.255:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4334.0.0-a-cad88baf47&limit=500&resourceVersion=0": dial tcp 137.184.120.255:6443: connect: connection refused May 15 14:59:31.978357 kubelet[2333]: E0515 14:59:31.978286 2333 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://137.184.120.255:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4334.0.0-a-cad88baf47&limit=500&resourceVersion=0\": dial tcp 137.184.120.255:6443: connect: connection refused" logger="UnhandledError" May 15 14:59:32.221768 kubelet[2333]: I0515 14:59:32.221725 2333 kubelet_node_status.go:76] "Attempting to register node" node="ci-4334.0.0-a-cad88baf47" May 15 14:59:32.686337 kubelet[2333]: E0515 14:59:32.686289 2333 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4334.0.0-a-cad88baf47\" not found" node="ci-4334.0.0-a-cad88baf47" May 15 14:59:32.686523 kubelet[2333]: E0515 14:59:32.686481 2333 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 
67.207.67.3 67.207.67.2" May 15 14:59:32.691648 kubelet[2333]: E0515 14:59:32.691559 2333 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4334.0.0-a-cad88baf47\" not found" node="ci-4334.0.0-a-cad88baf47" May 15 14:59:32.691828 kubelet[2333]: E0515 14:59:32.691760 2333 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 14:59:32.694451 kubelet[2333]: E0515 14:59:32.694419 2333 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4334.0.0-a-cad88baf47\" not found" node="ci-4334.0.0-a-cad88baf47" May 15 14:59:32.694588 kubelet[2333]: E0515 14:59:32.694555 2333 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 14:59:33.698974 kubelet[2333]: E0515 14:59:33.698922 2333 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4334.0.0-a-cad88baf47\" not found" node="ci-4334.0.0-a-cad88baf47" May 15 14:59:33.699760 kubelet[2333]: E0515 14:59:33.699071 2333 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 14:59:33.700516 kubelet[2333]: E0515 14:59:33.700479 2333 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4334.0.0-a-cad88baf47\" not found" node="ci-4334.0.0-a-cad88baf47" May 15 14:59:33.700650 kubelet[2333]: E0515 14:59:33.700617 2333 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 
67.207.67.2" May 15 14:59:33.701331 kubelet[2333]: E0515 14:59:33.701302 2333 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4334.0.0-a-cad88baf47\" not found" node="ci-4334.0.0-a-cad88baf47" May 15 14:59:33.701456 kubelet[2333]: E0515 14:59:33.701429 2333 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 14:59:34.214624 kubelet[2333]: E0515 14:59:34.214543 2333 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4334.0.0-a-cad88baf47\" not found" node="ci-4334.0.0-a-cad88baf47" May 15 14:59:34.291542 kubelet[2333]: I0515 14:59:34.291480 2333 kubelet_node_status.go:79] "Successfully registered node" node="ci-4334.0.0-a-cad88baf47" May 15 14:59:34.291542 kubelet[2333]: E0515 14:59:34.291530 2333 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"ci-4334.0.0-a-cad88baf47\": node \"ci-4334.0.0-a-cad88baf47\" not found" May 15 14:59:34.299654 kubelet[2333]: E0515 14:59:34.299580 2333 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4334.0.0-a-cad88baf47\" not found" May 15 14:59:34.419715 kubelet[2333]: I0515 14:59:34.419643 2333 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4334.0.0-a-cad88baf47" May 15 14:59:34.433141 kubelet[2333]: E0515 14:59:34.432392 2333 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4334.0.0-a-cad88baf47\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4334.0.0-a-cad88baf47" May 15 14:59:34.433141 kubelet[2333]: I0515 14:59:34.432438 2333 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4334.0.0-a-cad88baf47" May 15 
14:59:34.440518 kubelet[2333]: E0515 14:59:34.440418 2333 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4334.0.0-a-cad88baf47\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4334.0.0-a-cad88baf47" May 15 14:59:34.441335 kubelet[2333]: I0515 14:59:34.440963 2333 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4334.0.0-a-cad88baf47" May 15 14:59:34.447796 kubelet[2333]: E0515 14:59:34.447722 2333 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4334.0.0-a-cad88baf47\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4334.0.0-a-cad88baf47" May 15 14:59:34.593199 kubelet[2333]: I0515 14:59:34.592198 2333 apiserver.go:52] "Watching apiserver" May 15 14:59:34.619319 kubelet[2333]: I0515 14:59:34.619241 2333 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 15 14:59:34.698972 kubelet[2333]: I0515 14:59:34.698918 2333 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4334.0.0-a-cad88baf47" May 15 14:59:34.699303 kubelet[2333]: I0515 14:59:34.699177 2333 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4334.0.0-a-cad88baf47" May 15 14:59:34.702740 kubelet[2333]: E0515 14:59:34.702693 2333 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4334.0.0-a-cad88baf47\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4334.0.0-a-cad88baf47" May 15 14:59:34.702942 kubelet[2333]: E0515 14:59:34.702928 2333 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 14:59:34.703093 kubelet[2333]: E0515 
14:59:34.703052 2333 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4334.0.0-a-cad88baf47\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4334.0.0-a-cad88baf47" May 15 14:59:34.703220 kubelet[2333]: E0515 14:59:34.703199 2333 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 14:59:36.679492 systemd[1]: Reload requested from client PID 2606 ('systemctl') (unit session-7.scope)... May 15 14:59:36.679513 systemd[1]: Reloading... May 15 14:59:36.818153 zram_generator::config[2650]: No configuration found. May 15 14:59:36.962921 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 14:59:37.175544 systemd[1]: Reloading finished in 495 ms. May 15 14:59:37.212727 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 15 14:59:37.228772 systemd[1]: kubelet.service: Deactivated successfully. May 15 14:59:37.229213 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 15 14:59:37.229337 systemd[1]: kubelet.service: Consumed 1.451s CPU time, 119.4M memory peak. May 15 14:59:37.233049 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 14:59:37.447527 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 14:59:37.466839 (kubelet)[2700]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 15 14:59:37.544007 kubelet[2700]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 15 14:59:37.544007 kubelet[2700]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 15 14:59:37.544007 kubelet[2700]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 15 14:59:37.544007 kubelet[2700]: I0515 14:59:37.542658 2700 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 15 14:59:37.560067 kubelet[2700]: I0515 14:59:37.560025 2700 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" May 15 14:59:37.560293 kubelet[2700]: I0515 14:59:37.560279 2700 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 15 14:59:37.560673 kubelet[2700]: I0515 14:59:37.560657 2700 server.go:954] "Client rotation is on, will bootstrap in background" May 15 14:59:37.564185 kubelet[2700]: I0515 14:59:37.564095 2700 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 15 14:59:37.578215 kubelet[2700]: I0515 14:59:37.577337 2700 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 15 14:59:37.585342 kubelet[2700]: I0515 14:59:37.585310 2700 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 15 14:59:37.591236 kubelet[2700]: I0515 14:59:37.591199 2700 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 15 14:59:37.592074 kubelet[2700]: I0515 14:59:37.592020 2700 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 15 14:59:37.592913 kubelet[2700]: I0515 14:59:37.592309 2700 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4334.0.0-a-cad88baf47","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 15 14:59:37.592913 kubelet[2700]: I0515 14:59:37.592549 2700 topology_manager.go:138] "Creating topology manager 
with none policy" May 15 14:59:37.592913 kubelet[2700]: I0515 14:59:37.592566 2700 container_manager_linux.go:304] "Creating device plugin manager" May 15 14:59:37.592913 kubelet[2700]: I0515 14:59:37.592631 2700 state_mem.go:36] "Initialized new in-memory state store" May 15 14:59:37.593278 kubelet[2700]: I0515 14:59:37.593258 2700 kubelet.go:446] "Attempting to sync node with API server" May 15 14:59:37.593369 kubelet[2700]: I0515 14:59:37.593358 2700 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 15 14:59:37.593437 kubelet[2700]: I0515 14:59:37.593430 2700 kubelet.go:352] "Adding apiserver pod source" May 15 14:59:37.593484 kubelet[2700]: I0515 14:59:37.593478 2700 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 15 14:59:37.599554 kubelet[2700]: I0515 14:59:37.599510 2700 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 15 14:59:37.603573 kubelet[2700]: I0515 14:59:37.603399 2700 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 15 14:59:37.607920 kubelet[2700]: I0515 14:59:37.606948 2700 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 15 14:59:37.607920 kubelet[2700]: I0515 14:59:37.607030 2700 server.go:1287] "Started kubelet" May 15 14:59:37.611786 kubelet[2700]: I0515 14:59:37.611702 2700 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 15 14:59:37.617613 kubelet[2700]: I0515 14:59:37.617521 2700 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 15 14:59:37.618119 kubelet[2700]: I0515 14:59:37.618064 2700 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 15 14:59:37.618196 kubelet[2700]: I0515 14:59:37.618082 2700 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 15 14:59:37.637178 kubelet[2700]: I0515 
14:59:37.636510 2700 server.go:490] "Adding debug handlers to kubelet server" May 15 14:59:37.639761 kubelet[2700]: I0515 14:59:37.639702 2700 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 15 14:59:37.653741 kubelet[2700]: I0515 14:59:37.653670 2700 volume_manager.go:297] "Starting Kubelet Volume Manager" May 15 14:59:37.654805 kubelet[2700]: E0515 14:59:37.654200 2700 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4334.0.0-a-cad88baf47\" not found" May 15 14:59:37.669125 kubelet[2700]: I0515 14:59:37.656626 2700 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 15 14:59:37.669125 kubelet[2700]: I0515 14:59:37.656818 2700 reconciler.go:26] "Reconciler: start to sync state" May 15 14:59:37.669125 kubelet[2700]: I0515 14:59:37.667260 2700 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 15 14:59:37.673144 kubelet[2700]: I0515 14:59:37.670801 2700 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 15 14:59:37.673144 kubelet[2700]: I0515 14:59:37.670858 2700 status_manager.go:227] "Starting to sync pod status with apiserver" May 15 14:59:37.673144 kubelet[2700]: I0515 14:59:37.670891 2700 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." May 15 14:59:37.673144 kubelet[2700]: I0515 14:59:37.670899 2700 kubelet.go:2388] "Starting kubelet main sync loop" May 15 14:59:37.673144 kubelet[2700]: E0515 14:59:37.670973 2700 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 15 14:59:37.678982 kubelet[2700]: E0515 14:59:37.678946 2700 kubelet.go:1561] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 15 14:59:37.689248 kubelet[2700]: I0515 14:59:37.689208 2700 factory.go:221] Registration of the containerd container factory successfully May 15 14:59:37.690039 kubelet[2700]: I0515 14:59:37.690014 2700 factory.go:221] Registration of the systemd container factory successfully May 15 14:59:37.690696 kubelet[2700]: I0515 14:59:37.690667 2700 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 15 14:59:37.720495 sudo[2726]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 15 14:59:37.720842 sudo[2726]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) May 15 14:59:37.772191 kubelet[2700]: E0515 14:59:37.771534 2700 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 15 14:59:37.803927 kubelet[2700]: I0515 14:59:37.803656 2700 cpu_manager.go:221] "Starting CPU manager" policy="none" May 15 14:59:37.803927 kubelet[2700]: I0515 14:59:37.803675 2700 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 15 14:59:37.803927 kubelet[2700]: I0515 14:59:37.803702 2700 state_mem.go:36] "Initialized new in-memory state store" May 15 14:59:37.803927 kubelet[2700]: I0515 14:59:37.803900 2700 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 15 14:59:37.803927 kubelet[2700]: I0515 14:59:37.803911 2700 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 15 14:59:37.803927 kubelet[2700]: I0515 14:59:37.803932 2700 policy_none.go:49] "None policy: Start" May 15 14:59:37.805563 kubelet[2700]: I0515 14:59:37.803943 2700 memory_manager.go:186] "Starting memorymanager" policy="None" May 15 14:59:37.805563 kubelet[2700]: I0515 14:59:37.803954 2700 state_mem.go:35] "Initializing new in-memory 
state store" May 15 14:59:37.805563 kubelet[2700]: I0515 14:59:37.804060 2700 state_mem.go:75] "Updated machine memory state" May 15 14:59:37.813423 kubelet[2700]: I0515 14:59:37.813384 2700 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 15 14:59:37.813642 kubelet[2700]: I0515 14:59:37.813624 2700 eviction_manager.go:189] "Eviction manager: starting control loop" May 15 14:59:37.813702 kubelet[2700]: I0515 14:59:37.813646 2700 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 15 14:59:37.814953 kubelet[2700]: I0515 14:59:37.814265 2700 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 15 14:59:37.827243 kubelet[2700]: E0515 14:59:37.827120 2700 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 15 14:59:37.932138 kubelet[2700]: I0515 14:59:37.929768 2700 kubelet_node_status.go:76] "Attempting to register node" node="ci-4334.0.0-a-cad88baf47" May 15 14:59:37.941607 kubelet[2700]: I0515 14:59:37.939978 2700 kubelet_node_status.go:125] "Node was previously registered" node="ci-4334.0.0-a-cad88baf47" May 15 14:59:37.941607 kubelet[2700]: I0515 14:59:37.940144 2700 kubelet_node_status.go:79] "Successfully registered node" node="ci-4334.0.0-a-cad88baf47" May 15 14:59:37.975323 kubelet[2700]: I0515 14:59:37.975267 2700 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4334.0.0-a-cad88baf47" May 15 14:59:37.979378 kubelet[2700]: I0515 14:59:37.979329 2700 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4334.0.0-a-cad88baf47" May 15 14:59:37.979826 kubelet[2700]: I0515 14:59:37.979788 2700 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4334.0.0-a-cad88baf47" May 15 14:59:37.993713 
kubelet[2700]: W0515 14:59:37.993664 2700 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 15 14:59:37.998146 kubelet[2700]: W0515 14:59:37.997859 2700 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 15 14:59:37.998768 kubelet[2700]: W0515 14:59:37.998675 2700 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 15 14:59:38.060487 kubelet[2700]: I0515 14:59:38.060244 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0ba589338d5ea44b1edaae15ae21d8f0-kubeconfig\") pod \"kube-controller-manager-ci-4334.0.0-a-cad88baf47\" (UID: \"0ba589338d5ea44b1edaae15ae21d8f0\") " pod="kube-system/kube-controller-manager-ci-4334.0.0-a-cad88baf47" May 15 14:59:38.060487 kubelet[2700]: I0515 14:59:38.060413 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0ba589338d5ea44b1edaae15ae21d8f0-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4334.0.0-a-cad88baf47\" (UID: \"0ba589338d5ea44b1edaae15ae21d8f0\") " pod="kube-system/kube-controller-manager-ci-4334.0.0-a-cad88baf47" May 15 14:59:38.060487 kubelet[2700]: I0515 14:59:38.060440 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f87b68056d8924a7683668f8bc7b848d-k8s-certs\") pod \"kube-apiserver-ci-4334.0.0-a-cad88baf47\" (UID: \"f87b68056d8924a7683668f8bc7b848d\") " pod="kube-system/kube-apiserver-ci-4334.0.0-a-cad88baf47" May 15 14:59:38.060758 kubelet[2700]: I0515 
14:59:38.060502 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f87b68056d8924a7683668f8bc7b848d-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4334.0.0-a-cad88baf47\" (UID: \"f87b68056d8924a7683668f8bc7b848d\") " pod="kube-system/kube-apiserver-ci-4334.0.0-a-cad88baf47" May 15 14:59:38.060758 kubelet[2700]: I0515 14:59:38.060564 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0ba589338d5ea44b1edaae15ae21d8f0-ca-certs\") pod \"kube-controller-manager-ci-4334.0.0-a-cad88baf47\" (UID: \"0ba589338d5ea44b1edaae15ae21d8f0\") " pod="kube-system/kube-controller-manager-ci-4334.0.0-a-cad88baf47" May 15 14:59:38.060758 kubelet[2700]: I0515 14:59:38.060595 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0ba589338d5ea44b1edaae15ae21d8f0-flexvolume-dir\") pod \"kube-controller-manager-ci-4334.0.0-a-cad88baf47\" (UID: \"0ba589338d5ea44b1edaae15ae21d8f0\") " pod="kube-system/kube-controller-manager-ci-4334.0.0-a-cad88baf47" May 15 14:59:38.060758 kubelet[2700]: I0515 14:59:38.060686 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0ba589338d5ea44b1edaae15ae21d8f0-k8s-certs\") pod \"kube-controller-manager-ci-4334.0.0-a-cad88baf47\" (UID: \"0ba589338d5ea44b1edaae15ae21d8f0\") " pod="kube-system/kube-controller-manager-ci-4334.0.0-a-cad88baf47" May 15 14:59:38.060758 kubelet[2700]: I0515 14:59:38.060745 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f87b68056d8924a7683668f8bc7b848d-ca-certs\") pod \"kube-apiserver-ci-4334.0.0-a-cad88baf47\" 
(UID: \"f87b68056d8924a7683668f8bc7b848d\") " pod="kube-system/kube-apiserver-ci-4334.0.0-a-cad88baf47" May 15 14:59:38.060947 kubelet[2700]: I0515 14:59:38.060815 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d10c1e03f4a26bb758a9fa59ea9d2cad-kubeconfig\") pod \"kube-scheduler-ci-4334.0.0-a-cad88baf47\" (UID: \"d10c1e03f4a26bb758a9fa59ea9d2cad\") " pod="kube-system/kube-scheduler-ci-4334.0.0-a-cad88baf47" May 15 14:59:38.294940 kubelet[2700]: E0515 14:59:38.294877 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 14:59:38.299396 kubelet[2700]: E0515 14:59:38.299347 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 14:59:38.299572 kubelet[2700]: E0515 14:59:38.299494 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 14:59:38.585295 sudo[2726]: pam_unix(sudo:session): session closed for user root May 15 14:59:38.599249 kubelet[2700]: I0515 14:59:38.599156 2700 apiserver.go:52] "Watching apiserver" May 15 14:59:38.657759 kubelet[2700]: I0515 14:59:38.657696 2700 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 15 14:59:38.741778 kubelet[2700]: E0515 14:59:38.741122 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 14:59:38.741778 kubelet[2700]: E0515 14:59:38.741168 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 14:59:38.741778 kubelet[2700]: E0515 14:59:38.741676 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 14:59:38.803718 kubelet[2700]: I0515 14:59:38.803640 2700 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4334.0.0-a-cad88baf47" podStartSLOduration=1.8036123339999999 podStartE2EDuration="1.803612334s" podCreationTimestamp="2025-05-15 14:59:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 14:59:38.790837542 +0000 UTC m=+1.317573355" watchObservedRunningTime="2025-05-15 14:59:38.803612334 +0000 UTC m=+1.330348135" May 15 14:59:38.819349 kubelet[2700]: I0515 14:59:38.819273 2700 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4334.0.0-a-cad88baf47" podStartSLOduration=1.819249255 podStartE2EDuration="1.819249255s" podCreationTimestamp="2025-05-15 14:59:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 14:59:38.806381294 +0000 UTC m=+1.333117105" watchObservedRunningTime="2025-05-15 14:59:38.819249255 +0000 UTC m=+1.345985065" May 15 14:59:38.842324 kubelet[2700]: I0515 14:59:38.840368 2700 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4334.0.0-a-cad88baf47" podStartSLOduration=1.8403382320000001 podStartE2EDuration="1.840338232s" podCreationTimestamp="2025-05-15 14:59:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 
14:59:38.822357245 +0000 UTC m=+1.349093059" watchObservedRunningTime="2025-05-15 14:59:38.840338232 +0000 UTC m=+1.367074054" May 15 14:59:39.127883 systemd-resolved[1404]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.2. May 15 14:59:39.860578 systemd-resolved[1404]: Clock change detected. Flushing caches. May 15 14:59:39.861131 systemd-timesyncd[1436]: Contacted time server 208.67.72.43:123 (2.flatcar.pool.ntp.org). May 15 14:59:39.861219 systemd-timesyncd[1436]: Initial clock synchronization to Thu 2025-05-15 14:59:39.860037 UTC. May 15 14:59:40.404307 kubelet[2700]: E0515 14:59:40.403857 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 14:59:40.407055 kubelet[2700]: E0515 14:59:40.403903 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 14:59:40.880690 sudo[1766]: pam_unix(sudo:session): session closed for user root May 15 14:59:40.885999 sshd[1765]: Connection closed by 139.178.68.195 port 55682 May 15 14:59:40.887303 sshd-session[1763]: pam_unix(sshd:session): session closed for user core May 15 14:59:40.895748 systemd-logind[1497]: Session 7 logged out. Waiting for processes to exit. May 15 14:59:40.896506 systemd[1]: sshd@6-137.184.120.255:22-139.178.68.195:55682.service: Deactivated successfully. May 15 14:59:40.901836 systemd[1]: session-7.scope: Deactivated successfully. May 15 14:59:40.902159 systemd[1]: session-7.scope: Consumed 7.072s CPU time, 225.2M memory peak. May 15 14:59:40.905939 systemd-logind[1497]: Removed session 7. 
May 15 14:59:41.742435 kubelet[2700]: I0515 14:59:41.742394 2700 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 15 14:59:41.743560 containerd[1582]: time="2025-05-15T14:59:41.743380767Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 15 14:59:41.744375 kubelet[2700]: I0515 14:59:41.743772 2700 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 15 14:59:42.409313 systemd[1]: Created slice kubepods-besteffort-pod060a8574_5a7c_476b_a278_c4d32d8768ee.slice - libcontainer container kubepods-besteffort-pod060a8574_5a7c_476b_a278_c4d32d8768ee.slice. May 15 14:59:42.425805 systemd[1]: Created slice kubepods-burstable-podbaa4dd91_1464_443d_a5fc_c7555674eb75.slice - libcontainer container kubepods-burstable-podbaa4dd91_1464_443d_a5fc_c7555674eb75.slice. May 15 14:59:42.452234 kubelet[2700]: I0515 14:59:42.452160 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/060a8574-5a7c-476b-a278-c4d32d8768ee-kube-proxy\") pod \"kube-proxy-9q85z\" (UID: \"060a8574-5a7c-476b-a278-c4d32d8768ee\") " pod="kube-system/kube-proxy-9q85z" May 15 14:59:42.452234 kubelet[2700]: I0515 14:59:42.452241 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4sm5z\" (UniqueName: \"kubernetes.io/projected/060a8574-5a7c-476b-a278-c4d32d8768ee-kube-api-access-4sm5z\") pod \"kube-proxy-9q85z\" (UID: \"060a8574-5a7c-476b-a278-c4d32d8768ee\") " pod="kube-system/kube-proxy-9q85z" May 15 14:59:42.452454 kubelet[2700]: I0515 14:59:42.452274 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/baa4dd91-1464-443d-a5fc-c7555674eb75-cilium-run\") pod \"cilium-kf2fb\" (UID: 
\"baa4dd91-1464-443d-a5fc-c7555674eb75\") " pod="kube-system/cilium-kf2fb" May 15 14:59:42.452454 kubelet[2700]: I0515 14:59:42.452306 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tgrx\" (UniqueName: \"kubernetes.io/projected/baa4dd91-1464-443d-a5fc-c7555674eb75-kube-api-access-9tgrx\") pod \"cilium-kf2fb\" (UID: \"baa4dd91-1464-443d-a5fc-c7555674eb75\") " pod="kube-system/cilium-kf2fb" May 15 14:59:42.452454 kubelet[2700]: I0515 14:59:42.452332 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/baa4dd91-1464-443d-a5fc-c7555674eb75-host-proc-sys-kernel\") pod \"cilium-kf2fb\" (UID: \"baa4dd91-1464-443d-a5fc-c7555674eb75\") " pod="kube-system/cilium-kf2fb" May 15 14:59:42.452454 kubelet[2700]: I0515 14:59:42.452357 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/baa4dd91-1464-443d-a5fc-c7555674eb75-cilium-cgroup\") pod \"cilium-kf2fb\" (UID: \"baa4dd91-1464-443d-a5fc-c7555674eb75\") " pod="kube-system/cilium-kf2fb" May 15 14:59:42.452454 kubelet[2700]: I0515 14:59:42.452378 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/baa4dd91-1464-443d-a5fc-c7555674eb75-etc-cni-netd\") pod \"cilium-kf2fb\" (UID: \"baa4dd91-1464-443d-a5fc-c7555674eb75\") " pod="kube-system/cilium-kf2fb" May 15 14:59:42.452454 kubelet[2700]: I0515 14:59:42.452407 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/baa4dd91-1464-443d-a5fc-c7555674eb75-lib-modules\") pod \"cilium-kf2fb\" (UID: \"baa4dd91-1464-443d-a5fc-c7555674eb75\") " pod="kube-system/cilium-kf2fb" May 15 14:59:42.452617 
kubelet[2700]: I0515 14:59:42.452431 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/baa4dd91-1464-443d-a5fc-c7555674eb75-xtables-lock\") pod \"cilium-kf2fb\" (UID: \"baa4dd91-1464-443d-a5fc-c7555674eb75\") " pod="kube-system/cilium-kf2fb" May 15 14:59:42.452617 kubelet[2700]: I0515 14:59:42.452455 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/baa4dd91-1464-443d-a5fc-c7555674eb75-cilium-config-path\") pod \"cilium-kf2fb\" (UID: \"baa4dd91-1464-443d-a5fc-c7555674eb75\") " pod="kube-system/cilium-kf2fb" May 15 14:59:42.452617 kubelet[2700]: I0515 14:59:42.452479 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/baa4dd91-1464-443d-a5fc-c7555674eb75-cni-path\") pod \"cilium-kf2fb\" (UID: \"baa4dd91-1464-443d-a5fc-c7555674eb75\") " pod="kube-system/cilium-kf2fb" May 15 14:59:42.452617 kubelet[2700]: I0515 14:59:42.452547 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/baa4dd91-1464-443d-a5fc-c7555674eb75-host-proc-sys-net\") pod \"cilium-kf2fb\" (UID: \"baa4dd91-1464-443d-a5fc-c7555674eb75\") " pod="kube-system/cilium-kf2fb" May 15 14:59:42.452617 kubelet[2700]: I0515 14:59:42.452582 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/060a8574-5a7c-476b-a278-c4d32d8768ee-xtables-lock\") pod \"kube-proxy-9q85z\" (UID: \"060a8574-5a7c-476b-a278-c4d32d8768ee\") " pod="kube-system/kube-proxy-9q85z" May 15 14:59:42.453090 kubelet[2700]: I0515 14:59:42.452622 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/baa4dd91-1464-443d-a5fc-c7555674eb75-hubble-tls\") pod \"cilium-kf2fb\" (UID: \"baa4dd91-1464-443d-a5fc-c7555674eb75\") " pod="kube-system/cilium-kf2fb" May 15 14:59:42.453090 kubelet[2700]: I0515 14:59:42.452664 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/060a8574-5a7c-476b-a278-c4d32d8768ee-lib-modules\") pod \"kube-proxy-9q85z\" (UID: \"060a8574-5a7c-476b-a278-c4d32d8768ee\") " pod="kube-system/kube-proxy-9q85z" May 15 14:59:42.453090 kubelet[2700]: I0515 14:59:42.452693 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/baa4dd91-1464-443d-a5fc-c7555674eb75-hostproc\") pod \"cilium-kf2fb\" (UID: \"baa4dd91-1464-443d-a5fc-c7555674eb75\") " pod="kube-system/cilium-kf2fb" May 15 14:59:42.453090 kubelet[2700]: I0515 14:59:42.452733 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/baa4dd91-1464-443d-a5fc-c7555674eb75-bpf-maps\") pod \"cilium-kf2fb\" (UID: \"baa4dd91-1464-443d-a5fc-c7555674eb75\") " pod="kube-system/cilium-kf2fb" May 15 14:59:42.453090 kubelet[2700]: I0515 14:59:42.452763 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/baa4dd91-1464-443d-a5fc-c7555674eb75-clustermesh-secrets\") pod \"cilium-kf2fb\" (UID: \"baa4dd91-1464-443d-a5fc-c7555674eb75\") " pod="kube-system/cilium-kf2fb" May 15 14:59:42.572922 kubelet[2700]: E0515 14:59:42.572636 2700 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found May 15 14:59:42.572922 kubelet[2700]: E0515 14:59:42.572680 2700 projected.go:194] Error preparing data for 
projected volume kube-api-access-9tgrx for pod kube-system/cilium-kf2fb: configmap "kube-root-ca.crt" not found May 15 14:59:42.572922 kubelet[2700]: E0515 14:59:42.572741 2700 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/baa4dd91-1464-443d-a5fc-c7555674eb75-kube-api-access-9tgrx podName:baa4dd91-1464-443d-a5fc-c7555674eb75 nodeName:}" failed. No retries permitted until 2025-05-15 14:59:43.072717373 +0000 UTC m=+4.939678870 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-9tgrx" (UniqueName: "kubernetes.io/projected/baa4dd91-1464-443d-a5fc-c7555674eb75-kube-api-access-9tgrx") pod "cilium-kf2fb" (UID: "baa4dd91-1464-443d-a5fc-c7555674eb75") : configmap "kube-root-ca.crt" not found May 15 14:59:42.591793 kubelet[2700]: E0515 14:59:42.591711 2700 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found May 15 14:59:42.592186 kubelet[2700]: E0515 14:59:42.592031 2700 projected.go:194] Error preparing data for projected volume kube-api-access-4sm5z for pod kube-system/kube-proxy-9q85z: configmap "kube-root-ca.crt" not found May 15 14:59:42.592297 kubelet[2700]: E0515 14:59:42.592268 2700 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/060a8574-5a7c-476b-a278-c4d32d8768ee-kube-api-access-4sm5z podName:060a8574-5a7c-476b-a278-c4d32d8768ee nodeName:}" failed. No retries permitted until 2025-05-15 14:59:43.092134309 +0000 UTC m=+4.959095810 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-4sm5z" (UniqueName: "kubernetes.io/projected/060a8574-5a7c-476b-a278-c4d32d8768ee-kube-api-access-4sm5z") pod "kube-proxy-9q85z" (UID: "060a8574-5a7c-476b-a278-c4d32d8768ee") : configmap "kube-root-ca.crt" not found May 15 14:59:42.868554 systemd[1]: Created slice kubepods-besteffort-pod7d4892b1_6c49_44a8_be2e_ec0d16256122.slice - libcontainer container kubepods-besteffort-pod7d4892b1_6c49_44a8_be2e_ec0d16256122.slice. May 15 14:59:42.957895 kubelet[2700]: I0515 14:59:42.957763 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7d4892b1-6c49-44a8-be2e-ec0d16256122-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-cc8z8\" (UID: \"7d4892b1-6c49-44a8-be2e-ec0d16256122\") " pod="kube-system/cilium-operator-6c4d7847fc-cc8z8" May 15 14:59:42.957895 kubelet[2700]: I0515 14:59:42.957912 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95xb4\" (UniqueName: \"kubernetes.io/projected/7d4892b1-6c49-44a8-be2e-ec0d16256122-kube-api-access-95xb4\") pod \"cilium-operator-6c4d7847fc-cc8z8\" (UID: \"7d4892b1-6c49-44a8-be2e-ec0d16256122\") " pod="kube-system/cilium-operator-6c4d7847fc-cc8z8" May 15 14:59:43.178589 kubelet[2700]: E0515 14:59:43.178355 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 14:59:43.180424 containerd[1582]: time="2025-05-15T14:59:43.180314524Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-cc8z8,Uid:7d4892b1-6c49-44a8-be2e-ec0d16256122,Namespace:kube-system,Attempt:0,}" May 15 14:59:43.211915 containerd[1582]: time="2025-05-15T14:59:43.211779446Z" level=info msg="connecting to shim 
8458e6fa4bf79f7459fce8917293de98d20101ee5c8b43d00e17b22647d39a26" address="unix:///run/containerd/s/de1b41638efcaab9612e5ed64fea40cd4d6efd4c32ee9b3720579a7bbdb99e43" namespace=k8s.io protocol=ttrpc version=3 May 15 14:59:43.253238 systemd[1]: Started cri-containerd-8458e6fa4bf79f7459fce8917293de98d20101ee5c8b43d00e17b22647d39a26.scope - libcontainer container 8458e6fa4bf79f7459fce8917293de98d20101ee5c8b43d00e17b22647d39a26. May 15 14:59:43.319412 kubelet[2700]: E0515 14:59:43.319361 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 14:59:43.322850 containerd[1582]: time="2025-05-15T14:59:43.322505674Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9q85z,Uid:060a8574-5a7c-476b-a278-c4d32d8768ee,Namespace:kube-system,Attempt:0,}" May 15 14:59:43.330453 containerd[1582]: time="2025-05-15T14:59:43.330373340Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-cc8z8,Uid:7d4892b1-6c49-44a8-be2e-ec0d16256122,Namespace:kube-system,Attempt:0,} returns sandbox id \"8458e6fa4bf79f7459fce8917293de98d20101ee5c8b43d00e17b22647d39a26\"" May 15 14:59:43.332191 kubelet[2700]: E0515 14:59:43.332142 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 14:59:43.335352 containerd[1582]: time="2025-05-15T14:59:43.335285008Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kf2fb,Uid:baa4dd91-1464-443d-a5fc-c7555674eb75,Namespace:kube-system,Attempt:0,}" May 15 14:59:43.335904 kubelet[2700]: E0515 14:59:43.335707 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 14:59:43.340188 
containerd[1582]: time="2025-05-15T14:59:43.340124819Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 15 14:59:43.378908 containerd[1582]: time="2025-05-15T14:59:43.378257160Z" level=info msg="connecting to shim c83398912c2303bde3c3d1c4a61d82956598dd338cd9efb7c09e85e8b391cd23" address="unix:///run/containerd/s/90a3666f9e449fe906a93c5490910d4970a108b5e58ac1fcd2ae56390ef42c72" namespace=k8s.io protocol=ttrpc version=3 May 15 14:59:43.386011 containerd[1582]: time="2025-05-15T14:59:43.385949564Z" level=info msg="connecting to shim 21f76990a58a3cc05aa4bd41359b0b486bfe71ba2bf91adcaa4e5398528b82ac" address="unix:///run/containerd/s/e86861fde0eea087073a86d5507b3a6e276a10fcb486958a1fde7a11dfafb2d5" namespace=k8s.io protocol=ttrpc version=3 May 15 14:59:43.425741 systemd[1]: Started cri-containerd-c83398912c2303bde3c3d1c4a61d82956598dd338cd9efb7c09e85e8b391cd23.scope - libcontainer container c83398912c2303bde3c3d1c4a61d82956598dd338cd9efb7c09e85e8b391cd23. May 15 14:59:43.436832 systemd[1]: Started cri-containerd-21f76990a58a3cc05aa4bd41359b0b486bfe71ba2bf91adcaa4e5398528b82ac.scope - libcontainer container 21f76990a58a3cc05aa4bd41359b0b486bfe71ba2bf91adcaa4e5398528b82ac. 
May 15 14:59:43.500623 containerd[1582]: time="2025-05-15T14:59:43.500466988Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9q85z,Uid:060a8574-5a7c-476b-a278-c4d32d8768ee,Namespace:kube-system,Attempt:0,} returns sandbox id \"c83398912c2303bde3c3d1c4a61d82956598dd338cd9efb7c09e85e8b391cd23\"" May 15 14:59:43.502217 kubelet[2700]: E0515 14:59:43.502175 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 14:59:43.508038 containerd[1582]: time="2025-05-15T14:59:43.507958602Z" level=info msg="CreateContainer within sandbox \"c83398912c2303bde3c3d1c4a61d82956598dd338cd9efb7c09e85e8b391cd23\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 15 14:59:43.520715 containerd[1582]: time="2025-05-15T14:59:43.520524565Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kf2fb,Uid:baa4dd91-1464-443d-a5fc-c7555674eb75,Namespace:kube-system,Attempt:0,} returns sandbox id \"21f76990a58a3cc05aa4bd41359b0b486bfe71ba2bf91adcaa4e5398528b82ac\"" May 15 14:59:43.523061 kubelet[2700]: E0515 14:59:43.522929 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 14:59:43.529956 containerd[1582]: time="2025-05-15T14:59:43.529834316Z" level=info msg="Container 70c61efffba1bb13e039f5b7ba879951920f37500b99b9784e61cde5d2ae57d6: CDI devices from CRI Config.CDIDevices: []" May 15 14:59:43.547908 containerd[1582]: time="2025-05-15T14:59:43.547792481Z" level=info msg="CreateContainer within sandbox \"c83398912c2303bde3c3d1c4a61d82956598dd338cd9efb7c09e85e8b391cd23\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"70c61efffba1bb13e039f5b7ba879951920f37500b99b9784e61cde5d2ae57d6\"" May 15 14:59:43.549942 containerd[1582]: 
time="2025-05-15T14:59:43.548981415Z" level=info msg="StartContainer for \"70c61efffba1bb13e039f5b7ba879951920f37500b99b9784e61cde5d2ae57d6\"" May 15 14:59:43.552077 containerd[1582]: time="2025-05-15T14:59:43.552019039Z" level=info msg="connecting to shim 70c61efffba1bb13e039f5b7ba879951920f37500b99b9784e61cde5d2ae57d6" address="unix:///run/containerd/s/90a3666f9e449fe906a93c5490910d4970a108b5e58ac1fcd2ae56390ef42c72" protocol=ttrpc version=3 May 15 14:59:43.599228 systemd[1]: Started cri-containerd-70c61efffba1bb13e039f5b7ba879951920f37500b99b9784e61cde5d2ae57d6.scope - libcontainer container 70c61efffba1bb13e039f5b7ba879951920f37500b99b9784e61cde5d2ae57d6. May 15 14:59:43.670172 containerd[1582]: time="2025-05-15T14:59:43.670042982Z" level=info msg="StartContainer for \"70c61efffba1bb13e039f5b7ba879951920f37500b99b9784e61cde5d2ae57d6\" returns successfully" May 15 14:59:44.375714 kubelet[2700]: E0515 14:59:44.375665 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 14:59:44.423630 kubelet[2700]: E0515 14:59:44.423574 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 14:59:44.424131 kubelet[2700]: E0515 14:59:44.424102 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 14:59:44.479153 kubelet[2700]: I0515 14:59:44.478936 2700 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-9q85z" podStartSLOduration=2.47891016 podStartE2EDuration="2.47891016s" podCreationTimestamp="2025-05-15 14:59:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 
+0000 UTC" observedRunningTime="2025-05-15 14:59:44.445301861 +0000 UTC m=+6.312263373" watchObservedRunningTime="2025-05-15 14:59:44.47891016 +0000 UTC m=+6.345871671" May 15 14:59:44.963834 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount67129183.mount: Deactivated successfully. May 15 14:59:45.425998 kubelet[2700]: E0515 14:59:45.425944 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 14:59:45.895949 containerd[1582]: time="2025-05-15T14:59:45.895653732Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 14:59:45.899089 containerd[1582]: time="2025-05-15T14:59:45.899013000Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" May 15 14:59:45.899410 containerd[1582]: time="2025-05-15T14:59:45.899374139Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 14:59:45.901856 containerd[1582]: time="2025-05-15T14:59:45.901791711Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.561595663s" May 15 14:59:45.901856 containerd[1582]: time="2025-05-15T14:59:45.901931442Z" level=info msg="PullImage 
\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" May 15 14:59:45.906942 containerd[1582]: time="2025-05-15T14:59:45.905196933Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 15 14:59:45.907288 containerd[1582]: time="2025-05-15T14:59:45.906237751Z" level=info msg="CreateContainer within sandbox \"8458e6fa4bf79f7459fce8917293de98d20101ee5c8b43d00e17b22647d39a26\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 15 14:59:45.925924 containerd[1582]: time="2025-05-15T14:59:45.924765268Z" level=info msg="Container 42823f7338002b6337a80dac59b2ee76b092b024550116696cbbc8e64a0ce746: CDI devices from CRI Config.CDIDevices: []" May 15 14:59:45.934150 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount838326613.mount: Deactivated successfully. 
May 15 14:59:45.940684 containerd[1582]: time="2025-05-15T14:59:45.940610659Z" level=info msg="CreateContainer within sandbox \"8458e6fa4bf79f7459fce8917293de98d20101ee5c8b43d00e17b22647d39a26\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"42823f7338002b6337a80dac59b2ee76b092b024550116696cbbc8e64a0ce746\"" May 15 14:59:45.941702 containerd[1582]: time="2025-05-15T14:59:45.941654703Z" level=info msg="StartContainer for \"42823f7338002b6337a80dac59b2ee76b092b024550116696cbbc8e64a0ce746\"" May 15 14:59:45.943095 containerd[1582]: time="2025-05-15T14:59:45.942990862Z" level=info msg="connecting to shim 42823f7338002b6337a80dac59b2ee76b092b024550116696cbbc8e64a0ce746" address="unix:///run/containerd/s/de1b41638efcaab9612e5ed64fea40cd4d6efd4c32ee9b3720579a7bbdb99e43" protocol=ttrpc version=3 May 15 14:59:45.985374 systemd[1]: Started cri-containerd-42823f7338002b6337a80dac59b2ee76b092b024550116696cbbc8e64a0ce746.scope - libcontainer container 42823f7338002b6337a80dac59b2ee76b092b024550116696cbbc8e64a0ce746. 
May 15 14:59:46.045235 containerd[1582]: time="2025-05-15T14:59:46.045125567Z" level=info msg="StartContainer for \"42823f7338002b6337a80dac59b2ee76b092b024550116696cbbc8e64a0ce746\" returns successfully" May 15 14:59:46.113581 kubelet[2700]: E0515 14:59:46.112320 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 14:59:46.393213 kubelet[2700]: E0515 14:59:46.393156 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 14:59:46.436625 kubelet[2700]: E0515 14:59:46.436563 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 14:59:46.438581 kubelet[2700]: E0515 14:59:46.438526 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 14:59:46.439034 kubelet[2700]: E0515 14:59:46.439007 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 14:59:46.599209 kubelet[2700]: I0515 14:59:46.599126 2700 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-cc8z8" podStartSLOduration=2.033794564 podStartE2EDuration="4.599093209s" podCreationTimestamp="2025-05-15 14:59:42 +0000 UTC" firstStartedPulling="2025-05-15 14:59:43.33839025 +0000 UTC m=+5.205351762" lastFinishedPulling="2025-05-15 14:59:45.903688909 +0000 UTC m=+7.770650407" observedRunningTime="2025-05-15 14:59:46.546751477 +0000 UTC m=+8.413712999" 
watchObservedRunningTime="2025-05-15 14:59:46.599093209 +0000 UTC m=+8.466054720" May 15 14:59:47.266926 update_engine[1498]: I20250515 14:59:47.264950 1498 update_attempter.cc:509] Updating boot flags... May 15 14:59:47.510230 kubelet[2700]: E0515 14:59:47.510193 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 14:59:47.512706 kubelet[2700]: E0515 14:59:47.512602 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 14:59:52.255656 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3876540000.mount: Deactivated successfully. May 15 14:59:55.190161 containerd[1582]: time="2025-05-15T14:59:55.190061571Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 14:59:55.191959 containerd[1582]: time="2025-05-15T14:59:55.191868319Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" May 15 14:59:55.193834 containerd[1582]: time="2025-05-15T14:59:55.193738639Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 14:59:55.195755 containerd[1582]: time="2025-05-15T14:59:55.195314530Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest 
\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 9.288292589s" May 15 14:59:55.195755 containerd[1582]: time="2025-05-15T14:59:55.195422848Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" May 15 14:59:55.199987 containerd[1582]: time="2025-05-15T14:59:55.199914110Z" level=info msg="CreateContainer within sandbox \"21f76990a58a3cc05aa4bd41359b0b486bfe71ba2bf91adcaa4e5398528b82ac\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 15 14:59:55.221935 containerd[1582]: time="2025-05-15T14:59:55.221150880Z" level=info msg="Container d7983d43b2590a9337c8ea15d85cee7c9cff3826c6eaaf2305e4f709ff42808d: CDI devices from CRI Config.CDIDevices: []" May 15 14:59:55.226683 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1441445072.mount: Deactivated successfully. 
May 15 14:59:55.234796 containerd[1582]: time="2025-05-15T14:59:55.234702066Z" level=info msg="CreateContainer within sandbox \"21f76990a58a3cc05aa4bd41359b0b486bfe71ba2bf91adcaa4e5398528b82ac\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d7983d43b2590a9337c8ea15d85cee7c9cff3826c6eaaf2305e4f709ff42808d\"" May 15 14:59:55.236053 containerd[1582]: time="2025-05-15T14:59:55.236015708Z" level=info msg="StartContainer for \"d7983d43b2590a9337c8ea15d85cee7c9cff3826c6eaaf2305e4f709ff42808d\"" May 15 14:59:55.237550 containerd[1582]: time="2025-05-15T14:59:55.237501930Z" level=info msg="connecting to shim d7983d43b2590a9337c8ea15d85cee7c9cff3826c6eaaf2305e4f709ff42808d" address="unix:///run/containerd/s/e86861fde0eea087073a86d5507b3a6e276a10fcb486958a1fde7a11dfafb2d5" protocol=ttrpc version=3 May 15 14:59:55.299432 systemd[1]: Started cri-containerd-d7983d43b2590a9337c8ea15d85cee7c9cff3826c6eaaf2305e4f709ff42808d.scope - libcontainer container d7983d43b2590a9337c8ea15d85cee7c9cff3826c6eaaf2305e4f709ff42808d. May 15 14:59:55.355314 containerd[1582]: time="2025-05-15T14:59:55.355247546Z" level=info msg="StartContainer for \"d7983d43b2590a9337c8ea15d85cee7c9cff3826c6eaaf2305e4f709ff42808d\" returns successfully" May 15 14:59:55.373602 systemd[1]: cri-containerd-d7983d43b2590a9337c8ea15d85cee7c9cff3826c6eaaf2305e4f709ff42808d.scope: Deactivated successfully. 
May 15 14:59:55.464740 containerd[1582]: time="2025-05-15T14:59:55.449327002Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d7983d43b2590a9337c8ea15d85cee7c9cff3826c6eaaf2305e4f709ff42808d\" id:\"d7983d43b2590a9337c8ea15d85cee7c9cff3826c6eaaf2305e4f709ff42808d\" pid:3172 exited_at:{seconds:1747321195 nanos:375291346}" May 15 14:59:55.476207 containerd[1582]: time="2025-05-15T14:59:55.476058308Z" level=info msg="received exit event container_id:\"d7983d43b2590a9337c8ea15d85cee7c9cff3826c6eaaf2305e4f709ff42808d\" id:\"d7983d43b2590a9337c8ea15d85cee7c9cff3826c6eaaf2305e4f709ff42808d\" pid:3172 exited_at:{seconds:1747321195 nanos:375291346}" May 15 14:59:55.516010 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d7983d43b2590a9337c8ea15d85cee7c9cff3826c6eaaf2305e4f709ff42808d-rootfs.mount: Deactivated successfully. May 15 14:59:55.565900 kubelet[2700]: E0515 14:59:55.565828 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 14:59:56.571275 kubelet[2700]: E0515 14:59:56.571188 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 14:59:56.593772 containerd[1582]: time="2025-05-15T14:59:56.593634939Z" level=info msg="CreateContainer within sandbox \"21f76990a58a3cc05aa4bd41359b0b486bfe71ba2bf91adcaa4e5398528b82ac\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 15 14:59:56.624047 containerd[1582]: time="2025-05-15T14:59:56.623949771Z" level=info msg="Container dd73e737f302049ff11623a828e49e8fcc97139aa74c039d1c96c2eb20d71c7e: CDI devices from CRI Config.CDIDevices: []" May 15 14:59:56.630844 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1239260012.mount: Deactivated successfully. 
May 15 14:59:56.646041 containerd[1582]: time="2025-05-15T14:59:56.645949493Z" level=info msg="CreateContainer within sandbox \"21f76990a58a3cc05aa4bd41359b0b486bfe71ba2bf91adcaa4e5398528b82ac\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"dd73e737f302049ff11623a828e49e8fcc97139aa74c039d1c96c2eb20d71c7e\"" May 15 14:59:56.647341 containerd[1582]: time="2025-05-15T14:59:56.647202885Z" level=info msg="StartContainer for \"dd73e737f302049ff11623a828e49e8fcc97139aa74c039d1c96c2eb20d71c7e\"" May 15 14:59:56.649960 containerd[1582]: time="2025-05-15T14:59:56.649865792Z" level=info msg="connecting to shim dd73e737f302049ff11623a828e49e8fcc97139aa74c039d1c96c2eb20d71c7e" address="unix:///run/containerd/s/e86861fde0eea087073a86d5507b3a6e276a10fcb486958a1fde7a11dfafb2d5" protocol=ttrpc version=3 May 15 14:59:56.694427 systemd[1]: Started cri-containerd-dd73e737f302049ff11623a828e49e8fcc97139aa74c039d1c96c2eb20d71c7e.scope - libcontainer container dd73e737f302049ff11623a828e49e8fcc97139aa74c039d1c96c2eb20d71c7e. May 15 14:59:56.756022 containerd[1582]: time="2025-05-15T14:59:56.755653335Z" level=info msg="StartContainer for \"dd73e737f302049ff11623a828e49e8fcc97139aa74c039d1c96c2eb20d71c7e\" returns successfully" May 15 14:59:56.781755 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 15 14:59:56.782749 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 15 14:59:56.783366 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... May 15 14:59:56.786338 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 15 14:59:56.790586 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 15 14:59:56.794103 systemd[1]: cri-containerd-dd73e737f302049ff11623a828e49e8fcc97139aa74c039d1c96c2eb20d71c7e.scope: Deactivated successfully. 
May 15 14:59:56.797798 containerd[1582]: time="2025-05-15T14:59:56.797576606Z" level=info msg="received exit event container_id:\"dd73e737f302049ff11623a828e49e8fcc97139aa74c039d1c96c2eb20d71c7e\" id:\"dd73e737f302049ff11623a828e49e8fcc97139aa74c039d1c96c2eb20d71c7e\" pid:3215 exited_at:{seconds:1747321196 nanos:796588590}" May 15 14:59:56.798341 containerd[1582]: time="2025-05-15T14:59:56.798293998Z" level=info msg="TaskExit event in podsandbox handler container_id:\"dd73e737f302049ff11623a828e49e8fcc97139aa74c039d1c96c2eb20d71c7e\" id:\"dd73e737f302049ff11623a828e49e8fcc97139aa74c039d1c96c2eb20d71c7e\" pid:3215 exited_at:{seconds:1747321196 nanos:796588590}" May 15 14:59:56.850523 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 15 14:59:57.576527 kubelet[2700]: E0515 14:59:57.576404 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 14:59:57.585684 containerd[1582]: time="2025-05-15T14:59:57.585635195Z" level=info msg="CreateContainer within sandbox \"21f76990a58a3cc05aa4bd41359b0b486bfe71ba2bf91adcaa4e5398528b82ac\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 15 14:59:57.624044 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dd73e737f302049ff11623a828e49e8fcc97139aa74c039d1c96c2eb20d71c7e-rootfs.mount: Deactivated successfully. 
May 15 14:59:57.659775 containerd[1582]: time="2025-05-15T14:59:57.657127969Z" level=info msg="Container 75b90c71182f7103d87f8a3ff7bb115cd1934951379629027c744485bfbdf676: CDI devices from CRI Config.CDIDevices: []" May 15 14:59:57.696805 containerd[1582]: time="2025-05-15T14:59:57.696739306Z" level=info msg="CreateContainer within sandbox \"21f76990a58a3cc05aa4bd41359b0b486bfe71ba2bf91adcaa4e5398528b82ac\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"75b90c71182f7103d87f8a3ff7bb115cd1934951379629027c744485bfbdf676\"" May 15 14:59:57.701118 containerd[1582]: time="2025-05-15T14:59:57.701048114Z" level=info msg="StartContainer for \"75b90c71182f7103d87f8a3ff7bb115cd1934951379629027c744485bfbdf676\"" May 15 14:59:57.705489 containerd[1582]: time="2025-05-15T14:59:57.705423974Z" level=info msg="connecting to shim 75b90c71182f7103d87f8a3ff7bb115cd1934951379629027c744485bfbdf676" address="unix:///run/containerd/s/e86861fde0eea087073a86d5507b3a6e276a10fcb486958a1fde7a11dfafb2d5" protocol=ttrpc version=3 May 15 14:59:57.760625 systemd[1]: Started cri-containerd-75b90c71182f7103d87f8a3ff7bb115cd1934951379629027c744485bfbdf676.scope - libcontainer container 75b90c71182f7103d87f8a3ff7bb115cd1934951379629027c744485bfbdf676. May 15 14:59:57.816119 containerd[1582]: time="2025-05-15T14:59:57.816029703Z" level=info msg="StartContainer for \"75b90c71182f7103d87f8a3ff7bb115cd1934951379629027c744485bfbdf676\" returns successfully" May 15 14:59:57.818727 systemd[1]: cri-containerd-75b90c71182f7103d87f8a3ff7bb115cd1934951379629027c744485bfbdf676.scope: Deactivated successfully. May 15 14:59:57.819202 systemd[1]: cri-containerd-75b90c71182f7103d87f8a3ff7bb115cd1934951379629027c744485bfbdf676.scope: Consumed 28ms CPU time, 5.8M memory peak, 1M read from disk. 
May 15 14:59:57.822137 containerd[1582]: time="2025-05-15T14:59:57.822050229Z" level=info msg="received exit event container_id:\"75b90c71182f7103d87f8a3ff7bb115cd1934951379629027c744485bfbdf676\" id:\"75b90c71182f7103d87f8a3ff7bb115cd1934951379629027c744485bfbdf676\" pid:3262 exited_at:{seconds:1747321197 nanos:821499052}" May 15 14:59:57.823798 containerd[1582]: time="2025-05-15T14:59:57.823567599Z" level=info msg="TaskExit event in podsandbox handler container_id:\"75b90c71182f7103d87f8a3ff7bb115cd1934951379629027c744485bfbdf676\" id:\"75b90c71182f7103d87f8a3ff7bb115cd1934951379629027c744485bfbdf676\" pid:3262 exited_at:{seconds:1747321197 nanos:821499052}" May 15 14:59:57.856949 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-75b90c71182f7103d87f8a3ff7bb115cd1934951379629027c744485bfbdf676-rootfs.mount: Deactivated successfully. May 15 14:59:58.585507 kubelet[2700]: E0515 14:59:58.585367 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 14:59:58.594980 containerd[1582]: time="2025-05-15T14:59:58.593792279Z" level=info msg="CreateContainer within sandbox \"21f76990a58a3cc05aa4bd41359b0b486bfe71ba2bf91adcaa4e5398528b82ac\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 15 14:59:58.609095 containerd[1582]: time="2025-05-15T14:59:58.609040620Z" level=info msg="Container eaba067f7b1382786db94346ce6752029016429c3267d93bfc369b04950bcc7c: CDI devices from CRI Config.CDIDevices: []" May 15 14:59:58.623602 containerd[1582]: time="2025-05-15T14:59:58.622447994Z" level=info msg="CreateContainer within sandbox \"21f76990a58a3cc05aa4bd41359b0b486bfe71ba2bf91adcaa4e5398528b82ac\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"eaba067f7b1382786db94346ce6752029016429c3267d93bfc369b04950bcc7c\"" May 15 14:59:58.627280 containerd[1582]: 
time="2025-05-15T14:59:58.626260460Z" level=info msg="StartContainer for \"eaba067f7b1382786db94346ce6752029016429c3267d93bfc369b04950bcc7c\"" May 15 14:59:58.628550 containerd[1582]: time="2025-05-15T14:59:58.628496130Z" level=info msg="connecting to shim eaba067f7b1382786db94346ce6752029016429c3267d93bfc369b04950bcc7c" address="unix:///run/containerd/s/e86861fde0eea087073a86d5507b3a6e276a10fcb486958a1fde7a11dfafb2d5" protocol=ttrpc version=3 May 15 14:59:58.661609 systemd[1]: Started cri-containerd-eaba067f7b1382786db94346ce6752029016429c3267d93bfc369b04950bcc7c.scope - libcontainer container eaba067f7b1382786db94346ce6752029016429c3267d93bfc369b04950bcc7c. May 15 14:59:58.702595 systemd[1]: cri-containerd-eaba067f7b1382786db94346ce6752029016429c3267d93bfc369b04950bcc7c.scope: Deactivated successfully. May 15 14:59:58.705151 containerd[1582]: time="2025-05-15T14:59:58.705055379Z" level=info msg="TaskExit event in podsandbox handler container_id:\"eaba067f7b1382786db94346ce6752029016429c3267d93bfc369b04950bcc7c\" id:\"eaba067f7b1382786db94346ce6752029016429c3267d93bfc369b04950bcc7c\" pid:3302 exited_at:{seconds:1747321198 nanos:704177166}" May 15 14:59:58.706243 containerd[1582]: time="2025-05-15T14:59:58.706026672Z" level=info msg="received exit event container_id:\"eaba067f7b1382786db94346ce6752029016429c3267d93bfc369b04950bcc7c\" id:\"eaba067f7b1382786db94346ce6752029016429c3267d93bfc369b04950bcc7c\" pid:3302 exited_at:{seconds:1747321198 nanos:704177166}" May 15 14:59:58.717271 containerd[1582]: time="2025-05-15T14:59:58.717214972Z" level=info msg="StartContainer for \"eaba067f7b1382786db94346ce6752029016429c3267d93bfc369b04950bcc7c\" returns successfully" May 15 14:59:58.738357 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-eaba067f7b1382786db94346ce6752029016429c3267d93bfc369b04950bcc7c-rootfs.mount: Deactivated successfully. 
May 15 14:59:58.837294 kubelet[2700]: I0515 14:59:58.837071 2700 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 15 14:59:58.838942 kubelet[2700]: I0515 14:59:58.838647 2700 container_gc.go:86] "Attempting to delete unused containers" May 15 14:59:58.843527 kubelet[2700]: I0515 14:59:58.843480 2700 image_gc_manager.go:431] "Attempting to delete unused images" May 15 14:59:58.863905 kubelet[2700]: I0515 14:59:58.863149 2700 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 15 14:59:58.863905 kubelet[2700]: I0515 14:59:58.863326 2700 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-kf2fb","kube-system/cilium-operator-6c4d7847fc-cc8z8","kube-system/kube-controller-manager-ci-4334.0.0-a-cad88baf47","kube-system/kube-proxy-9q85z","kube-system/kube-apiserver-ci-4334.0.0-a-cad88baf47","kube-system/kube-scheduler-ci-4334.0.0-a-cad88baf47"] May 15 14:59:58.863905 kubelet[2700]: E0515 14:59:58.863416 2700 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-kf2fb" May 15 14:59:58.863905 kubelet[2700]: E0515 14:59:58.863435 2700 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-cc8z8" May 15 14:59:58.863905 kubelet[2700]: E0515 14:59:58.863479 2700 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-ci-4334.0.0-a-cad88baf47" May 15 14:59:58.863905 kubelet[2700]: E0515 14:59:58.863496 2700 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-9q85z" May 15 14:59:58.863905 kubelet[2700]: E0515 14:59:58.863518 2700 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-ci-4334.0.0-a-cad88baf47" May 15 14:59:58.863905 kubelet[2700]: E0515 14:59:58.863550 
2700 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-ci-4334.0.0-a-cad88baf47" May 15 14:59:58.863905 kubelet[2700]: I0515 14:59:58.863568 2700 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" May 15 14:59:59.596552 kubelet[2700]: E0515 14:59:59.594868 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 14:59:59.602100 containerd[1582]: time="2025-05-15T14:59:59.602030691Z" level=info msg="CreateContainer within sandbox \"21f76990a58a3cc05aa4bd41359b0b486bfe71ba2bf91adcaa4e5398528b82ac\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 15 14:59:59.653241 containerd[1582]: time="2025-05-15T14:59:59.653140998Z" level=info msg="Container bd0466d19cba1deda7a97b2a2156051f2f59172bb1864ee18e85817ee39f7e1a: CDI devices from CRI Config.CDIDevices: []" May 15 14:59:59.653666 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3099031456.mount: Deactivated successfully. 
May 15 14:59:59.665664 containerd[1582]: time="2025-05-15T14:59:59.665475610Z" level=info msg="CreateContainer within sandbox \"21f76990a58a3cc05aa4bd41359b0b486bfe71ba2bf91adcaa4e5398528b82ac\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"bd0466d19cba1deda7a97b2a2156051f2f59172bb1864ee18e85817ee39f7e1a\"" May 15 14:59:59.666729 containerd[1582]: time="2025-05-15T14:59:59.666687684Z" level=info msg="StartContainer for \"bd0466d19cba1deda7a97b2a2156051f2f59172bb1864ee18e85817ee39f7e1a\"" May 15 14:59:59.673129 containerd[1582]: time="2025-05-15T14:59:59.673065543Z" level=info msg="connecting to shim bd0466d19cba1deda7a97b2a2156051f2f59172bb1864ee18e85817ee39f7e1a" address="unix:///run/containerd/s/e86861fde0eea087073a86d5507b3a6e276a10fcb486958a1fde7a11dfafb2d5" protocol=ttrpc version=3 May 15 14:59:59.707324 systemd[1]: Started cri-containerd-bd0466d19cba1deda7a97b2a2156051f2f59172bb1864ee18e85817ee39f7e1a.scope - libcontainer container bd0466d19cba1deda7a97b2a2156051f2f59172bb1864ee18e85817ee39f7e1a. 
May 15 14:59:59.774535 containerd[1582]: time="2025-05-15T14:59:59.774441517Z" level=info msg="StartContainer for \"bd0466d19cba1deda7a97b2a2156051f2f59172bb1864ee18e85817ee39f7e1a\" returns successfully" May 15 14:59:59.891829 containerd[1582]: time="2025-05-15T14:59:59.891658891Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bd0466d19cba1deda7a97b2a2156051f2f59172bb1864ee18e85817ee39f7e1a\" id:\"d92f77e42a44968d9c293e30938b1678ec0e4501c037278f69ac7afb6b973970\" pid:3372 exited_at:{seconds:1747321199 nanos:891232514}" May 15 14:59:59.963964 kubelet[2700]: I0515 14:59:59.963926 2700 kubelet_node_status.go:502] "Fast updating node status as it just became ready" May 15 15:00:00.634960 kubelet[2700]: E0515 15:00:00.633751 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:00:01.647579 kubelet[2700]: E0515 15:00:01.647479 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:00:02.680854 kubelet[2700]: E0515 15:00:02.675802 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:00:02.770391 systemd-networkd[1458]: cilium_host: Link UP May 15 15:00:02.770684 systemd-networkd[1458]: cilium_net: Link UP May 15 15:00:02.770868 systemd-networkd[1458]: cilium_net: Gained carrier May 15 15:00:02.771050 systemd-networkd[1458]: cilium_host: Gained carrier May 15 15:00:03.272514 systemd-networkd[1458]: cilium_vxlan: Link UP May 15 15:00:03.272524 systemd-networkd[1458]: cilium_vxlan: Gained carrier May 15 15:00:03.280125 systemd-networkd[1458]: cilium_net: Gained IPv6LL May 15 15:00:03.600244 systemd-networkd[1458]: 
cilium_host: Gained IPv6LL May 15 15:00:04.219925 kernel: NET: Registered PF_ALG protocol family May 15 15:00:05.137819 systemd-networkd[1458]: cilium_vxlan: Gained IPv6LL May 15 15:00:06.311710 systemd-networkd[1458]: lxc_health: Link UP May 15 15:00:06.319448 systemd-networkd[1458]: lxc_health: Gained carrier May 15 15:00:07.336364 kubelet[2700]: E0515 15:00:07.336065 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:00:07.400563 kubelet[2700]: I0515 15:00:07.399899 2700 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-kf2fb" podStartSLOduration=13.726959507 podStartE2EDuration="25.399850235s" podCreationTimestamp="2025-05-15 14:59:42 +0000 UTC" firstStartedPulling="2025-05-15 14:59:43.52382636 +0000 UTC m=+5.390787862" lastFinishedPulling="2025-05-15 14:59:55.196717088 +0000 UTC m=+17.063678590" observedRunningTime="2025-05-15 15:00:00.711906411 +0000 UTC m=+22.578867931" watchObservedRunningTime="2025-05-15 15:00:07.399850235 +0000 UTC m=+29.266811752" May 15 15:00:07.567190 systemd-networkd[1458]: lxc_health: Gained IPv6LL May 15 15:00:08.926732 kubelet[2700]: I0515 15:00:08.924057 2700 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 15 15:00:08.926732 kubelet[2700]: I0515 15:00:08.924148 2700 container_gc.go:86] "Attempting to delete unused containers" May 15 15:00:08.931127 kubelet[2700]: I0515 15:00:08.930219 2700 image_gc_manager.go:431] "Attempting to delete unused images" May 15 15:00:08.962316 kubelet[2700]: I0515 15:00:08.962268 2700 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 15 15:00:08.963077 kubelet[2700]: I0515 15:00:08.962868 2700 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" 
pods=["kube-system/cilium-operator-6c4d7847fc-cc8z8","kube-system/cilium-kf2fb","kube-system/kube-controller-manager-ci-4334.0.0-a-cad88baf47","kube-system/kube-proxy-9q85z","kube-system/kube-apiserver-ci-4334.0.0-a-cad88baf47","kube-system/kube-scheduler-ci-4334.0.0-a-cad88baf47"] May 15 15:00:08.963421 kubelet[2700]: E0515 15:00:08.963294 2700 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-cc8z8" May 15 15:00:08.963421 kubelet[2700]: E0515 15:00:08.963350 2700 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-kf2fb" May 15 15:00:08.963421 kubelet[2700]: E0515 15:00:08.963368 2700 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-ci-4334.0.0-a-cad88baf47" May 15 15:00:08.963421 kubelet[2700]: E0515 15:00:08.963383 2700 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-9q85z" May 15 15:00:08.963421 kubelet[2700]: E0515 15:00:08.963400 2700 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-ci-4334.0.0-a-cad88baf47" May 15 15:00:08.963769 kubelet[2700]: E0515 15:00:08.963702 2700 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-ci-4334.0.0-a-cad88baf47" May 15 15:00:08.963769 kubelet[2700]: I0515 15:00:08.963730 2700 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" May 15 15:00:11.784909 kubelet[2700]: I0515 15:00:11.784530 2700 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 15 15:00:11.787512 kubelet[2700]: E0515 15:00:11.786963 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:00:12.767458 kubelet[2700]: E0515 
15:00:12.767216 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:00:18.989782 kubelet[2700]: I0515 15:00:18.989685 2700 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 15 15:00:18.990929 kubelet[2700]: I0515 15:00:18.990329 2700 container_gc.go:86] "Attempting to delete unused containers" May 15 15:00:18.996708 kubelet[2700]: I0515 15:00:18.996656 2700 image_gc_manager.go:431] "Attempting to delete unused images" May 15 15:00:19.034544 kubelet[2700]: I0515 15:00:19.034495 2700 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 15 15:00:19.035044 kubelet[2700]: I0515 15:00:19.035000 2700 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-cc8z8","kube-system/cilium-kf2fb","kube-system/kube-controller-manager-ci-4334.0.0-a-cad88baf47","kube-system/kube-proxy-9q85z","kube-system/kube-apiserver-ci-4334.0.0-a-cad88baf47","kube-system/kube-scheduler-ci-4334.0.0-a-cad88baf47"] May 15 15:00:19.035366 kubelet[2700]: E0515 15:00:19.035305 2700 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-cc8z8" May 15 15:00:19.035366 kubelet[2700]: E0515 15:00:19.035337 2700 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-kf2fb" May 15 15:00:19.035602 kubelet[2700]: E0515 15:00:19.035530 2700 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-ci-4334.0.0-a-cad88baf47" May 15 15:00:19.035602 kubelet[2700]: E0515 15:00:19.035567 2700 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-9q85z" May 15 15:00:19.035814 kubelet[2700]: E0515 
15:00:19.035691 2700 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-ci-4334.0.0-a-cad88baf47" May 15 15:00:19.035814 kubelet[2700]: E0515 15:00:19.035711 2700 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-ci-4334.0.0-a-cad88baf47" May 15 15:00:19.035814 kubelet[2700]: I0515 15:00:19.035732 2700 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" May 15 15:00:29.056648 kubelet[2700]: I0515 15:00:29.056563 2700 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 15 15:00:29.056648 kubelet[2700]: I0515 15:00:29.056620 2700 container_gc.go:86] "Attempting to delete unused containers" May 15 15:00:29.063553 kubelet[2700]: I0515 15:00:29.063514 2700 image_gc_manager.go:431] "Attempting to delete unused images" May 15 15:00:29.101916 kubelet[2700]: I0515 15:00:29.101595 2700 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 15 15:00:29.101916 kubelet[2700]: I0515 15:00:29.101768 2700 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-cc8z8","kube-system/cilium-kf2fb","kube-system/kube-controller-manager-ci-4334.0.0-a-cad88baf47","kube-system/kube-proxy-9q85z","kube-system/kube-apiserver-ci-4334.0.0-a-cad88baf47","kube-system/kube-scheduler-ci-4334.0.0-a-cad88baf47"] May 15 15:00:29.101916 kubelet[2700]: E0515 15:00:29.101832 2700 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-cc8z8" May 15 15:00:29.101916 kubelet[2700]: E0515 15:00:29.101852 2700 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-kf2fb" May 15 15:00:29.102402 kubelet[2700]: E0515 15:00:29.101867 2700 eviction_manager.go:609] "Eviction manager: cannot evict a critical 
pod" pod="kube-system/kube-controller-manager-ci-4334.0.0-a-cad88baf47" May 15 15:00:29.102402 kubelet[2700]: E0515 15:00:29.102321 2700 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-9q85z" May 15 15:00:29.102402 kubelet[2700]: E0515 15:00:29.102344 2700 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-ci-4334.0.0-a-cad88baf47" May 15 15:00:29.102402 kubelet[2700]: E0515 15:00:29.102360 2700 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-ci-4334.0.0-a-cad88baf47" May 15 15:00:29.102402 kubelet[2700]: I0515 15:00:29.102381 2700 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" May 15 15:00:30.461589 systemd[1]: Started sshd@7-137.184.120.255:22-139.178.68.195:45788.service - OpenSSH per-connection server daemon (139.178.68.195:45788). May 15 15:00:30.594488 sshd[3818]: Accepted publickey for core from 139.178.68.195 port 45788 ssh2: RSA SHA256:CR2QFGI8Wi38j7m0fVendNlhmaPvJh+gYMXcH5yQYrY May 15 15:00:30.597814 sshd-session[3818]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 15:00:30.611041 systemd-logind[1497]: New session 8 of user core. May 15 15:00:30.618426 systemd[1]: Started session-8.scope - Session 8 of User core. May 15 15:00:31.377973 sshd[3820]: Connection closed by 139.178.68.195 port 45788 May 15 15:00:31.379020 sshd-session[3818]: pam_unix(sshd:session): session closed for user core May 15 15:00:31.387647 systemd[1]: sshd@7-137.184.120.255:22-139.178.68.195:45788.service: Deactivated successfully. May 15 15:00:31.392354 systemd[1]: session-8.scope: Deactivated successfully. May 15 15:00:31.394386 systemd-logind[1497]: Session 8 logged out. Waiting for processes to exit. May 15 15:00:31.397623 systemd-logind[1497]: Removed session 8. 
May 15 15:00:36.400367 systemd[1]: Started sshd@8-137.184.120.255:22-139.178.68.195:55876.service - OpenSSH per-connection server daemon (139.178.68.195:55876). May 15 15:00:36.484344 sshd[3834]: Accepted publickey for core from 139.178.68.195 port 55876 ssh2: RSA SHA256:CR2QFGI8Wi38j7m0fVendNlhmaPvJh+gYMXcH5yQYrY May 15 15:00:36.486100 sshd-session[3834]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 15:00:36.494005 systemd-logind[1497]: New session 9 of user core. May 15 15:00:36.501461 systemd[1]: Started session-9.scope - Session 9 of User core. May 15 15:00:36.652651 sshd[3836]: Connection closed by 139.178.68.195 port 55876 May 15 15:00:36.653901 sshd-session[3834]: pam_unix(sshd:session): session closed for user core May 15 15:00:36.660156 systemd[1]: sshd@8-137.184.120.255:22-139.178.68.195:55876.service: Deactivated successfully. May 15 15:00:36.663660 systemd[1]: session-9.scope: Deactivated successfully. May 15 15:00:36.668187 systemd-logind[1497]: Session 9 logged out. Waiting for processes to exit. May 15 15:00:36.670239 systemd-logind[1497]: Removed session 9. 
May 15 15:00:39.125295 kubelet[2700]: I0515 15:00:39.124971 2700 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 15 15:00:39.125295 kubelet[2700]: I0515 15:00:39.125085 2700 container_gc.go:86] "Attempting to delete unused containers" May 15 15:00:39.131254 kubelet[2700]: I0515 15:00:39.131179 2700 image_gc_manager.go:431] "Attempting to delete unused images" May 15 15:00:39.158292 kubelet[2700]: I0515 15:00:39.158229 2700 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 15 15:00:39.158508 kubelet[2700]: I0515 15:00:39.158460 2700 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-cc8z8","kube-system/cilium-kf2fb","kube-system/kube-controller-manager-ci-4334.0.0-a-cad88baf47","kube-system/kube-proxy-9q85z","kube-system/kube-apiserver-ci-4334.0.0-a-cad88baf47","kube-system/kube-scheduler-ci-4334.0.0-a-cad88baf47"] May 15 15:00:39.158610 kubelet[2700]: E0515 15:00:39.158514 2700 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-cc8z8" May 15 15:00:39.158610 kubelet[2700]: E0515 15:00:39.158532 2700 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-kf2fb" May 15 15:00:39.158610 kubelet[2700]: E0515 15:00:39.158548 2700 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-ci-4334.0.0-a-cad88baf47" May 15 15:00:39.158610 kubelet[2700]: E0515 15:00:39.158562 2700 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-9q85z" May 15 15:00:39.158610 kubelet[2700]: E0515 15:00:39.158577 2700 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-ci-4334.0.0-a-cad88baf47" May 15 15:00:39.158610 kubelet[2700]: E0515 15:00:39.158588 
2700 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-ci-4334.0.0-a-cad88baf47" May 15 15:00:39.158610 kubelet[2700]: I0515 15:00:39.158601 2700 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" May 15 15:00:41.674315 systemd[1]: Started sshd@9-137.184.120.255:22-139.178.68.195:55892.service - OpenSSH per-connection server daemon (139.178.68.195:55892). May 15 15:00:41.746318 sshd[3851]: Accepted publickey for core from 139.178.68.195 port 55892 ssh2: RSA SHA256:CR2QFGI8Wi38j7m0fVendNlhmaPvJh+gYMXcH5yQYrY May 15 15:00:41.748990 sshd-session[3851]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 15:00:41.756258 systemd-logind[1497]: New session 10 of user core. May 15 15:00:41.763144 systemd[1]: Started session-10.scope - Session 10 of User core. May 15 15:00:41.936224 sshd[3853]: Connection closed by 139.178.68.195 port 55892 May 15 15:00:41.937017 sshd-session[3851]: pam_unix(sshd:session): session closed for user core May 15 15:00:41.943350 systemd[1]: sshd@9-137.184.120.255:22-139.178.68.195:55892.service: Deactivated successfully. May 15 15:00:41.946960 systemd[1]: session-10.scope: Deactivated successfully. May 15 15:00:41.948316 systemd-logind[1497]: Session 10 logged out. Waiting for processes to exit. May 15 15:00:41.951510 systemd-logind[1497]: Removed session 10. May 15 15:00:46.333614 kubelet[2700]: E0515 15:00:46.333554 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:00:46.958624 systemd[1]: Started sshd@10-137.184.120.255:22-139.178.68.195:59888.service - OpenSSH per-connection server daemon (139.178.68.195:59888). 
May 15 15:00:47.038108 sshd[3868]: Accepted publickey for core from 139.178.68.195 port 59888 ssh2: RSA SHA256:CR2QFGI8Wi38j7m0fVendNlhmaPvJh+gYMXcH5yQYrY May 15 15:00:47.040725 sshd-session[3868]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 15:00:47.049300 systemd-logind[1497]: New session 11 of user core. May 15 15:00:47.055262 systemd[1]: Started session-11.scope - Session 11 of User core. May 15 15:00:47.213725 sshd[3870]: Connection closed by 139.178.68.195 port 59888 May 15 15:00:47.214595 sshd-session[3868]: pam_unix(sshd:session): session closed for user core May 15 15:00:47.232113 systemd[1]: sshd@10-137.184.120.255:22-139.178.68.195:59888.service: Deactivated successfully. May 15 15:00:47.234944 systemd[1]: session-11.scope: Deactivated successfully. May 15 15:00:47.236465 systemd-logind[1497]: Session 11 logged out. Waiting for processes to exit. May 15 15:00:47.241867 systemd[1]: Started sshd@11-137.184.120.255:22-139.178.68.195:59900.service - OpenSSH per-connection server daemon (139.178.68.195:59900). May 15 15:00:47.243716 systemd-logind[1497]: Removed session 11. May 15 15:00:47.306798 sshd[3883]: Accepted publickey for core from 139.178.68.195 port 59900 ssh2: RSA SHA256:CR2QFGI8Wi38j7m0fVendNlhmaPvJh+gYMXcH5yQYrY May 15 15:00:47.309018 sshd-session[3883]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 15:00:47.317581 systemd-logind[1497]: New session 12 of user core. May 15 15:00:47.327275 systemd[1]: Started session-12.scope - Session 12 of User core. May 15 15:00:47.544322 sshd[3885]: Connection closed by 139.178.68.195 port 59900 May 15 15:00:47.545093 sshd-session[3883]: pam_unix(sshd:session): session closed for user core May 15 15:00:47.562662 systemd[1]: sshd@11-137.184.120.255:22-139.178.68.195:59900.service: Deactivated successfully. May 15 15:00:47.569965 systemd[1]: session-12.scope: Deactivated successfully. 
May 15 15:00:47.574077 systemd-logind[1497]: Session 12 logged out. Waiting for processes to exit. May 15 15:00:47.581895 systemd[1]: Started sshd@12-137.184.120.255:22-139.178.68.195:59916.service - OpenSSH per-connection server daemon (139.178.68.195:59916). May 15 15:00:47.583526 systemd-logind[1497]: Removed session 12. May 15 15:00:47.677972 sshd[3895]: Accepted publickey for core from 139.178.68.195 port 59916 ssh2: RSA SHA256:CR2QFGI8Wi38j7m0fVendNlhmaPvJh+gYMXcH5yQYrY May 15 15:00:47.680980 sshd-session[3895]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 15:00:47.691770 systemd-logind[1497]: New session 13 of user core. May 15 15:00:47.699341 systemd[1]: Started session-13.scope - Session 13 of User core. May 15 15:00:47.878647 sshd[3897]: Connection closed by 139.178.68.195 port 59916 May 15 15:00:47.879347 sshd-session[3895]: pam_unix(sshd:session): session closed for user core May 15 15:00:47.885887 systemd[1]: sshd@12-137.184.120.255:22-139.178.68.195:59916.service: Deactivated successfully. May 15 15:00:47.890228 systemd[1]: session-13.scope: Deactivated successfully. May 15 15:00:47.892220 systemd-logind[1497]: Session 13 logged out. Waiting for processes to exit. May 15 15:00:47.894463 systemd-logind[1497]: Removed session 13. 
May 15 15:00:49.205173 kubelet[2700]: I0515 15:00:49.205110 2700 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 15 15:00:49.205173 kubelet[2700]: I0515 15:00:49.205180 2700 container_gc.go:86] "Attempting to delete unused containers" May 15 15:00:49.211376 kubelet[2700]: I0515 15:00:49.211318 2700 image_gc_manager.go:431] "Attempting to delete unused images" May 15 15:00:49.235710 kubelet[2700]: I0515 15:00:49.235658 2700 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 15 15:00:49.235985 kubelet[2700]: I0515 15:00:49.235858 2700 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-cc8z8","kube-system/cilium-kf2fb","kube-system/kube-controller-manager-ci-4334.0.0-a-cad88baf47","kube-system/kube-proxy-9q85z","kube-system/kube-apiserver-ci-4334.0.0-a-cad88baf47","kube-system/kube-scheduler-ci-4334.0.0-a-cad88baf47"] May 15 15:00:49.236306 kubelet[2700]: E0515 15:00:49.236275 2700 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-cc8z8" May 15 15:00:49.236306 kubelet[2700]: E0515 15:00:49.236312 2700 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-kf2fb" May 15 15:00:49.236472 kubelet[2700]: E0515 15:00:49.236329 2700 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-ci-4334.0.0-a-cad88baf47" May 15 15:00:49.236472 kubelet[2700]: E0515 15:00:49.236346 2700 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-9q85z" May 15 15:00:49.236472 kubelet[2700]: E0515 15:00:49.236358 2700 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-ci-4334.0.0-a-cad88baf47" May 15 15:00:49.236472 kubelet[2700]: E0515 15:00:49.236370 
2700 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-ci-4334.0.0-a-cad88baf47" May 15 15:00:49.236472 kubelet[2700]: I0515 15:00:49.236388 2700 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" May 15 15:00:50.332928 kubelet[2700]: E0515 15:00:50.332336 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:00:52.332906 kubelet[2700]: E0515 15:00:52.332573 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:00:52.893448 systemd[1]: Started sshd@13-137.184.120.255:22-139.178.68.195:59918.service - OpenSSH per-connection server daemon (139.178.68.195:59918). May 15 15:00:52.966373 sshd[3909]: Accepted publickey for core from 139.178.68.195 port 59918 ssh2: RSA SHA256:CR2QFGI8Wi38j7m0fVendNlhmaPvJh+gYMXcH5yQYrY May 15 15:00:52.968372 sshd-session[3909]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 15:00:52.977802 systemd-logind[1497]: New session 14 of user core. May 15 15:00:52.983263 systemd[1]: Started session-14.scope - Session 14 of User core. May 15 15:00:53.137514 sshd[3911]: Connection closed by 139.178.68.195 port 59918 May 15 15:00:53.138437 sshd-session[3909]: pam_unix(sshd:session): session closed for user core May 15 15:00:53.144572 systemd-logind[1497]: Session 14 logged out. Waiting for processes to exit. May 15 15:00:53.146455 systemd[1]: sshd@13-137.184.120.255:22-139.178.68.195:59918.service: Deactivated successfully. May 15 15:00:53.150488 systemd[1]: session-14.scope: Deactivated successfully. May 15 15:00:53.153539 systemd-logind[1497]: Removed session 14. 
May 15 15:00:58.162513 systemd[1]: Started sshd@14-137.184.120.255:22-139.178.68.195:48826.service - OpenSSH per-connection server daemon (139.178.68.195:48826). May 15 15:00:58.239638 sshd[3923]: Accepted publickey for core from 139.178.68.195 port 48826 ssh2: RSA SHA256:CR2QFGI8Wi38j7m0fVendNlhmaPvJh+gYMXcH5yQYrY May 15 15:00:58.242199 sshd-session[3923]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 15:00:58.256051 systemd-logind[1497]: New session 15 of user core. May 15 15:00:58.263216 systemd[1]: Started session-15.scope - Session 15 of User core. May 15 15:00:58.489161 sshd[3925]: Connection closed by 139.178.68.195 port 48826 May 15 15:00:58.490235 sshd-session[3923]: pam_unix(sshd:session): session closed for user core May 15 15:00:58.503945 systemd[1]: sshd@14-137.184.120.255:22-139.178.68.195:48826.service: Deactivated successfully. May 15 15:00:58.511233 systemd[1]: session-15.scope: Deactivated successfully. May 15 15:00:58.514207 systemd-logind[1497]: Session 15 logged out. Waiting for processes to exit. May 15 15:00:58.518810 systemd-logind[1497]: Removed session 15. 
May 15 15:00:59.264784 kubelet[2700]: I0515 15:00:59.264535 2700 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 15 15:00:59.264784 kubelet[2700]: I0515 15:00:59.264608 2700 container_gc.go:86] "Attempting to delete unused containers" May 15 15:00:59.271318 kubelet[2700]: I0515 15:00:59.271154 2700 image_gc_manager.go:431] "Attempting to delete unused images" May 15 15:00:59.287917 kubelet[2700]: I0515 15:00:59.287758 2700 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 15 15:00:59.288171 kubelet[2700]: I0515 15:00:59.288141 2700 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-cc8z8","kube-system/cilium-kf2fb","kube-system/kube-controller-manager-ci-4334.0.0-a-cad88baf47","kube-system/kube-proxy-9q85z","kube-system/kube-apiserver-ci-4334.0.0-a-cad88baf47","kube-system/kube-scheduler-ci-4334.0.0-a-cad88baf47"] May 15 15:00:59.288271 kubelet[2700]: E0515 15:00:59.288260 2700 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-cc8z8" May 15 15:00:59.288487 kubelet[2700]: E0515 15:00:59.288411 2700 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-kf2fb" May 15 15:00:59.288487 kubelet[2700]: E0515 15:00:59.288427 2700 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-ci-4334.0.0-a-cad88baf47" May 15 15:00:59.288487 kubelet[2700]: E0515 15:00:59.288443 2700 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-9q85z" May 15 15:00:59.288487 kubelet[2700]: E0515 15:00:59.288453 2700 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-ci-4334.0.0-a-cad88baf47" May 15 15:00:59.288487 kubelet[2700]: E0515 15:00:59.288463 
2700 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-ci-4334.0.0-a-cad88baf47" May 15 15:00:59.288487 kubelet[2700]: I0515 15:00:59.288475 2700 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" May 15 15:01:03.518270 systemd[1]: Started sshd@15-137.184.120.255:22-139.178.68.195:49176.service - OpenSSH per-connection server daemon (139.178.68.195:49176). May 15 15:01:03.629784 sshd[3944]: Accepted publickey for core from 139.178.68.195 port 49176 ssh2: RSA SHA256:CR2QFGI8Wi38j7m0fVendNlhmaPvJh+gYMXcH5yQYrY May 15 15:01:03.632490 sshd-session[3944]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 15:01:03.641286 systemd-logind[1497]: New session 16 of user core. May 15 15:01:03.653390 systemd[1]: Started session-16.scope - Session 16 of User core. May 15 15:01:03.823168 sshd[3946]: Connection closed by 139.178.68.195 port 49176 May 15 15:01:03.824239 sshd-session[3944]: pam_unix(sshd:session): session closed for user core May 15 15:01:03.828199 systemd[1]: sshd@15-137.184.120.255:22-139.178.68.195:49176.service: Deactivated successfully. May 15 15:01:03.832350 systemd[1]: session-16.scope: Deactivated successfully. May 15 15:01:03.835175 systemd-logind[1497]: Session 16 logged out. Waiting for processes to exit. May 15 15:01:03.838134 systemd-logind[1497]: Removed session 16. May 15 15:01:08.333425 kubelet[2700]: E0515 15:01:08.332210 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:01:08.840285 systemd[1]: Started sshd@16-137.184.120.255:22-139.178.68.195:49186.service - OpenSSH per-connection server daemon (139.178.68.195:49186). 
May 15 15:01:08.915399 sshd[3958]: Accepted publickey for core from 139.178.68.195 port 49186 ssh2: RSA SHA256:CR2QFGI8Wi38j7m0fVendNlhmaPvJh+gYMXcH5yQYrY May 15 15:01:08.917040 sshd-session[3958]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 15:01:08.926313 systemd-logind[1497]: New session 17 of user core. May 15 15:01:08.931298 systemd[1]: Started session-17.scope - Session 17 of User core. May 15 15:01:09.070440 systemd[1]: Started sshd@17-137.184.120.255:22-218.92.0.154:30356.service - OpenSSH per-connection server daemon (218.92.0.154:30356). May 15 15:01:09.129127 sshd[3960]: Connection closed by 139.178.68.195 port 49186 May 15 15:01:09.129351 sshd-session[3958]: pam_unix(sshd:session): session closed for user core May 15 15:01:09.138013 systemd[1]: sshd@16-137.184.120.255:22-139.178.68.195:49186.service: Deactivated successfully. May 15 15:01:09.143066 systemd[1]: session-17.scope: Deactivated successfully. May 15 15:01:09.145126 systemd-logind[1497]: Session 17 logged out. Waiting for processes to exit. May 15 15:01:09.148071 systemd-logind[1497]: Removed session 17. 
May 15 15:01:09.318773 kubelet[2700]: I0515 15:01:09.318683 2700 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 15 15:01:09.319031 kubelet[2700]: I0515 15:01:09.318824 2700 container_gc.go:86] "Attempting to delete unused containers" May 15 15:01:09.330634 kubelet[2700]: I0515 15:01:09.330599 2700 image_gc_manager.go:431] "Attempting to delete unused images" May 15 15:01:09.353169 kubelet[2700]: I0515 15:01:09.353104 2700 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 15 15:01:09.353849 kubelet[2700]: I0515 15:01:09.353239 2700 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-cc8z8","kube-system/cilium-kf2fb","kube-system/kube-controller-manager-ci-4334.0.0-a-cad88baf47","kube-system/kube-proxy-9q85z","kube-system/kube-apiserver-ci-4334.0.0-a-cad88baf47","kube-system/kube-scheduler-ci-4334.0.0-a-cad88baf47"] May 15 15:01:09.353849 kubelet[2700]: E0515 15:01:09.353284 2700 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-cc8z8" May 15 15:01:09.353849 kubelet[2700]: E0515 15:01:09.353304 2700 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-kf2fb" May 15 15:01:09.353849 kubelet[2700]: E0515 15:01:09.353319 2700 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-ci-4334.0.0-a-cad88baf47" May 15 15:01:09.353849 kubelet[2700]: E0515 15:01:09.353333 2700 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-9q85z" May 15 15:01:09.353849 kubelet[2700]: E0515 15:01:09.353347 2700 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-ci-4334.0.0-a-cad88baf47" May 15 15:01:09.353849 kubelet[2700]: E0515 15:01:09.353362 
2700 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-ci-4334.0.0-a-cad88baf47" May 15 15:01:09.353849 kubelet[2700]: I0515 15:01:09.353375 2700 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" May 15 15:01:10.991768 sshd-session[3974]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.154 user=root May 15 15:01:13.175108 sshd[3969]: PAM: Permission denied for root from 218.92.0.154 May 15 15:01:13.332351 kubelet[2700]: E0515 15:01:13.331823 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:01:13.332351 kubelet[2700]: E0515 15:01:13.332137 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:01:13.507520 sshd-session[3975]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.154 user=root May 15 15:01:14.146223 systemd[1]: Started sshd@18-137.184.120.255:22-139.178.68.195:46538.service - OpenSSH per-connection server daemon (139.178.68.195:46538). May 15 15:01:14.217429 sshd[3980]: Accepted publickey for core from 139.178.68.195 port 46538 ssh2: RSA SHA256:CR2QFGI8Wi38j7m0fVendNlhmaPvJh+gYMXcH5yQYrY May 15 15:01:14.219338 sshd-session[3980]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 15:01:14.228159 systemd-logind[1497]: New session 18 of user core. May 15 15:01:14.234262 systemd[1]: Started session-18.scope - Session 18 of User core. 
May 15 15:01:14.429810 sshd[3982]: Connection closed by 139.178.68.195 port 46538 May 15 15:01:14.431513 sshd-session[3980]: pam_unix(sshd:session): session closed for user core May 15 15:01:14.440118 systemd[1]: sshd@18-137.184.120.255:22-139.178.68.195:46538.service: Deactivated successfully. May 15 15:01:14.443507 systemd[1]: session-18.scope: Deactivated successfully. May 15 15:01:14.446630 systemd-logind[1497]: Session 18 logged out. Waiting for processes to exit. May 15 15:01:14.449686 systemd-logind[1497]: Removed session 18. May 15 15:01:14.767451 sshd[3969]: PAM: Permission denied for root from 218.92.0.154 May 15 15:01:15.098103 sshd-session[3993]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.154 user=root May 15 15:01:16.828187 sshd[3969]: PAM: Permission denied for root from 218.92.0.154 May 15 15:01:16.994264 sshd[3969]: Received disconnect from 218.92.0.154 port 30356:11: [preauth] May 15 15:01:16.994264 sshd[3969]: Disconnected from authenticating user root 218.92.0.154 port 30356 [preauth] May 15 15:01:16.997602 systemd[1]: sshd@17-137.184.120.255:22-218.92.0.154:30356.service: Deactivated successfully. 
May 15 15:01:19.371334 kubelet[2700]: I0515 15:01:19.371110 2700 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 15 15:01:19.371334 kubelet[2700]: I0515 15:01:19.371199 2700 container_gc.go:86] "Attempting to delete unused containers" May 15 15:01:19.375123 kubelet[2700]: I0515 15:01:19.374867 2700 image_gc_manager.go:431] "Attempting to delete unused images" May 15 15:01:19.395809 kubelet[2700]: I0515 15:01:19.395745 2700 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 15 15:01:19.396076 kubelet[2700]: I0515 15:01:19.396013 2700 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-cc8z8","kube-system/cilium-kf2fb","kube-system/kube-controller-manager-ci-4334.0.0-a-cad88baf47","kube-system/kube-proxy-9q85z","kube-system/kube-apiserver-ci-4334.0.0-a-cad88baf47","kube-system/kube-scheduler-ci-4334.0.0-a-cad88baf47"] May 15 15:01:19.396135 kubelet[2700]: E0515 15:01:19.396090 2700 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-cc8z8" May 15 15:01:19.396135 kubelet[2700]: E0515 15:01:19.396108 2700 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-kf2fb" May 15 15:01:19.396135 kubelet[2700]: E0515 15:01:19.396135 2700 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-ci-4334.0.0-a-cad88baf47" May 15 15:01:19.396217 kubelet[2700]: E0515 15:01:19.396146 2700 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-9q85z" May 15 15:01:19.396217 kubelet[2700]: E0515 15:01:19.396155 2700 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-ci-4334.0.0-a-cad88baf47" May 15 15:01:19.396217 kubelet[2700]: E0515 15:01:19.396167 
2700 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-ci-4334.0.0-a-cad88baf47" May 15 15:01:19.396217 kubelet[2700]: I0515 15:01:19.396178 2700 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" May 15 15:01:19.448226 systemd[1]: Started sshd@19-137.184.120.255:22-139.178.68.195:46548.service - OpenSSH per-connection server daemon (139.178.68.195:46548). May 15 15:01:19.518819 sshd[3997]: Accepted publickey for core from 139.178.68.195 port 46548 ssh2: RSA SHA256:CR2QFGI8Wi38j7m0fVendNlhmaPvJh+gYMXcH5yQYrY May 15 15:01:19.522139 sshd-session[3997]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 15:01:19.531383 systemd-logind[1497]: New session 19 of user core. May 15 15:01:19.548284 systemd[1]: Started session-19.scope - Session 19 of User core. May 15 15:01:19.710719 sshd[3999]: Connection closed by 139.178.68.195 port 46548 May 15 15:01:19.711643 sshd-session[3997]: pam_unix(sshd:session): session closed for user core May 15 15:01:19.727231 systemd[1]: sshd@19-137.184.120.255:22-139.178.68.195:46548.service: Deactivated successfully. May 15 15:01:19.730604 systemd[1]: session-19.scope: Deactivated successfully. May 15 15:01:19.732294 systemd-logind[1497]: Session 19 logged out. Waiting for processes to exit. May 15 15:01:19.741216 systemd[1]: Started sshd@20-137.184.120.255:22-139.178.68.195:46564.service - OpenSSH per-connection server daemon (139.178.68.195:46564). May 15 15:01:19.746065 systemd-logind[1497]: Removed session 19. May 15 15:01:19.805790 sshd[4011]: Accepted publickey for core from 139.178.68.195 port 46564 ssh2: RSA SHA256:CR2QFGI8Wi38j7m0fVendNlhmaPvJh+gYMXcH5yQYrY May 15 15:01:19.807450 sshd-session[4011]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 15:01:19.815226 systemd-logind[1497]: New session 20 of user core. 
May 15 15:01:19.823284 systemd[1]: Started session-20.scope - Session 20 of User core. May 15 15:01:20.117360 sshd[4013]: Connection closed by 139.178.68.195 port 46564 May 15 15:01:20.117823 sshd-session[4011]: pam_unix(sshd:session): session closed for user core May 15 15:01:20.136644 systemd[1]: sshd@20-137.184.120.255:22-139.178.68.195:46564.service: Deactivated successfully. May 15 15:01:20.141426 systemd[1]: session-20.scope: Deactivated successfully. May 15 15:01:20.142602 systemd-logind[1497]: Session 20 logged out. Waiting for processes to exit. May 15 15:01:20.148315 systemd[1]: Started sshd@21-137.184.120.255:22-139.178.68.195:46572.service - OpenSSH per-connection server daemon (139.178.68.195:46572). May 15 15:01:20.151488 systemd-logind[1497]: Removed session 20. May 15 15:01:20.255031 sshd[4023]: Accepted publickey for core from 139.178.68.195 port 46572 ssh2: RSA SHA256:CR2QFGI8Wi38j7m0fVendNlhmaPvJh+gYMXcH5yQYrY May 15 15:01:20.256785 sshd-session[4023]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 15:01:20.264113 systemd-logind[1497]: New session 21 of user core. May 15 15:01:20.273239 systemd[1]: Started session-21.scope - Session 21 of User core. May 15 15:01:21.433789 sshd[4025]: Connection closed by 139.178.68.195 port 46572 May 15 15:01:21.434571 sshd-session[4023]: pam_unix(sshd:session): session closed for user core May 15 15:01:21.448784 systemd[1]: sshd@21-137.184.120.255:22-139.178.68.195:46572.service: Deactivated successfully. May 15 15:01:21.453682 systemd[1]: session-21.scope: Deactivated successfully. May 15 15:01:21.457557 systemd-logind[1497]: Session 21 logged out. Waiting for processes to exit. May 15 15:01:21.462451 systemd-logind[1497]: Removed session 21. May 15 15:01:21.470502 systemd[1]: Started sshd@22-137.184.120.255:22-139.178.68.195:46584.service - OpenSSH per-connection server daemon (139.178.68.195:46584). 
May 15 15:01:21.540764 sshd[4040]: Accepted publickey for core from 139.178.68.195 port 46584 ssh2: RSA SHA256:CR2QFGI8Wi38j7m0fVendNlhmaPvJh+gYMXcH5yQYrY May 15 15:01:21.543477 sshd-session[4040]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 15:01:21.549797 systemd-logind[1497]: New session 22 of user core. May 15 15:01:21.560201 systemd[1]: Started session-22.scope - Session 22 of User core. May 15 15:01:21.935597 sshd[4044]: Connection closed by 139.178.68.195 port 46584 May 15 15:01:21.936674 sshd-session[4040]: pam_unix(sshd:session): session closed for user core May 15 15:01:21.951268 systemd[1]: sshd@22-137.184.120.255:22-139.178.68.195:46584.service: Deactivated successfully. May 15 15:01:21.957452 systemd[1]: session-22.scope: Deactivated successfully. May 15 15:01:21.960261 systemd-logind[1497]: Session 22 logged out. Waiting for processes to exit. May 15 15:01:21.969356 systemd[1]: Started sshd@23-137.184.120.255:22-139.178.68.195:46600.service - OpenSSH per-connection server daemon (139.178.68.195:46600). May 15 15:01:21.971025 systemd-logind[1497]: Removed session 22. May 15 15:01:22.035316 sshd[4054]: Accepted publickey for core from 139.178.68.195 port 46600 ssh2: RSA SHA256:CR2QFGI8Wi38j7m0fVendNlhmaPvJh+gYMXcH5yQYrY May 15 15:01:22.037280 sshd-session[4054]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 15:01:22.044437 systemd-logind[1497]: New session 23 of user core. May 15 15:01:22.051189 systemd[1]: Started session-23.scope - Session 23 of User core. May 15 15:01:22.205681 sshd[4056]: Connection closed by 139.178.68.195 port 46600 May 15 15:01:22.206429 sshd-session[4054]: pam_unix(sshd:session): session closed for user core May 15 15:01:22.211754 systemd-logind[1497]: Session 23 logged out. Waiting for processes to exit. May 15 15:01:22.211959 systemd[1]: sshd@23-137.184.120.255:22-139.178.68.195:46600.service: Deactivated successfully. 
May 15 15:01:22.215039 systemd[1]: session-23.scope: Deactivated successfully. May 15 15:01:22.219784 systemd-logind[1497]: Removed session 23. May 15 15:01:27.222628 systemd[1]: Started sshd@24-137.184.120.255:22-139.178.68.195:50782.service - OpenSSH per-connection server daemon (139.178.68.195:50782). May 15 15:01:27.286928 sshd[4068]: Accepted publickey for core from 139.178.68.195 port 50782 ssh2: RSA SHA256:CR2QFGI8Wi38j7m0fVendNlhmaPvJh+gYMXcH5yQYrY May 15 15:01:27.288211 sshd-session[4068]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 15:01:27.294918 systemd-logind[1497]: New session 24 of user core. May 15 15:01:27.301331 systemd[1]: Started session-24.scope - Session 24 of User core. May 15 15:01:27.475697 sshd[4070]: Connection closed by 139.178.68.195 port 50782 May 15 15:01:27.476575 sshd-session[4068]: pam_unix(sshd:session): session closed for user core May 15 15:01:27.482314 systemd-logind[1497]: Session 24 logged out. Waiting for processes to exit. May 15 15:01:27.483739 systemd[1]: sshd@24-137.184.120.255:22-139.178.68.195:50782.service: Deactivated successfully. May 15 15:01:27.486366 systemd[1]: session-24.scope: Deactivated successfully. May 15 15:01:27.489182 systemd-logind[1497]: Removed session 24. 
May 15 15:01:29.416515 kubelet[2700]: I0515 15:01:29.416457 2700 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
May 15 15:01:29.419186 kubelet[2700]: I0515 15:01:29.418989 2700 container_gc.go:86] "Attempting to delete unused containers"
May 15 15:01:29.423408 kubelet[2700]: I0515 15:01:29.422063 2700 image_gc_manager.go:431] "Attempting to delete unused images"
May 15 15:01:29.441543 kubelet[2700]: I0515 15:01:29.441507 2700 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
May 15 15:01:29.441687 kubelet[2700]: I0515 15:01:29.441635 2700 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-cc8z8","kube-system/cilium-kf2fb","kube-system/kube-controller-manager-ci-4334.0.0-a-cad88baf47","kube-system/kube-proxy-9q85z","kube-system/kube-apiserver-ci-4334.0.0-a-cad88baf47","kube-system/kube-scheduler-ci-4334.0.0-a-cad88baf47"]
May 15 15:01:29.441687 kubelet[2700]: E0515 15:01:29.441681 2700 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-cc8z8"
May 15 15:01:29.441778 kubelet[2700]: E0515 15:01:29.441701 2700 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-kf2fb"
May 15 15:01:29.441778 kubelet[2700]: E0515 15:01:29.441715 2700 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-ci-4334.0.0-a-cad88baf47"
May 15 15:01:29.441778 kubelet[2700]: E0515 15:01:29.441729 2700 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-9q85z"
May 15 15:01:29.441778 kubelet[2700]: E0515 15:01:29.441742 2700 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-ci-4334.0.0-a-cad88baf47"
May 15 15:01:29.441778 kubelet[2700]: E0515 15:01:29.441754 2700 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-ci-4334.0.0-a-cad88baf47"
May 15 15:01:29.441778 kubelet[2700]: I0515 15:01:29.441769 2700 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node"
May 15 15:01:32.498294 systemd[1]: Started sshd@25-137.184.120.255:22-139.178.68.195:50786.service - OpenSSH per-connection server daemon (139.178.68.195:50786).
May 15 15:01:32.569231 sshd[4084]: Accepted publickey for core from 139.178.68.195 port 50786 ssh2: RSA SHA256:CR2QFGI8Wi38j7m0fVendNlhmaPvJh+gYMXcH5yQYrY
May 15 15:01:32.571498 sshd-session[4084]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 15:01:32.579101 systemd-logind[1497]: New session 25 of user core.
May 15 15:01:32.587656 systemd[1]: Started session-25.scope - Session 25 of User core.
May 15 15:01:32.744055 sshd[4086]: Connection closed by 139.178.68.195 port 50786
May 15 15:01:32.746430 sshd-session[4084]: pam_unix(sshd:session): session closed for user core
May 15 15:01:32.753534 systemd[1]: sshd@25-137.184.120.255:22-139.178.68.195:50786.service: Deactivated successfully.
May 15 15:01:32.757785 systemd[1]: session-25.scope: Deactivated successfully.
May 15 15:01:32.760157 systemd-logind[1497]: Session 25 logged out. Waiting for processes to exit.
May 15 15:01:32.763032 systemd-logind[1497]: Removed session 25.
May 15 15:01:37.760224 systemd[1]: Started sshd@26-137.184.120.255:22-139.178.68.195:50044.service - OpenSSH per-connection server daemon (139.178.68.195:50044).
May 15 15:01:37.825391 sshd[4098]: Accepted publickey for core from 139.178.68.195 port 50044 ssh2: RSA SHA256:CR2QFGI8Wi38j7m0fVendNlhmaPvJh+gYMXcH5yQYrY
May 15 15:01:37.827336 sshd-session[4098]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 15:01:37.834088 systemd-logind[1497]: New session 26 of user core.
May 15 15:01:37.841265 systemd[1]: Started session-26.scope - Session 26 of User core.
May 15 15:01:37.991165 sshd[4100]: Connection closed by 139.178.68.195 port 50044
May 15 15:01:37.993302 sshd-session[4098]: pam_unix(sshd:session): session closed for user core
May 15 15:01:37.997948 systemd[1]: sshd@26-137.184.120.255:22-139.178.68.195:50044.service: Deactivated successfully.
May 15 15:01:38.000527 systemd[1]: session-26.scope: Deactivated successfully.
May 15 15:01:38.004940 systemd-logind[1497]: Session 26 logged out. Waiting for processes to exit.
May 15 15:01:38.006533 systemd-logind[1497]: Removed session 26.
May 15 15:01:39.506236 kubelet[2700]: I0515 15:01:39.506169 2700 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
May 15 15:01:39.507516 kubelet[2700]: I0515 15:01:39.506273 2700 container_gc.go:86] "Attempting to delete unused containers"
May 15 15:01:39.510951 kubelet[2700]: I0515 15:01:39.510829 2700 image_gc_manager.go:431] "Attempting to delete unused images"
May 15 15:01:39.514840 kubelet[2700]: I0515 15:01:39.514791 2700 image_gc_manager.go:487] "Removing image to free bytes" imageID="sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc" size=57680541 runtimeHandler=""
May 15 15:01:39.516461 containerd[1582]: time="2025-05-15T15:01:39.516188571Z" level=info msg="RemoveImage \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\""
May 15 15:01:39.521191 containerd[1582]: time="2025-05-15T15:01:39.521014392Z" level=info msg="ImageDelete event name:\"registry.k8s.io/etcd:3.5.16-0\""
May 15 15:01:39.523377 containerd[1582]: time="2025-05-15T15:01:39.523211641Z" level=info msg="ImageDelete event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\""
May 15 15:01:39.524674 containerd[1582]: time="2025-05-15T15:01:39.524503598Z" level=info msg="RemoveImage \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" returns successfully"
May 15 15:01:39.524674 containerd[1582]: time="2025-05-15T15:01:39.524595744Z" level=info msg="ImageDelete event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\""
May 15 15:01:39.525595 kubelet[2700]: I0515 15:01:39.525525 2700 image_gc_manager.go:487] "Removing image to free bytes" imageID="sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" size=18562039 runtimeHandler=""
May 15 15:01:39.526252 containerd[1582]: time="2025-05-15T15:01:39.526098488Z" level=info msg="RemoveImage \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
May 15 15:01:39.527959 containerd[1582]: time="2025-05-15T15:01:39.527900130Z" level=info msg="ImageDelete event name:\"registry.k8s.io/coredns/coredns:v1.11.3\""
May 15 15:01:39.529147 containerd[1582]: time="2025-05-15T15:01:39.529002357Z" level=info msg="ImageDelete event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\""
May 15 15:01:39.530094 containerd[1582]: time="2025-05-15T15:01:39.529864047Z" level=info msg="RemoveImage \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" returns successfully"
May 15 15:01:39.530094 containerd[1582]: time="2025-05-15T15:01:39.529963908Z" level=info msg="ImageDelete event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
May 15 15:01:39.558963 kubelet[2700]: I0515 15:01:39.558777 2700 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
May 15 15:01:39.559304 kubelet[2700]: I0515 15:01:39.559126 2700 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-cc8z8","kube-system/cilium-kf2fb","kube-system/kube-controller-manager-ci-4334.0.0-a-cad88baf47","kube-system/kube-proxy-9q85z","kube-system/kube-apiserver-ci-4334.0.0-a-cad88baf47","kube-system/kube-scheduler-ci-4334.0.0-a-cad88baf47"]
May 15 15:01:39.559304 kubelet[2700]: E0515 15:01:39.559193 2700 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-cc8z8"
May 15 15:01:39.559304 kubelet[2700]: E0515 15:01:39.559217 2700 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-kf2fb"
May 15 15:01:39.559304 kubelet[2700]: E0515 15:01:39.559232 2700 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-ci-4334.0.0-a-cad88baf47"
May 15 15:01:39.559304 kubelet[2700]: E0515 15:01:39.559247 2700 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-9q85z"
May 15 15:01:39.559304 kubelet[2700]: E0515 15:01:39.559259 2700 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-ci-4334.0.0-a-cad88baf47"
May 15 15:01:39.559304 kubelet[2700]: E0515 15:01:39.559272 2700 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-ci-4334.0.0-a-cad88baf47"
May 15 15:01:39.559304 kubelet[2700]: I0515 15:01:39.559289 2700 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node"
May 15 15:01:43.012655 systemd[1]: Started sshd@27-137.184.120.255:22-139.178.68.195:50048.service - OpenSSH per-connection server daemon (139.178.68.195:50048).
May 15 15:01:43.087925 sshd[4114]: Accepted publickey for core from 139.178.68.195 port 50048 ssh2: RSA SHA256:CR2QFGI8Wi38j7m0fVendNlhmaPvJh+gYMXcH5yQYrY
May 15 15:01:43.089952 sshd-session[4114]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 15:01:43.095620 systemd-logind[1497]: New session 27 of user core.
May 15 15:01:43.104267 systemd[1]: Started session-27.scope - Session 27 of User core.
May 15 15:01:43.275241 sshd[4116]: Connection closed by 139.178.68.195 port 50048
May 15 15:01:43.276322 sshd-session[4114]: pam_unix(sshd:session): session closed for user core
May 15 15:01:43.281916 update_engine[1498]: I20250515 15:01:43.279485 1498 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
May 15 15:01:43.281916 update_engine[1498]: I20250515 15:01:43.279575 1498 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
May 15 15:01:43.281916 update_engine[1498]: I20250515 15:01:43.281268 1498 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
May 15 15:01:43.282484 update_engine[1498]: I20250515 15:01:43.282004 1498 omaha_request_params.cc:62] Current group set to developer
May 15 15:01:43.285798 update_engine[1498]: I20250515 15:01:43.285495 1498 update_attempter.cc:499] Already updated boot flags. Skipping.
May 15 15:01:43.285798 update_engine[1498]: I20250515 15:01:43.285552 1498 update_attempter.cc:643] Scheduling an action processor start.
May 15 15:01:43.285798 update_engine[1498]: I20250515 15:01:43.285661 1498 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
May 15 15:01:43.294014 systemd[1]: sshd@27-137.184.120.255:22-139.178.68.195:50048.service: Deactivated successfully.
May 15 15:01:43.298862 systemd[1]: session-27.scope: Deactivated successfully.
May 15 15:01:43.301607 update_engine[1498]: I20250515 15:01:43.300561 1498 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
May 15 15:01:43.301607 update_engine[1498]: I20250515 15:01:43.300738 1498 omaha_request_action.cc:271] Posting an Omaha request to disabled
May 15 15:01:43.301607 update_engine[1498]: I20250515 15:01:43.300755 1498 omaha_request_action.cc:272] Request:
May 15 15:01:43.301607 update_engine[1498]:
May 15 15:01:43.301607 update_engine[1498]:
May 15 15:01:43.301607 update_engine[1498]:
May 15 15:01:43.301607 update_engine[1498]:
May 15 15:01:43.301607 update_engine[1498]:
May 15 15:01:43.301607 update_engine[1498]:
May 15 15:01:43.301607 update_engine[1498]:
May 15 15:01:43.301607 update_engine[1498]:
May 15 15:01:43.301607 update_engine[1498]: I20250515 15:01:43.300766 1498 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
May 15 15:01:43.307613 systemd-logind[1497]: Session 27 logged out. Waiting for processes to exit.
May 15 15:01:43.329937 update_engine[1498]: I20250515 15:01:43.329321 1498 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
May 15 15:01:43.330158 update_engine[1498]: I20250515 15:01:43.330089 1498 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
May 15 15:01:43.330371 systemd[1]: Started sshd@28-137.184.120.255:22-139.178.68.195:50058.service - OpenSSH per-connection server daemon (139.178.68.195:50058).
May 15 15:01:43.332740 update_engine[1498]: E20250515 15:01:43.332557 1498 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
May 15 15:01:43.332740 update_engine[1498]: I20250515 15:01:43.332692 1498 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
May 15 15:01:43.336028 systemd-logind[1497]: Removed session 27.
May 15 15:01:43.342026 locksmithd[1526]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
May 15 15:01:43.401960 sshd[4128]: Accepted publickey for core from 139.178.68.195 port 50058 ssh2: RSA SHA256:CR2QFGI8Wi38j7m0fVendNlhmaPvJh+gYMXcH5yQYrY
May 15 15:01:43.403547 sshd-session[4128]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 15:01:43.412713 systemd-logind[1497]: New session 28 of user core.
May 15 15:01:43.421478 systemd[1]: Started session-28.scope - Session 28 of User core.
May 15 15:01:44.906842 containerd[1582]: time="2025-05-15T15:01:44.906773568Z" level=info msg="StopContainer for \"42823f7338002b6337a80dac59b2ee76b092b024550116696cbbc8e64a0ce746\" with timeout 30 (s)"
May 15 15:01:44.911450 containerd[1582]: time="2025-05-15T15:01:44.911044691Z" level=info msg="Stop container \"42823f7338002b6337a80dac59b2ee76b092b024550116696cbbc8e64a0ce746\" with signal terminated"
May 15 15:01:44.954225 systemd[1]: cri-containerd-42823f7338002b6337a80dac59b2ee76b092b024550116696cbbc8e64a0ce746.scope: Deactivated successfully.
May 15 15:01:44.955330 systemd[1]: cri-containerd-42823f7338002b6337a80dac59b2ee76b092b024550116696cbbc8e64a0ce746.scope: Consumed 623ms CPU time, 29M memory peak, 1.4M read from disk, 4K written to disk.
May 15 15:01:44.962371 containerd[1582]: time="2025-05-15T15:01:44.961747636Z" level=info msg="received exit event container_id:\"42823f7338002b6337a80dac59b2ee76b092b024550116696cbbc8e64a0ce746\" id:\"42823f7338002b6337a80dac59b2ee76b092b024550116696cbbc8e64a0ce746\" pid:3094 exited_at:{seconds:1747321304 nanos:960673757}"
May 15 15:01:44.962820 containerd[1582]: time="2025-05-15T15:01:44.962787698Z" level=info msg="TaskExit event in podsandbox handler container_id:\"42823f7338002b6337a80dac59b2ee76b092b024550116696cbbc8e64a0ce746\" id:\"42823f7338002b6337a80dac59b2ee76b092b024550116696cbbc8e64a0ce746\" pid:3094 exited_at:{seconds:1747321304 nanos:960673757}"
May 15 15:01:44.984490 containerd[1582]: time="2025-05-15T15:01:44.984407124Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 15 15:01:44.992249 containerd[1582]: time="2025-05-15T15:01:44.992119389Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bd0466d19cba1deda7a97b2a2156051f2f59172bb1864ee18e85817ee39f7e1a\" id:\"2251f6160584334297e18c52b8564c99ccaba504ad8641a634c6be6b358428f0\" pid:4158 exited_at:{seconds:1747321304 nanos:991543916}"
May 15 15:01:44.996500 containerd[1582]: time="2025-05-15T15:01:44.996419826Z" level=info msg="StopContainer for \"bd0466d19cba1deda7a97b2a2156051f2f59172bb1864ee18e85817ee39f7e1a\" with timeout 2 (s)"
May 15 15:01:44.997385 containerd[1582]: time="2025-05-15T15:01:44.997334377Z" level=info msg="Stop container \"bd0466d19cba1deda7a97b2a2156051f2f59172bb1864ee18e85817ee39f7e1a\" with signal terminated"
May 15 15:01:45.027533 systemd-networkd[1458]: lxc_health: Link DOWN
May 15 15:01:45.027549 systemd-networkd[1458]: lxc_health: Lost carrier
May 15 15:01:45.039184 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-42823f7338002b6337a80dac59b2ee76b092b024550116696cbbc8e64a0ce746-rootfs.mount: Deactivated successfully.
May 15 15:01:45.063856 systemd[1]: cri-containerd-bd0466d19cba1deda7a97b2a2156051f2f59172bb1864ee18e85817ee39f7e1a.scope: Deactivated successfully.
May 15 15:01:45.064309 systemd[1]: cri-containerd-bd0466d19cba1deda7a97b2a2156051f2f59172bb1864ee18e85817ee39f7e1a.scope: Consumed 11.957s CPU time, 154M memory peak, 32.5M read from disk, 13.3M written to disk.
May 15 15:01:45.069482 containerd[1582]: time="2025-05-15T15:01:45.069351086Z" level=info msg="received exit event container_id:\"bd0466d19cba1deda7a97b2a2156051f2f59172bb1864ee18e85817ee39f7e1a\" id:\"bd0466d19cba1deda7a97b2a2156051f2f59172bb1864ee18e85817ee39f7e1a\" pid:3340 exited_at:{seconds:1747321305 nanos:68736394}"
May 15 15:01:45.070091 containerd[1582]: time="2025-05-15T15:01:45.069980246Z" level=info msg="StopContainer for \"42823f7338002b6337a80dac59b2ee76b092b024550116696cbbc8e64a0ce746\" returns successfully"
May 15 15:01:45.070091 containerd[1582]: time="2025-05-15T15:01:45.070035575Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bd0466d19cba1deda7a97b2a2156051f2f59172bb1864ee18e85817ee39f7e1a\" id:\"bd0466d19cba1deda7a97b2a2156051f2f59172bb1864ee18e85817ee39f7e1a\" pid:3340 exited_at:{seconds:1747321305 nanos:68736394}"
May 15 15:01:45.076451 containerd[1582]: time="2025-05-15T15:01:45.076383156Z" level=info msg="StopPodSandbox for \"8458e6fa4bf79f7459fce8917293de98d20101ee5c8b43d00e17b22647d39a26\""
May 15 15:01:45.076665 containerd[1582]: time="2025-05-15T15:01:45.076501419Z" level=info msg="Container to stop \"42823f7338002b6337a80dac59b2ee76b092b024550116696cbbc8e64a0ce746\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 15 15:01:45.088350 systemd[1]: cri-containerd-8458e6fa4bf79f7459fce8917293de98d20101ee5c8b43d00e17b22647d39a26.scope: Deactivated successfully.
May 15 15:01:45.093441 containerd[1582]: time="2025-05-15T15:01:45.093355324Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8458e6fa4bf79f7459fce8917293de98d20101ee5c8b43d00e17b22647d39a26\" id:\"8458e6fa4bf79f7459fce8917293de98d20101ee5c8b43d00e17b22647d39a26\" pid:2802 exit_status:137 exited_at:{seconds:1747321305 nanos:92737408}"
May 15 15:01:45.120360 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bd0466d19cba1deda7a97b2a2156051f2f59172bb1864ee18e85817ee39f7e1a-rootfs.mount: Deactivated successfully.
May 15 15:01:45.134835 containerd[1582]: time="2025-05-15T15:01:45.134654185Z" level=info msg="StopContainer for \"bd0466d19cba1deda7a97b2a2156051f2f59172bb1864ee18e85817ee39f7e1a\" returns successfully"
May 15 15:01:45.136074 containerd[1582]: time="2025-05-15T15:01:45.136031480Z" level=info msg="StopPodSandbox for \"21f76990a58a3cc05aa4bd41359b0b486bfe71ba2bf91adcaa4e5398528b82ac\""
May 15 15:01:45.136429 containerd[1582]: time="2025-05-15T15:01:45.136380766Z" level=info msg="Container to stop \"d7983d43b2590a9337c8ea15d85cee7c9cff3826c6eaaf2305e4f709ff42808d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 15 15:01:45.136607 containerd[1582]: time="2025-05-15T15:01:45.136502745Z" level=info msg="Container to stop \"dd73e737f302049ff11623a828e49e8fcc97139aa74c039d1c96c2eb20d71c7e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 15 15:01:45.136607 containerd[1582]: time="2025-05-15T15:01:45.136521856Z" level=info msg="Container to stop \"bd0466d19cba1deda7a97b2a2156051f2f59172bb1864ee18e85817ee39f7e1a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 15 15:01:45.136607 containerd[1582]: time="2025-05-15T15:01:45.136535020Z" level=info msg="Container to stop \"75b90c71182f7103d87f8a3ff7bb115cd1934951379629027c744485bfbdf676\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 15 15:01:45.136775 containerd[1582]: time="2025-05-15T15:01:45.136547541Z" level=info msg="Container to stop \"eaba067f7b1382786db94346ce6752029016429c3267d93bfc369b04950bcc7c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 15 15:01:45.157382 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8458e6fa4bf79f7459fce8917293de98d20101ee5c8b43d00e17b22647d39a26-rootfs.mount: Deactivated successfully.
May 15 15:01:45.162303 systemd[1]: cri-containerd-21f76990a58a3cc05aa4bd41359b0b486bfe71ba2bf91adcaa4e5398528b82ac.scope: Deactivated successfully.
May 15 15:01:45.167255 containerd[1582]: time="2025-05-15T15:01:45.167144796Z" level=info msg="shim disconnected" id=8458e6fa4bf79f7459fce8917293de98d20101ee5c8b43d00e17b22647d39a26 namespace=k8s.io
May 15 15:01:45.167255 containerd[1582]: time="2025-05-15T15:01:45.167212366Z" level=warning msg="cleaning up after shim disconnected" id=8458e6fa4bf79f7459fce8917293de98d20101ee5c8b43d00e17b22647d39a26 namespace=k8s.io
May 15 15:01:45.167255 containerd[1582]: time="2025-05-15T15:01:45.167223294Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 15 15:01:45.203773 containerd[1582]: time="2025-05-15T15:01:45.203711276Z" level=info msg="TaskExit event in podsandbox handler container_id:\"21f76990a58a3cc05aa4bd41359b0b486bfe71ba2bf91adcaa4e5398528b82ac\" id:\"21f76990a58a3cc05aa4bd41359b0b486bfe71ba2bf91adcaa4e5398528b82ac\" pid:2886 exit_status:137 exited_at:{seconds:1747321305 nanos:160762000}"
May 15 15:01:45.204481 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-21f76990a58a3cc05aa4bd41359b0b486bfe71ba2bf91adcaa4e5398528b82ac-rootfs.mount: Deactivated successfully.
May 15 15:01:45.208926 containerd[1582]: time="2025-05-15T15:01:45.206624593Z" level=info msg="received exit event sandbox_id:\"8458e6fa4bf79f7459fce8917293de98d20101ee5c8b43d00e17b22647d39a26\" exit_status:137 exited_at:{seconds:1747321305 nanos:92737408}"
May 15 15:01:45.215538 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8458e6fa4bf79f7459fce8917293de98d20101ee5c8b43d00e17b22647d39a26-shm.mount: Deactivated successfully.
May 15 15:01:45.220824 containerd[1582]: time="2025-05-15T15:01:45.220753843Z" level=info msg="shim disconnected" id=21f76990a58a3cc05aa4bd41359b0b486bfe71ba2bf91adcaa4e5398528b82ac namespace=k8s.io
May 15 15:01:45.220824 containerd[1582]: time="2025-05-15T15:01:45.220798303Z" level=warning msg="cleaning up after shim disconnected" id=21f76990a58a3cc05aa4bd41359b0b486bfe71ba2bf91adcaa4e5398528b82ac namespace=k8s.io
May 15 15:01:45.220824 containerd[1582]: time="2025-05-15T15:01:45.220806730Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 15 15:01:45.226761 containerd[1582]: time="2025-05-15T15:01:45.226661683Z" level=info msg="TearDown network for sandbox \"8458e6fa4bf79f7459fce8917293de98d20101ee5c8b43d00e17b22647d39a26\" successfully"
May 15 15:01:45.226761 containerd[1582]: time="2025-05-15T15:01:45.226741572Z" level=info msg="StopPodSandbox for \"8458e6fa4bf79f7459fce8917293de98d20101ee5c8b43d00e17b22647d39a26\" returns successfully"
May 15 15:01:45.228325 containerd[1582]: time="2025-05-15T15:01:45.227458985Z" level=info msg="received exit event sandbox_id:\"21f76990a58a3cc05aa4bd41359b0b486bfe71ba2bf91adcaa4e5398528b82ac\" exit_status:137 exited_at:{seconds:1747321305 nanos:160762000}"
May 15 15:01:45.234945 containerd[1582]: time="2025-05-15T15:01:45.234554792Z" level=info msg="TearDown network for sandbox \"21f76990a58a3cc05aa4bd41359b0b486bfe71ba2bf91adcaa4e5398528b82ac\" successfully"
May 15 15:01:45.234945 containerd[1582]: time="2025-05-15T15:01:45.234609308Z" level=info msg="StopPodSandbox for \"21f76990a58a3cc05aa4bd41359b0b486bfe71ba2bf91adcaa4e5398528b82ac\" returns successfully"
May 15 15:01:45.270783 kubelet[2700]: I0515 15:01:45.270716 2700 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-95xb4\" (UniqueName: \"kubernetes.io/projected/7d4892b1-6c49-44a8-be2e-ec0d16256122-kube-api-access-95xb4\") pod \"7d4892b1-6c49-44a8-be2e-ec0d16256122\" (UID: \"7d4892b1-6c49-44a8-be2e-ec0d16256122\") "
May 15 15:01:45.270783 kubelet[2700]: I0515 15:01:45.270786 2700 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7d4892b1-6c49-44a8-be2e-ec0d16256122-cilium-config-path\") pod \"7d4892b1-6c49-44a8-be2e-ec0d16256122\" (UID: \"7d4892b1-6c49-44a8-be2e-ec0d16256122\") "
May 15 15:01:45.280913 kubelet[2700]: I0515 15:01:45.279839 2700 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d4892b1-6c49-44a8-be2e-ec0d16256122-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "7d4892b1-6c49-44a8-be2e-ec0d16256122" (UID: "7d4892b1-6c49-44a8-be2e-ec0d16256122"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
May 15 15:01:45.297951 kubelet[2700]: I0515 15:01:45.297802 2700 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d4892b1-6c49-44a8-be2e-ec0d16256122-kube-api-access-95xb4" (OuterVolumeSpecName: "kube-api-access-95xb4") pod "7d4892b1-6c49-44a8-be2e-ec0d16256122" (UID: "7d4892b1-6c49-44a8-be2e-ec0d16256122"). InnerVolumeSpecName "kube-api-access-95xb4". PluginName "kubernetes.io/projected", VolumeGIDValue ""
May 15 15:01:45.372956 kubelet[2700]: I0515 15:01:45.371599 2700 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/baa4dd91-1464-443d-a5fc-c7555674eb75-lib-modules\") pod \"baa4dd91-1464-443d-a5fc-c7555674eb75\" (UID: \"baa4dd91-1464-443d-a5fc-c7555674eb75\") "
May 15 15:01:45.372956 kubelet[2700]: I0515 15:01:45.371652 2700 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/baa4dd91-1464-443d-a5fc-c7555674eb75-host-proc-sys-net\") pod \"baa4dd91-1464-443d-a5fc-c7555674eb75\" (UID: \"baa4dd91-1464-443d-a5fc-c7555674eb75\") "
May 15 15:01:45.372956 kubelet[2700]: I0515 15:01:45.371670 2700 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/baa4dd91-1464-443d-a5fc-c7555674eb75-xtables-lock\") pod \"baa4dd91-1464-443d-a5fc-c7555674eb75\" (UID: \"baa4dd91-1464-443d-a5fc-c7555674eb75\") "
May 15 15:01:45.372956 kubelet[2700]: I0515 15:01:45.371694 2700 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/baa4dd91-1464-443d-a5fc-c7555674eb75-bpf-maps\") pod \"baa4dd91-1464-443d-a5fc-c7555674eb75\" (UID: \"baa4dd91-1464-443d-a5fc-c7555674eb75\") "
May 15 15:01:45.372956 kubelet[2700]: I0515 15:01:45.371722 2700 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/baa4dd91-1464-443d-a5fc-c7555674eb75-cilium-config-path\") pod \"baa4dd91-1464-443d-a5fc-c7555674eb75\" (UID: \"baa4dd91-1464-443d-a5fc-c7555674eb75\") "
May 15 15:01:45.372956 kubelet[2700]: I0515 15:01:45.371737 2700 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/baa4dd91-1464-443d-a5fc-c7555674eb75-cilium-cgroup\") pod \"baa4dd91-1464-443d-a5fc-c7555674eb75\" (UID: \"baa4dd91-1464-443d-a5fc-c7555674eb75\") "
May 15 15:01:45.373514 kubelet[2700]: I0515 15:01:45.371755 2700 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/baa4dd91-1464-443d-a5fc-c7555674eb75-hostproc\") pod \"baa4dd91-1464-443d-a5fc-c7555674eb75\" (UID: \"baa4dd91-1464-443d-a5fc-c7555674eb75\") "
May 15 15:01:45.373514 kubelet[2700]: I0515 15:01:45.371771 2700 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/baa4dd91-1464-443d-a5fc-c7555674eb75-cilium-run\") pod \"baa4dd91-1464-443d-a5fc-c7555674eb75\" (UID: \"baa4dd91-1464-443d-a5fc-c7555674eb75\") "
May 15 15:01:45.373514 kubelet[2700]: I0515 15:01:45.371770 2700 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/baa4dd91-1464-443d-a5fc-c7555674eb75-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "baa4dd91-1464-443d-a5fc-c7555674eb75" (UID: "baa4dd91-1464-443d-a5fc-c7555674eb75"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 15 15:01:45.373514 kubelet[2700]: I0515 15:01:45.371828 2700 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/baa4dd91-1464-443d-a5fc-c7555674eb75-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "baa4dd91-1464-443d-a5fc-c7555674eb75" (UID: "baa4dd91-1464-443d-a5fc-c7555674eb75"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 15 15:01:45.373514 kubelet[2700]: I0515 15:01:45.371791 2700 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/baa4dd91-1464-443d-a5fc-c7555674eb75-etc-cni-netd\") pod \"baa4dd91-1464-443d-a5fc-c7555674eb75\" (UID: \"baa4dd91-1464-443d-a5fc-c7555674eb75\") "
May 15 15:01:45.373708 kubelet[2700]: I0515 15:01:45.371861 2700 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/baa4dd91-1464-443d-a5fc-c7555674eb75-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "baa4dd91-1464-443d-a5fc-c7555674eb75" (UID: "baa4dd91-1464-443d-a5fc-c7555674eb75"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 15 15:01:45.373708 kubelet[2700]: I0515 15:01:45.371923 2700 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/baa4dd91-1464-443d-a5fc-c7555674eb75-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "baa4dd91-1464-443d-a5fc-c7555674eb75" (UID: "baa4dd91-1464-443d-a5fc-c7555674eb75"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 15 15:01:45.373708 kubelet[2700]: I0515 15:01:45.371948 2700 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/baa4dd91-1464-443d-a5fc-c7555674eb75-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "baa4dd91-1464-443d-a5fc-c7555674eb75" (UID: "baa4dd91-1464-443d-a5fc-c7555674eb75"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 15 15:01:45.373708 kubelet[2700]: I0515 15:01:45.371951 2700 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/baa4dd91-1464-443d-a5fc-c7555674eb75-cni-path\") pod \"baa4dd91-1464-443d-a5fc-c7555674eb75\" (UID: \"baa4dd91-1464-443d-a5fc-c7555674eb75\") "
May 15 15:01:45.373708 kubelet[2700]: I0515 15:01:45.372002 2700 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/baa4dd91-1464-443d-a5fc-c7555674eb75-host-proc-sys-kernel\") pod \"baa4dd91-1464-443d-a5fc-c7555674eb75\" (UID: \"baa4dd91-1464-443d-a5fc-c7555674eb75\") "
May 15 15:01:45.373934 kubelet[2700]: I0515 15:01:45.372031 2700 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/baa4dd91-1464-443d-a5fc-c7555674eb75-clustermesh-secrets\") pod \"baa4dd91-1464-443d-a5fc-c7555674eb75\" (UID: \"baa4dd91-1464-443d-a5fc-c7555674eb75\") "
May 15 15:01:45.373934 kubelet[2700]: I0515 15:01:45.372050 2700 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/baa4dd91-1464-443d-a5fc-c7555674eb75-hubble-tls\") pod \"baa4dd91-1464-443d-a5fc-c7555674eb75\" (UID: \"baa4dd91-1464-443d-a5fc-c7555674eb75\") "
May 15 15:01:45.373934 kubelet[2700]: I0515 15:01:45.372085 2700 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9tgrx\" (UniqueName: \"kubernetes.io/projected/baa4dd91-1464-443d-a5fc-c7555674eb75-kube-api-access-9tgrx\") pod \"baa4dd91-1464-443d-a5fc-c7555674eb75\" (UID: \"baa4dd91-1464-443d-a5fc-c7555674eb75\") "
May 15 15:01:45.373934 kubelet[2700]: I0515 15:01:45.372125 2700 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/baa4dd91-1464-443d-a5fc-c7555674eb75-etc-cni-netd\") on node \"ci-4334.0.0-a-cad88baf47\" DevicePath \"\""
May 15 15:01:45.373934 kubelet[2700]: I0515 15:01:45.372149 2700 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/baa4dd91-1464-443d-a5fc-c7555674eb75-lib-modules\") on node \"ci-4334.0.0-a-cad88baf47\" DevicePath \"\""
May 15 15:01:45.373934 kubelet[2700]: I0515 15:01:45.372159 2700 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-95xb4\" (UniqueName: \"kubernetes.io/projected/7d4892b1-6c49-44a8-be2e-ec0d16256122-kube-api-access-95xb4\") on node \"ci-4334.0.0-a-cad88baf47\" DevicePath \"\""
May 15 15:01:45.373934 kubelet[2700]: I0515 15:01:45.372168 2700 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/baa4dd91-1464-443d-a5fc-c7555674eb75-host-proc-sys-net\") on node \"ci-4334.0.0-a-cad88baf47\" DevicePath \"\""
May 15 15:01:45.374135 kubelet[2700]: I0515 15:01:45.372182 2700 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7d4892b1-6c49-44a8-be2e-ec0d16256122-cilium-config-path\") on node \"ci-4334.0.0-a-cad88baf47\" DevicePath \"\""
May 15 15:01:45.374135 kubelet[2700]: I0515 15:01:45.372193 2700 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/baa4dd91-1464-443d-a5fc-c7555674eb75-xtables-lock\") on node \"ci-4334.0.0-a-cad88baf47\" DevicePath \"\""
May 15 15:01:45.374135 kubelet[2700]: I0515 15:01:45.372308 2700 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/baa4dd91-1464-443d-a5fc-c7555674eb75-bpf-maps\") on node \"ci-4334.0.0-a-cad88baf47\" DevicePath \"\""
May 15 15:01:45.374422 kubelet[2700]: I0515 15:01:45.374337 2700 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/baa4dd91-1464-443d-a5fc-c7555674eb75-cni-path" (OuterVolumeSpecName: "cni-path") pod "baa4dd91-1464-443d-a5fc-c7555674eb75" (UID: "baa4dd91-1464-443d-a5fc-c7555674eb75"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 15 15:01:45.374509 kubelet[2700]: I0515 15:01:45.374452 2700 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/baa4dd91-1464-443d-a5fc-c7555674eb75-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "baa4dd91-1464-443d-a5fc-c7555674eb75" (UID: "baa4dd91-1464-443d-a5fc-c7555674eb75"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 15 15:01:45.374591 kubelet[2700]: I0515 15:01:45.374560 2700 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/baa4dd91-1464-443d-a5fc-c7555674eb75-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "baa4dd91-1464-443d-a5fc-c7555674eb75" (UID: "baa4dd91-1464-443d-a5fc-c7555674eb75"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 15 15:01:45.374634 kubelet[2700]: I0515 15:01:45.374609 2700 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/baa4dd91-1464-443d-a5fc-c7555674eb75-hostproc" (OuterVolumeSpecName: "hostproc") pod "baa4dd91-1464-443d-a5fc-c7555674eb75" (UID: "baa4dd91-1464-443d-a5fc-c7555674eb75"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 15 15:01:45.374671 kubelet[2700]: I0515 15:01:45.374631 2700 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/baa4dd91-1464-443d-a5fc-c7555674eb75-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "baa4dd91-1464-443d-a5fc-c7555674eb75" (UID: "baa4dd91-1464-443d-a5fc-c7555674eb75"). InnerVolumeSpecName "cilium-run".
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 15 15:01:45.376862 kubelet[2700]: I0515 15:01:45.376360 2700 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/baa4dd91-1464-443d-a5fc-c7555674eb75-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "baa4dd91-1464-443d-a5fc-c7555674eb75" (UID: "baa4dd91-1464-443d-a5fc-c7555674eb75"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 15 15:01:45.378075 kubelet[2700]: I0515 15:01:45.378027 2700 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/baa4dd91-1464-443d-a5fc-c7555674eb75-kube-api-access-9tgrx" (OuterVolumeSpecName: "kube-api-access-9tgrx") pod "baa4dd91-1464-443d-a5fc-c7555674eb75" (UID: "baa4dd91-1464-443d-a5fc-c7555674eb75"). InnerVolumeSpecName "kube-api-access-9tgrx". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 15 15:01:45.381560 kubelet[2700]: I0515 15:01:45.381490 2700 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/baa4dd91-1464-443d-a5fc-c7555674eb75-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "baa4dd91-1464-443d-a5fc-c7555674eb75" (UID: "baa4dd91-1464-443d-a5fc-c7555674eb75"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" May 15 15:01:45.382816 kubelet[2700]: I0515 15:01:45.382765 2700 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/baa4dd91-1464-443d-a5fc-c7555674eb75-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "baa4dd91-1464-443d-a5fc-c7555674eb75" (UID: "baa4dd91-1464-443d-a5fc-c7555674eb75"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" May 15 15:01:45.473327 kubelet[2700]: I0515 15:01:45.473153 2700 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/baa4dd91-1464-443d-a5fc-c7555674eb75-cilium-config-path\") on node \"ci-4334.0.0-a-cad88baf47\" DevicePath \"\"" May 15 15:01:45.475926 kubelet[2700]: I0515 15:01:45.473614 2700 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/baa4dd91-1464-443d-a5fc-c7555674eb75-cilium-cgroup\") on node \"ci-4334.0.0-a-cad88baf47\" DevicePath \"\"" May 15 15:01:45.475926 kubelet[2700]: I0515 15:01:45.473646 2700 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/baa4dd91-1464-443d-a5fc-c7555674eb75-hostproc\") on node \"ci-4334.0.0-a-cad88baf47\" DevicePath \"\"" May 15 15:01:45.475926 kubelet[2700]: I0515 15:01:45.473663 2700 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/baa4dd91-1464-443d-a5fc-c7555674eb75-cilium-run\") on node \"ci-4334.0.0-a-cad88baf47\" DevicePath \"\"" May 15 15:01:45.475926 kubelet[2700]: I0515 15:01:45.473679 2700 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/baa4dd91-1464-443d-a5fc-c7555674eb75-clustermesh-secrets\") on node \"ci-4334.0.0-a-cad88baf47\" DevicePath \"\"" May 15 15:01:45.475926 kubelet[2700]: I0515 15:01:45.473696 2700 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/baa4dd91-1464-443d-a5fc-c7555674eb75-cni-path\") on node \"ci-4334.0.0-a-cad88baf47\" DevicePath \"\"" May 15 15:01:45.475926 kubelet[2700]: I0515 15:01:45.473713 2700 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/baa4dd91-1464-443d-a5fc-c7555674eb75-host-proc-sys-kernel\") on 
node \"ci-4334.0.0-a-cad88baf47\" DevicePath \"\"" May 15 15:01:45.475926 kubelet[2700]: I0515 15:01:45.473733 2700 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9tgrx\" (UniqueName: \"kubernetes.io/projected/baa4dd91-1464-443d-a5fc-c7555674eb75-kube-api-access-9tgrx\") on node \"ci-4334.0.0-a-cad88baf47\" DevicePath \"\"" May 15 15:01:45.475926 kubelet[2700]: I0515 15:01:45.473757 2700 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/baa4dd91-1464-443d-a5fc-c7555674eb75-hubble-tls\") on node \"ci-4334.0.0-a-cad88baf47\" DevicePath \"\"" May 15 15:01:46.038098 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-21f76990a58a3cc05aa4bd41359b0b486bfe71ba2bf91adcaa4e5398528b82ac-shm.mount: Deactivated successfully. May 15 15:01:46.038280 systemd[1]: var-lib-kubelet-pods-baa4dd91\x2d1464\x2d443d\x2da5fc\x2dc7555674eb75-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d9tgrx.mount: Deactivated successfully. May 15 15:01:46.038380 systemd[1]: var-lib-kubelet-pods-7d4892b1\x2d6c49\x2d44a8\x2dbe2e\x2dec0d16256122-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d95xb4.mount: Deactivated successfully. May 15 15:01:46.038477 systemd[1]: var-lib-kubelet-pods-baa4dd91\x2d1464\x2d443d\x2da5fc\x2dc7555674eb75-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 15 15:01:46.038615 systemd[1]: var-lib-kubelet-pods-baa4dd91\x2d1464\x2d443d\x2da5fc\x2dc7555674eb75-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
May 15 15:01:46.051416 kubelet[2700]: I0515 15:01:46.050002 2700 scope.go:117] "RemoveContainer" containerID="42823f7338002b6337a80dac59b2ee76b092b024550116696cbbc8e64a0ce746"
May 15 15:01:46.054120 containerd[1582]: time="2025-05-15T15:01:46.054031935Z" level=info msg="RemoveContainer for \"42823f7338002b6337a80dac59b2ee76b092b024550116696cbbc8e64a0ce746\""
May 15 15:01:46.061271 containerd[1582]: time="2025-05-15T15:01:46.061220333Z" level=info msg="RemoveContainer for \"42823f7338002b6337a80dac59b2ee76b092b024550116696cbbc8e64a0ce746\" returns successfully"
May 15 15:01:46.063349 kubelet[2700]: I0515 15:01:46.063299 2700 scope.go:117] "RemoveContainer" containerID="42823f7338002b6337a80dac59b2ee76b092b024550116696cbbc8e64a0ce746"
May 15 15:01:46.071858 systemd[1]: Removed slice kubepods-besteffort-pod7d4892b1_6c49_44a8_be2e_ec0d16256122.slice - libcontainer container kubepods-besteffort-pod7d4892b1_6c49_44a8_be2e_ec0d16256122.slice.
May 15 15:01:46.071998 systemd[1]: kubepods-besteffort-pod7d4892b1_6c49_44a8_be2e_ec0d16256122.slice: Consumed 664ms CPU time, 29.3M memory peak, 1.4M read from disk, 4K written to disk.
May 15 15:01:46.088902 containerd[1582]: time="2025-05-15T15:01:46.063605175Z" level=error msg="ContainerStatus for \"42823f7338002b6337a80dac59b2ee76b092b024550116696cbbc8e64a0ce746\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"42823f7338002b6337a80dac59b2ee76b092b024550116696cbbc8e64a0ce746\": not found"
May 15 15:01:46.098852 kubelet[2700]: E0515 15:01:46.098041 2700 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"42823f7338002b6337a80dac59b2ee76b092b024550116696cbbc8e64a0ce746\": not found" containerID="42823f7338002b6337a80dac59b2ee76b092b024550116696cbbc8e64a0ce746"
May 15 15:01:46.098852 kubelet[2700]: I0515 15:01:46.098111 2700 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"42823f7338002b6337a80dac59b2ee76b092b024550116696cbbc8e64a0ce746"} err="failed to get container status \"42823f7338002b6337a80dac59b2ee76b092b024550116696cbbc8e64a0ce746\": rpc error: code = NotFound desc = an error occurred when try to find container \"42823f7338002b6337a80dac59b2ee76b092b024550116696cbbc8e64a0ce746\": not found"
May 15 15:01:46.102031 kubelet[2700]: I0515 15:01:46.101960 2700 scope.go:117] "RemoveContainer" containerID="bd0466d19cba1deda7a97b2a2156051f2f59172bb1864ee18e85817ee39f7e1a"
May 15 15:01:46.114400 containerd[1582]: time="2025-05-15T15:01:46.114335458Z" level=info msg="RemoveContainer for \"bd0466d19cba1deda7a97b2a2156051f2f59172bb1864ee18e85817ee39f7e1a\""
May 15 15:01:46.114966 systemd[1]: Removed slice kubepods-burstable-podbaa4dd91_1464_443d_a5fc_c7555674eb75.slice - libcontainer container kubepods-burstable-podbaa4dd91_1464_443d_a5fc_c7555674eb75.slice.
May 15 15:01:46.115188 systemd[1]: kubepods-burstable-podbaa4dd91_1464_443d_a5fc_c7555674eb75.slice: Consumed 12.088s CPU time, 154.4M memory peak, 33.6M read from disk, 13.3M written to disk.
May 15 15:01:46.124762 containerd[1582]: time="2025-05-15T15:01:46.124660566Z" level=info msg="RemoveContainer for \"bd0466d19cba1deda7a97b2a2156051f2f59172bb1864ee18e85817ee39f7e1a\" returns successfully"
May 15 15:01:46.125531 kubelet[2700]: I0515 15:01:46.125191 2700 scope.go:117] "RemoveContainer" containerID="eaba067f7b1382786db94346ce6752029016429c3267d93bfc369b04950bcc7c"
May 15 15:01:46.129465 containerd[1582]: time="2025-05-15T15:01:46.129266924Z" level=info msg="RemoveContainer for \"eaba067f7b1382786db94346ce6752029016429c3267d93bfc369b04950bcc7c\""
May 15 15:01:46.134857 containerd[1582]: time="2025-05-15T15:01:46.134799717Z" level=info msg="RemoveContainer for \"eaba067f7b1382786db94346ce6752029016429c3267d93bfc369b04950bcc7c\" returns successfully"
May 15 15:01:46.135497 kubelet[2700]: I0515 15:01:46.135449 2700 scope.go:117] "RemoveContainer" containerID="75b90c71182f7103d87f8a3ff7bb115cd1934951379629027c744485bfbdf676"
May 15 15:01:46.143755 containerd[1582]: time="2025-05-15T15:01:46.143612816Z" level=info msg="RemoveContainer for \"75b90c71182f7103d87f8a3ff7bb115cd1934951379629027c744485bfbdf676\""
May 15 15:01:46.153906 containerd[1582]: time="2025-05-15T15:01:46.153737686Z" level=info msg="RemoveContainer for \"75b90c71182f7103d87f8a3ff7bb115cd1934951379629027c744485bfbdf676\" returns successfully"
May 15 15:01:46.154989 kubelet[2700]: I0515 15:01:46.154344 2700 scope.go:117] "RemoveContainer" containerID="dd73e737f302049ff11623a828e49e8fcc97139aa74c039d1c96c2eb20d71c7e"
May 15 15:01:46.157156 containerd[1582]: time="2025-05-15T15:01:46.157118572Z" level=info msg="RemoveContainer for \"dd73e737f302049ff11623a828e49e8fcc97139aa74c039d1c96c2eb20d71c7e\""
May 15 15:01:46.161072 containerd[1582]: time="2025-05-15T15:01:46.161021858Z" level=info msg="RemoveContainer for \"dd73e737f302049ff11623a828e49e8fcc97139aa74c039d1c96c2eb20d71c7e\" returns successfully"
May 15 15:01:46.161861 kubelet[2700]: I0515 15:01:46.161693 2700 scope.go:117] "RemoveContainer" containerID="d7983d43b2590a9337c8ea15d85cee7c9cff3826c6eaaf2305e4f709ff42808d"
May 15 15:01:46.168904 containerd[1582]: time="2025-05-15T15:01:46.168817090Z" level=info msg="RemoveContainer for \"d7983d43b2590a9337c8ea15d85cee7c9cff3826c6eaaf2305e4f709ff42808d\""
May 15 15:01:46.175074 containerd[1582]: time="2025-05-15T15:01:46.174901134Z" level=info msg="RemoveContainer for \"d7983d43b2590a9337c8ea15d85cee7c9cff3826c6eaaf2305e4f709ff42808d\" returns successfully"
May 15 15:01:46.178116 kubelet[2700]: I0515 15:01:46.178072 2700 scope.go:117] "RemoveContainer" containerID="bd0466d19cba1deda7a97b2a2156051f2f59172bb1864ee18e85817ee39f7e1a"
May 15 15:01:46.178892 containerd[1582]: time="2025-05-15T15:01:46.178778562Z" level=error msg="ContainerStatus for \"bd0466d19cba1deda7a97b2a2156051f2f59172bb1864ee18e85817ee39f7e1a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bd0466d19cba1deda7a97b2a2156051f2f59172bb1864ee18e85817ee39f7e1a\": not found"
May 15 15:01:46.179313 kubelet[2700]: E0515 15:01:46.179172 2700 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bd0466d19cba1deda7a97b2a2156051f2f59172bb1864ee18e85817ee39f7e1a\": not found" containerID="bd0466d19cba1deda7a97b2a2156051f2f59172bb1864ee18e85817ee39f7e1a"
May 15 15:01:46.179313 kubelet[2700]: I0515 15:01:46.179212 2700 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bd0466d19cba1deda7a97b2a2156051f2f59172bb1864ee18e85817ee39f7e1a"} err="failed to get container status \"bd0466d19cba1deda7a97b2a2156051f2f59172bb1864ee18e85817ee39f7e1a\": rpc error: code = NotFound desc = an error occurred when try to find container \"bd0466d19cba1deda7a97b2a2156051f2f59172bb1864ee18e85817ee39f7e1a\": not found"
May 15 15:01:46.179313 kubelet[2700]: I0515 15:01:46.179239 2700 scope.go:117] "RemoveContainer" containerID="eaba067f7b1382786db94346ce6752029016429c3267d93bfc369b04950bcc7c"
May 15 15:01:46.179835 containerd[1582]: time="2025-05-15T15:01:46.179786016Z" level=error msg="ContainerStatus for \"eaba067f7b1382786db94346ce6752029016429c3267d93bfc369b04950bcc7c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"eaba067f7b1382786db94346ce6752029016429c3267d93bfc369b04950bcc7c\": not found"
May 15 15:01:46.180186 kubelet[2700]: E0515 15:01:46.180072 2700 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"eaba067f7b1382786db94346ce6752029016429c3267d93bfc369b04950bcc7c\": not found" containerID="eaba067f7b1382786db94346ce6752029016429c3267d93bfc369b04950bcc7c"
May 15 15:01:46.180186 kubelet[2700]: I0515 15:01:46.180100 2700 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"eaba067f7b1382786db94346ce6752029016429c3267d93bfc369b04950bcc7c"} err="failed to get container status \"eaba067f7b1382786db94346ce6752029016429c3267d93bfc369b04950bcc7c\": rpc error: code = NotFound desc = an error occurred when try to find container \"eaba067f7b1382786db94346ce6752029016429c3267d93bfc369b04950bcc7c\": not found"
May 15 15:01:46.180186 kubelet[2700]: I0515 15:01:46.180120 2700 scope.go:117] "RemoveContainer" containerID="75b90c71182f7103d87f8a3ff7bb115cd1934951379629027c744485bfbdf676"
May 15 15:01:46.180571 containerd[1582]: time="2025-05-15T15:01:46.180541484Z" level=error msg="ContainerStatus for \"75b90c71182f7103d87f8a3ff7bb115cd1934951379629027c744485bfbdf676\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"75b90c71182f7103d87f8a3ff7bb115cd1934951379629027c744485bfbdf676\": not found"
May 15 15:01:46.180982 kubelet[2700]: E0515 15:01:46.180919 2700 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"75b90c71182f7103d87f8a3ff7bb115cd1934951379629027c744485bfbdf676\": not found" containerID="75b90c71182f7103d87f8a3ff7bb115cd1934951379629027c744485bfbdf676"
May 15 15:01:46.181256 kubelet[2700]: I0515 15:01:46.181073 2700 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"75b90c71182f7103d87f8a3ff7bb115cd1934951379629027c744485bfbdf676"} err="failed to get container status \"75b90c71182f7103d87f8a3ff7bb115cd1934951379629027c744485bfbdf676\": rpc error: code = NotFound desc = an error occurred when try to find container \"75b90c71182f7103d87f8a3ff7bb115cd1934951379629027c744485bfbdf676\": not found"
May 15 15:01:46.181256 kubelet[2700]: I0515 15:01:46.181120 2700 scope.go:117] "RemoveContainer" containerID="dd73e737f302049ff11623a828e49e8fcc97139aa74c039d1c96c2eb20d71c7e"
May 15 15:01:46.181811 containerd[1582]: time="2025-05-15T15:01:46.181744104Z" level=error msg="ContainerStatus for \"dd73e737f302049ff11623a828e49e8fcc97139aa74c039d1c96c2eb20d71c7e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"dd73e737f302049ff11623a828e49e8fcc97139aa74c039d1c96c2eb20d71c7e\": not found"
May 15 15:01:46.182044 kubelet[2700]: E0515 15:01:46.182027 2700 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"dd73e737f302049ff11623a828e49e8fcc97139aa74c039d1c96c2eb20d71c7e\": not found" containerID="dd73e737f302049ff11623a828e49e8fcc97139aa74c039d1c96c2eb20d71c7e"
May 15 15:01:46.182199 kubelet[2700]: I0515 15:01:46.182175 2700 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"dd73e737f302049ff11623a828e49e8fcc97139aa74c039d1c96c2eb20d71c7e"} err="failed to get container status \"dd73e737f302049ff11623a828e49e8fcc97139aa74c039d1c96c2eb20d71c7e\": rpc error: code = NotFound desc = an error occurred when try to find container \"dd73e737f302049ff11623a828e49e8fcc97139aa74c039d1c96c2eb20d71c7e\": not found"
May 15 15:01:46.182283 kubelet[2700]: I0515 15:01:46.182273 2700 scope.go:117] "RemoveContainer" containerID="d7983d43b2590a9337c8ea15d85cee7c9cff3826c6eaaf2305e4f709ff42808d"
May 15 15:01:46.182575 containerd[1582]: time="2025-05-15T15:01:46.182542008Z" level=error msg="ContainerStatus for \"d7983d43b2590a9337c8ea15d85cee7c9cff3826c6eaaf2305e4f709ff42808d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d7983d43b2590a9337c8ea15d85cee7c9cff3826c6eaaf2305e4f709ff42808d\": not found"
May 15 15:01:46.182952 kubelet[2700]: E0515 15:01:46.182911 2700 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d7983d43b2590a9337c8ea15d85cee7c9cff3826c6eaaf2305e4f709ff42808d\": not found" containerID="d7983d43b2590a9337c8ea15d85cee7c9cff3826c6eaaf2305e4f709ff42808d"
May 15 15:01:46.182952 kubelet[2700]: I0515 15:01:46.182933 2700 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d7983d43b2590a9337c8ea15d85cee7c9cff3826c6eaaf2305e4f709ff42808d"} err="failed to get container status \"d7983d43b2590a9337c8ea15d85cee7c9cff3826c6eaaf2305e4f709ff42808d\": rpc error: code = NotFound desc = an error occurred when try to find container \"d7983d43b2590a9337c8ea15d85cee7c9cff3826c6eaaf2305e4f709ff42808d\": not found"
May 15 15:01:46.336058 kubelet[2700]: I0515 15:01:46.335702 2700 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d4892b1-6c49-44a8-be2e-ec0d16256122" path="/var/lib/kubelet/pods/7d4892b1-6c49-44a8-be2e-ec0d16256122/volumes"
May 15 15:01:46.336591 kubelet[2700]: I0515 15:01:46.336401 2700 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="baa4dd91-1464-443d-a5fc-c7555674eb75" path="/var/lib/kubelet/pods/baa4dd91-1464-443d-a5fc-c7555674eb75/volumes"
May 15 15:01:46.816899 sshd[4130]: Connection closed by 139.178.68.195 port 50058
May 15 15:01:46.822607 sshd-session[4128]: pam_unix(sshd:session): session closed for user core
May 15 15:01:46.828699 systemd[1]: sshd@28-137.184.120.255:22-139.178.68.195:50058.service: Deactivated successfully.
May 15 15:01:46.831535 systemd[1]: session-28.scope: Deactivated successfully.
May 15 15:01:46.834016 systemd-logind[1497]: Session 28 logged out. Waiting for processes to exit.
May 15 15:01:46.837557 systemd[1]: Started sshd@29-137.184.120.255:22-139.178.68.195:52566.service - OpenSSH per-connection server daemon (139.178.68.195:52566).
May 15 15:01:46.839625 systemd-logind[1497]: Removed session 28.
May 15 15:01:46.942627 sshd[4284]: Accepted publickey for core from 139.178.68.195 port 52566 ssh2: RSA SHA256:CR2QFGI8Wi38j7m0fVendNlhmaPvJh+gYMXcH5yQYrY
May 15 15:01:46.944826 sshd-session[4284]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 15:01:46.952108 systemd-logind[1497]: New session 29 of user core.
May 15 15:01:46.970265 systemd[1]: Started session-29.scope - Session 29 of User core.
May 15 15:01:48.118826 sshd[4286]: Connection closed by 139.178.68.195 port 52566
May 15 15:01:48.118177 sshd-session[4284]: pam_unix(sshd:session): session closed for user core
May 15 15:01:48.129536 systemd[1]: sshd@29-137.184.120.255:22-139.178.68.195:52566.service: Deactivated successfully.
May 15 15:01:48.132859 systemd[1]: session-29.scope: Deactivated successfully.
May 15 15:01:48.133629 systemd[1]: session-29.scope: Consumed 1.020s CPU time, 25.1M memory peak.
May 15 15:01:48.136977 systemd-logind[1497]: Session 29 logged out. Waiting for processes to exit.
May 15 15:01:48.141633 systemd[1]: Started sshd@30-137.184.120.255:22-139.178.68.195:52570.service - OpenSSH per-connection server daemon (139.178.68.195:52570).
May 15 15:01:48.146946 systemd-logind[1497]: Removed session 29.
May 15 15:01:48.176114 kubelet[2700]: I0515 15:01:48.176055 2700 memory_manager.go:355] "RemoveStaleState removing state" podUID="7d4892b1-6c49-44a8-be2e-ec0d16256122" containerName="cilium-operator"
May 15 15:01:48.177629 kubelet[2700]: I0515 15:01:48.176934 2700 memory_manager.go:355] "RemoveStaleState removing state" podUID="baa4dd91-1464-443d-a5fc-c7555674eb75" containerName="cilium-agent"
May 15 15:01:48.204270 systemd[1]: Created slice kubepods-burstable-pod66ece67b_21ca_494e_9a96_525b2b9f243f.slice - libcontainer container kubepods-burstable-pod66ece67b_21ca_494e_9a96_525b2b9f243f.slice.
May 15 15:01:48.238496 sshd[4297]: Accepted publickey for core from 139.178.68.195 port 52570 ssh2: RSA SHA256:CR2QFGI8Wi38j7m0fVendNlhmaPvJh+gYMXcH5yQYrY
May 15 15:01:48.241310 sshd-session[4297]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 15:01:48.253474 systemd-logind[1497]: New session 30 of user core.
May 15 15:01:48.257200 systemd[1]: Started session-30.scope - Session 30 of User core.
May 15 15:01:48.290299 kubelet[2700]: I0515 15:01:48.290120 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/66ece67b-21ca-494e-9a96-525b2b9f243f-hostproc\") pod \"cilium-mwd8h\" (UID: \"66ece67b-21ca-494e-9a96-525b2b9f243f\") " pod="kube-system/cilium-mwd8h"
May 15 15:01:48.290299 kubelet[2700]: I0515 15:01:48.290175 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/66ece67b-21ca-494e-9a96-525b2b9f243f-cni-path\") pod \"cilium-mwd8h\" (UID: \"66ece67b-21ca-494e-9a96-525b2b9f243f\") " pod="kube-system/cilium-mwd8h"
May 15 15:01:48.290299 kubelet[2700]: I0515 15:01:48.290200 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/66ece67b-21ca-494e-9a96-525b2b9f243f-host-proc-sys-net\") pod \"cilium-mwd8h\" (UID: \"66ece67b-21ca-494e-9a96-525b2b9f243f\") " pod="kube-system/cilium-mwd8h"
May 15 15:01:48.290299 kubelet[2700]: I0515 15:01:48.290218 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/66ece67b-21ca-494e-9a96-525b2b9f243f-cilium-config-path\") pod \"cilium-mwd8h\" (UID: \"66ece67b-21ca-494e-9a96-525b2b9f243f\") " pod="kube-system/cilium-mwd8h"
May 15 15:01:48.290299 kubelet[2700]: I0515 15:01:48.290237 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/66ece67b-21ca-494e-9a96-525b2b9f243f-host-proc-sys-kernel\") pod \"cilium-mwd8h\" (UID: \"66ece67b-21ca-494e-9a96-525b2b9f243f\") " pod="kube-system/cilium-mwd8h"
May 15 15:01:48.290599 kubelet[2700]: I0515 15:01:48.290318 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/66ece67b-21ca-494e-9a96-525b2b9f243f-clustermesh-secrets\") pod \"cilium-mwd8h\" (UID: \"66ece67b-21ca-494e-9a96-525b2b9f243f\") " pod="kube-system/cilium-mwd8h"
May 15 15:01:48.290599 kubelet[2700]: I0515 15:01:48.290372 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/66ece67b-21ca-494e-9a96-525b2b9f243f-lib-modules\") pod \"cilium-mwd8h\" (UID: \"66ece67b-21ca-494e-9a96-525b2b9f243f\") " pod="kube-system/cilium-mwd8h"
May 15 15:01:48.290599 kubelet[2700]: I0515 15:01:48.290411 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/66ece67b-21ca-494e-9a96-525b2b9f243f-hubble-tls\") pod \"cilium-mwd8h\" (UID: \"66ece67b-21ca-494e-9a96-525b2b9f243f\") " pod="kube-system/cilium-mwd8h"
May 15 15:01:48.290599 kubelet[2700]: I0515 15:01:48.290445 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/66ece67b-21ca-494e-9a96-525b2b9f243f-bpf-maps\") pod \"cilium-mwd8h\" (UID: \"66ece67b-21ca-494e-9a96-525b2b9f243f\") " pod="kube-system/cilium-mwd8h"
May 15 15:01:48.290599 kubelet[2700]: I0515 15:01:48.290462 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/66ece67b-21ca-494e-9a96-525b2b9f243f-cilium-cgroup\") pod \"cilium-mwd8h\" (UID: \"66ece67b-21ca-494e-9a96-525b2b9f243f\") " pod="kube-system/cilium-mwd8h"
May 15 15:01:48.290599 kubelet[2700]: I0515 15:01:48.290482 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/66ece67b-21ca-494e-9a96-525b2b9f243f-cilium-ipsec-secrets\") pod \"cilium-mwd8h\" (UID: \"66ece67b-21ca-494e-9a96-525b2b9f243f\") " pod="kube-system/cilium-mwd8h"
May 15 15:01:48.290743 kubelet[2700]: I0515 15:01:48.290500 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/66ece67b-21ca-494e-9a96-525b2b9f243f-cilium-run\") pod \"cilium-mwd8h\" (UID: \"66ece67b-21ca-494e-9a96-525b2b9f243f\") " pod="kube-system/cilium-mwd8h"
May 15 15:01:48.290743 kubelet[2700]: I0515 15:01:48.290518 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/66ece67b-21ca-494e-9a96-525b2b9f243f-etc-cni-netd\") pod \"cilium-mwd8h\" (UID: \"66ece67b-21ca-494e-9a96-525b2b9f243f\") " pod="kube-system/cilium-mwd8h"
May 15 15:01:48.290743 kubelet[2700]: I0515 15:01:48.290535 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7kwh\" (UniqueName: \"kubernetes.io/projected/66ece67b-21ca-494e-9a96-525b2b9f243f-kube-api-access-m7kwh\") pod \"cilium-mwd8h\" (UID: \"66ece67b-21ca-494e-9a96-525b2b9f243f\") " pod="kube-system/cilium-mwd8h"
May 15 15:01:48.290743 kubelet[2700]: I0515 15:01:48.290554 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/66ece67b-21ca-494e-9a96-525b2b9f243f-xtables-lock\") pod \"cilium-mwd8h\" (UID: \"66ece67b-21ca-494e-9a96-525b2b9f243f\") " pod="kube-system/cilium-mwd8h"
May 15 15:01:48.323965 sshd[4299]: Connection closed by 139.178.68.195 port 52570
May 15 15:01:48.324621 sshd-session[4297]: pam_unix(sshd:session): session closed for user core
May 15 15:01:48.338526 systemd[1]: sshd@30-137.184.120.255:22-139.178.68.195:52570.service: Deactivated successfully.
May 15 15:01:48.343500 systemd[1]: session-30.scope: Deactivated successfully.
May 15 15:01:48.347304 systemd-logind[1497]: Session 30 logged out. Waiting for processes to exit.
May 15 15:01:48.358422 systemd[1]: Started sshd@31-137.184.120.255:22-139.178.68.195:52582.service - OpenSSH per-connection server daemon (139.178.68.195:52582).
May 15 15:01:48.360330 systemd-logind[1497]: Removed session 30.
May 15 15:01:48.472671 sshd[4306]: Accepted publickey for core from 139.178.68.195 port 52582 ssh2: RSA SHA256:CR2QFGI8Wi38j7m0fVendNlhmaPvJh+gYMXcH5yQYrY
May 15 15:01:48.474851 sshd-session[4306]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 15:01:48.482617 systemd-logind[1497]: New session 31 of user core.
May 15 15:01:48.494272 systemd[1]: Started session-31.scope - Session 31 of User core.
May 15 15:01:48.516518 kubelet[2700]: E0515 15:01:48.516467 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 15 15:01:48.521265 containerd[1582]: time="2025-05-15T15:01:48.521193064Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mwd8h,Uid:66ece67b-21ca-494e-9a96-525b2b9f243f,Namespace:kube-system,Attempt:0,}"
May 15 15:01:48.543189 containerd[1582]: time="2025-05-15T15:01:48.543126670Z" level=info msg="connecting to shim 67e00445a606f8408705b587cd48b6f4d10600efb767350c621e75479e0a88bf" address="unix:///run/containerd/s/23d2cd6c087b72f94df82c30664e11c14431185d375d536a4b8302f1a1ea722e" namespace=k8s.io protocol=ttrpc version=3
May 15 15:01:48.561356 kubelet[2700]: E0515 15:01:48.561204 2700 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 15 15:01:48.584334 systemd[1]: Started cri-containerd-67e00445a606f8408705b587cd48b6f4d10600efb767350c621e75479e0a88bf.scope - libcontainer container 67e00445a606f8408705b587cd48b6f4d10600efb767350c621e75479e0a88bf.
May 15 15:01:48.649002 containerd[1582]: time="2025-05-15T15:01:48.648841118Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mwd8h,Uid:66ece67b-21ca-494e-9a96-525b2b9f243f,Namespace:kube-system,Attempt:0,} returns sandbox id \"67e00445a606f8408705b587cd48b6f4d10600efb767350c621e75479e0a88bf\""
May 15 15:01:48.651277 kubelet[2700]: E0515 15:01:48.650608 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 15 15:01:48.664268 containerd[1582]: time="2025-05-15T15:01:48.662042836Z" level=info msg="CreateContainer within sandbox \"67e00445a606f8408705b587cd48b6f4d10600efb767350c621e75479e0a88bf\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 15 15:01:48.674369 containerd[1582]: time="2025-05-15T15:01:48.674286857Z" level=info msg="Container e76f87b2f078a968a9983e41f9fdd440a2671e8c894060049a513208a71ec918: CDI devices from CRI Config.CDIDevices: []"
May 15 15:01:48.685214 containerd[1582]: time="2025-05-15T15:01:48.685142697Z" level=info msg="CreateContainer within sandbox \"67e00445a606f8408705b587cd48b6f4d10600efb767350c621e75479e0a88bf\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e76f87b2f078a968a9983e41f9fdd440a2671e8c894060049a513208a71ec918\""
May 15 15:01:48.686614 containerd[1582]: time="2025-05-15T15:01:48.686564467Z" level=info msg="StartContainer for \"e76f87b2f078a968a9983e41f9fdd440a2671e8c894060049a513208a71ec918\""
May 15 15:01:48.690256 containerd[1582]: time="2025-05-15T15:01:48.690123708Z" level=info msg="connecting to shim e76f87b2f078a968a9983e41f9fdd440a2671e8c894060049a513208a71ec918" address="unix:///run/containerd/s/23d2cd6c087b72f94df82c30664e11c14431185d375d536a4b8302f1a1ea722e" protocol=ttrpc version=3
May 15 15:01:48.725301 systemd[1]: Started cri-containerd-e76f87b2f078a968a9983e41f9fdd440a2671e8c894060049a513208a71ec918.scope - libcontainer container e76f87b2f078a968a9983e41f9fdd440a2671e8c894060049a513208a71ec918.
May 15 15:01:48.778859 containerd[1582]: time="2025-05-15T15:01:48.778785203Z" level=info msg="StartContainer for \"e76f87b2f078a968a9983e41f9fdd440a2671e8c894060049a513208a71ec918\" returns successfully"
May 15 15:01:48.794940 systemd[1]: cri-containerd-e76f87b2f078a968a9983e41f9fdd440a2671e8c894060049a513208a71ec918.scope: Deactivated successfully.
May 15 15:01:48.803348 containerd[1582]: time="2025-05-15T15:01:48.803271603Z" level=info msg="received exit event container_id:\"e76f87b2f078a968a9983e41f9fdd440a2671e8c894060049a513208a71ec918\" id:\"e76f87b2f078a968a9983e41f9fdd440a2671e8c894060049a513208a71ec918\" pid:4381 exited_at:{seconds:1747321308 nanos:800591494}"
May 15 15:01:48.805007 containerd[1582]: time="2025-05-15T15:01:48.804707553Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e76f87b2f078a968a9983e41f9fdd440a2671e8c894060049a513208a71ec918\" id:\"e76f87b2f078a968a9983e41f9fdd440a2671e8c894060049a513208a71ec918\" pid:4381 exited_at:{seconds:1747321308 nanos:800591494}"
May 15 15:01:49.119162 kubelet[2700]: E0515 15:01:49.117784 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 15 15:01:49.123520 containerd[1582]: time="2025-05-15T15:01:49.123450979Z" level=info msg="CreateContainer within sandbox \"67e00445a606f8408705b587cd48b6f4d10600efb767350c621e75479e0a88bf\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 15 15:01:49.137607 containerd[1582]: time="2025-05-15T15:01:49.137539453Z" level=info msg="Container dd302ebe4d6c628e83cf641acaaa9cc0f6360ab7390dc54264478dfd5fe75627: CDI devices from CRI Config.CDIDevices: []"
May 15 15:01:49.153783 containerd[1582]: time="2025-05-15T15:01:49.151750876Z" level=info msg="CreateContainer within sandbox \"67e00445a606f8408705b587cd48b6f4d10600efb767350c621e75479e0a88bf\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"dd302ebe4d6c628e83cf641acaaa9cc0f6360ab7390dc54264478dfd5fe75627\""
May 15 15:01:49.156019 containerd[1582]: time="2025-05-15T15:01:49.155770667Z" level=info msg="StartContainer for \"dd302ebe4d6c628e83cf641acaaa9cc0f6360ab7390dc54264478dfd5fe75627\""
May 15 15:01:49.159976 containerd[1582]: time="2025-05-15T15:01:49.159445664Z" level=info msg="connecting to shim dd302ebe4d6c628e83cf641acaaa9cc0f6360ab7390dc54264478dfd5fe75627" address="unix:///run/containerd/s/23d2cd6c087b72f94df82c30664e11c14431185d375d536a4b8302f1a1ea722e" protocol=ttrpc version=3
May 15 15:01:49.189246 systemd[1]: Started cri-containerd-dd302ebe4d6c628e83cf641acaaa9cc0f6360ab7390dc54264478dfd5fe75627.scope - libcontainer container dd302ebe4d6c628e83cf641acaaa9cc0f6360ab7390dc54264478dfd5fe75627.
May 15 15:01:49.226848 containerd[1582]: time="2025-05-15T15:01:49.226658326Z" level=info msg="StartContainer for \"dd302ebe4d6c628e83cf641acaaa9cc0f6360ab7390dc54264478dfd5fe75627\" returns successfully"
May 15 15:01:49.235185 systemd[1]: cri-containerd-dd302ebe4d6c628e83cf641acaaa9cc0f6360ab7390dc54264478dfd5fe75627.scope: Deactivated successfully.
May 15 15:01:49.240138 containerd[1582]: time="2025-05-15T15:01:49.240063637Z" level=info msg="received exit event container_id:\"dd302ebe4d6c628e83cf641acaaa9cc0f6360ab7390dc54264478dfd5fe75627\" id:\"dd302ebe4d6c628e83cf641acaaa9cc0f6360ab7390dc54264478dfd5fe75627\" pid:4427 exited_at:{seconds:1747321309 nanos:239543645}"
May 15 15:01:49.240739 containerd[1582]: time="2025-05-15T15:01:49.240695106Z" level=info msg="TaskExit event in podsandbox handler container_id:\"dd302ebe4d6c628e83cf641acaaa9cc0f6360ab7390dc54264478dfd5fe75627\" id:\"dd302ebe4d6c628e83cf641acaaa9cc0f6360ab7390dc54264478dfd5fe75627\" pid:4427 exited_at:{seconds:1747321309 nanos:239543645}"
May 15 15:01:50.124342 kubelet[2700]: E0515 15:01:50.124071 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 15 15:01:50.127917 containerd[1582]: time="2025-05-15T15:01:50.127790425Z" level=info msg="CreateContainer within sandbox \"67e00445a606f8408705b587cd48b6f4d10600efb767350c621e75479e0a88bf\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 15 15:01:50.147120 containerd[1582]: time="2025-05-15T15:01:50.147056711Z" level=info msg="Container c3da816d13f821ef9c3e673082cade17f798f20b12537af78b2e27c327750db3: CDI devices from CRI Config.CDIDevices: []"
May 15 15:01:50.153573 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount296732698.mount: Deactivated successfully.
May 15 15:01:50.161897 containerd[1582]: time="2025-05-15T15:01:50.161793828Z" level=info msg="CreateContainer within sandbox \"67e00445a606f8408705b587cd48b6f4d10600efb767350c621e75479e0a88bf\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"c3da816d13f821ef9c3e673082cade17f798f20b12537af78b2e27c327750db3\""
May 15 15:01:50.163092 containerd[1582]: time="2025-05-15T15:01:50.163046227Z" level=info msg="StartContainer for \"c3da816d13f821ef9c3e673082cade17f798f20b12537af78b2e27c327750db3\""
May 15 15:01:50.165559 containerd[1582]: time="2025-05-15T15:01:50.165438038Z" level=info msg="connecting to shim c3da816d13f821ef9c3e673082cade17f798f20b12537af78b2e27c327750db3" address="unix:///run/containerd/s/23d2cd6c087b72f94df82c30664e11c14431185d375d536a4b8302f1a1ea722e" protocol=ttrpc version=3
May 15 15:01:50.197261 systemd[1]: Started cri-containerd-c3da816d13f821ef9c3e673082cade17f798f20b12537af78b2e27c327750db3.scope - libcontainer container c3da816d13f821ef9c3e673082cade17f798f20b12537af78b2e27c327750db3.
May 15 15:01:50.255075 containerd[1582]: time="2025-05-15T15:01:50.255004421Z" level=info msg="StartContainer for \"c3da816d13f821ef9c3e673082cade17f798f20b12537af78b2e27c327750db3\" returns successfully"
May 15 15:01:50.259560 systemd[1]: cri-containerd-c3da816d13f821ef9c3e673082cade17f798f20b12537af78b2e27c327750db3.scope: Deactivated successfully.
May 15 15:01:50.263247 containerd[1582]: time="2025-05-15T15:01:50.263175638Z" level=info msg="received exit event container_id:\"c3da816d13f821ef9c3e673082cade17f798f20b12537af78b2e27c327750db3\" id:\"c3da816d13f821ef9c3e673082cade17f798f20b12537af78b2e27c327750db3\" pid:4471 exited_at:{seconds:1747321310 nanos:262606410}"
May 15 15:01:50.263868 containerd[1582]: time="2025-05-15T15:01:50.263434178Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c3da816d13f821ef9c3e673082cade17f798f20b12537af78b2e27c327750db3\" id:\"c3da816d13f821ef9c3e673082cade17f798f20b12537af78b2e27c327750db3\" pid:4471 exited_at:{seconds:1747321310 nanos:262606410}"
May 15 15:01:50.304296 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c3da816d13f821ef9c3e673082cade17f798f20b12537af78b2e27c327750db3-rootfs.mount: Deactivated successfully.
May 15 15:01:51.130811 kubelet[2700]: E0515 15:01:51.130745 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 15 15:01:51.138631 containerd[1582]: time="2025-05-15T15:01:51.138578286Z" level=info msg="CreateContainer within sandbox \"67e00445a606f8408705b587cd48b6f4d10600efb767350c621e75479e0a88bf\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 15 15:01:51.147320 containerd[1582]: time="2025-05-15T15:01:51.147272829Z" level=info msg="Container a09e1b3e84457bc0806c89f161bd4c7078f37cebc9040d8a5e94cd07775cdeb8: CDI devices from CRI Config.CDIDevices: []"
May 15 15:01:51.156472 kubelet[2700]: I0515 15:01:51.156417 2700 setters.go:602] "Node became not ready" node="ci-4334.0.0-a-cad88baf47" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-15T15:01:51Z","lastTransitionTime":"2025-05-15T15:01:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
May 15 15:01:51.162825 containerd[1582]: time="2025-05-15T15:01:51.162762215Z" level=info msg="CreateContainer within sandbox \"67e00445a606f8408705b587cd48b6f4d10600efb767350c621e75479e0a88bf\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"a09e1b3e84457bc0806c89f161bd4c7078f37cebc9040d8a5e94cd07775cdeb8\""
May 15 15:01:51.175157 containerd[1582]: time="2025-05-15T15:01:51.175103206Z" level=info msg="StartContainer for \"a09e1b3e84457bc0806c89f161bd4c7078f37cebc9040d8a5e94cd07775cdeb8\""
May 15 15:01:51.177634 containerd[1582]: time="2025-05-15T15:01:51.177582256Z" level=info msg="connecting to shim a09e1b3e84457bc0806c89f161bd4c7078f37cebc9040d8a5e94cd07775cdeb8" address="unix:///run/containerd/s/23d2cd6c087b72f94df82c30664e11c14431185d375d536a4b8302f1a1ea722e" protocol=ttrpc version=3
May 15 15:01:51.226205 systemd[1]: Started cri-containerd-a09e1b3e84457bc0806c89f161bd4c7078f37cebc9040d8a5e94cd07775cdeb8.scope - libcontainer container a09e1b3e84457bc0806c89f161bd4c7078f37cebc9040d8a5e94cd07775cdeb8.
May 15 15:01:51.265584 systemd[1]: cri-containerd-a09e1b3e84457bc0806c89f161bd4c7078f37cebc9040d8a5e94cd07775cdeb8.scope: Deactivated successfully.
May 15 15:01:51.270464 containerd[1582]: time="2025-05-15T15:01:51.270368858Z" level=info msg="received exit event container_id:\"a09e1b3e84457bc0806c89f161bd4c7078f37cebc9040d8a5e94cd07775cdeb8\" id:\"a09e1b3e84457bc0806c89f161bd4c7078f37cebc9040d8a5e94cd07775cdeb8\" pid:4510 exited_at:{seconds:1747321311 nanos:270069927}"
May 15 15:01:51.270868 containerd[1582]: time="2025-05-15T15:01:51.270673224Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a09e1b3e84457bc0806c89f161bd4c7078f37cebc9040d8a5e94cd07775cdeb8\" id:\"a09e1b3e84457bc0806c89f161bd4c7078f37cebc9040d8a5e94cd07775cdeb8\" pid:4510 exited_at:{seconds:1747321311 nanos:270069927}"
May 15 15:01:51.287688 containerd[1582]: time="2025-05-15T15:01:51.287612011Z" level=info msg="StartContainer for \"a09e1b3e84457bc0806c89f161bd4c7078f37cebc9040d8a5e94cd07775cdeb8\" returns successfully"
May 15 15:01:51.317585 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a09e1b3e84457bc0806c89f161bd4c7078f37cebc9040d8a5e94cd07775cdeb8-rootfs.mount: Deactivated successfully.
May 15 15:01:52.138769 kubelet[2700]: E0515 15:01:52.137593 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 15 15:01:52.141932 containerd[1582]: time="2025-05-15T15:01:52.141522584Z" level=info msg="CreateContainer within sandbox \"67e00445a606f8408705b587cd48b6f4d10600efb767350c621e75479e0a88bf\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 15 15:01:52.169635 containerd[1582]: time="2025-05-15T15:01:52.166135038Z" level=info msg="Container fa6fd319bc36e82fa32740babd3d1a10844480ff34cc2b5f15d2eb58bb2d9f00: CDI devices from CRI Config.CDIDevices: []"
May 15 15:01:52.184739 containerd[1582]: time="2025-05-15T15:01:52.184687309Z" level=info msg="CreateContainer within sandbox \"67e00445a606f8408705b587cd48b6f4d10600efb767350c621e75479e0a88bf\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"fa6fd319bc36e82fa32740babd3d1a10844480ff34cc2b5f15d2eb58bb2d9f00\""
May 15 15:01:52.187783 containerd[1582]: time="2025-05-15T15:01:52.187482121Z" level=info msg="StartContainer for \"fa6fd319bc36e82fa32740babd3d1a10844480ff34cc2b5f15d2eb58bb2d9f00\""
May 15 15:01:52.191077 containerd[1582]: time="2025-05-15T15:01:52.190995161Z" level=info msg="connecting to shim fa6fd319bc36e82fa32740babd3d1a10844480ff34cc2b5f15d2eb58bb2d9f00" address="unix:///run/containerd/s/23d2cd6c087b72f94df82c30664e11c14431185d375d536a4b8302f1a1ea722e" protocol=ttrpc version=3
May 15 15:01:52.236335 systemd[1]: Started cri-containerd-fa6fd319bc36e82fa32740babd3d1a10844480ff34cc2b5f15d2eb58bb2d9f00.scope - libcontainer container fa6fd319bc36e82fa32740babd3d1a10844480ff34cc2b5f15d2eb58bb2d9f00.
May 15 15:01:52.285235 containerd[1582]: time="2025-05-15T15:01:52.285059814Z" level=info msg="StartContainer for \"fa6fd319bc36e82fa32740babd3d1a10844480ff34cc2b5f15d2eb58bb2d9f00\" returns successfully"
May 15 15:01:52.385273 containerd[1582]: time="2025-05-15T15:01:52.385161433Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fa6fd319bc36e82fa32740babd3d1a10844480ff34cc2b5f15d2eb58bb2d9f00\" id:\"974787f0a151c9a4ca9fdbe601672a63ecffc6f345db3d14c4c45ac38d79265f\" pid:4579 exited_at:{seconds:1747321312 nanos:384487414}"
May 15 15:01:52.741937 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx))
May 15 15:01:53.146211 kubelet[2700]: E0515 15:01:53.146115 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 15 15:01:53.167008 kubelet[2700]: I0515 15:01:53.166943 2700 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-mwd8h" podStartSLOduration=5.166921674 podStartE2EDuration="5.166921674s" podCreationTimestamp="2025-05-15 15:01:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 15:01:53.166710999 +0000 UTC m=+135.033672507" watchObservedRunningTime="2025-05-15 15:01:53.166921674 +0000 UTC m=+135.033883177"
May 15 15:01:53.264600 update_engine[1498]: I20250515 15:01:53.264463 1498 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
May 15 15:01:53.265233 update_engine[1498]: I20250515 15:01:53.264976 1498 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
May 15 15:01:53.265504 update_engine[1498]: I20250515 15:01:53.265451 1498 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
May 15 15:01:53.268007 update_engine[1498]: E20250515 15:01:53.267866 1498 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
May 15 15:01:53.268174 update_engine[1498]: I20250515 15:01:53.268060 1498 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
May 15 15:01:54.332442 kubelet[2700]: E0515 15:01:54.332382 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 15 15:01:54.518555 kubelet[2700]: E0515 15:01:54.518493 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 15 15:01:55.192436 containerd[1582]: time="2025-05-15T15:01:55.192232670Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fa6fd319bc36e82fa32740babd3d1a10844480ff34cc2b5f15d2eb58bb2d9f00\" id:\"993caf75b397cb5553e130542ec7540a78b86f16aa58a10ab26f7ad055c80db8\" pid:4842 exit_status:1 exited_at:{seconds:1747321315 nanos:191390804}"
May 15 15:01:56.264662 systemd-networkd[1458]: lxc_health: Link UP
May 15 15:01:56.269307 systemd-networkd[1458]: lxc_health: Gained carrier
May 15 15:01:56.520502 kubelet[2700]: E0515 15:01:56.519860 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 15 15:01:57.157554 kubelet[2700]: E0515 15:01:57.157500 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 15 15:01:57.510169 containerd[1582]: time="2025-05-15T15:01:57.509944117Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fa6fd319bc36e82fa32740babd3d1a10844480ff34cc2b5f15d2eb58bb2d9f00\" id:\"c4217a0dbb4d27e2073ccb3998b3671eb38b3db4b84fd109c96b3d8e5b83e4a8\" pid:5090 exited_at:{seconds:1747321317 nanos:508474205}"
May 15 15:01:58.030178 systemd-networkd[1458]: lxc_health: Gained IPv6LL
May 15 15:01:58.161988 kubelet[2700]: E0515 15:01:58.161941 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 15 15:01:59.735463 containerd[1582]: time="2025-05-15T15:01:59.735255459Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fa6fd319bc36e82fa32740babd3d1a10844480ff34cc2b5f15d2eb58bb2d9f00\" id:\"9256c19b7d26d64868187f14c0f1b581b0d7ae8c6c24e1044ed4bee1afafd042\" pid:5121 exited_at:{seconds:1747321319 nanos:733443339}"
May 15 15:02:01.983806 containerd[1582]: time="2025-05-15T15:02:01.983572527Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fa6fd319bc36e82fa32740babd3d1a10844480ff34cc2b5f15d2eb58bb2d9f00\" id:\"fe3dff511b2b6d7a86f3e1e477b9dfd27edba1281c9c79a72d4febf0caa82f9a\" pid:5146 exited_at:{seconds:1747321321 nanos:982115240}"
May 15 15:02:02.103965 sshd[4313]: Connection closed by 139.178.68.195 port 52582
May 15 15:02:02.103618 sshd-session[4306]: pam_unix(sshd:session): session closed for user core
May 15 15:02:02.118741 systemd[1]: sshd@31-137.184.120.255:22-139.178.68.195:52582.service: Deactivated successfully.
May 15 15:02:02.125611 systemd[1]: session-31.scope: Deactivated successfully.
May 15 15:02:02.127302 systemd-logind[1497]: Session 31 logged out. Waiting for processes to exit.
May 15 15:02:02.131997 systemd-logind[1497]: Removed session 31.
May 15 15:02:03.265277 update_engine[1498]: I20250515 15:02:03.265036 1498 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
May 15 15:02:03.266652 update_engine[1498]: I20250515 15:02:03.265477 1498 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
May 15 15:02:03.266652 update_engine[1498]: I20250515 15:02:03.265902 1498 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
May 15 15:02:03.266652 update_engine[1498]: E20250515 15:02:03.266619 1498 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
May 15 15:02:03.266937 update_engine[1498]: I20250515 15:02:03.266691 1498 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
May 15 15:02:03.333919 kubelet[2700]: E0515 15:02:03.333429 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"