May 14 18:07:47.912737 kernel: Linux version 6.12.20-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Wed May 14 16:37:27 -00 2025
May 14 18:07:47.912775 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=adf4ab3cd3fc72d424aa1ba920dfa0e67212fa35eadab2c698966b09b9e294b0
May 14 18:07:47.912785 kernel: BIOS-provided physical RAM map:
May 14 18:07:47.912792 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
May 14 18:07:47.912798 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
May 14 18:07:47.912805 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
May 14 18:07:47.912815 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable
May 14 18:07:47.912832 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved
May 14 18:07:47.912843 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
May 14 18:07:47.912850 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
May 14 18:07:47.912857 kernel: NX (Execute Disable) protection: active
May 14 18:07:47.912864 kernel: APIC: Static calls initialized
May 14 18:07:47.912871 kernel: SMBIOS 2.8 present.
May 14 18:07:47.912879 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
May 14 18:07:47.912891 kernel: DMI: Memory slots populated: 1/1
May 14 18:07:47.912899 kernel: Hypervisor detected: KVM
May 14 18:07:47.912910 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
May 14 18:07:47.912917 kernel: kvm-clock: using sched offset of 4671944261 cycles
May 14 18:07:47.912926 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
May 14 18:07:47.912934 kernel: tsc: Detected 2494.138 MHz processor
May 14 18:07:47.912942 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 14 18:07:47.912951 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 14 18:07:47.912959 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
May 14 18:07:47.912970 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
May 14 18:07:47.912978 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 14 18:07:47.912986 kernel: ACPI: Early table checksum verification disabled
May 14 18:07:47.912994 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS )
May 14 18:07:47.913002 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 14 18:07:47.913010 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 14 18:07:47.913018 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 14 18:07:47.913026 kernel: ACPI: FACS 0x000000007FFE0000 000040
May 14 18:07:47.913034 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 14 18:07:47.913045 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 14 18:07:47.913052 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 14 18:07:47.913060 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 14 18:07:47.913068 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
May 14 18:07:47.913076 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769]
May 14 18:07:47.913084 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
May 14 18:07:47.913091 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d]
May 14 18:07:47.913099 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895]
May 14 18:07:47.913115 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d]
May 14 18:07:47.913123 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985]
May 14 18:07:47.913135 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
May 14 18:07:47.913147 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
May 14 18:07:47.913159 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00001000-0x7ffdafff]
May 14 18:07:47.913172 kernel: NODE_DATA(0) allocated [mem 0x7ffd3dc0-0x7ffdafff]
May 14 18:07:47.913188 kernel: Zone ranges:
May 14 18:07:47.915279 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 14 18:07:47.915308 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff]
May 14 18:07:47.915323 kernel: Normal empty
May 14 18:07:47.915336 kernel: Device empty
May 14 18:07:47.915350 kernel: Movable zone start for each node
May 14 18:07:47.915362 kernel: Early memory node ranges
May 14 18:07:47.915371 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
May 14 18:07:47.915379 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff]
May 14 18:07:47.915396 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff]
May 14 18:07:47.915410 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 14 18:07:47.915422 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
May 14 18:07:47.915434 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges
May 14 18:07:47.915445 kernel: ACPI: PM-Timer IO Port: 0x608
May 14 18:07:47.915459 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
May 14 18:07:47.915479 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
May 14 18:07:47.915488 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
May 14 18:07:47.915498 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
May 14 18:07:47.915511 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 14 18:07:47.915523 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
May 14 18:07:47.915532 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
May 14 18:07:47.915540 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 14 18:07:47.915549 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
May 14 18:07:47.915558 kernel: TSC deadline timer available
May 14 18:07:47.915566 kernel: CPU topo: Max. logical packages: 1
May 14 18:07:47.915575 kernel: CPU topo: Max. logical dies: 1
May 14 18:07:47.915584 kernel: CPU topo: Max. dies per package: 1
May 14 18:07:47.915592 kernel: CPU topo: Max. threads per core: 1
May 14 18:07:47.915605 kernel: CPU topo: Num. cores per package: 2
May 14 18:07:47.915614 kernel: CPU topo: Num. threads per package: 2
May 14 18:07:47.915622 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
May 14 18:07:47.915631 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
May 14 18:07:47.915639 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
May 14 18:07:47.915648 kernel: Booting paravirtualized kernel on KVM
May 14 18:07:47.915657 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 14 18:07:47.915666 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
May 14 18:07:47.915675 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
May 14 18:07:47.915688 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
May 14 18:07:47.915696 kernel: pcpu-alloc: [0] 0 1
May 14 18:07:47.915705 kernel: kvm-guest: PV spinlocks disabled, no host support
May 14 18:07:47.915716 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=adf4ab3cd3fc72d424aa1ba920dfa0e67212fa35eadab2c698966b09b9e294b0
May 14 18:07:47.915726 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 14 18:07:47.915735 kernel: random: crng init done
May 14 18:07:47.915744 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 14 18:07:47.915752 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
May 14 18:07:47.915764 kernel: Fallback order for Node 0: 0
May 14 18:07:47.915772 kernel: Built 1 zonelists, mobility grouping on. Total pages: 524153
May 14 18:07:47.915781 kernel: Policy zone: DMA32
May 14 18:07:47.915789 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 14 18:07:47.915798 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
May 14 18:07:47.915806 kernel: Kernel/User page tables isolation: enabled
May 14 18:07:47.915815 kernel: ftrace: allocating 40065 entries in 157 pages
May 14 18:07:47.915824 kernel: ftrace: allocated 157 pages with 5 groups
May 14 18:07:47.915833 kernel: Dynamic Preempt: voluntary
May 14 18:07:47.915845 kernel: rcu: Preemptible hierarchical RCU implementation.
May 14 18:07:47.915856 kernel: rcu: RCU event tracing is enabled.
May 14 18:07:47.915865 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
May 14 18:07:47.915874 kernel: Trampoline variant of Tasks RCU enabled.
May 14 18:07:47.915890 kernel: Rude variant of Tasks RCU enabled.
May 14 18:07:47.915899 kernel: Tracing variant of Tasks RCU enabled.
May 14 18:07:47.915912 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 14 18:07:47.915924 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
May 14 18:07:47.915932 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 14 18:07:47.915948 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 14 18:07:47.915957 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 14 18:07:47.915965 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
May 14 18:07:47.915974 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 14 18:07:47.915982 kernel: Console: colour VGA+ 80x25
May 14 18:07:47.915990 kernel: printk: legacy console [tty0] enabled
May 14 18:07:47.915999 kernel: printk: legacy console [ttyS0] enabled
May 14 18:07:47.916007 kernel: ACPI: Core revision 20240827
May 14 18:07:47.916016 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
May 14 18:07:47.916038 kernel: APIC: Switch to symmetric I/O mode setup
May 14 18:07:47.916047 kernel: x2apic enabled
May 14 18:07:47.916056 kernel: APIC: Switched APIC routing to: physical x2apic
May 14 18:07:47.916069 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
May 14 18:07:47.916080 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f39838d43, max_idle_ns: 440795267131 ns
May 14 18:07:47.916089 kernel: Calibrating delay loop (skipped) preset value.. 4988.27 BogoMIPS (lpj=2494138)
May 14 18:07:47.916098 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
May 14 18:07:47.916107 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
May 14 18:07:47.916117 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 14 18:07:47.916130 kernel: Spectre V2 : Mitigation: Retpolines
May 14 18:07:47.916139 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
May 14 18:07:47.916148 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
May 14 18:07:47.916157 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
May 14 18:07:47.916165 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
May 14 18:07:47.916174 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
May 14 18:07:47.916183 kernel: MDS: Mitigation: Clear CPU buffers
May 14 18:07:47.916215 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
May 14 18:07:47.916224 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
May 14 18:07:47.916234 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
May 14 18:07:47.916243 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
May 14 18:07:47.916252 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
May 14 18:07:47.916261 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
May 14 18:07:47.916270 kernel: Freeing SMP alternatives memory: 32K
May 14 18:07:47.916279 kernel: pid_max: default: 32768 minimum: 301
May 14 18:07:47.916288 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
May 14 18:07:47.916301 kernel: landlock: Up and running.
May 14 18:07:47.916309 kernel: SELinux: Initializing.
May 14 18:07:47.916318 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
May 14 18:07:47.916327 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
May 14 18:07:47.916337 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
May 14 18:07:47.916346 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
May 14 18:07:47.916356 kernel: signal: max sigframe size: 1776
May 14 18:07:47.916365 kernel: rcu: Hierarchical SRCU implementation.
May 14 18:07:47.916374 kernel: rcu: Max phase no-delay instances is 400.
May 14 18:07:47.916387 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
May 14 18:07:47.916396 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
May 14 18:07:47.916405 kernel: smp: Bringing up secondary CPUs ...
May 14 18:07:47.916414 kernel: smpboot: x86: Booting SMP configuration:
May 14 18:07:47.916426 kernel: .... node #0, CPUs: #1
May 14 18:07:47.916440 kernel: smp: Brought up 1 node, 2 CPUs
May 14 18:07:47.916454 kernel: smpboot: Total of 2 processors activated (9976.55 BogoMIPS)
May 14 18:07:47.916466 kernel: Memory: 1966908K/2096612K available (14336K kernel code, 2438K rwdata, 9944K rodata, 54424K init, 2536K bss, 125140K reserved, 0K cma-reserved)
May 14 18:07:47.916481 kernel: devtmpfs: initialized
May 14 18:07:47.916496 kernel: x86/mm: Memory block size: 128MB
May 14 18:07:47.916506 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 14 18:07:47.916515 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
May 14 18:07:47.916524 kernel: pinctrl core: initialized pinctrl subsystem
May 14 18:07:47.916533 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 14 18:07:47.916542 kernel: audit: initializing netlink subsys (disabled)
May 14 18:07:47.916552 kernel: audit: type=2000 audit(1747246064.334:1): state=initialized audit_enabled=0 res=1
May 14 18:07:47.916561 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 14 18:07:47.916570 kernel: thermal_sys: Registered thermal governor 'user_space'
May 14 18:07:47.916583 kernel: cpuidle: using governor menu
May 14 18:07:47.916592 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 14 18:07:47.916601 kernel: dca service started, version 1.12.1
May 14 18:07:47.916610 kernel: PCI: Using configuration type 1 for base access
May 14 18:07:47.916619 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 14 18:07:47.916629 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 14 18:07:47.916641 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
May 14 18:07:47.916656 kernel: ACPI: Added _OSI(Module Device)
May 14 18:07:47.916669 kernel: ACPI: Added _OSI(Processor Device)
May 14 18:07:47.916686 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 14 18:07:47.916700 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 14 18:07:47.916713 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 14 18:07:47.916722 kernel: ACPI: Interpreter enabled
May 14 18:07:47.916731 kernel: ACPI: PM: (supports S0 S5)
May 14 18:07:47.916740 kernel: ACPI: Using IOAPIC for interrupt routing
May 14 18:07:47.916749 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 14 18:07:47.916759 kernel: PCI: Using E820 reservations for host bridge windows
May 14 18:07:47.916774 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
May 14 18:07:47.916794 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 14 18:07:47.917105 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
May 14 18:07:47.918104 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
May 14 18:07:47.918295 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
May 14 18:07:47.918321 kernel: acpiphp: Slot [3] registered
May 14 18:07:47.918336 kernel: acpiphp: Slot [4] registered
May 14 18:07:47.918350 kernel: acpiphp: Slot [5] registered
May 14 18:07:47.918374 kernel: acpiphp: Slot [6] registered
May 14 18:07:47.918388 kernel: acpiphp: Slot [7] registered
May 14 18:07:47.918398 kernel: acpiphp: Slot [8] registered
May 14 18:07:47.918408 kernel: acpiphp: Slot [9] registered
May 14 18:07:47.918417 kernel: acpiphp: Slot [10] registered
May 14 18:07:47.918426 kernel: acpiphp: Slot [11] registered
May 14 18:07:47.918435 kernel: acpiphp: Slot [12] registered
May 14 18:07:47.918444 kernel: acpiphp: Slot [13] registered
May 14 18:07:47.918453 kernel: acpiphp: Slot [14] registered
May 14 18:07:47.918462 kernel: acpiphp: Slot [15] registered
May 14 18:07:47.918475 kernel: acpiphp: Slot [16] registered
May 14 18:07:47.918485 kernel: acpiphp: Slot [17] registered
May 14 18:07:47.918494 kernel: acpiphp: Slot [18] registered
May 14 18:07:47.918503 kernel: acpiphp: Slot [19] registered
May 14 18:07:47.918512 kernel: acpiphp: Slot [20] registered
May 14 18:07:47.918521 kernel: acpiphp: Slot [21] registered
May 14 18:07:47.918529 kernel: acpiphp: Slot [22] registered
May 14 18:07:47.918538 kernel: acpiphp: Slot [23] registered
May 14 18:07:47.918547 kernel: acpiphp: Slot [24] registered
May 14 18:07:47.918559 kernel: acpiphp: Slot [25] registered
May 14 18:07:47.918568 kernel: acpiphp: Slot [26] registered
May 14 18:07:47.918577 kernel: acpiphp: Slot [27] registered
May 14 18:07:47.918586 kernel: acpiphp: Slot [28] registered
May 14 18:07:47.918595 kernel: acpiphp: Slot [29] registered
May 14 18:07:47.918622 kernel: acpiphp: Slot [30] registered
May 14 18:07:47.918631 kernel: acpiphp: Slot [31] registered
May 14 18:07:47.918640 kernel: PCI host bridge to bus 0000:00
May 14 18:07:47.918797 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
May 14 18:07:47.918891 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
May 14 18:07:47.918974 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
May 14 18:07:47.919069 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
May 14 18:07:47.919180 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
May 14 18:07:47.919325 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 14 18:07:47.919521 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
May 14 18:07:47.919723 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
May 14 18:07:47.919896 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
May 14 18:07:47.920021 kernel: pci 0000:00:01.1: BAR 4 [io 0xc1e0-0xc1ef]
May 14 18:07:47.920129 kernel: pci 0000:00:01.1: BAR 0 [io 0x01f0-0x01f7]: legacy IDE quirk
May 14 18:07:47.920258 kernel: pci 0000:00:01.1: BAR 1 [io 0x03f6]: legacy IDE quirk
May 14 18:07:47.920352 kernel: pci 0000:00:01.1: BAR 2 [io 0x0170-0x0177]: legacy IDE quirk
May 14 18:07:47.920443 kernel: pci 0000:00:01.1: BAR 3 [io 0x0376]: legacy IDE quirk
May 14 18:07:47.920564 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
May 14 18:07:47.920660 kernel: pci 0000:00:01.2: BAR 4 [io 0xc180-0xc19f]
May 14 18:07:47.920826 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
May 14 18:07:47.920960 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
May 14 18:07:47.921054 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
May 14 18:07:47.921174 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
May 14 18:07:47.921341 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
May 14 18:07:47.921513 kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
May 14 18:07:47.921744 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfebf0000-0xfebf0fff]
May 14 18:07:47.922010 kernel: pci 0000:00:02.0: ROM [mem 0xfebe0000-0xfebeffff pref]
May 14 18:07:47.922258 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
May 14 18:07:47.922391 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
May 14 18:07:47.922492 kernel: pci 0000:00:03.0: BAR 0 [io 0xc1a0-0xc1bf]
May 14 18:07:47.922614 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebf1000-0xfebf1fff]
May 14 18:07:47.922707 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
May 14 18:07:47.922816 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
May 14 18:07:47.922907 kernel: pci 0000:00:04.0: BAR 0 [io 0xc1c0-0xc1df]
May 14 18:07:47.922999 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebf2000-0xfebf2fff]
May 14 18:07:47.923092 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
May 14 18:07:47.923249 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000 conventional PCI endpoint
May 14 18:07:47.923354 kernel: pci 0000:00:05.0: BAR 0 [io 0xc100-0xc13f]
May 14 18:07:47.923448 kernel: pci 0000:00:05.0: BAR 1 [mem 0xfebf3000-0xfebf3fff]
May 14 18:07:47.923542 kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
May 14 18:07:47.923663 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
May 14 18:07:47.923755 kernel: pci 0000:00:06.0: BAR 0 [io 0xc000-0xc07f]
May 14 18:07:47.923847 kernel: pci 0000:00:06.0: BAR 1 [mem 0xfebf4000-0xfebf4fff]
May 14 18:07:47.923940 kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
May 14 18:07:47.924053 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
May 14 18:07:47.924158 kernel: pci 0000:00:07.0: BAR 0 [io 0xc080-0xc0ff]
May 14 18:07:47.924283 kernel: pci 0000:00:07.0: BAR 1 [mem 0xfebf5000-0xfebf5fff]
May 14 18:07:47.924377 kernel: pci 0000:00:07.0: BAR 4 [mem 0xfe814000-0xfe817fff 64bit pref]
May 14 18:07:47.924483 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
May 14 18:07:47.924578 kernel: pci 0000:00:08.0: BAR 0 [io 0xc140-0xc17f]
May 14 18:07:47.926407 kernel: pci 0000:00:08.0: BAR 4 [mem 0xfe818000-0xfe81bfff 64bit pref]
May 14 18:07:47.926448 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
May 14 18:07:47.926465 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
May 14 18:07:47.926480 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
May 14 18:07:47.926494 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
May 14 18:07:47.926507 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
May 14 18:07:47.926517 kernel: iommu: Default domain type: Translated
May 14 18:07:47.926526 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 14 18:07:47.926544 kernel: PCI: Using ACPI for IRQ routing
May 14 18:07:47.926553 kernel: PCI: pci_cache_line_size set to 64 bytes
May 14 18:07:47.926563 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
May 14 18:07:47.926572 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff]
May 14 18:07:47.926773 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
May 14 18:07:47.926872 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
May 14 18:07:47.926964 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
May 14 18:07:47.926976 kernel: vgaarb: loaded
May 14 18:07:47.926986 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
May 14 18:07:47.927001 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
May 14 18:07:47.927010 kernel: clocksource: Switched to clocksource kvm-clock
May 14 18:07:47.927019 kernel: VFS: Disk quotas dquot_6.6.0
May 14 18:07:47.927029 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 14 18:07:47.927039 kernel: pnp: PnP ACPI init
May 14 18:07:47.927048 kernel: pnp: PnP ACPI: found 4 devices
May 14 18:07:47.927058 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 14 18:07:47.927067 kernel: NET: Registered PF_INET protocol family
May 14 18:07:47.927076 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 14 18:07:47.927089 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
May 14 18:07:47.927099 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 14 18:07:47.927109 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
May 14 18:07:47.927118 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
May 14 18:07:47.927127 kernel: TCP: Hash tables configured (established 16384 bind 16384)
May 14 18:07:47.927136 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
May 14 18:07:47.927145 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
May 14 18:07:47.927155 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 14 18:07:47.927167 kernel: NET: Registered PF_XDP protocol family
May 14 18:07:47.929385 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
May 14 18:07:47.929511 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
May 14 18:07:47.929607 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
May 14 18:07:47.929713 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
May 14 18:07:47.929795 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
May 14 18:07:47.929904 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
May 14 18:07:47.930009 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
May 14 18:07:47.930031 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
May 14 18:07:47.930141 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x720 took 26245 usecs
May 14 18:07:47.930154 kernel: PCI: CLS 0 bytes, default 64
May 14 18:07:47.930164 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
May 14 18:07:47.930174 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f39838d43, max_idle_ns: 440795267131 ns
May 14 18:07:47.930183 kernel: Initialise system trusted keyrings
May 14 18:07:47.930193 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
May 14 18:07:47.930229 kernel: Key type asymmetric registered
May 14 18:07:47.930238 kernel: Asymmetric key parser 'x509' registered
May 14 18:07:47.930253 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
May 14 18:07:47.930262 kernel: io scheduler mq-deadline registered
May 14 18:07:47.930271 kernel: io scheduler kyber registered
May 14 18:07:47.930280 kernel: io scheduler bfq registered
May 14 18:07:47.930289 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 14 18:07:47.930299 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
May 14 18:07:47.930308 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
May 14 18:07:47.930318 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
May 14 18:07:47.930327 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 14 18:07:47.930340 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 14 18:07:47.930350 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
May 14 18:07:47.930359 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
May 14 18:07:47.930368 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
May 14 18:07:47.930540 kernel: rtc_cmos 00:03: RTC can wake from S4
May 14 18:07:47.930561 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
May 14 18:07:47.930708 kernel: rtc_cmos 00:03: registered as rtc0
May 14 18:07:47.930858 kernel: rtc_cmos 00:03: setting system clock to 2025-05-14T18:07:47 UTC (1747246067)
May 14 18:07:47.930998 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
May 14 18:07:47.931017 kernel: intel_pstate: CPU model not supported
May 14 18:07:47.931033 kernel: NET: Registered PF_INET6 protocol family
May 14 18:07:47.931049 kernel: Segment Routing with IPv6
May 14 18:07:47.931063 kernel: In-situ OAM (IOAM) with IPv6
May 14 18:07:47.931076 kernel: NET: Registered PF_PACKET protocol family
May 14 18:07:47.931098 kernel: Key type dns_resolver registered
May 14 18:07:47.931112 kernel: IPI shorthand broadcast: enabled
May 14 18:07:47.931125 kernel: sched_clock: Marking stable (3350005210, 91193326)->(3461878700, -20680164)
May 14 18:07:47.931145 kernel: registered taskstats version 1
May 14 18:07:47.931157 kernel: Loading compiled-in X.509 certificates
May 14 18:07:47.931170 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.20-flatcar: 41e2a150aa08ec2528be2394819b3db677e5f4ef'
May 14 18:07:47.931182 kernel: Demotion targets for Node 0: null
May 14 18:07:47.933228 kernel: Key type .fscrypt registered
May 14 18:07:47.933274 kernel: Key type fscrypt-provisioning registered
May 14 18:07:47.933314 kernel: ima: No TPM chip found, activating TPM-bypass!
May 14 18:07:47.933329 kernel: ima: Allocated hash algorithm: sha1
May 14 18:07:47.933342 kernel: ima: No architecture policies found
May 14 18:07:47.933352 kernel: clk: Disabling unused clocks
May 14 18:07:47.933361 kernel: Warning: unable to open an initial console.
May 14 18:07:47.933373 kernel: Freeing unused kernel image (initmem) memory: 54424K
May 14 18:07:47.933383 kernel: Write protecting the kernel read-only data: 24576k
May 14 18:07:47.933392 kernel: Freeing unused kernel image (rodata/data gap) memory: 296K
May 14 18:07:47.933402 kernel: Run /init as init process
May 14 18:07:47.933412 kernel: with arguments:
May 14 18:07:47.933422 kernel: /init
May 14 18:07:47.933431 kernel: with environment:
May 14 18:07:47.933444 kernel: HOME=/
May 14 18:07:47.933453 kernel: TERM=linux
May 14 18:07:47.933462 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 14 18:07:47.933474 systemd[1]: Successfully made /usr/ read-only.
May 14 18:07:47.933489 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 14 18:07:47.933500 systemd[1]: Detected virtualization kvm.
May 14 18:07:47.933511 systemd[1]: Detected architecture x86-64.
May 14 18:07:47.933524 systemd[1]: Running in initrd.
May 14 18:07:47.933534 systemd[1]: No hostname configured, using default hostname.
May 14 18:07:47.933544 systemd[1]: Hostname set to <localhost>.
May 14 18:07:47.933554 systemd[1]: Initializing machine ID from VM UUID.
May 14 18:07:47.933564 systemd[1]: Queued start job for default target initrd.target.
May 14 18:07:47.933579 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 14 18:07:47.933594 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 14 18:07:47.933610 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 14 18:07:47.933630 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 14 18:07:47.933645 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 14 18:07:47.933665 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 14 18:07:47.933680 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 14 18:07:47.933694 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 14 18:07:47.933704 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 14 18:07:47.933714 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 14 18:07:47.933724 systemd[1]: Reached target paths.target - Path Units.
May 14 18:07:47.933734 systemd[1]: Reached target slices.target - Slice Units.
May 14 18:07:47.933744 systemd[1]: Reached target swap.target - Swaps.
May 14 18:07:47.933754 systemd[1]: Reached target timers.target - Timer Units.
May 14 18:07:47.933764 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 14 18:07:47.933778 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 14 18:07:47.933788 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 14 18:07:47.933798 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
May 14 18:07:47.933808 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 14 18:07:47.933818 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 14 18:07:47.933828 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 14 18:07:47.933839 systemd[1]: Reached target sockets.target - Socket Units.
May 14 18:07:47.933849 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 14 18:07:47.933859 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 14 18:07:47.933873 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 14 18:07:47.933884 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
May 14 18:07:47.933893 systemd[1]: Starting systemd-fsck-usr.service...
May 14 18:07:47.933903 systemd[1]: Starting systemd-journald.service - Journal Service...
May 14 18:07:47.933914 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 14 18:07:47.933924 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 14 18:07:47.933934 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 14 18:07:47.933949 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 14 18:07:47.933960 systemd[1]: Finished systemd-fsck-usr.service.
May 14 18:07:47.933970 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 14 18:07:47.934038 systemd-journald[211]: Collecting audit messages is disabled.
May 14 18:07:47.934071 systemd-journald[211]: Journal started
May 14 18:07:47.934094 systemd-journald[211]: Runtime Journal (/run/log/journal/733a020debfb44fdaefc65077de83ba7) is 4.9M, max 39.5M, 34.6M free.
May 14 18:07:47.936230 systemd[1]: Started systemd-journald.service - Journal Service.
May 14 18:07:47.936210 systemd-modules-load[212]: Inserted module 'overlay'
May 14 18:07:47.938066 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 14 18:07:47.951508 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 14 18:07:47.986983 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 14 18:07:47.987025 kernel: Bridge firewalling registered
May 14 18:07:47.974709 systemd-modules-load[212]: Inserted module 'br_netfilter'
May 14 18:07:47.988997 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 14 18:07:47.992608 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 14 18:07:47.993943 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 14 18:07:47.998404 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 14 18:07:48.002361 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 14 18:07:48.008588 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 14 18:07:48.017134 systemd-tmpfiles[229]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
May 14 18:07:48.025060 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 14 18:07:48.025816 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 14 18:07:48.030409 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 14 18:07:48.036304 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 14 18:07:48.038349 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 14 18:07:48.065387 dracut-cmdline[251]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=adf4ab3cd3fc72d424aa1ba920dfa0e67212fa35eadab2c698966b09b9e294b0
May 14 18:07:48.084891 systemd-resolved[248]: Positive Trust Anchors:
May 14 18:07:48.084916 systemd-resolved[248]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 14 18:07:48.084978 systemd-resolved[248]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 14 18:07:48.088860 systemd-resolved[248]: Defaulting to hostname 'linux'.
May 14 18:07:48.090518 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 14 18:07:48.091240 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 14 18:07:48.180244 kernel: SCSI subsystem initialized
May 14 18:07:48.192228 kernel: Loading iSCSI transport class v2.0-870.
May 14 18:07:48.203231 kernel: iscsi: registered transport (tcp)
May 14 18:07:48.229256 kernel: iscsi: registered transport (qla4xxx)
May 14 18:07:48.229351 kernel: QLogic iSCSI HBA Driver
May 14 18:07:48.253321 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 14 18:07:48.276651 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 14 18:07:48.278092 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 14 18:07:48.339627 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 14 18:07:48.342540 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 14 18:07:48.403242 kernel: raid6: avx2x4 gen() 14440 MB/s
May 14 18:07:48.420258 kernel: raid6: avx2x2 gen() 13374 MB/s
May 14 18:07:48.437720 kernel: raid6: avx2x1 gen() 13545 MB/s
May 14 18:07:48.437823 kernel: raid6: using algorithm avx2x4 gen() 14440 MB/s
May 14 18:07:48.455259 kernel: raid6: .... xor() 4582 MB/s, rmw enabled
May 14 18:07:48.455364 kernel: raid6: using avx2x2 recovery algorithm
May 14 18:07:48.484258 kernel: xor: automatically using best checksumming function avx
May 14 18:07:48.724242 kernel: Btrfs loaded, zoned=no, fsverity=no
May 14 18:07:48.735531 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 14 18:07:48.737982 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 14 18:07:48.777708 systemd-udevd[460]: Using default interface naming scheme 'v255'.
May 14 18:07:48.787991 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 14 18:07:48.792458 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 14 18:07:48.827953 dracut-pre-trigger[467]: rd.md=0: removing MD RAID activation
May 14 18:07:48.867566 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 14 18:07:48.871023 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 14 18:07:48.956585 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 14 18:07:48.959596 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 14 18:07:49.057259 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues
May 14 18:07:49.147960 kernel: virtio_scsi virtio3: 2/0/0 default/read/poll queues
May 14 18:07:49.148192 kernel: scsi host0: Virtio SCSI HBA
May 14 18:07:49.148422 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
May 14 18:07:49.148447 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
May 14 18:07:49.148652 kernel: libata version 3.00 loaded.
May 14 18:07:49.148683 kernel: cryptd: max_cpu_qlen set to 1000
May 14 18:07:49.148700 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 14 18:07:49.148719 kernel: GPT:9289727 != 125829119
May 14 18:07:49.148735 kernel: GPT:Alternate GPT header not at the end of the disk.
May 14 18:07:49.148751 kernel: GPT:9289727 != 125829119
May 14 18:07:49.148765 kernel: GPT: Use GNU Parted to correct GPT errors.
May 14 18:07:49.148780 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 14 18:07:49.148796 kernel: ACPI: bus type USB registered
May 14 18:07:49.148813 kernel: usbcore: registered new interface driver usbfs
May 14 18:07:49.148838 kernel: usbcore: registered new interface driver hub
May 14 18:07:49.148859 kernel: usbcore: registered new device driver usb
May 14 18:07:49.148877 kernel: ata_piix 0000:00:01.1: version 2.13
May 14 18:07:49.162882 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues
May 14 18:07:49.163184 kernel: virtio_blk virtio5: [vdb] 976 512-byte logical blocks (500 kB/488 KiB)
May 14 18:07:49.163459 kernel: scsi host1: ata_piix
May 14 18:07:49.163690 kernel: AES CTR mode by8 optimization enabled
May 14 18:07:49.163727 kernel: scsi host2: ata_piix
May 14 18:07:49.163913 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14 lpm-pol 0
May 14 18:07:49.163933 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15 lpm-pol 0
May 14 18:07:49.157739 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 14 18:07:49.157938 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 14 18:07:49.162381 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 14 18:07:49.168099 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 14 18:07:49.263920 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 14 18:07:49.336342 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
May 14 18:07:49.346640 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
May 14 18:07:49.347084 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
May 14 18:07:49.347255 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180
May 14 18:07:49.347369 kernel: hub 1-0:1.0: USB hub found
May 14 18:07:49.347507 kernel: hub 1-0:1.0: 2 ports detected
May 14 18:07:49.374823 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
May 14 18:07:49.394075 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
May 14 18:07:49.395187 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 14 18:07:49.415070 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 14 18:07:49.423004 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
May 14 18:07:49.423620 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
May 14 18:07:49.424615 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 14 18:07:49.425561 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 14 18:07:49.426412 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 14 18:07:49.428336 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 14 18:07:49.431436 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 14 18:07:49.451175 disk-uuid[618]: Primary Header is updated.
May 14 18:07:49.451175 disk-uuid[618]: Secondary Entries is updated.
May 14 18:07:49.451175 disk-uuid[618]: Secondary Header is updated.
May 14 18:07:49.457240 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 14 18:07:49.462386 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 14 18:07:49.470256 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 14 18:07:50.467010 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 14 18:07:50.467575 disk-uuid[619]: The operation has completed successfully.
May 14 18:07:50.538406 systemd[1]: disk-uuid.service: Deactivated successfully.
May 14 18:07:50.538570 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 14 18:07:50.566319 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 14 18:07:50.601151 sh[637]: Success
May 14 18:07:50.625243 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 14 18:07:50.626254 kernel: device-mapper: uevent: version 1.0.3
May 14 18:07:50.628241 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
May 14 18:07:50.640247 kernel: device-mapper: verity: sha256 using shash "sha256-avx2"
May 14 18:07:50.715525 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 14 18:07:50.724570 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 14 18:07:50.730072 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 14 18:07:50.769269 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay'
May 14 18:07:50.769369 kernel: BTRFS: device fsid dedcf745-d4ff-44ac-b61c-5ec1bad114c7 devid 1 transid 39 /dev/mapper/usr (253:0) scanned by mount (650)
May 14 18:07:50.774042 kernel: BTRFS info (device dm-0): first mount of filesystem dedcf745-d4ff-44ac-b61c-5ec1bad114c7
May 14 18:07:50.774124 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
May 14 18:07:50.774152 kernel: BTRFS info (device dm-0): using free-space-tree
May 14 18:07:50.784830 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 14 18:07:50.786064 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
May 14 18:07:50.787277 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 14 18:07:50.789140 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 14 18:07:50.790965 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 14 18:07:50.823284 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (683)
May 14 18:07:50.823376 kernel: BTRFS info (device vda6): first mount of filesystem 9b1e3c61-417b-43c0-b064-c7db19a42998
May 14 18:07:50.824739 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 14 18:07:50.825268 kernel: BTRFS info (device vda6): using free-space-tree
May 14 18:07:50.836241 kernel: BTRFS info (device vda6): last unmount of filesystem 9b1e3c61-417b-43c0-b064-c7db19a42998
May 14 18:07:50.838596 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 14 18:07:50.842315 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 14 18:07:51.028433 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 14 18:07:51.032398 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 14 18:07:51.089113 ignition[726]: Ignition 2.21.0
May 14 18:07:51.089134 ignition[726]: Stage: fetch-offline
May 14 18:07:51.093574 ignition[726]: no configs at "/usr/lib/ignition/base.d"
May 14 18:07:51.093671 ignition[726]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
May 14 18:07:51.094017 ignition[726]: parsed url from cmdline: ""
May 14 18:07:51.094041 ignition[726]: no config URL provided
May 14 18:07:51.094058 ignition[726]: reading system config file "/usr/lib/ignition/user.ign"
May 14 18:07:51.094072 ignition[726]: no config at "/usr/lib/ignition/user.ign"
May 14 18:07:51.094082 ignition[726]: failed to fetch config: resource requires networking
May 14 18:07:51.095822 ignition[726]: Ignition finished successfully
May 14 18:07:51.101434 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 14 18:07:51.107997 systemd-networkd[824]: lo: Link UP
May 14 18:07:51.108017 systemd-networkd[824]: lo: Gained carrier
May 14 18:07:51.111905 systemd-networkd[824]: Enumeration completed
May 14 18:07:51.112382 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 14 18:07:51.113082 systemd-networkd[824]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
May 14 18:07:51.113090 systemd-networkd[824]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network.
May 14 18:07:51.113726 systemd[1]: Reached target network.target - Network.
May 14 18:07:51.114436 systemd-networkd[824]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 14 18:07:51.114442 systemd-networkd[824]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
May 14 18:07:51.115134 systemd-networkd[824]: eth0: Link UP
May 14 18:07:51.115139 systemd-networkd[824]: eth0: Gained carrier
May 14 18:07:51.115152 systemd-networkd[824]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
May 14 18:07:51.117978 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
May 14 18:07:51.120644 systemd-networkd[824]: eth1: Link UP
May 14 18:07:51.120650 systemd-networkd[824]: eth1: Gained carrier
May 14 18:07:51.120668 systemd-networkd[824]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 14 18:07:51.136298 systemd-networkd[824]: eth0: DHCPv4 address 164.92.104.130/19, gateway 164.92.96.1 acquired from 169.254.169.253
May 14 18:07:51.140371 systemd-networkd[824]: eth1: DHCPv4 address 10.124.0.21/20 acquired from 169.254.169.253
May 14 18:07:51.161160 ignition[831]: Ignition 2.21.0
May 14 18:07:51.161179 ignition[831]: Stage: fetch
May 14 18:07:51.162030 ignition[831]: no configs at "/usr/lib/ignition/base.d"
May 14 18:07:51.162047 ignition[831]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
May 14 18:07:51.162158 ignition[831]: parsed url from cmdline: ""
May 14 18:07:51.162162 ignition[831]: no config URL provided
May 14 18:07:51.162169 ignition[831]: reading system config file "/usr/lib/ignition/user.ign"
May 14 18:07:51.162177 ignition[831]: no config at "/usr/lib/ignition/user.ign"
May 14 18:07:51.162245 ignition[831]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1
May 14 18:07:51.186906 ignition[831]: GET result: OK
May 14 18:07:51.187117 ignition[831]: parsing config with SHA512: 5649b4ffb8df7d0dfac6caf186b921811e63018520ace9808803f70ec7be544432e322b8edec7921928903a668d6b250de303cf041b5cdf86769202c0ab5f3d1
May 14 18:07:51.195888 unknown[831]: fetched base config from "system"
May 14 18:07:51.195908 unknown[831]: fetched base config from "system"
May 14 18:07:51.195917 unknown[831]: fetched user config from "digitalocean"
May 14 18:07:51.196785 ignition[831]: fetch: fetch complete
May 14 18:07:51.196797 ignition[831]: fetch: fetch passed
May 14 18:07:51.196906 ignition[831]: Ignition finished successfully
May 14 18:07:51.200592 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
May 14 18:07:51.202420 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 14 18:07:51.249596 ignition[838]: Ignition 2.21.0
May 14 18:07:51.249610 ignition[838]: Stage: kargs
May 14 18:07:51.249781 ignition[838]: no configs at "/usr/lib/ignition/base.d"
May 14 18:07:51.249791 ignition[838]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
May 14 18:07:51.250988 ignition[838]: kargs: kargs passed
May 14 18:07:51.252437 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 14 18:07:51.251055 ignition[838]: Ignition finished successfully
May 14 18:07:51.255259 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 14 18:07:51.292828 ignition[844]: Ignition 2.21.0
May 14 18:07:51.292854 ignition[844]: Stage: disks
May 14 18:07:51.293239 ignition[844]: no configs at "/usr/lib/ignition/base.d"
May 14 18:07:51.293264 ignition[844]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
May 14 18:07:51.295276 ignition[844]: disks: disks passed
May 14 18:07:51.295367 ignition[844]: Ignition finished successfully
May 14 18:07:51.297796 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 14 18:07:51.298775 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 14 18:07:51.299360 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 14 18:07:51.300314 systemd[1]: Reached target local-fs.target - Local File Systems.
May 14 18:07:51.301369 systemd[1]: Reached target sysinit.target - System Initialization.
May 14 18:07:51.302146 systemd[1]: Reached target basic.target - Basic System.
May 14 18:07:51.304527 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 14 18:07:51.340478 systemd-fsck[853]: ROOT: clean, 15/553520 files, 52789/553472 blocks
May 14 18:07:51.342639 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 14 18:07:51.345981 systemd[1]: Mounting sysroot.mount - /sysroot...
May 14 18:07:51.484258 kernel: EXT4-fs (vda9): mounted filesystem d6072e19-4548-4806-a012-87bb17c59f4c r/w with ordered data mode. Quota mode: none.
May 14 18:07:51.486188 systemd[1]: Mounted sysroot.mount - /sysroot.
May 14 18:07:51.488167 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 14 18:07:51.491592 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 14 18:07:51.494230 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 14 18:07:51.505432 systemd[1]: Starting flatcar-afterburn-network.service - Flatcar Afterburn network service...
May 14 18:07:51.507654 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
May 14 18:07:51.509608 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 14 18:07:51.509738 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 14 18:07:51.514439 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 14 18:07:51.518354 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 14 18:07:51.526230 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (861)
May 14 18:07:51.532279 kernel: BTRFS info (device vda6): first mount of filesystem 9b1e3c61-417b-43c0-b064-c7db19a42998
May 14 18:07:51.532505 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 14 18:07:51.532536 kernel: BTRFS info (device vda6): using free-space-tree
May 14 18:07:51.553135 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 14 18:07:51.609634 coreos-metadata[864]: May 14 18:07:51.609 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
May 14 18:07:51.613383 initrd-setup-root[892]: cut: /sysroot/etc/passwd: No such file or directory
May 14 18:07:51.621699 initrd-setup-root[899]: cut: /sysroot/etc/group: No such file or directory
May 14 18:07:51.622539 coreos-metadata[863]: May 14 18:07:51.621 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
May 14 18:07:51.623472 coreos-metadata[864]: May 14 18:07:51.623 INFO Fetch successful
May 14 18:07:51.629115 initrd-setup-root[906]: cut: /sysroot/etc/shadow: No such file or directory
May 14 18:07:51.630754 coreos-metadata[864]: May 14 18:07:51.629 INFO wrote hostname ci-4334.0.0-a-9d82e253c5 to /sysroot/etc/hostname
May 14 18:07:51.631320 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
May 14 18:07:51.635049 initrd-setup-root[914]: cut: /sysroot/etc/gshadow: No such file or directory
May 14 18:07:51.636752 coreos-metadata[863]: May 14 18:07:51.636 INFO Fetch successful
May 14 18:07:51.644331 systemd[1]: flatcar-afterburn-network.service: Deactivated successfully.
May 14 18:07:51.644480 systemd[1]: Finished flatcar-afterburn-network.service - Flatcar Afterburn network service.
May 14 18:07:51.756714 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 14 18:07:51.759065 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 14 18:07:51.760637 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 14 18:07:51.787126 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 14 18:07:51.788277 kernel: BTRFS info (device vda6): last unmount of filesystem 9b1e3c61-417b-43c0-b064-c7db19a42998 May 14 18:07:51.813525 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 14 18:07:51.826309 ignition[983]: INFO : Ignition 2.21.0 May 14 18:07:51.826309 ignition[983]: INFO : Stage: mount May 14 18:07:51.828056 ignition[983]: INFO : no configs at "/usr/lib/ignition/base.d" May 14 18:07:51.828056 ignition[983]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" May 14 18:07:51.830662 ignition[983]: INFO : mount: mount passed May 14 18:07:51.830662 ignition[983]: INFO : Ignition finished successfully May 14 18:07:51.831426 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 14 18:07:51.834386 systemd[1]: Starting ignition-files.service - Ignition (files)... May 14 18:07:51.854056 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 14 18:07:51.883242 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (995) May 14 18:07:51.887626 kernel: BTRFS info (device vda6): first mount of filesystem 9b1e3c61-417b-43c0-b064-c7db19a42998 May 14 18:07:51.887719 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 14 18:07:51.887734 kernel: BTRFS info (device vda6): using free-space-tree May 14 18:07:51.892857 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 14 18:07:51.928951 ignition[1011]: INFO : Ignition 2.21.0 May 14 18:07:51.928951 ignition[1011]: INFO : Stage: files May 14 18:07:51.930191 ignition[1011]: INFO : no configs at "/usr/lib/ignition/base.d" May 14 18:07:51.930191 ignition[1011]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" May 14 18:07:51.932050 ignition[1011]: DEBUG : files: compiled without relabeling support, skipping May 14 18:07:51.932854 ignition[1011]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 14 18:07:51.932854 ignition[1011]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 14 18:07:51.936418 ignition[1011]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 14 18:07:51.937147 ignition[1011]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 14 18:07:51.937923 ignition[1011]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 14 18:07:51.937819 unknown[1011]: wrote ssh authorized keys file for user: core May 14 18:07:51.939861 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 14 18:07:51.940828 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 May 14 18:07:51.991585 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 14 18:07:52.252862 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 14 18:07:52.252862 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" May 14 18:07:52.254777 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 May 
14 18:07:52.720320 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK May 14 18:07:52.791888 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" May 14 18:07:52.791888 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" May 14 18:07:52.796931 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" May 14 18:07:52.796931 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" May 14 18:07:52.796931 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" May 14 18:07:52.796931 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 14 18:07:52.796931 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 14 18:07:52.796931 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 14 18:07:52.796931 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 14 18:07:52.796931 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" May 14 18:07:52.796931 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 14 18:07:52.796931 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" May 14 18:07:52.796931 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" May 14 18:07:52.796931 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" May 14 18:07:52.796931 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 May 14 18:07:52.998504 systemd-networkd[824]: eth0: Gained IPv6LL May 14 18:07:53.062793 systemd-networkd[824]: eth1: Gained IPv6LL May 14 18:07:53.215651 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK May 14 18:07:53.522513 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" May 14 18:07:53.523662 ignition[1011]: INFO : files: op(c): [started] processing unit "prepare-helm.service" May 14 18:07:53.524920 ignition[1011]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 14 18:07:53.527247 ignition[1011]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 14 18:07:53.527247 
ignition[1011]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" May 14 18:07:53.527247 ignition[1011]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" May 14 18:07:53.527247 ignition[1011]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" May 14 18:07:53.527247 ignition[1011]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" May 14 18:07:53.527247 ignition[1011]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" May 14 18:07:53.527247 ignition[1011]: INFO : files: files passed May 14 18:07:53.531936 ignition[1011]: INFO : Ignition finished successfully May 14 18:07:53.530509 systemd[1]: Finished ignition-files.service - Ignition (files). May 14 18:07:53.534093 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 14 18:07:53.536365 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 14 18:07:53.556243 systemd[1]: ignition-quench.service: Deactivated successfully. May 14 18:07:53.556744 systemd[1]: Finished ignition-quench.service - Ignition (record completion). May 14 18:07:53.566616 initrd-setup-root-after-ignition[1041]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 14 18:07:53.566616 initrd-setup-root-after-ignition[1041]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 14 18:07:53.568828 initrd-setup-root-after-ignition[1045]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 14 18:07:53.569833 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 14 18:07:53.571026 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 14 18:07:53.572498 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 14 18:07:53.644275 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 14 18:07:53.644447 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 14 18:07:53.645584 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 14 18:07:53.646141 systemd[1]: Reached target initrd.target - Initrd Default Target. May 14 18:07:53.647085 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 14 18:07:53.648020 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 14 18:07:53.678278 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 14 18:07:53.680576 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 14 18:07:53.706151 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 14 18:07:53.707375 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 14 18:07:53.708583 systemd[1]: Stopped target timers.target - Timer Units. May 14 18:07:53.709485 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 14 18:07:53.709998 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 14 18:07:53.711327 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 14 18:07:53.712236 systemd[1]: Stopped target basic.target - Basic System. 
May 14 18:07:53.712946 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 14 18:07:53.713826 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 14 18:07:53.714816 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 14 18:07:53.715658 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. May 14 18:07:53.716463 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 14 18:07:53.716953 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 14 18:07:53.717389 systemd[1]: Stopped target sysinit.target - System Initialization. May 14 18:07:53.717785 systemd[1]: Stopped target local-fs.target - Local File Systems. May 14 18:07:53.718630 systemd[1]: Stopped target swap.target - Swaps. May 14 18:07:53.719241 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 14 18:07:53.719396 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 14 18:07:53.720243 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 14 18:07:53.720704 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 14 18:07:53.721254 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 14 18:07:53.721367 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 14 18:07:53.721895 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 14 18:07:53.722142 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 14 18:07:53.723228 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 14 18:07:53.723404 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 14 18:07:53.724420 systemd[1]: ignition-files.service: Deactivated successfully. May 14 18:07:53.724642 systemd[1]: Stopped ignition-files.service - Ignition (files). May 14 18:07:53.725220 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. May 14 18:07:53.725405 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. May 14 18:07:53.727095 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 14 18:07:53.728473 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 14 18:07:53.729339 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 14 18:07:53.734414 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 14 18:07:53.737333 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 14 18:07:53.737572 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 14 18:07:53.739290 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 14 18:07:53.739931 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 14 18:07:53.751910 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 14 18:07:53.752063 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 14 18:07:53.769217 systemd[1]: sysroot-boot.mount: Deactivated successfully. 
May 14 18:07:53.774477 ignition[1065]: INFO : Ignition 2.21.0 May 14 18:07:53.774477 ignition[1065]: INFO : Stage: umount May 14 18:07:53.791708 ignition[1065]: INFO : no configs at "/usr/lib/ignition/base.d" May 14 18:07:53.791708 ignition[1065]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" May 14 18:07:53.791708 ignition[1065]: INFO : umount: umount passed May 14 18:07:53.791708 ignition[1065]: INFO : Ignition finished successfully May 14 18:07:53.779788 systemd[1]: ignition-mount.service: Deactivated successfully. May 14 18:07:53.779893 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 14 18:07:53.794350 systemd[1]: ignition-disks.service: Deactivated successfully. May 14 18:07:53.794565 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 14 18:07:53.797076 systemd[1]: ignition-kargs.service: Deactivated successfully. May 14 18:07:53.797153 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 14 18:07:53.805733 systemd[1]: ignition-fetch.service: Deactivated successfully. May 14 18:07:53.805814 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). May 14 18:07:53.806178 systemd[1]: Stopped target network.target - Network. May 14 18:07:53.806548 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 14 18:07:53.806611 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 14 18:07:53.806977 systemd[1]: Stopped target paths.target - Path Units. May 14 18:07:53.809348 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 14 18:07:53.814331 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 14 18:07:53.814845 systemd[1]: Stopped target slices.target - Slice Units. May 14 18:07:53.815234 systemd[1]: Stopped target sockets.target - Socket Units. May 14 18:07:53.815746 systemd[1]: iscsid.socket: Deactivated successfully. May 14 18:07:53.815814 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 14 18:07:53.816717 systemd[1]: iscsiuio.socket: Deactivated successfully. May 14 18:07:53.816773 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 14 18:07:53.817407 systemd[1]: ignition-setup.service: Deactivated successfully. May 14 18:07:53.817492 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 14 18:07:53.818092 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 14 18:07:53.818179 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 14 18:07:53.819034 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 14 18:07:53.819772 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 14 18:07:53.820803 systemd[1]: sysroot-boot.service: Deactivated successfully. May 14 18:07:53.820903 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 14 18:07:53.822513 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 14 18:07:53.822686 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 14 18:07:53.828157 systemd[1]: systemd-resolved.service: Deactivated successfully. May 14 18:07:53.828310 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 14 18:07:53.832343 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. May 14 18:07:53.832747 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. 
May 14 18:07:53.832803 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 14 18:07:53.836134 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. May 14 18:07:53.837684 systemd[1]: systemd-networkd.service: Deactivated successfully. May 14 18:07:53.837810 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 14 18:07:53.839699 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. May 14 18:07:53.840488 systemd[1]: Stopped target network-pre.target - Preparation for Network. May 14 18:07:53.840998 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 14 18:07:53.841060 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 14 18:07:53.842767 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 14 18:07:53.843142 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 14 18:07:53.843238 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 14 18:07:53.843688 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 14 18:07:53.843740 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 14 18:07:53.844236 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 14 18:07:53.844282 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 14 18:07:53.844883 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 14 18:07:53.847974 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 14 18:07:53.862156 systemd[1]: systemd-udevd.service: Deactivated successfully. May 14 18:07:53.862358 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 14 18:07:53.867331 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 14 18:07:53.867462 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 14 18:07:53.867911 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 14 18:07:53.867946 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 14 18:07:53.870252 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 14 18:07:53.870373 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 14 18:07:53.871558 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 14 18:07:53.871621 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 14 18:07:53.872603 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 14 18:07:53.872657 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 14 18:07:53.876818 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 14 18:07:53.877847 systemd[1]: systemd-network-generator.service: Deactivated successfully. May 14 18:07:53.877960 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. May 14 18:07:53.879502 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 14 18:07:53.879590 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 14 18:07:53.880394 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
May 14 18:07:53.880465 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 14 18:07:53.884541 systemd[1]: network-cleanup.service: Deactivated successfully. May 14 18:07:53.885047 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 14 18:07:53.899673 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 14 18:07:53.899835 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 14 18:07:53.900943 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 14 18:07:53.905484 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 14 18:07:53.926108 systemd[1]: Switching root. May 14 18:07:53.967712 systemd-journald[211]: Journal stopped May 14 18:07:55.362927 systemd-journald[211]: Received SIGTERM from PID 1 (systemd). May 14 18:07:55.363030 kernel: SELinux: policy capability network_peer_controls=1 May 14 18:07:55.363048 kernel: SELinux: policy capability open_perms=1 May 14 18:07:55.363060 kernel: SELinux: policy capability extended_socket_class=1 May 14 18:07:55.363073 kernel: SELinux: policy capability always_check_network=0 May 14 18:07:55.363085 kernel: SELinux: policy capability cgroup_seclabel=1 May 14 18:07:55.363109 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 14 18:07:55.363121 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 14 18:07:55.363139 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 14 18:07:55.363151 kernel: SELinux: policy capability userspace_initial_context=0 May 14 18:07:55.363164 kernel: audit: type=1403 audit(1747246074.110:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 14 18:07:55.363178 systemd[1]: Successfully loaded SELinux policy in 62.430ms. May 14 18:07:55.377286 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 13.992ms. May 14 18:07:55.377340 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 14 18:07:55.377357 systemd[1]: Detected virtualization kvm. May 14 18:07:55.377370 systemd[1]: Detected architecture x86-64. May 14 18:07:55.377391 systemd[1]: Detected first boot. May 14 18:07:55.377405 systemd[1]: Hostname set to <ci-4334.0.0-a-9d82e253c5>. May 14 18:07:55.377420 systemd[1]: Initializing machine ID from VM UUID. May 14 18:07:55.377434 zram_generator::config[1108]: No configuration found. May 14 18:07:55.377450 kernel: Guest personality initialized and is inactive May 14 18:07:55.377466 kernel: VMCI host device registered (name=vmci, major=10, minor=125) May 14 18:07:55.377478 kernel: Initialized host personality May 14 18:07:55.377509 kernel: NET: Registered PF_VSOCK protocol family May 14 18:07:55.377522 systemd[1]: Populated /etc with preset unit settings. May 14 18:07:55.377541 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. May 14 18:07:55.377553 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 14 18:07:55.377567 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 14 18:07:55.377581 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 14 18:07:55.377595 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. 
May 14 18:07:55.377609 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 14 18:07:55.377623 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 14 18:07:55.377637 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 14 18:07:55.377654 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 14 18:07:55.377668 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 14 18:07:55.377688 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 14 18:07:55.377702 systemd[1]: Created slice user.slice - User and Session Slice. May 14 18:07:55.377717 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 14 18:07:55.377732 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 14 18:07:55.377746 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 14 18:07:55.377760 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 14 18:07:55.377776 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 14 18:07:55.377799 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 14 18:07:55.377821 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... May 14 18:07:55.377838 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 14 18:07:55.377860 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 14 18:07:55.377879 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 14 18:07:55.377899 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 14 18:07:55.377926 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 14 18:07:55.377943 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 14 18:07:55.377956 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 14 18:07:55.377970 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 14 18:07:55.377984 systemd[1]: Reached target slices.target - Slice Units. May 14 18:07:55.377997 systemd[1]: Reached target swap.target - Swaps. May 14 18:07:55.378010 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 14 18:07:55.378024 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 14 18:07:55.378038 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. May 14 18:07:55.378051 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 14 18:07:55.378068 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 14 18:07:55.378082 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 14 18:07:55.378095 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 14 18:07:55.378109 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 14 18:07:55.378121 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 14 18:07:55.378137 systemd[1]: Mounting media.mount - External Media Directory... 
May 14 18:07:55.378151 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 14 18:07:55.378163 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 14 18:07:55.378176 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 14 18:07:55.378193 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 14 18:07:55.378355 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 14 18:07:55.378374 systemd[1]: Reached target machines.target - Containers. May 14 18:07:55.378387 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 14 18:07:55.378402 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 14 18:07:55.378415 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 14 18:07:55.378430 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 14 18:07:55.378443 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 14 18:07:55.378463 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 14 18:07:55.378483 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 14 18:07:55.378500 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 14 18:07:55.378515 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 14 18:07:55.378533 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 14 18:07:55.378554 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 14 18:07:55.378573 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 14 18:07:55.378594 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 14 18:07:55.378618 systemd[1]: Stopped systemd-fsck-usr.service. May 14 18:07:55.378642 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 14 18:07:55.378661 systemd[1]: Starting systemd-journald.service - Journal Service... May 14 18:07:55.378685 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 14 18:07:55.378699 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 14 18:07:55.378714 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 14 18:07:55.378727 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... May 14 18:07:55.378746 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 14 18:07:55.378761 systemd[1]: verity-setup.service: Deactivated successfully. May 14 18:07:55.378773 systemd[1]: Stopped verity-setup.service. May 14 18:07:55.378792 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 14 18:07:55.378817 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. 
May 14 18:07:55.378836 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 14 18:07:55.378855 systemd[1]: Mounted media.mount - External Media Directory. May 14 18:07:55.378874 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 14 18:07:55.378893 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 14 18:07:55.378915 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 14 18:07:55.378935 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 14 18:07:55.378958 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 14 18:07:55.378986 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 14 18:07:55.379002 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 14 18:07:55.379015 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 14 18:07:55.379031 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 14 18:07:55.379050 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 14 18:07:55.379069 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 14 18:07:55.379089 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 14 18:07:55.379109 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 14 18:07:55.379124 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 14 18:07:55.379142 systemd[1]: Reached target network-pre.target - Preparation for Network. May 14 18:07:55.379156 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 14 18:07:55.379170 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 14 18:07:55.379183 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 14 18:07:55.383263 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 14 18:07:55.383338 systemd[1]: Reached target local-fs.target - Local File Systems. May 14 18:07:55.383354 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. May 14 18:07:55.383381 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 14 18:07:55.383399 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 14 18:07:55.383425 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 14 18:07:55.383447 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 14 18:07:55.383467 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 14 18:07:55.383489 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 14 18:07:55.383511 kernel: fuse: init (API version 7.41) May 14 18:07:55.383533 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 14 18:07:55.383552 kernel: ACPI: bus type drm_connector registered May 14 18:07:55.383570 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. May 14 18:07:55.383588 systemd[1]: modprobe@drm.service: Deactivated successfully. 
May 14 18:07:55.383616 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 14 18:07:55.383637 kernel: loop: module loaded May 14 18:07:55.383656 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 14 18:07:55.383677 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 14 18:07:55.383701 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... May 14 18:07:55.383719 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 14 18:07:55.383733 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 14 18:07:55.383747 systemd[1]: modprobe@loop.service: Deactivated successfully. May 14 18:07:55.383775 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 14 18:07:55.383798 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 14 18:07:55.383872 systemd-journald[1178]: Collecting audit messages is disabled. May 14 18:07:55.383946 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. May 14 18:07:55.383961 kernel: loop0: detected capacity change from 0 to 8 May 14 18:07:55.383976 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 14 18:07:55.383989 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 14 18:07:55.384005 systemd-journald[1178]: Journal started May 14 18:07:55.384036 systemd-journald[1178]: Runtime Journal (/run/log/journal/733a020debfb44fdaefc65077de83ba7) is 4.9M, max 39.5M, 34.6M free. May 14 18:07:54.833341 systemd[1]: Queued start job for default target multi-user.target. May 14 18:07:54.843962 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. May 14 18:07:54.844802 systemd[1]: systemd-journald.service: Deactivated successfully. May 14 18:07:55.389855 systemd[1]: Started systemd-journald.service - Journal Service. May 14 18:07:55.418041 kernel: loop1: detected capacity change from 0 to 113872 May 14 18:07:55.432302 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 14 18:07:55.473262 kernel: loop2: detected capacity change from 0 to 210664 May 14 18:07:55.489083 systemd-journald[1178]: Time spent on flushing to /var/log/journal/733a020debfb44fdaefc65077de83ba7 is 78.452ms for 1016 entries. May 14 18:07:55.489083 systemd-journald[1178]: System Journal (/var/log/journal/733a020debfb44fdaefc65077de83ba7) is 8M, max 195.6M, 187.6M free. May 14 18:07:55.584541 systemd-journald[1178]: Received client request to flush runtime journal. May 14 18:07:55.584625 kernel: loop3: detected capacity change from 0 to 146240 May 14 18:07:55.497902 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 14 18:07:55.503867 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 14 18:07:55.597797 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 14 18:07:55.601460 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 14 18:07:55.609264 kernel: loop4: detected capacity change from 0 to 8 May 14 18:07:55.612332 kernel: loop5: detected capacity change from 0 to 113872 May 14 18:07:55.645999 kernel: loop6: detected capacity change from 0 to 210664 May 14 18:07:55.654944 systemd-tmpfiles[1248]: ACLs are not supported, ignoring. May 14 18:07:55.654971 systemd-tmpfiles[1248]: ACLs are not supported, ignoring. 
May 14 18:07:55.672351 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 14 18:07:55.681267 kernel: loop7: detected capacity change from 0 to 146240 May 14 18:07:55.698525 (sd-merge)[1254]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'. May 14 18:07:55.699091 (sd-merge)[1254]: Merged extensions into '/usr'. May 14 18:07:55.706457 systemd[1]: Reload requested from client PID 1213 ('systemd-sysext') (unit systemd-sysext.service)... May 14 18:07:55.706481 systemd[1]: Reloading... May 14 18:07:55.900229 zram_generator::config[1284]: No configuration found. May 14 18:07:56.120891 ldconfig[1210]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 14 18:07:56.176311 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 14 18:07:56.336996 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 14 18:07:56.337150 systemd[1]: Reloading finished in 630 ms. May 14 18:07:56.366223 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 14 18:07:56.367832 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 14 18:07:56.377348 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 14 18:07:56.384403 systemd[1]: Starting ensure-sysext.service... May 14 18:07:56.389615 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 14 18:07:56.404508 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 14 18:07:56.435521 systemd[1]: Reload requested from client PID 1325 ('systemctl') (unit ensure-sysext.service)... May 14 18:07:56.435556 systemd[1]: Reloading... May 14 18:07:56.454047 systemd-tmpfiles[1326]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. May 14 18:07:56.454611 systemd-tmpfiles[1326]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. May 14 18:07:56.455001 systemd-tmpfiles[1326]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 14 18:07:56.455484 systemd-tmpfiles[1326]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 14 18:07:56.456417 systemd-tmpfiles[1326]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 14 18:07:56.456812 systemd-tmpfiles[1326]: ACLs are not supported, ignoring. May 14 18:07:56.456973 systemd-tmpfiles[1326]: ACLs are not supported, ignoring. May 14 18:07:56.473502 systemd-tmpfiles[1326]: Detected autofs mount point /boot during canonicalization of boot. May 14 18:07:56.473696 systemd-tmpfiles[1326]: Skipping /boot May 14 18:07:56.500132 systemd-tmpfiles[1326]: Detected autofs mount point /boot during canonicalization of boot. May 14 18:07:56.500150 systemd-tmpfiles[1326]: Skipping /boot May 14 18:07:56.574234 zram_generator::config[1350]: No configuration found. May 14 18:07:56.726545 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 14 18:07:56.821442 systemd[1]: Reloading finished in 385 ms. 
May 14 18:07:56.843266 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 14 18:07:56.851532 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 14 18:07:56.861416 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 14 18:07:56.863827 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 14 18:07:56.868790 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 14 18:07:56.873922 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 14 18:07:56.880861 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 14 18:07:56.885219 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 14 18:07:56.895594 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 14 18:07:56.895850 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 14 18:07:56.899628 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 14 18:07:56.903639 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 14 18:07:56.910682 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 14 18:07:56.911301 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 14 18:07:56.911454 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 14 18:07:56.911586 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 14 18:07:56.920650 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 14 18:07:56.924213 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 14 18:07:56.924421 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 14 18:07:56.924605 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 14 18:07:56.924692 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 14 18:07:56.924778 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 14 18:07:56.930677 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 14 18:07:56.930994 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 14 18:07:56.933573 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
May 14 18:07:56.934181 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 14 18:07:56.934357 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 14 18:07:56.934509 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 14 18:07:56.941591 systemd[1]: Finished ensure-sysext.service. May 14 18:07:56.949567 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... May 14 18:07:56.975602 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 14 18:07:56.977304 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 14 18:07:56.981774 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 14 18:07:57.000610 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 14 18:07:57.001778 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 14 18:07:57.015656 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 14 18:07:57.017328 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 14 18:07:57.018092 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 14 18:07:57.034624 systemd[1]: modprobe@loop.service: Deactivated successfully. May 14 18:07:57.034857 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 14 18:07:57.035958 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 14 18:07:57.038620 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 14 18:07:57.040020 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 14 18:07:57.043655 systemd[1]: modprobe@drm.service: Deactivated successfully. May 14 18:07:57.043916 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 14 18:07:57.053498 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 14 18:07:57.056220 systemd-udevd[1403]: Using default interface naming scheme 'v255'. May 14 18:07:57.057282 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 14 18:07:57.101322 augenrules[1446]: No rules May 14 18:07:57.101511 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 14 18:07:57.103746 systemd[1]: audit-rules.service: Deactivated successfully. May 14 18:07:57.104068 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 14 18:07:57.112442 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 14 18:07:57.229493 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 14 18:07:57.230093 systemd[1]: Reached target time-set.target - System Time Set. May 14 18:07:57.256677 systemd-resolved[1402]: Positive Trust Anchors: May 14 18:07:57.256695 systemd-resolved[1402]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 14 18:07:57.256738 systemd-resolved[1402]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 14 18:07:57.257476 systemd-networkd[1459]: lo: Link UP May 14 18:07:57.257481 systemd-networkd[1459]: lo: Gained carrier May 14 18:07:57.258453 systemd-networkd[1459]: Enumeration completed May 14 18:07:57.258589 systemd[1]: Started systemd-networkd.service - Network Configuration. May 14 18:07:57.263336 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... May 14 18:07:57.265773 systemd-resolved[1402]: Using system hostname 'ci-4334.0.0-a-9d82e253c5'. May 14 18:07:57.267620 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 14 18:07:57.274173 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 14 18:07:57.274904 systemd[1]: Reached target network.target - Network. May 14 18:07:57.276346 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 14 18:07:57.276910 systemd[1]: Reached target sysinit.target - System Initialization. May 14 18:07:57.277561 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 14 18:07:57.278127 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 14 18:07:57.278625 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. May 14 18:07:57.279158 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 14 18:07:57.279637 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 14 18:07:57.280013 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 14 18:07:57.280409 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 14 18:07:57.280442 systemd[1]: Reached target paths.target - Path Units. May 14 18:07:57.280739 systemd[1]: Reached target timers.target - Timer Units. May 14 18:07:57.282180 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 14 18:07:57.285456 systemd[1]: Starting docker.socket - Docker Socket for the API... May 14 18:07:57.290445 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). May 14 18:07:57.291629 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). May 14 18:07:57.292154 systemd[1]: Reached target ssh-access.target - SSH Access Available. May 14 18:07:57.301561 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 14 18:07:57.304067 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. May 14 18:07:57.305780 systemd[1]: Listening on docker.socket - Docker Socket for the API. 
May 14 18:07:57.307194 systemd[1]: Reached target sockets.target - Socket Units. May 14 18:07:57.307591 systemd[1]: Reached target basic.target - Basic System. May 14 18:07:57.307941 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 14 18:07:57.307969 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 14 18:07:57.311575 systemd[1]: Starting containerd.service - containerd container runtime... May 14 18:07:57.318674 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... May 14 18:07:57.321750 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 14 18:07:57.328498 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 14 18:07:57.332595 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 14 18:07:57.340889 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 14 18:07:57.341434 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 14 18:07:57.353367 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... May 14 18:07:57.361403 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 14 18:07:57.369505 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 14 18:07:57.374591 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 14 18:07:57.378615 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 14 18:07:57.396821 jq[1488]: false May 14 18:07:57.395681 systemd[1]: Starting systemd-logind.service - User Login Management... May 14 18:07:57.397936 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 14 18:07:57.400678 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 14 18:07:57.402020 systemd[1]: Starting update-engine.service - Update Engine... May 14 18:07:57.410568 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 14 18:07:57.413517 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. May 14 18:07:57.425304 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 14 18:07:57.426782 oslogin_cache_refresh[1490]: Refreshing passwd entry cache May 14 18:07:57.430129 google_oslogin_nss_cache[1490]: oslogin_cache_refresh[1490]: Refreshing passwd entry cache May 14 18:07:57.426764 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 14 18:07:57.427105 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 14 18:07:57.445239 google_oslogin_nss_cache[1490]: oslogin_cache_refresh[1490]: Failure getting users, quitting May 14 18:07:57.445239 google_oslogin_nss_cache[1490]: oslogin_cache_refresh[1490]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. 
May 14 18:07:57.445239 google_oslogin_nss_cache[1490]: oslogin_cache_refresh[1490]: Refreshing group entry cache May 14 18:07:57.438682 oslogin_cache_refresh[1490]: Failure getting users, quitting May 14 18:07:57.438705 oslogin_cache_refresh[1490]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. May 14 18:07:57.438769 oslogin_cache_refresh[1490]: Refreshing group entry cache May 14 18:07:57.449569 oslogin_cache_refresh[1490]: Failure getting groups, quitting May 14 18:07:57.450678 google_oslogin_nss_cache[1490]: oslogin_cache_refresh[1490]: Failure getting groups, quitting May 14 18:07:57.450678 google_oslogin_nss_cache[1490]: oslogin_cache_refresh[1490]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. May 14 18:07:57.449603 oslogin_cache_refresh[1490]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. May 14 18:07:57.451493 systemd[1]: google-oslogin-cache.service: Deactivated successfully. May 14 18:07:57.452350 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. May 14 18:07:57.469661 coreos-metadata[1485]: May 14 18:07:57.469 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 May 14 18:07:57.469661 coreos-metadata[1485]: May 14 18:07:57.469 INFO Failed to fetch: error sending request for url (http://169.254.169.254/metadata/v1.json) May 14 18:07:57.470961 extend-filesystems[1489]: Found loop4 May 14 18:07:57.472180 extend-filesystems[1489]: Found loop5 May 14 18:07:57.474506 extend-filesystems[1489]: Found loop6 May 14 18:07:57.474506 extend-filesystems[1489]: Found loop7 May 14 18:07:57.474506 extend-filesystems[1489]: Found vda May 14 18:07:57.474506 extend-filesystems[1489]: Found vda1 May 14 18:07:57.474506 extend-filesystems[1489]: Found vda2 May 14 18:07:57.474506 extend-filesystems[1489]: Found vda3 May 14 18:07:57.474506 extend-filesystems[1489]: Found usr May 14 18:07:57.474506 extend-filesystems[1489]: Found vda4 May 14 18:07:57.474506 extend-filesystems[1489]: Found vda6 May 14 18:07:57.474506 extend-filesystems[1489]: Found vda7 May 14 18:07:57.474506 extend-filesystems[1489]: Found vda9 May 14 18:07:57.474506 extend-filesystems[1489]: Found vdb May 14 18:07:57.472996 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 14 18:07:57.473296 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 14 18:07:57.496402 jq[1499]: true May 14 18:07:57.475945 systemd[1]: extend-filesystems.service: Deactivated successfully. May 14 18:07:57.477157 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 14 18:07:57.518683 dbus-daemon[1486]: [system] SELinux support is enabled May 14 18:07:57.518954 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 14 18:07:57.522524 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 14 18:07:57.522565 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 14 18:07:57.523079 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 14 18:07:57.523100 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
May 14 18:07:57.535830 (ntainerd)[1515]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 14 18:07:57.564641 jq[1518]: true May 14 18:07:57.566808 update_engine[1498]: I20250514 18:07:57.560173 1498 main.cc:92] Flatcar Update Engine starting May 14 18:07:57.569094 tar[1502]: linux-amd64/helm May 14 18:07:57.575652 systemd[1]: Started update-engine.service - Update Engine. May 14 18:07:57.576363 update_engine[1498]: I20250514 18:07:57.575891 1498 update_check_scheduler.cc:74] Next update check in 7m10s May 14 18:07:57.609507 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 14 18:07:57.611956 systemd[1]: motdgen.service: Deactivated successfully. May 14 18:07:57.613064 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 14 18:07:57.786256 bash[1547]: Updated "/home/core/.ssh/authorized_keys" May 14 18:07:57.789312 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 14 18:07:57.805438 systemd[1]: Starting sshkeys.service... May 14 18:07:57.902709 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. May 14 18:07:57.908098 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... May 14 18:07:57.987050 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 14 18:07:57.989790 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. May 14 18:07:57.995433 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 14 18:07:58.058387 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 14 18:07:58.086660 coreos-metadata[1552]: May 14 18:07:58.086 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 May 14 18:07:58.088774 coreos-metadata[1552]: May 14 18:07:58.088 INFO Failed to fetch: error sending request for url (http://169.254.169.254/metadata/v1.json) May 14 18:07:58.094916 locksmithd[1527]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 14 18:07:58.117948 containerd[1515]: time="2025-05-14T18:07:58Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 May 14 18:07:58.119514 sshd_keygen[1523]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 14 18:07:58.131125 containerd[1515]: time="2025-05-14T18:07:58.131058255Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 May 14 18:07:58.190053 systemd-logind[1497]: New seat seat0. May 14 18:07:58.191289 systemd[1]: Started systemd-logind.service - User Login Management. 
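Annotator's note: update_engine schedules its first check at a staggered offset ("Next update check in 7m10s") so a fleet of freshly booted machines does not hit the update server simultaneously. A sketch of that kind of jittered scheduling; the base interval and fuzz window below are assumed illustrative values, not Flatcar's actual constants:

```python
# Sketch of a jittered first-check delay like update_engine's "Next update
# check in 7m10s"; base/fuzz are invented for illustration.
import random

def next_check_delay(base_s: int = 420, fuzz_s: int = 60) -> int:
    # Pick uniformly inside [base - fuzz/2, base + fuzz/2].
    return base_s + random.randint(-fuzz_s // 2, fuzz_s // 2)

delay = next_check_delay()
print(f"Next update check in {delay // 60}m{delay % 60}s")
```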
May 14 18:07:58.200239 containerd[1515]: time="2025-05-14T18:07:58.198436242Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="11.727µs" May 14 18:07:58.200239 containerd[1515]: time="2025-05-14T18:07:58.198476833Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 May 14 18:07:58.200239 containerd[1515]: time="2025-05-14T18:07:58.198499050Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 May 14 18:07:58.200239 containerd[1515]: time="2025-05-14T18:07:58.198673251Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 May 14 18:07:58.200239 containerd[1515]: time="2025-05-14T18:07:58.198686668Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 May 14 18:07:58.200239 containerd[1515]: time="2025-05-14T18:07:58.198712847Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 14 18:07:58.200239 containerd[1515]: time="2025-05-14T18:07:58.198793148Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 14 18:07:58.200239 containerd[1515]: time="2025-05-14T18:07:58.198807325Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 14 18:07:58.200239 containerd[1515]: time="2025-05-14T18:07:58.199078696Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 14 18:07:58.200239 containerd[1515]: time="2025-05-14T18:07:58.199094441Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 14 18:07:58.200239 containerd[1515]: time="2025-05-14T18:07:58.199104795Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 14 18:07:58.200239 containerd[1515]: time="2025-05-14T18:07:58.199113158Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 May 14 18:07:58.200607 containerd[1515]: time="2025-05-14T18:07:58.199187747Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 May 14 18:07:58.203664 containerd[1515]: time="2025-05-14T18:07:58.200913835Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 14 18:07:58.203664 containerd[1515]: time="2025-05-14T18:07:58.200961348Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 14 18:07:58.203664 containerd[1515]: time="2025-05-14T18:07:58.200974535Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 May 14 18:07:58.203664 containerd[1515]: time="2025-05-14T18:07:58.201028766Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 May 14 18:07:58.215608 containerd[1515]: 
time="2025-05-14T18:07:58.215560633Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 May 14 18:07:58.228553 containerd[1515]: time="2025-05-14T18:07:58.227420936Z" level=info msg="metadata content store policy set" policy=shared May 14 18:07:58.229574 systemd[1]: Condition check resulted in dev-disk-by\x2dlabel-config\x2d2.device - /dev/disk/by-label/config-2 being skipped. May 14 18:07:58.239606 systemd[1]: Mounting media-configdrive.mount - /media/configdrive... May 14 18:07:58.241303 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 14 18:07:58.243232 containerd[1515]: time="2025-05-14T18:07:58.243116693Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 May 14 18:07:58.247234 containerd[1515]: time="2025-05-14T18:07:58.243209357Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 May 14 18:07:58.247234 containerd[1515]: time="2025-05-14T18:07:58.243486357Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 May 14 18:07:58.247234 containerd[1515]: time="2025-05-14T18:07:58.243506229Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 May 14 18:07:58.247234 containerd[1515]: time="2025-05-14T18:07:58.243524604Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 May 14 18:07:58.247234 containerd[1515]: time="2025-05-14T18:07:58.243541386Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 May 14 18:07:58.247234 containerd[1515]: time="2025-05-14T18:07:58.243566017Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 May 14 18:07:58.247234 containerd[1515]: time="2025-05-14T18:07:58.243578573Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 May 14 18:07:58.247234 containerd[1515]: time="2025-05-14T18:07:58.243603878Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 May 14 18:07:58.247234 containerd[1515]: time="2025-05-14T18:07:58.243618072Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 May 14 18:07:58.247234 containerd[1515]: time="2025-05-14T18:07:58.243633708Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 May 14 18:07:58.247234 containerd[1515]: time="2025-05-14T18:07:58.243668091Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 May 14 18:07:58.247234 containerd[1515]: time="2025-05-14T18:07:58.243884926Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 May 14 18:07:58.247234 containerd[1515]: time="2025-05-14T18:07:58.243922625Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 May 14 18:07:58.247234 containerd[1515]: time="2025-05-14T18:07:58.243946903Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 May 14 18:07:58.247644 containerd[1515]: time="2025-05-14T18:07:58.243963214Z" level=info msg="loading plugin" 
id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 May 14 18:07:58.247644 containerd[1515]: time="2025-05-14T18:07:58.243979053Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 May 14 18:07:58.247644 containerd[1515]: time="2025-05-14T18:07:58.243992983Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 May 14 18:07:58.247644 containerd[1515]: time="2025-05-14T18:07:58.244007368Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 May 14 18:07:58.247644 containerd[1515]: time="2025-05-14T18:07:58.244021297Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 May 14 18:07:58.247644 containerd[1515]: time="2025-05-14T18:07:58.244041601Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 May 14 18:07:58.247644 containerd[1515]: time="2025-05-14T18:07:58.244057962Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 May 14 18:07:58.247644 containerd[1515]: time="2025-05-14T18:07:58.244075529Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 May 14 18:07:58.247644 containerd[1515]: time="2025-05-14T18:07:58.244171307Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" May 14 18:07:58.247644 containerd[1515]: time="2025-05-14T18:07:58.244194457Z" level=info msg="Start snapshots syncer" May 14 18:07:58.247644 containerd[1515]: time="2025-05-14T18:07:58.246576979Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 May 14 18:07:58.247973 containerd[1515]: time="2025-05-14T18:07:58.246859101Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" May 14 18:07:58.247973 containerd[1515]: time="2025-05-14T18:07:58.246930574Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 May 14 18:07:58.248152 containerd[1515]: time="2025-05-14T18:07:58.247065125Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 May 14 18:07:58.255236 containerd[1515]: time="2025-05-14T18:07:58.253344907Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 May 14 18:07:58.255236 containerd[1515]: time="2025-05-14T18:07:58.253404560Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 May 14 18:07:58.255236 containerd[1515]: time="2025-05-14T18:07:58.253421880Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 May 14 18:07:58.255236 containerd[1515]: time="2025-05-14T18:07:58.253438134Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 May 14 18:07:58.255236 containerd[1515]: time="2025-05-14T18:07:58.253452087Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 May 14 18:07:58.255236 containerd[1515]: time="2025-05-14T18:07:58.253463240Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 May 14 18:07:58.255236 containerd[1515]: time="2025-05-14T18:07:58.253501489Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 May 14 18:07:58.255236 containerd[1515]: time="2025-05-14T18:07:58.253575384Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 May 14 18:07:58.255236 containerd[1515]: 
time="2025-05-14T18:07:58.253594343Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 May 14 18:07:58.255236 containerd[1515]: time="2025-05-14T18:07:58.253626577Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 May 14 18:07:58.255236 containerd[1515]: time="2025-05-14T18:07:58.253681464Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 14 18:07:58.255236 containerd[1515]: time="2025-05-14T18:07:58.253702445Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 14 18:07:58.255236 containerd[1515]: time="2025-05-14T18:07:58.253711822Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 14 18:07:58.255600 containerd[1515]: time="2025-05-14T18:07:58.253721869Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 14 18:07:58.255600 containerd[1515]: time="2025-05-14T18:07:58.253729965Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 May 14 18:07:58.255600 containerd[1515]: time="2025-05-14T18:07:58.253739341Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 May 14 18:07:58.255600 containerd[1515]: time="2025-05-14T18:07:58.253750160Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 May 14 18:07:58.255600 containerd[1515]: time="2025-05-14T18:07:58.253767675Z" level=info msg="runtime interface created" May 14 18:07:58.255600 containerd[1515]: time="2025-05-14T18:07:58.253774783Z" level=info msg="created NRI interface" May 14 18:07:58.255600 containerd[1515]: time="2025-05-14T18:07:58.253786686Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 May 14 18:07:58.255600 containerd[1515]: time="2025-05-14T18:07:58.253809646Z" level=info msg="Connect containerd service" May 14 18:07:58.255600 containerd[1515]: time="2025-05-14T18:07:58.253849221Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 14 18:07:58.263919 containerd[1515]: time="2025-05-14T18:07:58.262613294Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 14 18:07:58.319436 kernel: ISO 9660 Extensions: RRIP_1991A May 14 18:07:58.322694 systemd[1]: Mounted media-configdrive.mount - /media/configdrive. May 14 18:07:58.325019 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean). May 14 18:07:58.330626 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 14 18:07:58.335122 kernel: mousedev: PS/2 mouse device common for all mice May 14 18:07:58.337418 systemd[1]: Starting issuegen.service - Generate /run/issue... May 14 18:07:58.375589 systemd[1]: issuegen.service: Deactivated successfully. May 14 18:07:58.376314 systemd[1]: Finished issuegen.service - Generate /run/issue. 
May 14 18:07:58.381576 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 14 18:07:58.410925 systemd-networkd[1459]: eth1: Configuring with /run/systemd/network/10-ba:76:69:c0:fb:05.network. May 14 18:07:58.416094 systemd-networkd[1459]: eth0: Configuring with /run/systemd/network/10-76:5e:f8:27:42:d7.network. May 14 18:07:58.416745 systemd-networkd[1459]: eth1: Link UP May 14 18:07:58.416970 systemd-networkd[1459]: eth1: Gained carrier May 14 18:07:58.427807 systemd-networkd[1459]: eth0: Link UP May 14 18:07:58.432514 systemd-networkd[1459]: eth0: Gained carrier May 14 18:07:58.448988 systemd-timesyncd[1417]: Network configuration changed, trying to establish connection. May 14 18:07:58.458074 systemd-timesyncd[1417]: Network configuration changed, trying to establish connection. May 14 18:07:58.471421 coreos-metadata[1485]: May 14 18:07:58.471 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #2 May 14 18:07:58.481924 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 May 14 18:07:58.482449 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD May 14 18:07:58.483527 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 14 18:07:58.490598 coreos-metadata[1485]: May 14 18:07:58.489 INFO Fetch successful May 14 18:07:58.492737 systemd[1]: Started getty@tty1.service - Getty on tty1. May 14 18:07:58.498303 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. May 14 18:07:58.499532 systemd[1]: Reached target getty.target - Login Prompts. May 14 18:07:58.549229 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 May 14 18:07:58.554306 kernel: ACPI: button: Power Button [PWRF] May 14 18:07:58.562772 containerd[1515]: time="2025-05-14T18:07:58.562714080Z" level=info msg="Start subscribing containerd event" May 14 18:07:58.562890 containerd[1515]: time="2025-05-14T18:07:58.562810103Z" level=info msg="Start recovering state" May 14 18:07:58.562950 containerd[1515]: time="2025-05-14T18:07:58.562932546Z" level=info msg="Start event monitor" May 14 18:07:58.563007 containerd[1515]: time="2025-05-14T18:07:58.562959245Z" level=info msg="Start cni network conf syncer for default" May 14 18:07:58.563007 containerd[1515]: time="2025-05-14T18:07:58.562971610Z" level=info msg="Start streaming server" May 14 18:07:58.563007 containerd[1515]: time="2025-05-14T18:07:58.562982382Z" level=info msg="Registered namespace \"k8s.io\" with NRI" May 14 18:07:58.563007 containerd[1515]: time="2025-05-14T18:07:58.562992688Z" level=info msg="runtime interface starting up..." May 14 18:07:58.563007 containerd[1515]: time="2025-05-14T18:07:58.563002085Z" level=info msg="starting plugins..." May 14 18:07:58.563105 containerd[1515]: time="2025-05-14T18:07:58.563023194Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" May 14 18:07:58.564006 containerd[1515]: time="2025-05-14T18:07:58.563974849Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 14 18:07:58.564353 containerd[1515]: time="2025-05-14T18:07:58.564334680Z" level=info msg=serving... address=/run/containerd/containerd.sock May 14 18:07:58.564592 containerd[1515]: time="2025-05-14T18:07:58.564576250Z" level=info msg="containerd successfully booted in 0.447174s" May 14 18:07:58.564621 systemd[1]: Started containerd.service - containerd container runtime. May 14 18:07:58.597555 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. 
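Annotator's note: systemd-networkd configures each NIC from a per-MAC unit under /run/systemd/network (10-ba:76:69:c0:fb:05.network for eth1, 10-76:5e:f8:27:42:d7.network for eth0), written earlier in boot so each interface is matched unambiguously by hardware address. A sketch of rendering such a unit; the [Network] policy shown is an assumption, only the MAC-match mechanism is from the log:

```python
# Illustrative renderer for the per-NIC unit files networkd logs above;
# MACAddress matching is standard systemd.network syntax, the DHCP policy
# is an assumed placeholder.
NETWORK_UNIT = """\
[Match]
MACAddress={mac}

[Network]
DHCP=yes
"""

def render_unit(mac: str) -> str:
    return NETWORK_UNIT.format(mac=mac)

print(render_unit("76:5e:f8:27:42:d7"))  # eth0's MAC from the log
```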
May 14 18:07:58.600681 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 14 18:07:58.817235 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 May 14 18:07:58.817366 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console May 14 18:07:58.820623 kernel: Console: switching to colour dummy device 80x25 May 14 18:07:58.821799 kernel: [drm] features: -virgl +edid -resource_blob -host_visible May 14 18:07:58.821909 kernel: [drm] features: -context_init May 14 18:07:58.823225 kernel: [drm] number of scanouts: 1 May 14 18:07:58.823290 kernel: [drm] number of cap sets: 0 May 14 18:07:58.824222 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0 May 14 18:07:58.849654 systemd-logind[1497]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 14 18:07:58.868676 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 14 18:07:58.880359 systemd-logind[1497]: Watching system buttons on /dev/input/event2 (Power Button) May 14 18:07:58.918733 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 14 18:07:58.919587 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 14 18:07:58.929522 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 14 18:07:59.005249 kernel: EDAC MC: Ver: 3.0.0 May 14 18:07:59.046808 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 14 18:07:59.089080 coreos-metadata[1552]: May 14 18:07:59.088 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #2 May 14 18:07:59.103358 coreos-metadata[1552]: May 14 18:07:59.102 INFO Fetch successful May 14 18:07:59.110341 unknown[1552]: wrote ssh authorized keys file for user: core May 14 18:07:59.116141 tar[1502]: linux-amd64/LICENSE May 14 18:07:59.116740 tar[1502]: linux-amd64/README.md May 14 18:07:59.133356 update-ssh-keys[1636]: Updated "/home/core/.ssh/authorized_keys" May 14 18:07:59.133755 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 14 18:07:59.135682 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). May 14 18:07:59.138702 systemd[1]: Finished sshkeys.service. May 14 18:07:59.590492 systemd-networkd[1459]: eth1: Gained IPv6LL May 14 18:07:59.591315 systemd-timesyncd[1417]: Network configuration changed, trying to establish connection. May 14 18:07:59.594791 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 14 18:07:59.596500 systemd[1]: Reached target network-online.target - Network is Online. May 14 18:07:59.599634 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 18:07:59.603354 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 14 18:07:59.637243 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 14 18:08:00.358443 systemd-networkd[1459]: eth0: Gained IPv6LL May 14 18:08:00.359153 systemd-timesyncd[1417]: Network configuration changed, trying to establish connection. May 14 18:08:00.645510 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 18:08:00.649609 systemd[1]: Reached target multi-user.target - Multi-User System. May 14 18:08:00.650452 systemd[1]: Startup finished in 3.461s (kernel) + 6.429s (initrd) + 6.601s (userspace) = 16.492s. 
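Annotator's note: the "Startup finished" line sums three independently rounded phase timings, which is why the naive sum of the displayed figures can differ from the printed total by a millisecond:

```python
# Sanity check on the "Startup finished" line: each phase is rounded to the
# millisecond before display, so 3.461 + 6.429 + 6.601 = 16.491 while
# systemd, summing the raw timestamps, prints 16.492s.
kernel, initrd, userspace = 3.461, 6.429, 6.601
print(round(kernel + initrd + userspace, 3))  # 16.491
```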
May 14 18:08:00.654811 (kubelet)[1659]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 18:08:01.450707 kubelet[1659]: E0514 18:08:01.450630 1659 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 18:08:01.453856 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 18:08:01.454152 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 18:08:01.454955 systemd[1]: kubelet.service: Consumed 1.302s CPU time, 240.7M memory peak. May 14 18:08:01.721031 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 14 18:08:01.723367 systemd[1]: Started sshd@0-164.92.104.130:22-139.178.89.65:46702.service - OpenSSH per-connection server daemon (139.178.89.65:46702). May 14 18:08:01.842465 sshd[1673]: Accepted publickey for core from 139.178.89.65 port 46702 ssh2: RSA SHA256:I6v7602y95t0HxsKZunlpQRdbWqTS6jK7hLc8ah5Xaw May 14 18:08:01.847082 sshd-session[1673]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:08:01.868124 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 14 18:08:01.870416 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 14 18:08:01.878330 systemd-logind[1497]: New session 1 of user core. May 14 18:08:01.908379 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 14 18:08:01.914550 systemd[1]: Starting user@500.service - User Manager for UID 500... May 14 18:08:01.936462 (systemd)[1677]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 14 18:08:01.942625 systemd-logind[1497]: New session c1 of user core. May 14 18:08:02.134949 systemd[1677]: Queued start job for default target default.target. May 14 18:08:02.143130 systemd[1677]: Created slice app.slice - User Application Slice. May 14 18:08:02.143186 systemd[1677]: Reached target paths.target - Paths. May 14 18:08:02.143313 systemd[1677]: Reached target timers.target - Timers. May 14 18:08:02.145650 systemd[1677]: Starting dbus.socket - D-Bus User Message Bus Socket... May 14 18:08:02.181820 systemd[1677]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 14 18:08:02.181935 systemd[1677]: Reached target sockets.target - Sockets. May 14 18:08:02.182036 systemd[1677]: Reached target basic.target - Basic System. May 14 18:08:02.182168 systemd[1677]: Reached target default.target - Main User Target. May 14 18:08:02.182614 systemd[1]: Started user@500.service - User Manager for UID 500. May 14 18:08:02.182871 systemd[1677]: Startup finished in 227ms. May 14 18:08:02.197763 systemd[1]: Started session-1.scope - Session 1 of User core. May 14 18:08:02.271655 systemd[1]: Started sshd@1-164.92.104.130:22-139.178.89.65:46704.service - OpenSSH per-connection server daemon (139.178.89.65:46704). May 14 18:08:02.347795 sshd[1688]: Accepted publickey for core from 139.178.89.65 port 46704 ssh2: RSA SHA256:I6v7602y95t0HxsKZunlpQRdbWqTS6jK7hLc8ah5Xaw May 14 18:08:02.351030 sshd-session[1688]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:08:02.360483 systemd-logind[1497]: New session 2 of user core. 
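Annotator's note: the kubelet exit above is the normal pre-bootstrap failure mode: /var/lib/kubelet/config.yaml does not exist until `kubeadm init` or `kubeadm join` writes it, and systemd keeps restarting the unit in the meantime. A hypothetical minimal KubeletConfiguration of the kind kubeadm drops there; the field values are illustrative, not the file this node eventually received:

```python
# Hypothetical minimal KubeletConfiguration for /var/lib/kubelet/config.yaml;
# apiVersion/kind are the real schema identifiers, the remaining values are
# illustrative assumptions.
import pathlib

MINIMAL_CONFIG = """\
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
staticPodPath: /etc/kubernetes/manifests
"""

path = pathlib.Path("/var/lib/kubelet/config.yaml")
if not path.exists():
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(MINIMAL_CONFIG)
```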
May 14 18:08:02.375632 systemd[1]: Started session-2.scope - Session 2 of User core. May 14 18:08:02.446562 sshd[1690]: Connection closed by 139.178.89.65 port 46704 May 14 18:08:02.447394 sshd-session[1688]: pam_unix(sshd:session): session closed for user core May 14 18:08:02.463413 systemd[1]: sshd@1-164.92.104.130:22-139.178.89.65:46704.service: Deactivated successfully. May 14 18:08:02.467054 systemd[1]: session-2.scope: Deactivated successfully. May 14 18:08:02.468969 systemd-logind[1497]: Session 2 logged out. Waiting for processes to exit. May 14 18:08:02.476636 systemd[1]: Started sshd@2-164.92.104.130:22-139.178.89.65:46714.service - OpenSSH per-connection server daemon (139.178.89.65:46714). May 14 18:08:02.478358 systemd-logind[1497]: Removed session 2. May 14 18:08:02.560424 sshd[1696]: Accepted publickey for core from 139.178.89.65 port 46714 ssh2: RSA SHA256:I6v7602y95t0HxsKZunlpQRdbWqTS6jK7hLc8ah5Xaw May 14 18:08:02.563335 sshd-session[1696]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:08:02.573983 systemd-logind[1497]: New session 3 of user core. May 14 18:08:02.580607 systemd[1]: Started session-3.scope - Session 3 of User core. May 14 18:08:02.643329 sshd[1698]: Connection closed by 139.178.89.65 port 46714 May 14 18:08:02.644297 sshd-session[1696]: pam_unix(sshd:session): session closed for user core May 14 18:08:02.661971 systemd[1]: sshd@2-164.92.104.130:22-139.178.89.65:46714.service: Deactivated successfully. May 14 18:08:02.665093 systemd[1]: session-3.scope: Deactivated successfully. May 14 18:08:02.666611 systemd-logind[1497]: Session 3 logged out. Waiting for processes to exit. May 14 18:08:02.671263 systemd[1]: Started sshd@3-164.92.104.130:22-139.178.89.65:46720.service - OpenSSH per-connection server daemon (139.178.89.65:46720). May 14 18:08:02.672615 systemd-logind[1497]: Removed session 3. May 14 18:08:02.752572 sshd[1704]: Accepted publickey for core from 139.178.89.65 port 46720 ssh2: RSA SHA256:I6v7602y95t0HxsKZunlpQRdbWqTS6jK7hLc8ah5Xaw May 14 18:08:02.754804 sshd-session[1704]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:08:02.763975 systemd-logind[1497]: New session 4 of user core. May 14 18:08:02.770632 systemd[1]: Started session-4.scope - Session 4 of User core. May 14 18:08:02.838395 sshd[1706]: Connection closed by 139.178.89.65 port 46720 May 14 18:08:02.839158 sshd-session[1704]: pam_unix(sshd:session): session closed for user core May 14 18:08:02.853526 systemd[1]: sshd@3-164.92.104.130:22-139.178.89.65:46720.service: Deactivated successfully. May 14 18:08:02.857109 systemd[1]: session-4.scope: Deactivated successfully. May 14 18:08:02.859065 systemd-logind[1497]: Session 4 logged out. Waiting for processes to exit. May 14 18:08:02.865716 systemd[1]: Started sshd@4-164.92.104.130:22-139.178.89.65:46730.service - OpenSSH per-connection server daemon (139.178.89.65:46730). May 14 18:08:02.868418 systemd-logind[1497]: Removed session 4. May 14 18:08:02.950786 sshd[1712]: Accepted publickey for core from 139.178.89.65 port 46730 ssh2: RSA SHA256:I6v7602y95t0HxsKZunlpQRdbWqTS6jK7hLc8ah5Xaw May 14 18:08:02.952962 sshd-session[1712]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:08:02.961263 systemd-logind[1497]: New session 5 of user core. May 14 18:08:02.972722 systemd[1]: Started session-5.scope - Session 5 of User core. 
May 14 18:08:03.058627 sudo[1715]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 14 18:08:03.058968 sudo[1715]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 14 18:08:03.079429 sudo[1715]: pam_unix(sudo:session): session closed for user root May 14 18:08:03.084309 sshd[1714]: Connection closed by 139.178.89.65 port 46730 May 14 18:08:03.085932 sshd-session[1712]: pam_unix(sshd:session): session closed for user core May 14 18:08:03.102652 systemd[1]: sshd@4-164.92.104.130:22-139.178.89.65:46730.service: Deactivated successfully. May 14 18:08:03.106364 systemd[1]: session-5.scope: Deactivated successfully. May 14 18:08:03.108517 systemd-logind[1497]: Session 5 logged out. Waiting for processes to exit. May 14 18:08:03.114698 systemd[1]: Started sshd@5-164.92.104.130:22-139.178.89.65:46746.service - OpenSSH per-connection server daemon (139.178.89.65:46746). May 14 18:08:03.116430 systemd-logind[1497]: Removed session 5. May 14 18:08:03.199991 sshd[1721]: Accepted publickey for core from 139.178.89.65 port 46746 ssh2: RSA SHA256:I6v7602y95t0HxsKZunlpQRdbWqTS6jK7hLc8ah5Xaw May 14 18:08:03.202848 sshd-session[1721]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:08:03.213626 systemd-logind[1497]: New session 6 of user core. May 14 18:08:03.219735 systemd[1]: Started session-6.scope - Session 6 of User core. May 14 18:08:03.288073 sudo[1725]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 14 18:08:03.288578 sudo[1725]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 14 18:08:03.296716 sudo[1725]: pam_unix(sudo:session): session closed for user root May 14 18:08:03.307335 sudo[1724]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 14 18:08:03.307809 sudo[1724]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 14 18:08:03.330772 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 14 18:08:03.395006 augenrules[1747]: No rules May 14 18:08:03.397261 systemd[1]: audit-rules.service: Deactivated successfully. May 14 18:08:03.397739 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 14 18:08:03.400458 sudo[1724]: pam_unix(sudo:session): session closed for user root May 14 18:08:03.403661 sshd[1723]: Connection closed by 139.178.89.65 port 46746 May 14 18:08:03.404784 sshd-session[1721]: pam_unix(sshd:session): session closed for user core May 14 18:08:03.417013 systemd[1]: sshd@5-164.92.104.130:22-139.178.89.65:46746.service: Deactivated successfully. May 14 18:08:03.420321 systemd[1]: session-6.scope: Deactivated successfully. May 14 18:08:03.421888 systemd-logind[1497]: Session 6 logged out. Waiting for processes to exit. May 14 18:08:03.427082 systemd[1]: Started sshd@6-164.92.104.130:22-139.178.89.65:46762.service - OpenSSH per-connection server daemon (139.178.89.65:46762). May 14 18:08:03.428022 systemd-logind[1497]: Removed session 6. May 14 18:08:03.496720 sshd[1756]: Accepted publickey for core from 139.178.89.65 port 46762 ssh2: RSA SHA256:I6v7602y95t0HxsKZunlpQRdbWqTS6jK7hLc8ah5Xaw May 14 18:08:03.499679 sshd-session[1756]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:08:03.508059 systemd-logind[1497]: New session 7 of user core. May 14 18:08:03.527650 systemd[1]: Started session-7.scope - Session 7 of User core. 
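Annotator's note: the sudo sequence above deletes the two rule files under /etc/audit/rules.d and then restarts audit-rules, so augenrules correctly reports "No rules": nothing is left to compile. A small sketch reproducing that check:

```python
# Why augenrules printed "No rules": after 80-selinux.rules and
# 99-default.rules are removed, no non-comment rule lines remain under
# /etc/audit/rules.d.
import pathlib

rules_dir = pathlib.Path("/etc/audit/rules.d")
rule_lines = [
    line
    for f in sorted(rules_dir.glob("*.rules"))
    for line in f.read_text().splitlines()
    if line.strip() and not line.lstrip().startswith("#")
]
print(f"{len(rule_lines)} audit rule(s) found" if rule_lines else "No rules")
```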
May 14 18:08:03.593853 sudo[1759]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 14 18:08:03.595361 sudo[1759]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 14 18:08:04.233438 systemd[1]: Starting docker.service - Docker Application Container Engine... May 14 18:08:04.249957 (dockerd)[1776]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 14 18:08:04.662280 dockerd[1776]: time="2025-05-14T18:08:04.661448591Z" level=info msg="Starting up" May 14 18:08:04.669576 dockerd[1776]: time="2025-05-14T18:08:04.668970276Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" May 14 18:08:04.741803 dockerd[1776]: time="2025-05-14T18:08:04.741532172Z" level=info msg="Loading containers: start." May 14 18:08:04.754288 kernel: Initializing XFRM netlink socket May 14 18:08:05.023582 systemd-timesyncd[1417]: Network configuration changed, trying to establish connection. May 14 18:08:05.024329 systemd-timesyncd[1417]: Network configuration changed, trying to establish connection. May 14 18:08:05.038747 systemd-timesyncd[1417]: Network configuration changed, trying to establish connection. May 14 18:08:05.082531 systemd-networkd[1459]: docker0: Link UP May 14 18:08:05.082937 systemd-timesyncd[1417]: Network configuration changed, trying to establish connection. May 14 18:08:05.085923 dockerd[1776]: time="2025-05-14T18:08:05.085840425Z" level=info msg="Loading containers: done." May 14 18:08:05.107834 dockerd[1776]: time="2025-05-14T18:08:05.107742603Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 14 18:08:05.108081 dockerd[1776]: time="2025-05-14T18:08:05.107866999Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 May 14 18:08:05.108081 dockerd[1776]: time="2025-05-14T18:08:05.108005184Z" level=info msg="Initializing buildkit" May 14 18:08:05.135053 dockerd[1776]: time="2025-05-14T18:08:05.134974572Z" level=info msg="Completed buildkit initialization" May 14 18:08:05.147391 dockerd[1776]: time="2025-05-14T18:08:05.147321009Z" level=info msg="Daemon has completed initialization" May 14 18:08:05.148280 dockerd[1776]: time="2025-05-14T18:08:05.147607858Z" level=info msg="API listen on /run/docker.sock" May 14 18:08:05.147671 systemd[1]: Started docker.service - Docker Application Container Engine. May 14 18:08:06.160604 containerd[1515]: time="2025-05-14T18:08:06.160515258Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\"" May 14 18:08:06.745878 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount852627676.mount: Deactivated successfully. 
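Annotator's note: dockerd finishes initialization and logs "API listen on /run/docker.sock". A minimal liveness probe against that socket, speaking raw HTTP over AF_UNIX with the stdlib; /_ping is Docker's standard health endpoint:

```python
# Minimal liveness probe against the socket dockerd reports serving on;
# raw HTTP/1.0 over AF_UNIX, no third-party client needed.
import socket

with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
    s.connect("/run/docker.sock")
    s.sendall(b"GET /_ping HTTP/1.0\r\nHost: docker\r\n\r\n")
    reply = s.recv(4096).decode()
print(reply.splitlines()[0])  # expect "HTTP/1.0 200 OK" from a live daemon
```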
May 14 18:08:08.172241 containerd[1515]: time="2025-05-14T18:08:08.171943497Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:08:08.174682 containerd[1515]: time="2025-05-14T18:08:08.174598887Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.12: active requests=0, bytes read=32674873" May 14 18:08:08.175197 containerd[1515]: time="2025-05-14T18:08:08.175103914Z" level=info msg="ImageCreate event name:\"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:08:08.178383 containerd[1515]: time="2025-05-14T18:08:08.178313050Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:08:08.179993 containerd[1515]: time="2025-05-14T18:08:08.179684566Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.12\" with image id \"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\", size \"32671673\" in 2.019112385s" May 14 18:08:08.179993 containerd[1515]: time="2025-05-14T18:08:08.179745578Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\" returns image reference \"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\"" May 14 18:08:08.214074 containerd[1515]: time="2025-05-14T18:08:08.213982018Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\"" May 14 18:08:10.334673 containerd[1515]: time="2025-05-14T18:08:10.334483466Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:08:10.339291 containerd[1515]: time="2025-05-14T18:08:10.338046246Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.12: active requests=0, bytes read=29617534" May 14 18:08:10.339995 containerd[1515]: time="2025-05-14T18:08:10.339766654Z" level=info msg="ImageCreate event name:\"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:08:10.350882 containerd[1515]: time="2025-05-14T18:08:10.350656663Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:08:10.354453 containerd[1515]: time="2025-05-14T18:08:10.354383038Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.12\" with image id \"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\", size \"31105907\" in 2.140327951s" May 14 18:08:10.355617 containerd[1515]: time="2025-05-14T18:08:10.354489026Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\" returns image reference \"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\"" May 14 
18:08:10.435560 containerd[1515]: time="2025-05-14T18:08:10.435508948Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\"" May 14 18:08:11.626937 containerd[1515]: time="2025-05-14T18:08:11.626864798Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:08:11.628282 containerd[1515]: time="2025-05-14T18:08:11.628079421Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.12: active requests=0, bytes read=17903682" May 14 18:08:11.628872 containerd[1515]: time="2025-05-14T18:08:11.628823196Z" level=info msg="ImageCreate event name:\"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:08:11.632449 containerd[1515]: time="2025-05-14T18:08:11.632363603Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:08:11.634000 containerd[1515]: time="2025-05-14T18:08:11.633422092Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.12\" with image id \"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\", size \"19392073\" in 1.197663184s" May 14 18:08:11.634000 containerd[1515]: time="2025-05-14T18:08:11.633462119Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\" returns image reference \"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\"" May 14 18:08:11.670933 containerd[1515]: time="2025-05-14T18:08:11.670891426Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\"" May 14 18:08:11.705831 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 14 18:08:11.709859 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 18:08:11.991961 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 18:08:12.003159 (kubelet)[2086]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 18:08:12.085425 kubelet[2086]: E0514 18:08:12.085306 2086 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 18:08:12.093473 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 18:08:12.093831 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 18:08:12.094517 systemd[1]: kubelet.service: Consumed 322ms CPU time, 96.3M memory peak. May 14 18:08:12.877448 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2386285940.mount: Deactivated successfully. 
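Annotator's note: each "Pulled image" record carries both "bytes read" and the elapsed time, which is enough to estimate effective registry throughput for this droplet. Computed from the figures logged above:

```python
# Effective pull throughput from the log: "bytes read" divided by the
# elapsed time containerd reports per image.
pulls = {  # image: (bytes read, seconds) as logged above
    "kube-apiserver:v1.30.12": (32_674_873, 2.019112385),
    "kube-controller-manager:v1.30.12": (29_617_534, 2.140327951),
    "kube-scheduler:v1.30.12": (17_903_682, 1.197663184),
}
for image, (size, secs) in pulls.items():
    print(f"{image}: {size / secs / 1e6:.1f} MB/s")
# Roughly 16.2, 13.8, and 14.9 MB/s respectively.
```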
May 14 18:08:13.481120 containerd[1515]: time="2025-05-14T18:08:13.481066508Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:08:13.482220 containerd[1515]: time="2025-05-14T18:08:13.481963654Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.12: active requests=0, bytes read=29185817" May 14 18:08:13.482923 containerd[1515]: time="2025-05-14T18:08:13.482880903Z" level=info msg="ImageCreate event name:\"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:08:13.484832 containerd[1515]: time="2025-05-14T18:08:13.484786384Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:08:13.486047 containerd[1515]: time="2025-05-14T18:08:13.486003610Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.12\" with image id \"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\", repo tag \"registry.k8s.io/kube-proxy:v1.30.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\", size \"29184836\" in 1.815070632s" May 14 18:08:13.486243 containerd[1515]: time="2025-05-14T18:08:13.486193582Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\"" May 14 18:08:13.516599 containerd[1515]: time="2025-05-14T18:08:13.516546722Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" May 14 18:08:13.518818 systemd-resolved[1402]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. May 14 18:08:14.006004 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2262373333.mount: Deactivated successfully. 
May 14 18:08:14.805361 containerd[1515]: time="2025-05-14T18:08:14.805277623Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:08:14.808255 containerd[1515]: time="2025-05-14T18:08:14.808170223Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" May 14 18:08:14.809796 containerd[1515]: time="2025-05-14T18:08:14.809705293Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:08:14.813345 containerd[1515]: time="2025-05-14T18:08:14.813270453Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:08:14.814364 containerd[1515]: time="2025-05-14T18:08:14.814316211Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.297429734s" May 14 18:08:14.814364 containerd[1515]: time="2025-05-14T18:08:14.814363937Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" May 14 18:08:14.838646 containerd[1515]: time="2025-05-14T18:08:14.838585303Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" May 14 18:08:15.303849 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3120294422.mount: Deactivated successfully. 
May 14 18:08:15.310960 containerd[1515]: time="2025-05-14T18:08:15.310182012Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:08:15.312076 containerd[1515]: time="2025-05-14T18:08:15.312029438Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" May 14 18:08:15.313111 containerd[1515]: time="2025-05-14T18:08:15.313073712Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:08:15.317277 containerd[1515]: time="2025-05-14T18:08:15.317192509Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:08:15.319157 containerd[1515]: time="2025-05-14T18:08:15.319101314Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 480.239846ms" May 14 18:08:15.319535 containerd[1515]: time="2025-05-14T18:08:15.319352471Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" May 14 18:08:15.349903 containerd[1515]: time="2025-05-14T18:08:15.349726710Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" May 14 18:08:15.834161 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4204491980.mount: Deactivated successfully. May 14 18:08:16.615402 systemd-resolved[1402]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2. 
May 14 18:08:17.812472 containerd[1515]: time="2025-05-14T18:08:17.812393126Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:08:17.813734 containerd[1515]: time="2025-05-14T18:08:17.813690405Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571" May 14 18:08:17.815246 containerd[1515]: time="2025-05-14T18:08:17.814241246Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:08:17.818974 containerd[1515]: time="2025-05-14T18:08:17.816917661Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:08:17.818974 containerd[1515]: time="2025-05-14T18:08:17.818468875Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 2.468287487s" May 14 18:08:17.818974 containerd[1515]: time="2025-05-14T18:08:17.818524784Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" May 14 18:08:22.004702 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 14 18:08:22.005887 systemd[1]: kubelet.service: Consumed 322ms CPU time, 96.3M memory peak. May 14 18:08:22.009658 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 18:08:22.050632 systemd[1]: Reload requested from client PID 2302 ('systemctl') (unit session-7.scope)... May 14 18:08:22.050661 systemd[1]: Reloading... May 14 18:08:22.235243 zram_generator::config[2345]: No configuration found. May 14 18:08:22.421139 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 14 18:08:22.582060 systemd[1]: Reloading finished in 530 ms. May 14 18:08:22.647878 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 14 18:08:22.647993 systemd[1]: kubelet.service: Failed with result 'signal'. May 14 18:08:22.648278 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 14 18:08:22.648333 systemd[1]: kubelet.service: Consumed 131ms CPU time, 83.6M memory peak. May 14 18:08:22.650452 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 18:08:22.851352 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 18:08:22.865041 (kubelet)[2399]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 14 18:08:22.929070 kubelet[2399]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 14 18:08:22.929070 kubelet[2399]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. 
Image garbage collector will get sandbox image information from CRI. May 14 18:08:22.929070 kubelet[2399]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 14 18:08:22.933481 kubelet[2399]: I0514 18:08:22.933231 2399 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 14 18:08:23.284705 kubelet[2399]: I0514 18:08:23.283958 2399 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 14 18:08:23.284705 kubelet[2399]: I0514 18:08:23.284372 2399 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 14 18:08:23.284705 kubelet[2399]: I0514 18:08:23.284701 2399 server.go:927] "Client rotation is on, will bootstrap in background" May 14 18:08:23.307555 kubelet[2399]: I0514 18:08:23.307474 2399 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 14 18:08:23.309572 kubelet[2399]: E0514 18:08:23.309510 2399 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://164.92.104.130:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 164.92.104.130:6443: connect: connection refused May 14 18:08:23.324539 kubelet[2399]: I0514 18:08:23.324494 2399 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 14 18:08:23.327721 kubelet[2399]: I0514 18:08:23.327596 2399 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 14 18:08:23.328010 kubelet[2399]: I0514 18:08:23.327692 2399 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4334.0.0-a-9d82e253c5","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 14 18:08:23.328767 
kubelet[2399]: I0514 18:08:23.328714 2399 topology_manager.go:138] "Creating topology manager with none policy" May 14 18:08:23.328767 kubelet[2399]: I0514 18:08:23.328745 2399 container_manager_linux.go:301] "Creating device plugin manager" May 14 18:08:23.329896 kubelet[2399]: I0514 18:08:23.329841 2399 state_mem.go:36] "Initialized new in-memory state store" May 14 18:08:23.331577 kubelet[2399]: W0514 18:08:23.331469 2399 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://164.92.104.130:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4334.0.0-a-9d82e253c5&limit=500&resourceVersion=0": dial tcp 164.92.104.130:6443: connect: connection refused May 14 18:08:23.331577 kubelet[2399]: E0514 18:08:23.331551 2399 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://164.92.104.130:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4334.0.0-a-9d82e253c5&limit=500&resourceVersion=0": dial tcp 164.92.104.130:6443: connect: connection refused May 14 18:08:23.332702 kubelet[2399]: I0514 18:08:23.332660 2399 kubelet.go:400] "Attempting to sync node with API server" May 14 18:08:23.332702 kubelet[2399]: I0514 18:08:23.332700 2399 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 14 18:08:23.332816 kubelet[2399]: I0514 18:08:23.332743 2399 kubelet.go:312] "Adding apiserver pod source" May 14 18:08:23.332816 kubelet[2399]: I0514 18:08:23.332769 2399 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 14 18:08:23.337179 kubelet[2399]: I0514 18:08:23.337046 2399 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 14 18:08:23.338751 kubelet[2399]: I0514 18:08:23.338696 2399 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 14 18:08:23.338906 kubelet[2399]: W0514 18:08:23.338821 2399 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
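The "Creating Container Manager object based on Node Config" record above dumps the whole nodeConfig as one JSON blob, which makes the eviction settings hard to audit by eye. As a reading aid, a minimal Python sketch that decodes just the HardEvictionThresholds field; the JSON literal is trimmed from that log record itself (GracePeriod/MinReclaim fields dropped), nothing else is assumed:

```python
import json

# HardEvictionThresholds, copied from the "Creating Container Manager object"
# record above, minus the GracePeriod/MinReclaim noise.
node_config = json.loads("""
{"HardEvictionThresholds":[
 {"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1}},
 {"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05}},
 {"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15}},
 {"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05}},
 {"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0}}]}
""")

for t in node_config["HardEvictionThresholds"]:
    v = t["Value"]
    threshold = v["Quantity"] if v["Quantity"] else f"{v['Percentage']:.0%}"
    print(f"{t['Signal']:<22} {t['Operator']} {threshold}")
# nodefs.available < 10%, inodes < 5%, imagefs < 15%, memory < 100Mi:
# the stock kubelet hard-eviction defaults.
```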
May 14 18:08:23.340246 kubelet[2399]: I0514 18:08:23.339561 2399 server.go:1264] "Started kubelet" May 14 18:08:23.340246 kubelet[2399]: W0514 18:08:23.339732 2399 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://164.92.104.130:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 164.92.104.130:6443: connect: connection refused May 14 18:08:23.340246 kubelet[2399]: E0514 18:08:23.339784 2399 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://164.92.104.130:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 164.92.104.130:6443: connect: connection refused May 14 18:08:23.351161 kubelet[2399]: E0514 18:08:23.350834 2399 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://164.92.104.130:6443/api/v1/namespaces/default/events\": dial tcp 164.92.104.130:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4334.0.0-a-9d82e253c5.183f7712dcbcc253 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4334.0.0-a-9d82e253c5,UID:ci-4334.0.0-a-9d82e253c5,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4334.0.0-a-9d82e253c5,},FirstTimestamp:2025-05-14 18:08:23.339516499 +0000 UTC m=+0.468682451,LastTimestamp:2025-05-14 18:08:23.339516499 +0000 UTC m=+0.468682451,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4334.0.0-a-9d82e253c5,}" May 14 18:08:23.352023 kubelet[2399]: I0514 18:08:23.351870 2399 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 14 18:08:23.352712 kubelet[2399]: I0514 18:08:23.352690 2399 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 14 18:08:23.356218 kubelet[2399]: I0514 18:08:23.356150 2399 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 14 18:08:23.359278 kubelet[2399]: I0514 18:08:23.356174 2399 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 14 18:08:23.363234 kubelet[2399]: I0514 18:08:23.362609 2399 server.go:455] "Adding debug handlers to kubelet server" May 14 18:08:23.374535 kubelet[2399]: I0514 18:08:23.374498 2399 volume_manager.go:291] "Starting Kubelet Volume Manager" May 14 18:08:23.374751 kubelet[2399]: I0514 18:08:23.374731 2399 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 14 18:08:23.374839 kubelet[2399]: I0514 18:08:23.374824 2399 reconciler.go:26] "Reconciler: start to sync state" May 14 18:08:23.375572 kubelet[2399]: E0514 18:08:23.375534 2399 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 14 18:08:23.376753 kubelet[2399]: W0514 18:08:23.376668 2399 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://164.92.104.130:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 164.92.104.130:6443: connect: connection refused May 14 18:08:23.376892 kubelet[2399]: E0514 18:08:23.376767 2399 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://164.92.104.130:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 164.92.104.130:6443: connect: connection refused May 14 18:08:23.377536 kubelet[2399]: E0514 18:08:23.377466 2399 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://164.92.104.130:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4334.0.0-a-9d82e253c5?timeout=10s\": dial tcp 164.92.104.130:6443: connect: connection refused" interval="200ms" May 14 18:08:23.379948 kubelet[2399]: I0514 18:08:23.379843 2399 factory.go:221] Registration of the systemd container factory successfully May 14 18:08:23.380314 kubelet[2399]: I0514 18:08:23.380279 2399 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 14 18:08:23.384233 kubelet[2399]: I0514 18:08:23.383571 2399 factory.go:221] Registration of the containerd container factory successfully May 14 18:08:23.399964 kubelet[2399]: I0514 18:08:23.399886 2399 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 14 18:08:23.401968 kubelet[2399]: I0514 18:08:23.401849 2399 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 14 18:08:23.401968 kubelet[2399]: I0514 18:08:23.401919 2399 status_manager.go:217] "Starting to sync pod status with apiserver" May 14 18:08:23.401968 kubelet[2399]: I0514 18:08:23.401949 2399 kubelet.go:2337] "Starting kubelet main sync loop" May 14 18:08:23.402165 kubelet[2399]: E0514 18:08:23.402029 2399 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 14 18:08:23.410485 kubelet[2399]: W0514 18:08:23.410404 2399 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://164.92.104.130:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 164.92.104.130:6443: connect: connection refused May 14 18:08:23.410485 kubelet[2399]: E0514 18:08:23.410493 2399 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://164.92.104.130:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 164.92.104.130:6443: connect: connection refused May 14 18:08:23.415422 kubelet[2399]: I0514 18:08:23.415382 2399 cpu_manager.go:214] "Starting CPU manager" policy="none" May 14 18:08:23.415422 kubelet[2399]: I0514 18:08:23.415425 2399 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 14 18:08:23.415685 kubelet[2399]: I0514 18:08:23.415454 2399 state_mem.go:36] "Initialized new in-memory state store" May 14 18:08:23.418443 kubelet[2399]: I0514 18:08:23.418374 2399 policy_none.go:49] "None policy: Start" May 14 18:08:23.420550 kubelet[2399]: I0514 18:08:23.420519 2399 memory_manager.go:170] "Starting memorymanager" policy="None" May 14 18:08:23.420677 kubelet[2399]: I0514 18:08:23.420578 2399 state_mem.go:35] "Initializing new in-memory state store" May 14 18:08:23.429092 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 14 18:08:23.447715 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 14 18:08:23.453103 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
May 14 18:08:23.469898 kubelet[2399]: I0514 18:08:23.469843 2399 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 14 18:08:23.470397 kubelet[2399]: I0514 18:08:23.470286 2399 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 14 18:08:23.470888 kubelet[2399]: I0514 18:08:23.470547 2399 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 14 18:08:23.473975 kubelet[2399]: E0514 18:08:23.473935 2399 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4334.0.0-a-9d82e253c5\" not found" May 14 18:08:23.476256 kubelet[2399]: I0514 18:08:23.475860 2399 kubelet_node_status.go:73] "Attempting to register node" node="ci-4334.0.0-a-9d82e253c5" May 14 18:08:23.476965 kubelet[2399]: E0514 18:08:23.476853 2399 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://164.92.104.130:6443/api/v1/nodes\": dial tcp 164.92.104.130:6443: connect: connection refused" node="ci-4334.0.0-a-9d82e253c5" May 14 18:08:23.502864 kubelet[2399]: I0514 18:08:23.502784 2399 topology_manager.go:215] "Topology Admit Handler" podUID="34d425c257f9b9441fff4c4014f4ebb5" podNamespace="kube-system" podName="kube-apiserver-ci-4334.0.0-a-9d82e253c5" May 14 18:08:23.504766 kubelet[2399]: I0514 18:08:23.504193 2399 topology_manager.go:215] "Topology Admit Handler" podUID="bf82ef96dd0e26012a385951275e5ea5" podNamespace="kube-system" podName="kube-controller-manager-ci-4334.0.0-a-9d82e253c5" May 14 18:08:23.505508 kubelet[2399]: I0514 18:08:23.505468 2399 topology_manager.go:215] "Topology Admit Handler" podUID="6e8e11cbec95923b9387f88aef613db2" podNamespace="kube-system" podName="kube-scheduler-ci-4334.0.0-a-9d82e253c5" May 14 18:08:23.518331 systemd[1]: Created slice kubepods-burstable-pod34d425c257f9b9441fff4c4014f4ebb5.slice - libcontainer container kubepods-burstable-pod34d425c257f9b9441fff4c4014f4ebb5.slice. May 14 18:08:23.542269 systemd[1]: Created slice kubepods-burstable-podbf82ef96dd0e26012a385951275e5ea5.slice - libcontainer container kubepods-burstable-podbf82ef96dd0e26012a385951275e5ea5.slice. May 14 18:08:23.564488 systemd[1]: Created slice kubepods-burstable-pod6e8e11cbec95923b9387f88aef613db2.slice - libcontainer container kubepods-burstable-pod6e8e11cbec95923b9387f88aef613db2.slice. 
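The slice names systemd just created encode the kubelet's cgroup layout: each "-" in a systemd slice name marks a nesting level, so kubepods-burstable-pod34d4….slice sits under kubepods-burstable.slice under kubepods.slice, and any dashes inside a pod UID are escaped to underscores so they are not read as separators (visible further down in kubepods-besteffort-podb0a06e69_f365_443e_84a2_038756c45ef9.slice; the static-pod UIDs here are dashless config hashes, so no escaping shows). A sketch of that naming, assuming the usual kubelet layout in which guaranteed pods live directly under kubepods.slice:

```python
def pod_slice(qos: str, pod_uid: str) -> str:
    """Build the systemd slice name the kubelet uses for a pod cgroup.

    Dashes inside the UID are escaped to underscores because "-" separates
    nesting levels in systemd slice names.
    """
    escaped = pod_uid.replace("-", "_")
    prefix = "kubepods" if qos == "guaranteed" else f"kubepods-{qos}"
    return f"{prefix}-pod{escaped}.slice"

def parents(slice_name: str):
    """Yield the ancestor slices implied by the dashes in a slice name."""
    parts = slice_name.removesuffix(".slice").split("-")
    for i in range(1, len(parts)):
        yield "-".join(parts[:i]) + ".slice"

print(pod_slice("burstable", "34d425c257f9b9441fff4c4014f4ebb5"))
# kubepods-burstable-pod34d425c257f9b9441fff4c4014f4ebb5.slice
print(pod_slice("besteffort", "b0a06e69-f365-443e-84a2-038756c45ef9"))
# kubepods-besteffort-podb0a06e69_f365_443e_84a2_038756c45ef9.slice
print(list(parents("kubepods-burstable-pod34d425c257f9b9441fff4c4014f4ebb5.slice")))
# ['kubepods.slice', 'kubepods-burstable.slice']
```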
May 14 18:08:23.580709 kubelet[2399]: E0514 18:08:23.580629 2399 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://164.92.104.130:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4334.0.0-a-9d82e253c5?timeout=10s\": dial tcp 164.92.104.130:6443: connect: connection refused" interval="400ms" May 14 18:08:23.678146 kubelet[2399]: I0514 18:08:23.676149 2399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bf82ef96dd0e26012a385951275e5ea5-ca-certs\") pod \"kube-controller-manager-ci-4334.0.0-a-9d82e253c5\" (UID: \"bf82ef96dd0e26012a385951275e5ea5\") " pod="kube-system/kube-controller-manager-ci-4334.0.0-a-9d82e253c5" May 14 18:08:23.678146 kubelet[2399]: I0514 18:08:23.677914 2399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bf82ef96dd0e26012a385951275e5ea5-kubeconfig\") pod \"kube-controller-manager-ci-4334.0.0-a-9d82e253c5\" (UID: \"bf82ef96dd0e26012a385951275e5ea5\") " pod="kube-system/kube-controller-manager-ci-4334.0.0-a-9d82e253c5" May 14 18:08:23.678146 kubelet[2399]: I0514 18:08:23.677946 2399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bf82ef96dd0e26012a385951275e5ea5-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4334.0.0-a-9d82e253c5\" (UID: \"bf82ef96dd0e26012a385951275e5ea5\") " pod="kube-system/kube-controller-manager-ci-4334.0.0-a-9d82e253c5" May 14 18:08:23.678146 kubelet[2399]: I0514 18:08:23.677970 2399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6e8e11cbec95923b9387f88aef613db2-kubeconfig\") pod \"kube-scheduler-ci-4334.0.0-a-9d82e253c5\" (UID: \"6e8e11cbec95923b9387f88aef613db2\") " pod="kube-system/kube-scheduler-ci-4334.0.0-a-9d82e253c5" May 14 18:08:23.678146 kubelet[2399]: I0514 18:08:23.678001 2399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/34d425c257f9b9441fff4c4014f4ebb5-ca-certs\") pod \"kube-apiserver-ci-4334.0.0-a-9d82e253c5\" (UID: \"34d425c257f9b9441fff4c4014f4ebb5\") " pod="kube-system/kube-apiserver-ci-4334.0.0-a-9d82e253c5" May 14 18:08:23.678432 kubelet[2399]: I0514 18:08:23.678018 2399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/34d425c257f9b9441fff4c4014f4ebb5-k8s-certs\") pod \"kube-apiserver-ci-4334.0.0-a-9d82e253c5\" (UID: \"34d425c257f9b9441fff4c4014f4ebb5\") " pod="kube-system/kube-apiserver-ci-4334.0.0-a-9d82e253c5" May 14 18:08:23.678432 kubelet[2399]: I0514 18:08:23.678036 2399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bf82ef96dd0e26012a385951275e5ea5-k8s-certs\") pod \"kube-controller-manager-ci-4334.0.0-a-9d82e253c5\" (UID: \"bf82ef96dd0e26012a385951275e5ea5\") " pod="kube-system/kube-controller-manager-ci-4334.0.0-a-9d82e253c5" May 14 18:08:23.678432 kubelet[2399]: I0514 18:08:23.678052 2399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/34d425c257f9b9441fff4c4014f4ebb5-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4334.0.0-a-9d82e253c5\" (UID: \"34d425c257f9b9441fff4c4014f4ebb5\") " pod="kube-system/kube-apiserver-ci-4334.0.0-a-9d82e253c5" May 14 18:08:23.678432 kubelet[2399]: I0514 18:08:23.678084 2399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/bf82ef96dd0e26012a385951275e5ea5-flexvolume-dir\") pod \"kube-controller-manager-ci-4334.0.0-a-9d82e253c5\" (UID: \"bf82ef96dd0e26012a385951275e5ea5\") " pod="kube-system/kube-controller-manager-ci-4334.0.0-a-9d82e253c5" May 14 18:08:23.679359 kubelet[2399]: I0514 18:08:23.678867 2399 kubelet_node_status.go:73] "Attempting to register node" node="ci-4334.0.0-a-9d82e253c5" May 14 18:08:23.679718 kubelet[2399]: E0514 18:08:23.679676 2399 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://164.92.104.130:6443/api/v1/nodes\": dial tcp 164.92.104.130:6443: connect: connection refused" node="ci-4334.0.0-a-9d82e253c5" May 14 18:08:23.834528 kubelet[2399]: E0514 18:08:23.834304 2399 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 14 18:08:23.835584 containerd[1515]: time="2025-05-14T18:08:23.835536247Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4334.0.0-a-9d82e253c5,Uid:34d425c257f9b9441fff4c4014f4ebb5,Namespace:kube-system,Attempt:0,}" May 14 18:08:23.844647 systemd-resolved[1402]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.2. May 14 18:08:23.850058 kubelet[2399]: E0514 18:08:23.849988 2399 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 14 18:08:23.856436 containerd[1515]: time="2025-05-14T18:08:23.856333633Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4334.0.0-a-9d82e253c5,Uid:bf82ef96dd0e26012a385951275e5ea5,Namespace:kube-system,Attempt:0,}" May 14 18:08:23.869231 kubelet[2399]: E0514 18:08:23.869079 2399 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 14 18:08:23.870138 containerd[1515]: time="2025-05-14T18:08:23.870059677Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4334.0.0-a-9d82e253c5,Uid:6e8e11cbec95923b9387f88aef613db2,Namespace:kube-system,Attempt:0,}" May 14 18:08:23.981679 kubelet[2399]: E0514 18:08:23.981619 2399 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://164.92.104.130:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4334.0.0-a-9d82e253c5?timeout=10s\": dial tcp 164.92.104.130:6443: connect: connection refused" interval="800ms" May 14 18:08:24.082298 kubelet[2399]: I0514 18:08:24.082256 2399 kubelet_node_status.go:73] "Attempting to register node" node="ci-4334.0.0-a-9d82e253c5" May 14 18:08:24.082915 kubelet[2399]: E0514 18:08:24.082875 2399 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://164.92.104.130:6443/api/v1/nodes\": dial tcp 164.92.104.130:6443: connect: connection refused" node="ci-4334.0.0-a-9d82e253c5" May 14 18:08:24.275724 
kubelet[2399]: W0514 18:08:24.275524 2399 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://164.92.104.130:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 164.92.104.130:6443: connect: connection refused May 14 18:08:24.275724 kubelet[2399]: E0514 18:08:24.275617 2399 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://164.92.104.130:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 164.92.104.130:6443: connect: connection refused May 14 18:08:24.302450 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3913274343.mount: Deactivated successfully. May 14 18:08:24.309026 containerd[1515]: time="2025-05-14T18:08:24.308950928Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 18:08:24.317749 containerd[1515]: time="2025-05-14T18:08:24.317376421Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" May 14 18:08:24.317749 containerd[1515]: time="2025-05-14T18:08:24.317614578Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" May 14 18:08:24.317948 containerd[1515]: time="2025-05-14T18:08:24.317773260Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 18:08:24.318967 containerd[1515]: time="2025-05-14T18:08:24.318376294Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" May 14 18:08:24.318967 containerd[1515]: time="2025-05-14T18:08:24.318525517Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 18:08:24.319143 containerd[1515]: time="2025-05-14T18:08:24.319087043Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 18:08:24.321221 containerd[1515]: time="2025-05-14T18:08:24.321146413Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 18:08:24.322266 containerd[1515]: time="2025-05-14T18:08:24.322232455Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 440.104583ms" May 14 18:08:24.324504 containerd[1515]: time="2025-05-14T18:08:24.324463221Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 464.872706ms" May 14 18:08:24.326588 
containerd[1515]: time="2025-05-14T18:08:24.326483611Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 483.65199ms" May 14 18:08:24.360224 kubelet[2399]: W0514 18:08:24.359719 2399 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://164.92.104.130:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 164.92.104.130:6443: connect: connection refused May 14 18:08:24.360224 kubelet[2399]: E0514 18:08:24.359781 2399 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://164.92.104.130:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 164.92.104.130:6443: connect: connection refused May 14 18:08:24.373798 kubelet[2399]: W0514 18:08:24.372838 2399 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://164.92.104.130:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4334.0.0-a-9d82e253c5&limit=500&resourceVersion=0": dial tcp 164.92.104.130:6443: connect: connection refused May 14 18:08:24.373798 kubelet[2399]: E0514 18:08:24.372924 2399 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://164.92.104.130:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4334.0.0-a-9d82e253c5&limit=500&resourceVersion=0": dial tcp 164.92.104.130:6443: connect: connection refused May 14 18:08:24.454098 containerd[1515]: time="2025-05-14T18:08:24.454035600Z" level=info msg="connecting to shim 8a4c4ad9f0804b1da2f3c111555e7cc2e4ee7d8edb3048a5741803e994b55bbf" address="unix:///run/containerd/s/18ac173f033a19b5e8ff73850c2d483dafcdb1edf699eeb4d5d4813847ff417a" namespace=k8s.io protocol=ttrpc version=3 May 14 18:08:24.454946 containerd[1515]: time="2025-05-14T18:08:24.454893533Z" level=info msg="connecting to shim 6e60d0ec13b5b0505f726b99b96c2a94ba985112ae7c90985257afe0283fad7d" address="unix:///run/containerd/s/221eb924be2b8278cd8ea6a9e12332508e4771b2f061d3ac2e356d443e598c42" namespace=k8s.io protocol=ttrpc version=3 May 14 18:08:24.463805 containerd[1515]: time="2025-05-14T18:08:24.463390437Z" level=info msg="connecting to shim 2d0154ec526da94cf1d50bb435b5aaaacc976e87a16bc225b1510e9dca76a3df" address="unix:///run/containerd/s/2ae55811ba69cbd762f1bd2d25fc86a3ea4734d0615516b217c0a1191ed41d28" namespace=k8s.io protocol=ttrpc version=3 May 14 18:08:24.570543 systemd[1]: Started cri-containerd-8a4c4ad9f0804b1da2f3c111555e7cc2e4ee7d8edb3048a5741803e994b55bbf.scope - libcontainer container 8a4c4ad9f0804b1da2f3c111555e7cc2e4ee7d8edb3048a5741803e994b55bbf. May 14 18:08:24.581755 systemd[1]: Started cri-containerd-2d0154ec526da94cf1d50bb435b5aaaacc976e87a16bc225b1510e9dca76a3df.scope - libcontainer container 2d0154ec526da94cf1d50bb435b5aaaacc976e87a16bc225b1510e9dca76a3df. May 14 18:08:24.585728 systemd[1]: Started cri-containerd-6e60d0ec13b5b0505f726b99b96c2a94ba985112ae7c90985257afe0283fad7d.scope - libcontainer container 6e60d0ec13b5b0505f726b99b96c2a94ba985112ae7c90985257afe0283fad7d. 
May 14 18:08:24.689784 containerd[1515]: time="2025-05-14T18:08:24.689681709Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4334.0.0-a-9d82e253c5,Uid:6e8e11cbec95923b9387f88aef613db2,Namespace:kube-system,Attempt:0,} returns sandbox id \"8a4c4ad9f0804b1da2f3c111555e7cc2e4ee7d8edb3048a5741803e994b55bbf\"" May 14 18:08:24.694637 kubelet[2399]: E0514 18:08:24.694595 2399 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 14 18:08:24.700375 containerd[1515]: time="2025-05-14T18:08:24.698586234Z" level=info msg="CreateContainer within sandbox \"8a4c4ad9f0804b1da2f3c111555e7cc2e4ee7d8edb3048a5741803e994b55bbf\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 14 18:08:24.724060 containerd[1515]: time="2025-05-14T18:08:24.724006391Z" level=info msg="Container 773782a55fe7ebc1df49634884957af6e31454028850d4cc1841b5927ad30330: CDI devices from CRI Config.CDIDevices: []" May 14 18:08:24.739391 containerd[1515]: time="2025-05-14T18:08:24.739335922Z" level=info msg="CreateContainer within sandbox \"8a4c4ad9f0804b1da2f3c111555e7cc2e4ee7d8edb3048a5741803e994b55bbf\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"773782a55fe7ebc1df49634884957af6e31454028850d4cc1841b5927ad30330\"" May 14 18:08:24.741466 containerd[1515]: time="2025-05-14T18:08:24.741422252Z" level=info msg="StartContainer for \"773782a55fe7ebc1df49634884957af6e31454028850d4cc1841b5927ad30330\"" May 14 18:08:24.743328 containerd[1515]: time="2025-05-14T18:08:24.742262233Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4334.0.0-a-9d82e253c5,Uid:34d425c257f9b9441fff4c4014f4ebb5,Namespace:kube-system,Attempt:0,} returns sandbox id \"2d0154ec526da94cf1d50bb435b5aaaacc976e87a16bc225b1510e9dca76a3df\"" May 14 18:08:24.744391 containerd[1515]: time="2025-05-14T18:08:24.744259199Z" level=info msg="connecting to shim 773782a55fe7ebc1df49634884957af6e31454028850d4cc1841b5927ad30330" address="unix:///run/containerd/s/18ac173f033a19b5e8ff73850c2d483dafcdb1edf699eeb4d5d4813847ff417a" protocol=ttrpc version=3 May 14 18:08:24.745845 kubelet[2399]: E0514 18:08:24.745811 2399 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 14 18:08:24.752971 containerd[1515]: time="2025-05-14T18:08:24.752533816Z" level=info msg="CreateContainer within sandbox \"2d0154ec526da94cf1d50bb435b5aaaacc976e87a16bc225b1510e9dca76a3df\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 14 18:08:24.758578 containerd[1515]: time="2025-05-14T18:08:24.758500512Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4334.0.0-a-9d82e253c5,Uid:bf82ef96dd0e26012a385951275e5ea5,Namespace:kube-system,Attempt:0,} returns sandbox id \"6e60d0ec13b5b0505f726b99b96c2a94ba985112ae7c90985257afe0283fad7d\"" May 14 18:08:24.761405 kubelet[2399]: E0514 18:08:24.761364 2399 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 14 18:08:24.766242 containerd[1515]: time="2025-05-14T18:08:24.765673901Z" level=info msg="Container 673f2e4e790311034e1f41a61e79a1c83e275449925509f93cdda1debaa1ef62: CDI devices from CRI Config.CDIDevices: []" 
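The recurring dns.go:153 error means the resolv.conf the kubelet propagates to pods holds more entries than the classic three-nameserver resolver limit, so it truncates; note that the applied line it prints (67.207.67.2 67.207.67.3 67.207.67.2) still contains a duplicate, so truncation evidently happens without deduplication. A sketch of that clamping under those assumptions; the four-entry host list below is hypothetical, invented only to reproduce the logged warning:

```python
MAX_NAMESERVERS = 3  # the classic resolv.conf limit the kubelet enforces

def clamp_nameservers(servers):
    """Mimic the truncation behind the 'Nameserver limits exceeded' warning.

    The kubelet evidently does not deduplicate first: the applied line in
    the log above still lists 67.207.67.2 twice.
    """
    return servers[:MAX_NAMESERVERS], servers[MAX_NAMESERVERS:]

# Hypothetical host resolv.conf that would produce the logged warning.
host = ["67.207.67.2", "67.207.67.3", "67.207.67.2", "67.207.67.1"]
applied, omitted = clamp_nameservers(host)
print("applied:", " ".join(applied))   # 67.207.67.2 67.207.67.3 67.207.67.2
print("omitted:", omitted)
```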
May 14 18:08:24.767221 containerd[1515]: time="2025-05-14T18:08:24.767161952Z" level=info msg="CreateContainer within sandbox \"6e60d0ec13b5b0505f726b99b96c2a94ba985112ae7c90985257afe0283fad7d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 14 18:08:24.777750 containerd[1515]: time="2025-05-14T18:08:24.777686430Z" level=info msg="CreateContainer within sandbox \"2d0154ec526da94cf1d50bb435b5aaaacc976e87a16bc225b1510e9dca76a3df\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"673f2e4e790311034e1f41a61e79a1c83e275449925509f93cdda1debaa1ef62\"" May 14 18:08:24.779108 containerd[1515]: time="2025-05-14T18:08:24.779061855Z" level=info msg="StartContainer for \"673f2e4e790311034e1f41a61e79a1c83e275449925509f93cdda1debaa1ef62\"" May 14 18:08:24.781042 containerd[1515]: time="2025-05-14T18:08:24.780990608Z" level=info msg="Container 0acfd7a9a95247e32cda26fb2689a686a2d4680492b4726b128334e8f61cac38: CDI devices from CRI Config.CDIDevices: []" May 14 18:08:24.781664 containerd[1515]: time="2025-05-14T18:08:24.781625073Z" level=info msg="connecting to shim 673f2e4e790311034e1f41a61e79a1c83e275449925509f93cdda1debaa1ef62" address="unix:///run/containerd/s/2ae55811ba69cbd762f1bd2d25fc86a3ea4734d0615516b217c0a1191ed41d28" protocol=ttrpc version=3 May 14 18:08:24.783603 kubelet[2399]: E0514 18:08:24.783546 2399 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://164.92.104.130:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4334.0.0-a-9d82e253c5?timeout=10s\": dial tcp 164.92.104.130:6443: connect: connection refused" interval="1.6s" May 14 18:08:24.799517 systemd[1]: Started cri-containerd-773782a55fe7ebc1df49634884957af6e31454028850d4cc1841b5927ad30330.scope - libcontainer container 773782a55fe7ebc1df49634884957af6e31454028850d4cc1841b5927ad30330. 
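The lease-controller errors trace out a clean client-side backoff: "Failed to ensure lease exists, will retry" with interval="200ms" at 18:08:23.377, "400ms" at 18:08:23.580, "800ms" at 18:08:23.981, and "1.6s" at 18:08:24.783, doubling each time. A sketch of that schedule; the doubling is read off the log, but the 7s ceiling is an assumption, not something these lines show:

```python
def lease_backoff(base=0.2, factor=2.0, cap=7.0):
    """Yield the retry intervals observed in the lease-controller errors:
    0.2s -> 0.4s -> 0.8s -> 1.6s -> ... (the cap is assumed, not logged)."""
    interval = base
    while True:
        yield min(interval, cap)
        interval *= factor

gen = lease_backoff()
print([next(gen) for _ in range(5)])   # [0.2, 0.4, 0.8, 1.6, 3.2]
```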
May 14 18:08:24.803056 containerd[1515]: time="2025-05-14T18:08:24.803000313Z" level=info msg="CreateContainer within sandbox \"6e60d0ec13b5b0505f726b99b96c2a94ba985112ae7c90985257afe0283fad7d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"0acfd7a9a95247e32cda26fb2689a686a2d4680492b4726b128334e8f61cac38\"" May 14 18:08:24.806138 containerd[1515]: time="2025-05-14T18:08:24.806082819Z" level=info msg="StartContainer for \"0acfd7a9a95247e32cda26fb2689a686a2d4680492b4726b128334e8f61cac38\"" May 14 18:08:24.817536 containerd[1515]: time="2025-05-14T18:08:24.817351624Z" level=info msg="connecting to shim 0acfd7a9a95247e32cda26fb2689a686a2d4680492b4726b128334e8f61cac38" address="unix:///run/containerd/s/221eb924be2b8278cd8ea6a9e12332508e4771b2f061d3ac2e356d443e598c42" protocol=ttrpc version=3 May 14 18:08:24.851051 kubelet[2399]: W0514 18:08:24.850759 2399 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://164.92.104.130:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 164.92.104.130:6443: connect: connection refused May 14 18:08:24.852276 kubelet[2399]: E0514 18:08:24.852055 2399 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://164.92.104.130:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 164.92.104.130:6443: connect: connection refused May 14 18:08:24.866522 systemd[1]: Started cri-containerd-673f2e4e790311034e1f41a61e79a1c83e275449925509f93cdda1debaa1ef62.scope - libcontainer container 673f2e4e790311034e1f41a61e79a1c83e275449925509f93cdda1debaa1ef62. May 14 18:08:24.877624 systemd[1]: Started cri-containerd-0acfd7a9a95247e32cda26fb2689a686a2d4680492b4726b128334e8f61cac38.scope - libcontainer container 0acfd7a9a95247e32cda26fb2689a686a2d4680492b4726b128334e8f61cac38. 
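Worth noticing in the "connecting to shim" records: a pod's containers reuse the sandbox's shim socket. The kube-scheduler container 773782a5… dials the same unix:///run/containerd/s/18ac173f… address as its sandbox 8a4c4ad9…, 673f2e4e… shares 2ae55811… with sandbox 2d0154ec…, and 0acfd7a9… shares 221eb924… with 6e60d0ec…. A sketch that groups those records by socket to make the one-shim-per-pod pairing visible (journal.txt again a hypothetical capture of this log):

```python
import re
from collections import defaultdict

# Matches: msg="connecting to shim <id>" address="unix:///run/containerd/s/<hex>"
shim_re = re.compile(r'connecting to shim (\w+)" address="(unix://\S+?)"')

by_socket = defaultdict(list)
with open("journal.txt") as f:           # hypothetical capture of this journal
    for line in f:
        for cid, addr in shim_re.findall(line):
            by_socket[addr].append(cid[:12])

for addr, ids in by_socket.items():
    print(addr, "->", ids)   # one shim socket per pod: sandbox plus its containers
```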
May 14 18:08:24.888383 kubelet[2399]: I0514 18:08:24.888337 2399 kubelet_node_status.go:73] "Attempting to register node" node="ci-4334.0.0-a-9d82e253c5" May 14 18:08:24.889383 kubelet[2399]: E0514 18:08:24.889344 2399 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://164.92.104.130:6443/api/v1/nodes\": dial tcp 164.92.104.130:6443: connect: connection refused" node="ci-4334.0.0-a-9d82e253c5" May 14 18:08:24.936653 containerd[1515]: time="2025-05-14T18:08:24.936602724Z" level=info msg="StartContainer for \"773782a55fe7ebc1df49634884957af6e31454028850d4cc1841b5927ad30330\" returns successfully" May 14 18:08:25.022151 containerd[1515]: time="2025-05-14T18:08:25.022046644Z" level=info msg="StartContainer for \"673f2e4e790311034e1f41a61e79a1c83e275449925509f93cdda1debaa1ef62\" returns successfully" May 14 18:08:25.044311 containerd[1515]: time="2025-05-14T18:08:25.044260281Z" level=info msg="StartContainer for \"0acfd7a9a95247e32cda26fb2689a686a2d4680492b4726b128334e8f61cac38\" returns successfully" May 14 18:08:25.423966 kubelet[2399]: E0514 18:08:25.422950 2399 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 14 18:08:25.431018 kubelet[2399]: E0514 18:08:25.430961 2399 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 14 18:08:25.436116 kubelet[2399]: E0514 18:08:25.436076 2399 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 14 18:08:26.440839 kubelet[2399]: E0514 18:08:26.440748 2399 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 14 18:08:26.492019 kubelet[2399]: I0514 18:08:26.491982 2399 kubelet_node_status.go:73] "Attempting to register node" node="ci-4334.0.0-a-9d82e253c5" May 14 18:08:27.042572 kubelet[2399]: E0514 18:08:27.042526 2399 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4334.0.0-a-9d82e253c5\" not found" node="ci-4334.0.0-a-9d82e253c5" May 14 18:08:27.109474 kubelet[2399]: I0514 18:08:27.109370 2399 kubelet_node_status.go:76] "Successfully registered node" node="ci-4334.0.0-a-9d82e253c5" May 14 18:08:27.336979 kubelet[2399]: I0514 18:08:27.336358 2399 apiserver.go:52] "Watching apiserver" May 14 18:08:27.375329 kubelet[2399]: I0514 18:08:27.375226 2399 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 14 18:08:29.307438 systemd[1]: Reload requested from client PID 2675 ('systemctl') (unit session-7.scope)... May 14 18:08:29.307461 systemd[1]: Reloading... May 14 18:08:29.435268 zram_generator::config[2718]: No configuration found. May 14 18:08:29.604810 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 14 18:08:29.901710 systemd[1]: Reloading finished in 593 ms. 
May 14 18:08:29.957362 kubelet[2399]: E0514 18:08:29.957042 2399 event.go:319] "Unable to write event (broadcaster is shut down)" event="&Event{ObjectMeta:{ci-4334.0.0-a-9d82e253c5.183f7712dcbcc253 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4334.0.0-a-9d82e253c5,UID:ci-4334.0.0-a-9d82e253c5,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4334.0.0-a-9d82e253c5,},FirstTimestamp:2025-05-14 18:08:23.339516499 +0000 UTC m=+0.468682451,LastTimestamp:2025-05-14 18:08:23.339516499 +0000 UTC m=+0.468682451,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4334.0.0-a-9d82e253c5,}" May 14 18:08:29.957902 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 14 18:08:29.970121 systemd[1]: kubelet.service: Deactivated successfully. May 14 18:08:29.970528 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 14 18:08:29.970608 systemd[1]: kubelet.service: Consumed 960ms CPU time, 109.9M memory peak. May 14 18:08:29.974913 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 18:08:30.152873 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 18:08:30.165745 (kubelet)[2769]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 14 18:08:30.246194 kubelet[2769]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 14 18:08:30.246194 kubelet[2769]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 14 18:08:30.246194 kubelet[2769]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 14 18:08:30.246674 kubelet[2769]: I0514 18:08:30.246387 2769 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 14 18:08:30.253676 kubelet[2769]: I0514 18:08:30.253635 2769 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 14 18:08:30.253918 kubelet[2769]: I0514 18:08:30.253863 2769 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 14 18:08:30.254230 kubelet[2769]: I0514 18:08:30.254194 2769 server.go:927] "Client rotation is on, will bootstrap in background" May 14 18:08:30.257045 kubelet[2769]: I0514 18:08:30.256988 2769 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 14 18:08:30.260907 kubelet[2769]: I0514 18:08:30.260471 2769 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 14 18:08:30.269298 kubelet[2769]: I0514 18:08:30.269252 2769 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 14 18:08:30.269777 kubelet[2769]: I0514 18:08:30.269739 2769 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 14 18:08:30.270024 kubelet[2769]: I0514 18:08:30.269851 2769 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4334.0.0-a-9d82e253c5","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 14 18:08:30.270167 kubelet[2769]: I0514 18:08:30.270156 2769 topology_manager.go:138] "Creating topology manager with none policy" May 14 18:08:30.270238 kubelet[2769]: I0514 18:08:30.270230 2769 container_manager_linux.go:301] "Creating device plugin manager" May 14 18:08:30.270333 kubelet[2769]: I0514 18:08:30.270325 2769 state_mem.go:36] "Initialized new in-memory state store" May 14 18:08:30.270503 kubelet[2769]: I0514 18:08:30.270493 2769 kubelet.go:400] "Attempting to sync node with API server" May 14 18:08:30.270571 kubelet[2769]: I0514 18:08:30.270562 2769 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 14 18:08:30.270629 kubelet[2769]: I0514 18:08:30.270623 2769 kubelet.go:312] "Adding apiserver pod source" May 14 18:08:30.270745 kubelet[2769]: I0514 18:08:30.270736 2769 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 14 18:08:30.275862 kubelet[2769]: I0514 18:08:30.273813 2769 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 14 18:08:30.275862 kubelet[2769]: I0514 18:08:30.274142 2769 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 14 18:08:30.275862 kubelet[2769]: I0514 18:08:30.274780 2769 server.go:1264] "Started kubelet" May 14 18:08:30.282883 kubelet[2769]: I0514 18:08:30.280982 2769 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 14 18:08:30.283543 kubelet[2769]: I0514 18:08:30.283074 2769 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 14 18:08:30.285570 kubelet[2769]: I0514 18:08:30.285525 2769 server.go:455] "Adding debug 
handlers to kubelet server" May 14 18:08:30.291724 kubelet[2769]: I0514 18:08:30.283114 2769 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 14 18:08:30.291724 kubelet[2769]: I0514 18:08:30.289838 2769 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 14 18:08:30.302302 kubelet[2769]: I0514 18:08:30.300451 2769 volume_manager.go:291] "Starting Kubelet Volume Manager" May 14 18:08:30.302302 kubelet[2769]: I0514 18:08:30.300981 2769 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 14 18:08:30.302302 kubelet[2769]: I0514 18:08:30.301160 2769 reconciler.go:26] "Reconciler: start to sync state" May 14 18:08:30.314817 kubelet[2769]: I0514 18:08:30.314780 2769 factory.go:221] Registration of the systemd container factory successfully May 14 18:08:30.314972 kubelet[2769]: I0514 18:08:30.314940 2769 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 14 18:08:30.320866 kubelet[2769]: E0514 18:08:30.319817 2769 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 14 18:08:30.326128 kubelet[2769]: I0514 18:08:30.326086 2769 factory.go:221] Registration of the containerd container factory successfully May 14 18:08:30.338851 kubelet[2769]: I0514 18:08:30.337425 2769 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 14 18:08:30.342455 kubelet[2769]: I0514 18:08:30.342407 2769 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 14 18:08:30.342455 kubelet[2769]: I0514 18:08:30.342457 2769 status_manager.go:217] "Starting to sync pod status with apiserver" May 14 18:08:30.342669 kubelet[2769]: I0514 18:08:30.342482 2769 kubelet.go:2337] "Starting kubelet main sync loop" May 14 18:08:30.342669 kubelet[2769]: E0514 18:08:30.342537 2769 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 14 18:08:30.354222 sudo[2789]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 14 18:08:30.354834 sudo[2789]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) May 14 18:08:30.402812 kubelet[2769]: I0514 18:08:30.402731 2769 kubelet_node_status.go:73] "Attempting to register node" node="ci-4334.0.0-a-9d82e253c5" May 14 18:08:30.421334 kubelet[2769]: I0514 18:08:30.419959 2769 kubelet_node_status.go:112] "Node was previously registered" node="ci-4334.0.0-a-9d82e253c5" May 14 18:08:30.421698 kubelet[2769]: I0514 18:08:30.421659 2769 kubelet_node_status.go:76] "Successfully registered node" node="ci-4334.0.0-a-9d82e253c5" May 14 18:08:30.442755 kubelet[2769]: E0514 18:08:30.442617 2769 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 14 18:08:30.450257 kubelet[2769]: I0514 18:08:30.449469 2769 cpu_manager.go:214] "Starting CPU manager" policy="none" May 14 18:08:30.450257 kubelet[2769]: I0514 18:08:30.449495 2769 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 14 18:08:30.450257 kubelet[2769]: I0514 18:08:30.449525 2769 state_mem.go:36] "Initialized new in-memory state store" May 14 18:08:30.450257 
kubelet[2769]: I0514 18:08:30.449733 2769 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 14 18:08:30.450257 kubelet[2769]: I0514 18:08:30.449747 2769 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 14 18:08:30.450257 kubelet[2769]: I0514 18:08:30.449774 2769 policy_none.go:49] "None policy: Start" May 14 18:08:30.453070 kubelet[2769]: I0514 18:08:30.453038 2769 memory_manager.go:170] "Starting memorymanager" policy="None" May 14 18:08:30.454184 kubelet[2769]: I0514 18:08:30.453352 2769 state_mem.go:35] "Initializing new in-memory state store" May 14 18:08:30.454184 kubelet[2769]: I0514 18:08:30.453841 2769 state_mem.go:75] "Updated machine memory state" May 14 18:08:30.468156 kubelet[2769]: I0514 18:08:30.468118 2769 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 14 18:08:30.468527 kubelet[2769]: I0514 18:08:30.468395 2769 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 14 18:08:30.470129 kubelet[2769]: I0514 18:08:30.469721 2769 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 14 18:08:30.643162 kubelet[2769]: I0514 18:08:30.643076 2769 topology_manager.go:215] "Topology Admit Handler" podUID="34d425c257f9b9441fff4c4014f4ebb5" podNamespace="kube-system" podName="kube-apiserver-ci-4334.0.0-a-9d82e253c5" May 14 18:08:30.644120 kubelet[2769]: I0514 18:08:30.643617 2769 topology_manager.go:215] "Topology Admit Handler" podUID="bf82ef96dd0e26012a385951275e5ea5" podNamespace="kube-system" podName="kube-controller-manager-ci-4334.0.0-a-9d82e253c5" May 14 18:08:30.644120 kubelet[2769]: I0514 18:08:30.644010 2769 topology_manager.go:215] "Topology Admit Handler" podUID="6e8e11cbec95923b9387f88aef613db2" podNamespace="kube-system" podName="kube-scheduler-ci-4334.0.0-a-9d82e253c5" May 14 18:08:30.670691 kubelet[2769]: W0514 18:08:30.670486 2769 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 14 18:08:30.671633 kubelet[2769]: W0514 18:08:30.671490 2769 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 14 18:08:30.671774 kubelet[2769]: W0514 18:08:30.671649 2769 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 14 18:08:30.706415 kubelet[2769]: I0514 18:08:30.706080 2769 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/34d425c257f9b9441fff4c4014f4ebb5-k8s-certs\") pod \"kube-apiserver-ci-4334.0.0-a-9d82e253c5\" (UID: \"34d425c257f9b9441fff4c4014f4ebb5\") " pod="kube-system/kube-apiserver-ci-4334.0.0-a-9d82e253c5" May 14 18:08:30.707314 kubelet[2769]: I0514 18:08:30.707059 2769 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bf82ef96dd0e26012a385951275e5ea5-ca-certs\") pod \"kube-controller-manager-ci-4334.0.0-a-9d82e253c5\" (UID: \"bf82ef96dd0e26012a385951275e5ea5\") " pod="kube-system/kube-controller-manager-ci-4334.0.0-a-9d82e253c5" May 14 18:08:30.707314 kubelet[2769]: I0514 18:08:30.707158 2769 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/bf82ef96dd0e26012a385951275e5ea5-flexvolume-dir\") pod \"kube-controller-manager-ci-4334.0.0-a-9d82e253c5\" (UID: \"bf82ef96dd0e26012a385951275e5ea5\") " pod="kube-system/kube-controller-manager-ci-4334.0.0-a-9d82e253c5" May 14 18:08:30.708122 kubelet[2769]: I0514 18:08:30.708003 2769 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bf82ef96dd0e26012a385951275e5ea5-k8s-certs\") pod \"kube-controller-manager-ci-4334.0.0-a-9d82e253c5\" (UID: \"bf82ef96dd0e26012a385951275e5ea5\") " pod="kube-system/kube-controller-manager-ci-4334.0.0-a-9d82e253c5" May 14 18:08:30.708122 kubelet[2769]: I0514 18:08:30.708075 2769 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bf82ef96dd0e26012a385951275e5ea5-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4334.0.0-a-9d82e253c5\" (UID: \"bf82ef96dd0e26012a385951275e5ea5\") " pod="kube-system/kube-controller-manager-ci-4334.0.0-a-9d82e253c5" May 14 18:08:30.708791 kubelet[2769]: I0514 18:08:30.708244 2769 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/34d425c257f9b9441fff4c4014f4ebb5-ca-certs\") pod \"kube-apiserver-ci-4334.0.0-a-9d82e253c5\" (UID: \"34d425c257f9b9441fff4c4014f4ebb5\") " pod="kube-system/kube-apiserver-ci-4334.0.0-a-9d82e253c5" May 14 18:08:30.708791 kubelet[2769]: I0514 18:08:30.708305 2769 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bf82ef96dd0e26012a385951275e5ea5-kubeconfig\") pod \"kube-controller-manager-ci-4334.0.0-a-9d82e253c5\" (UID: \"bf82ef96dd0e26012a385951275e5ea5\") " pod="kube-system/kube-controller-manager-ci-4334.0.0-a-9d82e253c5" May 14 18:08:30.708791 kubelet[2769]: I0514 18:08:30.708349 2769 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6e8e11cbec95923b9387f88aef613db2-kubeconfig\") pod \"kube-scheduler-ci-4334.0.0-a-9d82e253c5\" (UID: \"6e8e11cbec95923b9387f88aef613db2\") " pod="kube-system/kube-scheduler-ci-4334.0.0-a-9d82e253c5" May 14 18:08:30.708791 kubelet[2769]: I0514 18:08:30.708394 2769 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/34d425c257f9b9441fff4c4014f4ebb5-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4334.0.0-a-9d82e253c5\" (UID: \"34d425c257f9b9441fff4c4014f4ebb5\") " pod="kube-system/kube-apiserver-ci-4334.0.0-a-9d82e253c5" May 14 18:08:30.976079 kubelet[2769]: E0514 18:08:30.975556 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 14 18:08:30.977413 kubelet[2769]: E0514 18:08:30.977356 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 14 18:08:30.977854 kubelet[2769]: E0514 18:08:30.977817 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 14 18:08:31.189297 sudo[2789]: pam_unix(sudo:session): session closed for user root May 14 18:08:31.279548 kubelet[2769]: I0514 18:08:31.279491 2769 apiserver.go:52] "Watching apiserver" May 14 18:08:31.302110 kubelet[2769]: I0514 18:08:31.301993 2769 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 14 18:08:31.396170 kubelet[2769]: E0514 18:08:31.395491 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 14 18:08:31.406844 kubelet[2769]: W0514 18:08:31.406808 2769 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 14 18:08:31.407962 kubelet[2769]: E0514 18:08:31.407242 2769 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4334.0.0-a-9d82e253c5\" already exists" pod="kube-system/kube-apiserver-ci-4334.0.0-a-9d82e253c5" May 14 18:08:31.408886 kubelet[2769]: E0514 18:08:31.408781 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 14 18:08:31.411284 kubelet[2769]: W0514 18:08:31.410720 2769 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 14 18:08:31.411284 kubelet[2769]: E0514 18:08:31.410798 2769 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4334.0.0-a-9d82e253c5\" already exists" pod="kube-system/kube-controller-manager-ci-4334.0.0-a-9d82e253c5" May 14 18:08:31.415473 kubelet[2769]: E0514 18:08:31.414840 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 14 18:08:31.464533 kubelet[2769]: I0514 18:08:31.464111 2769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4334.0.0-a-9d82e253c5" podStartSLOduration=1.4640730770000001 podStartE2EDuration="1.464073077s" podCreationTimestamp="2025-05-14 18:08:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 18:08:31.45145456 +0000 UTC m=+1.273405099" watchObservedRunningTime="2025-05-14 18:08:31.464073077 +0000 UTC m=+1.286023619" May 14 18:08:31.477739 kubelet[2769]: I0514 18:08:31.477655 2769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4334.0.0-a-9d82e253c5" podStartSLOduration=1.4776251839999999 podStartE2EDuration="1.477625184s" podCreationTimestamp="2025-05-14 18:08:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 18:08:31.476751235 +0000 UTC m=+1.298701773" watchObservedRunningTime="2025-05-14 18:08:31.477625184 +0000 UTC m=+1.299575725" May 14 18:08:31.477969 kubelet[2769]: I0514 18:08:31.477790 2769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4334.0.0-a-9d82e253c5" podStartSLOduration=1.477780454 podStartE2EDuration="1.477780454s" 
podCreationTimestamp="2025-05-14 18:08:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 18:08:31.465526978 +0000 UTC m=+1.287477711" watchObservedRunningTime="2025-05-14 18:08:31.477780454 +0000 UTC m=+1.299730995" May 14 18:08:32.395704 kubelet[2769]: E0514 18:08:32.395362 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 14 18:08:32.395704 kubelet[2769]: E0514 18:08:32.395372 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 14 18:08:33.215087 sudo[1759]: pam_unix(sudo:session): session closed for user root May 14 18:08:33.218144 sshd[1758]: Connection closed by 139.178.89.65 port 46762 May 14 18:08:33.219255 sshd-session[1756]: pam_unix(sshd:session): session closed for user core May 14 18:08:33.224582 systemd-logind[1497]: Session 7 logged out. Waiting for processes to exit. May 14 18:08:33.225407 systemd[1]: sshd@6-164.92.104.130:22-139.178.89.65:46762.service: Deactivated successfully. May 14 18:08:33.230039 systemd[1]: session-7.scope: Deactivated successfully. May 14 18:08:33.230780 systemd[1]: session-7.scope: Consumed 7.356s CPU time, 242.2M memory peak. May 14 18:08:33.236341 systemd-logind[1497]: Removed session 7. May 14 18:08:33.397743 kubelet[2769]: E0514 18:08:33.397625 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 14 18:08:35.592343 systemd-timesyncd[1417]: Contacted time server 129.146.193.200:123 (2.flatcar.pool.ntp.org). May 14 18:08:35.592423 systemd-timesyncd[1417]: Initial clock synchronization to Wed 2025-05-14 18:08:35.765807 UTC. May 14 18:08:38.277050 kubelet[2769]: E0514 18:08:38.277002 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 14 18:08:38.411264 kubelet[2769]: E0514 18:08:38.410320 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 14 18:08:41.216398 kubelet[2769]: E0514 18:08:41.216280 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 14 18:08:41.804348 kubelet[2769]: E0514 18:08:41.804267 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 14 18:08:43.167825 update_engine[1498]: I20250514 18:08:43.167644 1498 update_attempter.cc:509] Updating boot flags... May 14 18:08:43.895755 kubelet[2769]: I0514 18:08:43.895444 2769 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 14 18:08:43.897470 containerd[1515]: time="2025-05-14T18:08:43.897081563Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
May 14 18:08:43.899323 kubelet[2769]: I0514 18:08:43.899289 2769 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 14 18:08:44.480113 kubelet[2769]: I0514 18:08:44.479895 2769 topology_manager.go:215] "Topology Admit Handler" podUID="b0a06e69-f365-443e-84a2-038756c45ef9" podNamespace="kube-system" podName="kube-proxy-lg268" May 14 18:08:44.489442 kubelet[2769]: I0514 18:08:44.488092 2769 topology_manager.go:215] "Topology Admit Handler" podUID="12d1ce81-a93d-4fac-a2e1-e28db1aee075" podNamespace="kube-system" podName="cilium-8h2lp" May 14 18:08:44.499227 systemd[1]: Created slice kubepods-besteffort-podb0a06e69_f365_443e_84a2_038756c45ef9.slice - libcontainer container kubepods-besteffort-podb0a06e69_f365_443e_84a2_038756c45ef9.slice. May 14 18:08:44.505366 kubelet[2769]: I0514 18:08:44.505330 2769 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/12d1ce81-a93d-4fac-a2e1-e28db1aee075-clustermesh-secrets\") pod \"cilium-8h2lp\" (UID: \"12d1ce81-a93d-4fac-a2e1-e28db1aee075\") " pod="kube-system/cilium-8h2lp" May 14 18:08:44.505561 kubelet[2769]: I0514 18:08:44.505365 2769 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/12d1ce81-a93d-4fac-a2e1-e28db1aee075-host-proc-sys-net\") pod \"cilium-8h2lp\" (UID: \"12d1ce81-a93d-4fac-a2e1-e28db1aee075\") " pod="kube-system/cilium-8h2lp" May 14 18:08:44.505598 kubelet[2769]: I0514 18:08:44.505564 2769 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/12d1ce81-a93d-4fac-a2e1-e28db1aee075-cilium-run\") pod \"cilium-8h2lp\" (UID: \"12d1ce81-a93d-4fac-a2e1-e28db1aee075\") " pod="kube-system/cilium-8h2lp" May 14 18:08:44.505783 kubelet[2769]: I0514 18:08:44.505597 2769 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfqbm\" (UniqueName: \"kubernetes.io/projected/b0a06e69-f365-443e-84a2-038756c45ef9-kube-api-access-bfqbm\") pod \"kube-proxy-lg268\" (UID: \"b0a06e69-f365-443e-84a2-038756c45ef9\") " pod="kube-system/kube-proxy-lg268" May 14 18:08:44.505783 kubelet[2769]: I0514 18:08:44.505619 2769 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b0a06e69-f365-443e-84a2-038756c45ef9-xtables-lock\") pod \"kube-proxy-lg268\" (UID: \"b0a06e69-f365-443e-84a2-038756c45ef9\") " pod="kube-system/kube-proxy-lg268" May 14 18:08:44.505783 kubelet[2769]: I0514 18:08:44.505635 2769 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/12d1ce81-a93d-4fac-a2e1-e28db1aee075-bpf-maps\") pod \"cilium-8h2lp\" (UID: \"12d1ce81-a93d-4fac-a2e1-e28db1aee075\") " pod="kube-system/cilium-8h2lp" May 14 18:08:44.505783 kubelet[2769]: I0514 18:08:44.505654 2769 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/12d1ce81-a93d-4fac-a2e1-e28db1aee075-hostproc\") pod \"cilium-8h2lp\" (UID: \"12d1ce81-a93d-4fac-a2e1-e28db1aee075\") " pod="kube-system/cilium-8h2lp" May 14 18:08:44.505783 kubelet[2769]: I0514 18:08:44.505680 2769 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/12d1ce81-a93d-4fac-a2e1-e28db1aee075-cilium-cgroup\") pod \"cilium-8h2lp\" (UID: \"12d1ce81-a93d-4fac-a2e1-e28db1aee075\") " pod="kube-system/cilium-8h2lp" May 14 18:08:44.505783 kubelet[2769]: I0514 18:08:44.505696 2769 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/12d1ce81-a93d-4fac-a2e1-e28db1aee075-xtables-lock\") pod \"cilium-8h2lp\" (UID: \"12d1ce81-a93d-4fac-a2e1-e28db1aee075\") " pod="kube-system/cilium-8h2lp" May 14 18:08:44.506109 kubelet[2769]: I0514 18:08:44.505712 2769 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/12d1ce81-a93d-4fac-a2e1-e28db1aee075-cni-path\") pod \"cilium-8h2lp\" (UID: \"12d1ce81-a93d-4fac-a2e1-e28db1aee075\") " pod="kube-system/cilium-8h2lp" May 14 18:08:44.506109 kubelet[2769]: I0514 18:08:44.505730 2769 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/12d1ce81-a93d-4fac-a2e1-e28db1aee075-lib-modules\") pod \"cilium-8h2lp\" (UID: \"12d1ce81-a93d-4fac-a2e1-e28db1aee075\") " pod="kube-system/cilium-8h2lp" May 14 18:08:44.506109 kubelet[2769]: I0514 18:08:44.505782 2769 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b0a06e69-f365-443e-84a2-038756c45ef9-lib-modules\") pod \"kube-proxy-lg268\" (UID: \"b0a06e69-f365-443e-84a2-038756c45ef9\") " pod="kube-system/kube-proxy-lg268" May 14 18:08:44.506109 kubelet[2769]: I0514 18:08:44.505801 2769 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xn5z\" (UniqueName: \"kubernetes.io/projected/12d1ce81-a93d-4fac-a2e1-e28db1aee075-kube-api-access-8xn5z\") pod \"cilium-8h2lp\" (UID: \"12d1ce81-a93d-4fac-a2e1-e28db1aee075\") " pod="kube-system/cilium-8h2lp" May 14 18:08:44.506109 kubelet[2769]: I0514 18:08:44.505888 2769 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/12d1ce81-a93d-4fac-a2e1-e28db1aee075-etc-cni-netd\") pod \"cilium-8h2lp\" (UID: \"12d1ce81-a93d-4fac-a2e1-e28db1aee075\") " pod="kube-system/cilium-8h2lp" May 14 18:08:44.506109 kubelet[2769]: I0514 18:08:44.505946 2769 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/12d1ce81-a93d-4fac-a2e1-e28db1aee075-cilium-config-path\") pod \"cilium-8h2lp\" (UID: \"12d1ce81-a93d-4fac-a2e1-e28db1aee075\") " pod="kube-system/cilium-8h2lp" May 14 18:08:44.506318 kubelet[2769]: I0514 18:08:44.505963 2769 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b0a06e69-f365-443e-84a2-038756c45ef9-kube-proxy\") pod \"kube-proxy-lg268\" (UID: \"b0a06e69-f365-443e-84a2-038756c45ef9\") " pod="kube-system/kube-proxy-lg268" May 14 18:08:44.506318 kubelet[2769]: I0514 18:08:44.505980 2769 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/12d1ce81-a93d-4fac-a2e1-e28db1aee075-host-proc-sys-kernel\") pod \"cilium-8h2lp\" (UID: \"12d1ce81-a93d-4fac-a2e1-e28db1aee075\") " pod="kube-system/cilium-8h2lp" May 14 18:08:44.506318 kubelet[2769]: I0514 18:08:44.506098 2769 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/12d1ce81-a93d-4fac-a2e1-e28db1aee075-hubble-tls\") pod \"cilium-8h2lp\" (UID: \"12d1ce81-a93d-4fac-a2e1-e28db1aee075\") " pod="kube-system/cilium-8h2lp" May 14 18:08:44.515028 systemd[1]: Created slice kubepods-burstable-pod12d1ce81_a93d_4fac_a2e1_e28db1aee075.slice - libcontainer container kubepods-burstable-pod12d1ce81_a93d_4fac_a2e1_e28db1aee075.slice. May 14 18:08:44.643289 kubelet[2769]: E0514 18:08:44.640182 2769 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found May 14 18:08:44.643289 kubelet[2769]: E0514 18:08:44.640301 2769 projected.go:200] Error preparing data for projected volume kube-api-access-bfqbm for pod kube-system/kube-proxy-lg268: configmap "kube-root-ca.crt" not found May 14 18:08:44.643289 kubelet[2769]: E0514 18:08:44.640486 2769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b0a06e69-f365-443e-84a2-038756c45ef9-kube-api-access-bfqbm podName:b0a06e69-f365-443e-84a2-038756c45ef9 nodeName:}" failed. No retries permitted until 2025-05-14 18:08:45.140415356 +0000 UTC m=+14.962365886 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-bfqbm" (UniqueName: "kubernetes.io/projected/b0a06e69-f365-443e-84a2-038756c45ef9-kube-api-access-bfqbm") pod "kube-proxy-lg268" (UID: "b0a06e69-f365-443e-84a2-038756c45ef9") : configmap "kube-root-ca.crt" not found May 14 18:08:44.654916 kubelet[2769]: E0514 18:08:44.653568 2769 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found May 14 18:08:44.654916 kubelet[2769]: E0514 18:08:44.653619 2769 projected.go:200] Error preparing data for projected volume kube-api-access-8xn5z for pod kube-system/cilium-8h2lp: configmap "kube-root-ca.crt" not found May 14 18:08:44.654916 kubelet[2769]: E0514 18:08:44.653822 2769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/12d1ce81-a93d-4fac-a2e1-e28db1aee075-kube-api-access-8xn5z podName:12d1ce81-a93d-4fac-a2e1-e28db1aee075 nodeName:}" failed. No retries permitted until 2025-05-14 18:08:45.153791646 +0000 UTC m=+14.975742163 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-8xn5z" (UniqueName: "kubernetes.io/projected/12d1ce81-a93d-4fac-a2e1-e28db1aee075-kube-api-access-8xn5z") pod "cilium-8h2lp" (UID: "12d1ce81-a93d-4fac-a2e1-e28db1aee075") : configmap "kube-root-ca.crt" not found May 14 18:08:44.932400 kubelet[2769]: I0514 18:08:44.931909 2769 topology_manager.go:215] "Topology Admit Handler" podUID="2feb4d62-e49f-4cca-a180-457686b5c7c5" podNamespace="kube-system" podName="cilium-operator-599987898-gx2xs" May 14 18:08:44.945406 systemd[1]: Created slice kubepods-besteffort-pod2feb4d62_e49f_4cca_a180_457686b5c7c5.slice - libcontainer container kubepods-besteffort-pod2feb4d62_e49f_4cca_a180_457686b5c7c5.slice. 
May 14 18:08:45.011839 kubelet[2769]: I0514 18:08:45.011746 2769 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2feb4d62-e49f-4cca-a180-457686b5c7c5-cilium-config-path\") pod \"cilium-operator-599987898-gx2xs\" (UID: \"2feb4d62-e49f-4cca-a180-457686b5c7c5\") " pod="kube-system/cilium-operator-599987898-gx2xs" May 14 18:08:45.011839 kubelet[2769]: I0514 18:08:45.011833 2769 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4fk6\" (UniqueName: \"kubernetes.io/projected/2feb4d62-e49f-4cca-a180-457686b5c7c5-kube-api-access-s4fk6\") pod \"cilium-operator-599987898-gx2xs\" (UID: \"2feb4d62-e49f-4cca-a180-457686b5c7c5\") " pod="kube-system/cilium-operator-599987898-gx2xs" May 14 18:08:45.253293 kubelet[2769]: E0514 18:08:45.252984 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 14 18:08:45.254891 containerd[1515]: time="2025-05-14T18:08:45.254848157Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-gx2xs,Uid:2feb4d62-e49f-4cca-a180-457686b5c7c5,Namespace:kube-system,Attempt:0,}" May 14 18:08:45.278873 containerd[1515]: time="2025-05-14T18:08:45.278265529Z" level=info msg="connecting to shim ab41f4919e3067030bab691ab53c1898092fccba87414838d008cf0b73dded37" address="unix:///run/containerd/s/c77208c8afca339df7c3e94b8a61c986f4219c510c61918aab680c46f0e22b26" namespace=k8s.io protocol=ttrpc version=3 May 14 18:08:45.311460 systemd[1]: Started cri-containerd-ab41f4919e3067030bab691ab53c1898092fccba87414838d008cf0b73dded37.scope - libcontainer container ab41f4919e3067030bab691ab53c1898092fccba87414838d008cf0b73dded37. 
May 14 18:08:45.372079 containerd[1515]: time="2025-05-14T18:08:45.372007673Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-gx2xs,Uid:2feb4d62-e49f-4cca-a180-457686b5c7c5,Namespace:kube-system,Attempt:0,} returns sandbox id \"ab41f4919e3067030bab691ab53c1898092fccba87414838d008cf0b73dded37\"" May 14 18:08:45.373519 kubelet[2769]: E0514 18:08:45.373486 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 14 18:08:45.375385 containerd[1515]: time="2025-05-14T18:08:45.375317584Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 14 18:08:45.411047 kubelet[2769]: E0514 18:08:45.410975 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 14 18:08:45.412512 containerd[1515]: time="2025-05-14T18:08:45.412459554Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lg268,Uid:b0a06e69-f365-443e-84a2-038756c45ef9,Namespace:kube-system,Attempt:0,}" May 14 18:08:45.423238 kubelet[2769]: E0514 18:08:45.423150 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 14 18:08:45.425108 containerd[1515]: time="2025-05-14T18:08:45.425025887Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8h2lp,Uid:12d1ce81-a93d-4fac-a2e1-e28db1aee075,Namespace:kube-system,Attempt:0,}" May 14 18:08:45.455258 containerd[1515]: time="2025-05-14T18:08:45.454431948Z" level=info msg="connecting to shim 6430611cb68f0406e5385949e13e8808ea780468debcb0b22ab135bb06c2868a" address="unix:///run/containerd/s/98c431c0e9bb4f7d7b4fed55234c47976359bfd890d39177335074ec98a098cb" namespace=k8s.io protocol=ttrpc version=3 May 14 18:08:45.478478 containerd[1515]: time="2025-05-14T18:08:45.478295110Z" level=info msg="connecting to shim ecf9032bd1adf17cd0839da58ac7766515d3316bba789ac77c242957a2661616" address="unix:///run/containerd/s/71bf19ebf4512360c045a195f39e905167bb6885e16b9b7c479fc7f6c42a61ff" namespace=k8s.io protocol=ttrpc version=3 May 14 18:08:45.511614 systemd[1]: Started cri-containerd-6430611cb68f0406e5385949e13e8808ea780468debcb0b22ab135bb06c2868a.scope - libcontainer container 6430611cb68f0406e5385949e13e8808ea780468debcb0b22ab135bb06c2868a. May 14 18:08:45.535580 systemd[1]: Started cri-containerd-ecf9032bd1adf17cd0839da58ac7766515d3316bba789ac77c242957a2661616.scope - libcontainer container ecf9032bd1adf17cd0839da58ac7766515d3316bba789ac77c242957a2661616. 
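The dns.go:153 warning that recurs throughout this journal is the kubelet clamping the node's resolv.conf to the glibc limit of three nameservers; the applied line it prints (67.207.67.2 67.207.67.3 67.207.67.2) even contains a duplicate, so the droplet's resolv.conf evidently lists more than three entries. A minimal sketch of that clamping logic, assuming a standard resolv.conf layout:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    const maxNameservers = 3 // glibc resolver limit enforced by the kubelet

    func main() {
        f, err := os.Open("/etc/resolv.conf")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        defer f.Close()

        var servers []string
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            fields := strings.Fields(sc.Text())
            if len(fields) >= 2 && fields[0] == "nameserver" {
                servers = append(servers, fields[1])
            }
        }
        if len(servers) > maxNameservers {
            fmt.Printf("Nameserver limits exceeded, keeping first %d of %v\n",
                maxNameservers, servers)
            servers = servers[:maxNameservers]
        }
        fmt.Println("applied nameservers:", servers)
    }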
May 14 18:08:45.591938 containerd[1515]: time="2025-05-14T18:08:45.591805474Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lg268,Uid:b0a06e69-f365-443e-84a2-038756c45ef9,Namespace:kube-system,Attempt:0,} returns sandbox id \"6430611cb68f0406e5385949e13e8808ea780468debcb0b22ab135bb06c2868a\"" May 14 18:08:45.595458 kubelet[2769]: E0514 18:08:45.595010 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 14 18:08:45.610320 containerd[1515]: time="2025-05-14T18:08:45.610251175Z" level=info msg="CreateContainer within sandbox \"6430611cb68f0406e5385949e13e8808ea780468debcb0b22ab135bb06c2868a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 14 18:08:45.636293 containerd[1515]: time="2025-05-14T18:08:45.633785178Z" level=info msg="Container db718520939d4268918198e2d7b02b35163ed4a30cedfa503fe63bf05f10e9ee: CDI devices from CRI Config.CDIDevices: []" May 14 18:08:45.642194 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount277706428.mount: Deactivated successfully. May 14 18:08:45.650503 containerd[1515]: time="2025-05-14T18:08:45.649394282Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8h2lp,Uid:12d1ce81-a93d-4fac-a2e1-e28db1aee075,Namespace:kube-system,Attempt:0,} returns sandbox id \"ecf9032bd1adf17cd0839da58ac7766515d3316bba789ac77c242957a2661616\"" May 14 18:08:45.651821 kubelet[2769]: E0514 18:08:45.651790 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 14 18:08:45.663500 containerd[1515]: time="2025-05-14T18:08:45.663430099Z" level=info msg="CreateContainer within sandbox \"6430611cb68f0406e5385949e13e8808ea780468debcb0b22ab135bb06c2868a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"db718520939d4268918198e2d7b02b35163ed4a30cedfa503fe63bf05f10e9ee\"" May 14 18:08:45.664362 containerd[1515]: time="2025-05-14T18:08:45.664325839Z" level=info msg="StartContainer for \"db718520939d4268918198e2d7b02b35163ed4a30cedfa503fe63bf05f10e9ee\"" May 14 18:08:45.669895 containerd[1515]: time="2025-05-14T18:08:45.669722064Z" level=info msg="connecting to shim db718520939d4268918198e2d7b02b35163ed4a30cedfa503fe63bf05f10e9ee" address="unix:///run/containerd/s/98c431c0e9bb4f7d7b4fed55234c47976359bfd890d39177335074ec98a098cb" protocol=ttrpc version=3 May 14 18:08:45.710596 systemd[1]: Started cri-containerd-db718520939d4268918198e2d7b02b35163ed4a30cedfa503fe63bf05f10e9ee.scope - libcontainer container db718520939d4268918198e2d7b02b35163ed4a30cedfa503fe63bf05f10e9ee. May 14 18:08:45.792253 containerd[1515]: time="2025-05-14T18:08:45.791242059Z" level=info msg="StartContainer for \"db718520939d4268918198e2d7b02b35163ed4a30cedfa503fe63bf05f10e9ee\" returns successfully" May 14 18:08:46.441496 kubelet[2769]: E0514 18:08:46.441458 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 14 18:08:47.377922 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4213249399.mount: Deactivated successfully. 
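The operator image requested above uses a tag-plus-digest reference, so the sha256 pins the exact content even if the v1.12.5 tag later moves. A rough equivalent of that pull through the containerd Go client is sketched below; the socket path and the k8s.io namespace (where CRI keeps its images) are the usual defaults, assumed here:

    package main

    import (
        "context"
        "fmt"
        "log"

        containerd "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        // CRI-managed images live in the "k8s.io" namespace.
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        // Digest-pinned reference: the tag is informational, the sha256 wins.
        ref := "quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e"
        img, err := client.Pull(ctx, ref, containerd.WithPullUnpack)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("pulled", img.Name(), "->", img.Target().Digest)
    }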
May 14 18:08:47.981769 containerd[1515]: time="2025-05-14T18:08:47.981707138Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:08:47.982787 containerd[1515]: time="2025-05-14T18:08:47.982536813Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" May 14 18:08:47.983583 containerd[1515]: time="2025-05-14T18:08:47.983539982Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:08:47.985188 containerd[1515]: time="2025-05-14T18:08:47.985130706Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.609764984s" May 14 18:08:47.985388 containerd[1515]: time="2025-05-14T18:08:47.985361871Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" May 14 18:08:47.987572 containerd[1515]: time="2025-05-14T18:08:47.987473394Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 14 18:08:47.993517 containerd[1515]: time="2025-05-14T18:08:47.992838112Z" level=info msg="CreateContainer within sandbox \"ab41f4919e3067030bab691ab53c1898092fccba87414838d008cf0b73dded37\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 14 18:08:48.005182 containerd[1515]: time="2025-05-14T18:08:48.004892244Z" level=info msg="Container 0eb3a72e002f85664258db94d38d0dcbd6f8f0a2cf0bf7bd3dfaf0cf830cfda3: CDI devices from CRI Config.CDIDevices: []" May 14 18:08:48.025073 containerd[1515]: time="2025-05-14T18:08:48.024916833Z" level=info msg="CreateContainer within sandbox \"ab41f4919e3067030bab691ab53c1898092fccba87414838d008cf0b73dded37\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"0eb3a72e002f85664258db94d38d0dcbd6f8f0a2cf0bf7bd3dfaf0cf830cfda3\"" May 14 18:08:48.025969 containerd[1515]: time="2025-05-14T18:08:48.025851070Z" level=info msg="StartContainer for \"0eb3a72e002f85664258db94d38d0dcbd6f8f0a2cf0bf7bd3dfaf0cf830cfda3\"" May 14 18:08:48.029465 containerd[1515]: time="2025-05-14T18:08:48.029416771Z" level=info msg="connecting to shim 0eb3a72e002f85664258db94d38d0dcbd6f8f0a2cf0bf7bd3dfaf0cf830cfda3" address="unix:///run/containerd/s/c77208c8afca339df7c3e94b8a61c986f4219c510c61918aab680c46f0e22b26" protocol=ttrpc version=3 May 14 18:08:48.066768 systemd[1]: Started cri-containerd-0eb3a72e002f85664258db94d38d0dcbd6f8f0a2cf0bf7bd3dfaf0cf830cfda3.scope - libcontainer container 0eb3a72e002f85664258db94d38d0dcbd6f8f0a2cf0bf7bd3dfaf0cf830cfda3. 
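The completed pull above reports 18,904,197 bytes read in 2.609764984s, roughly 7.2 MB/s; the 18,897,442-byte figure alongside it is the resolved image size. The same arithmetic as a short check:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        const bytesRead = 18904197 // "bytes read" from the log
        dur, _ := time.ParseDuration("2.609764984s")
        mbps := float64(bytesRead) / dur.Seconds() / 1e6
        fmt.Printf("pull throughput: %.2f MB/s\n", mbps) // ~7.24 MB/s
    }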
May 14 18:08:48.121577 containerd[1515]: time="2025-05-14T18:08:48.121526341Z" level=info msg="StartContainer for \"0eb3a72e002f85664258db94d38d0dcbd6f8f0a2cf0bf7bd3dfaf0cf830cfda3\" returns successfully" May 14 18:08:48.451905 kubelet[2769]: E0514 18:08:48.451788 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 14 18:08:48.474915 kubelet[2769]: I0514 18:08:48.474719 2769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-lg268" podStartSLOduration=4.4746956319999995 podStartE2EDuration="4.474695632s" podCreationTimestamp="2025-05-14 18:08:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 18:08:46.460156579 +0000 UTC m=+16.282107121" watchObservedRunningTime="2025-05-14 18:08:48.474695632 +0000 UTC m=+18.296646168" May 14 18:08:48.475173 kubelet[2769]: I0514 18:08:48.474921 2769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-gx2xs" podStartSLOduration=1.862421623 podStartE2EDuration="4.474916876s" podCreationTimestamp="2025-05-14 18:08:44 +0000 UTC" firstStartedPulling="2025-05-14 18:08:45.37466669 +0000 UTC m=+15.196617207" lastFinishedPulling="2025-05-14 18:08:47.98716193 +0000 UTC m=+17.809112460" observedRunningTime="2025-05-14 18:08:48.474888936 +0000 UTC m=+18.296839475" watchObservedRunningTime="2025-05-14 18:08:48.474916876 +0000 UTC m=+18.296867415" May 14 18:08:49.457565 kubelet[2769]: E0514 18:08:49.455606 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 14 18:08:53.487863 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3862675873.mount: Deactivated successfully. 
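For the operator pod the latency tracker reports both podStartE2EDuration (4.474916876s from creation to observed running) and podStartSLOduration (1.862421623s), which is the E2E figure minus the image pull window: 18:08:47.98716193 minus 18:08:45.37466669 gives 2.61249524s of pulling, and 4.474916876 minus 2.61249524 is about 1.862422s. A short check of that subtraction:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
        firstPull, _ := time.Parse(layout, "2025-05-14 18:08:45.37466669 +0000 UTC")
        lastPull, _ := time.Parse(layout, "2025-05-14 18:08:47.98716193 +0000 UTC")

        const e2e = 4.474916876 // podStartE2EDuration, in seconds
        pull := lastPull.Sub(firstPull).Seconds()
        fmt.Printf("pull window %.6fs, SLO duration %.6fs\n", pull, e2e-pull)
        // prints ~2.612495s and ~1.862422s, matching podStartSLOduration
    }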
May 14 18:08:56.280725 containerd[1515]: time="2025-05-14T18:08:56.280448823Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:08:56.282688 containerd[1515]: time="2025-05-14T18:08:56.282627370Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" May 14 18:08:56.283058 containerd[1515]: time="2025-05-14T18:08:56.282760031Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:08:56.286030 containerd[1515]: time="2025-05-14T18:08:56.285518470Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 8.297999953s" May 14 18:08:56.286030 containerd[1515]: time="2025-05-14T18:08:56.285584334Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" May 14 18:08:56.293255 containerd[1515]: time="2025-05-14T18:08:56.292751631Z" level=info msg="CreateContainer within sandbox \"ecf9032bd1adf17cd0839da58ac7766515d3316bba789ac77c242957a2661616\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 14 18:08:56.315635 containerd[1515]: time="2025-05-14T18:08:56.315575597Z" level=info msg="Container 68a557a4479d21f84ec27f1758240f9bf7fba44e93c8dfb482201113f6908622: CDI devices from CRI Config.CDIDevices: []" May 14 18:08:56.324605 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount123221433.mount: Deactivated successfully. May 14 18:08:56.341141 containerd[1515]: time="2025-05-14T18:08:56.340963932Z" level=info msg="CreateContainer within sandbox \"ecf9032bd1adf17cd0839da58ac7766515d3316bba789ac77c242957a2661616\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"68a557a4479d21f84ec27f1758240f9bf7fba44e93c8dfb482201113f6908622\"" May 14 18:08:56.342063 containerd[1515]: time="2025-05-14T18:08:56.341868170Z" level=info msg="StartContainer for \"68a557a4479d21f84ec27f1758240f9bf7fba44e93c8dfb482201113f6908622\"" May 14 18:08:56.343648 containerd[1515]: time="2025-05-14T18:08:56.343545669Z" level=info msg="connecting to shim 68a557a4479d21f84ec27f1758240f9bf7fba44e93c8dfb482201113f6908622" address="unix:///run/containerd/s/71bf19ebf4512360c045a195f39e905167bb6885e16b9b7c479fc7f6c42a61ff" protocol=ttrpc version=3 May 14 18:08:56.382562 systemd[1]: Started cri-containerd-68a557a4479d21f84ec27f1758240f9bf7fba44e93c8dfb482201113f6908622.scope - libcontainer container 68a557a4479d21f84ec27f1758240f9bf7fba44e93c8dfb482201113f6908622. 
May 14 18:08:56.455540 containerd[1515]: time="2025-05-14T18:08:56.455458341Z" level=info msg="StartContainer for \"68a557a4479d21f84ec27f1758240f9bf7fba44e93c8dfb482201113f6908622\" returns successfully" May 14 18:08:56.478127 systemd[1]: cri-containerd-68a557a4479d21f84ec27f1758240f9bf7fba44e93c8dfb482201113f6908622.scope: Deactivated successfully. May 14 18:08:56.490337 kubelet[2769]: E0514 18:08:56.489281 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 14 18:08:56.545645 containerd[1515]: time="2025-05-14T18:08:56.545363934Z" level=info msg="received exit event container_id:\"68a557a4479d21f84ec27f1758240f9bf7fba44e93c8dfb482201113f6908622\" id:\"68a557a4479d21f84ec27f1758240f9bf7fba44e93c8dfb482201113f6908622\" pid:3248 exited_at:{seconds:1747246136 nanos:481014697}" May 14 18:08:56.563261 containerd[1515]: time="2025-05-14T18:08:56.562430378Z" level=info msg="TaskExit event in podsandbox handler container_id:\"68a557a4479d21f84ec27f1758240f9bf7fba44e93c8dfb482201113f6908622\" id:\"68a557a4479d21f84ec27f1758240f9bf7fba44e93c8dfb482201113f6908622\" pid:3248 exited_at:{seconds:1747246136 nanos:481014697}" May 14 18:08:56.592515 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-68a557a4479d21f84ec27f1758240f9bf7fba44e93c8dfb482201113f6908622-rootfs.mount: Deactivated successfully. May 14 18:08:57.493852 kubelet[2769]: E0514 18:08:57.493809 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 14 18:08:57.501423 containerd[1515]: time="2025-05-14T18:08:57.500400948Z" level=info msg="CreateContainer within sandbox \"ecf9032bd1adf17cd0839da58ac7766515d3316bba789ac77c242957a2661616\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 14 18:08:57.519765 containerd[1515]: time="2025-05-14T18:08:57.519715109Z" level=info msg="Container a1cee0b6dc84af715112e1083752bc9138cf4a35bc8ce017c4256215438cafa2: CDI devices from CRI Config.CDIDevices: []" May 14 18:08:57.557163 containerd[1515]: time="2025-05-14T18:08:57.556464337Z" level=info msg="CreateContainer within sandbox \"ecf9032bd1adf17cd0839da58ac7766515d3316bba789ac77c242957a2661616\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"a1cee0b6dc84af715112e1083752bc9138cf4a35bc8ce017c4256215438cafa2\"" May 14 18:08:57.559815 containerd[1515]: time="2025-05-14T18:08:57.559116527Z" level=info msg="StartContainer for \"a1cee0b6dc84af715112e1083752bc9138cf4a35bc8ce017c4256215438cafa2\"" May 14 18:08:57.561303 containerd[1515]: time="2025-05-14T18:08:57.561248262Z" level=info msg="connecting to shim a1cee0b6dc84af715112e1083752bc9138cf4a35bc8ce017c4256215438cafa2" address="unix:///run/containerd/s/71bf19ebf4512360c045a195f39e905167bb6885e16b9b7c479fc7f6c42a61ff" protocol=ttrpc version=3 May 14 18:08:57.603634 systemd[1]: Started cri-containerd-a1cee0b6dc84af715112e1083752bc9138cf4a35bc8ce017c4256215438cafa2.scope - libcontainer container a1cee0b6dc84af715112e1083752bc9138cf4a35bc8ce017c4256215438cafa2. May 14 18:08:57.664451 containerd[1515]: time="2025-05-14T18:08:57.664397206Z" level=info msg="StartContainer for \"a1cee0b6dc84af715112e1083752bc9138cf4a35bc8ce017c4256215438cafa2\" returns successfully" May 14 18:08:57.683943 systemd[1]: systemd-sysctl.service: Deactivated successfully. 
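The exit events above carry raw protobuf timestamps; exited_at seconds:1747246136 is exactly 2025-05-14 18:08:56 UTC, consistent with the surrounding journal time. A quick conversion:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // exited_at from the TaskExit event for the mount-cgroup container.
        t := time.Unix(1747246136, 481014697).UTC()
        fmt.Println(t.Format(time.RFC3339Nano)) // 2025-05-14T18:08:56.481014697Z
    }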
May 14 18:08:57.685081 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 14 18:08:57.687696 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... May 14 18:08:57.695151 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 14 18:08:57.701259 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 14 18:08:57.703270 systemd[1]: cri-containerd-a1cee0b6dc84af715112e1083752bc9138cf4a35bc8ce017c4256215438cafa2.scope: Deactivated successfully. May 14 18:08:57.709305 containerd[1515]: time="2025-05-14T18:08:57.709238034Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a1cee0b6dc84af715112e1083752bc9138cf4a35bc8ce017c4256215438cafa2\" id:\"a1cee0b6dc84af715112e1083752bc9138cf4a35bc8ce017c4256215438cafa2\" pid:3293 exited_at:{seconds:1747246137 nanos:703372028}" May 14 18:08:57.709503 containerd[1515]: time="2025-05-14T18:08:57.709367653Z" level=info msg="received exit event container_id:\"a1cee0b6dc84af715112e1083752bc9138cf4a35bc8ce017c4256215438cafa2\" id:\"a1cee0b6dc84af715112e1083752bc9138cf4a35bc8ce017c4256215438cafa2\" pid:3293 exited_at:{seconds:1747246137 nanos:703372028}" May 14 18:08:57.749951 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 14 18:08:58.499711 kubelet[2769]: E0514 18:08:58.499605 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 14 18:08:58.505041 containerd[1515]: time="2025-05-14T18:08:58.504448420Z" level=info msg="CreateContainer within sandbox \"ecf9032bd1adf17cd0839da58ac7766515d3316bba789ac77c242957a2661616\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 14 18:08:58.515707 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a1cee0b6dc84af715112e1083752bc9138cf4a35bc8ce017c4256215438cafa2-rootfs.mount: Deactivated successfully. May 14 18:08:58.534704 containerd[1515]: time="2025-05-14T18:08:58.534649658Z" level=info msg="Container 9d9280d2972e9a9208c3c1f35636e18251d5d8b4edfb4e67f37d1f0d7cdac833: CDI devices from CRI Config.CDIDevices: []" May 14 18:08:58.550879 containerd[1515]: time="2025-05-14T18:08:58.550759710Z" level=info msg="CreateContainer within sandbox \"ecf9032bd1adf17cd0839da58ac7766515d3316bba789ac77c242957a2661616\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"9d9280d2972e9a9208c3c1f35636e18251d5d8b4edfb4e67f37d1f0d7cdac833\"" May 14 18:08:58.551871 containerd[1515]: time="2025-05-14T18:08:58.551473548Z" level=info msg="StartContainer for \"9d9280d2972e9a9208c3c1f35636e18251d5d8b4edfb4e67f37d1f0d7cdac833\"" May 14 18:08:58.553225 containerd[1515]: time="2025-05-14T18:08:58.553068980Z" level=info msg="connecting to shim 9d9280d2972e9a9208c3c1f35636e18251d5d8b4edfb4e67f37d1f0d7cdac833" address="unix:///run/containerd/s/71bf19ebf4512360c045a195f39e905167bb6885e16b9b7c479fc7f6c42a61ff" protocol=ttrpc version=3 May 14 18:08:58.585053 systemd[1]: Started cri-containerd-9d9280d2972e9a9208c3c1f35636e18251d5d8b4edfb4e67f37d1f0d7cdac833.scope - libcontainer container 9d9280d2972e9a9208c3c1f35636e18251d5d8b4edfb4e67f37d1f0d7cdac833. 
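The stop/start of systemd-sysctl.service wrapped around this container is the visible effect of Cilium's apply-sysctl-overwrites init step, which writes a high-priority drop-in under /etc/sysctl.d and re-triggers systemd-sysctl so its settings outrank distro defaults. A sketch of writing such a drop-in; the file name and the two keys are illustrative assumptions, not values taken from this log:

    package main

    import (
        "log"
        "os"
    )

    func main() {
        // Hypothetical override path; Cilium uses a high-prefix name so it
        // sorts after (and overrides) distro-provided sysctl fragments.
        const path = "/etc/sysctl.d/99-zzz-override-cilium.conf"
        // Example keys only; the real set depends on the datapath config.
        conf := "net.ipv4.conf.all.rp_filter = 0\nnet.ipv4.ip_forward = 1\n"
        if err := os.WriteFile(path, []byte(conf), 0o644); err != nil {
            log.Fatal(err)
        }
        // systemd-sysctl is then restarted to apply the fragment, which is
        // the stop/start visible in the journal above.
    }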
May 14 18:08:58.668501 containerd[1515]: time="2025-05-14T18:08:58.668396718Z" level=info msg="StartContainer for \"9d9280d2972e9a9208c3c1f35636e18251d5d8b4edfb4e67f37d1f0d7cdac833\" returns successfully" May 14 18:08:58.676441 systemd[1]: cri-containerd-9d9280d2972e9a9208c3c1f35636e18251d5d8b4edfb4e67f37d1f0d7cdac833.scope: Deactivated successfully. May 14 18:08:58.678390 systemd[1]: cri-containerd-9d9280d2972e9a9208c3c1f35636e18251d5d8b4edfb4e67f37d1f0d7cdac833.scope: Consumed 43ms CPU time, 4.3M memory peak, 1M read from disk. May 14 18:08:58.679937 containerd[1515]: time="2025-05-14T18:08:58.679566673Z" level=info msg="received exit event container_id:\"9d9280d2972e9a9208c3c1f35636e18251d5d8b4edfb4e67f37d1f0d7cdac833\" id:\"9d9280d2972e9a9208c3c1f35636e18251d5d8b4edfb4e67f37d1f0d7cdac833\" pid:3340 exited_at:{seconds:1747246138 nanos:678617781}" May 14 18:08:58.680447 containerd[1515]: time="2025-05-14T18:08:58.680410277Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9d9280d2972e9a9208c3c1f35636e18251d5d8b4edfb4e67f37d1f0d7cdac833\" id:\"9d9280d2972e9a9208c3c1f35636e18251d5d8b4edfb4e67f37d1f0d7cdac833\" pid:3340 exited_at:{seconds:1747246138 nanos:678617781}" May 14 18:08:58.721738 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9d9280d2972e9a9208c3c1f35636e18251d5d8b4edfb4e67f37d1f0d7cdac833-rootfs.mount: Deactivated successfully. May 14 18:08:59.507703 kubelet[2769]: E0514 18:08:59.507651 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 14 18:08:59.513011 containerd[1515]: time="2025-05-14T18:08:59.512174185Z" level=info msg="CreateContainer within sandbox \"ecf9032bd1adf17cd0839da58ac7766515d3316bba789ac77c242957a2661616\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 14 18:08:59.532934 containerd[1515]: time="2025-05-14T18:08:59.532856989Z" level=info msg="Container a4fefb97c6de6815ca41ab74e1da1d28d7100a8b8b2b2a1a7ec47e898e862bf4: CDI devices from CRI Config.CDIDevices: []" May 14 18:08:59.549233 containerd[1515]: time="2025-05-14T18:08:59.548108393Z" level=info msg="CreateContainer within sandbox \"ecf9032bd1adf17cd0839da58ac7766515d3316bba789ac77c242957a2661616\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"a4fefb97c6de6815ca41ab74e1da1d28d7100a8b8b2b2a1a7ec47e898e862bf4\"" May 14 18:08:59.549482 containerd[1515]: time="2025-05-14T18:08:59.549425796Z" level=info msg="StartContainer for \"a4fefb97c6de6815ca41ab74e1da1d28d7100a8b8b2b2a1a7ec47e898e862bf4\"" May 14 18:08:59.552950 containerd[1515]: time="2025-05-14T18:08:59.552329822Z" level=info msg="connecting to shim a4fefb97c6de6815ca41ab74e1da1d28d7100a8b8b2b2a1a7ec47e898e862bf4" address="unix:///run/containerd/s/71bf19ebf4512360c045a195f39e905167bb6885e16b9b7c479fc7f6c42a61ff" protocol=ttrpc version=3 May 14 18:08:59.552663 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1309827711.mount: Deactivated successfully. May 14 18:08:59.590659 systemd[1]: Started cri-containerd-a4fefb97c6de6815ca41ab74e1da1d28d7100a8b8b2b2a1a7ec47e898e862bf4.scope - libcontainer container a4fefb97c6de6815ca41ab74e1da1d28d7100a8b8b2b2a1a7ec47e898e862bf4. May 14 18:08:59.646157 systemd[1]: cri-containerd-a4fefb97c6de6815ca41ab74e1da1d28d7100a8b8b2b2a1a7ec47e898e862bf4.scope: Deactivated successfully. 
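mount-bpf-fs exits after only 43ms of CPU because its whole job is one idempotent mount: ensure a bpffs instance is mounted at /sys/fs/bpf so Cilium's pinned BPF maps survive agent restarts. The equivalent syscall, sketched with golang.org/x/sys/unix (the real init container first checks whether the target is already a bpffs mountpoint):

    package main

    import (
        "log"
        "os"

        "golang.org/x/sys/unix"
    )

    func main() {
        const target = "/sys/fs/bpf"
        if err := os.MkdirAll(target, 0o755); err != nil {
            log.Fatal(err)
        }
        // Equivalent to: mount -t bpf bpf /sys/fs/bpf
        if err := unix.Mount("bpf", target, "bpf", 0, ""); err != nil {
            log.Fatal(err)
        }
    }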
May 14 18:08:59.649728 containerd[1515]: time="2025-05-14T18:08:59.649640845Z" level=info msg="received exit event container_id:\"a4fefb97c6de6815ca41ab74e1da1d28d7100a8b8b2b2a1a7ec47e898e862bf4\" id:\"a4fefb97c6de6815ca41ab74e1da1d28d7100a8b8b2b2a1a7ec47e898e862bf4\" pid:3383 exited_at:{seconds:1747246139 nanos:649182629}" May 14 18:08:59.650047 containerd[1515]: time="2025-05-14T18:08:59.650007856Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a4fefb97c6de6815ca41ab74e1da1d28d7100a8b8b2b2a1a7ec47e898e862bf4\" id:\"a4fefb97c6de6815ca41ab74e1da1d28d7100a8b8b2b2a1a7ec47e898e862bf4\" pid:3383 exited_at:{seconds:1747246139 nanos:649182629}" May 14 18:08:59.669421 containerd[1515]: time="2025-05-14T18:08:59.669363756Z" level=info msg="StartContainer for \"a4fefb97c6de6815ca41ab74e1da1d28d7100a8b8b2b2a1a7ec47e898e862bf4\" returns successfully" May 14 18:08:59.697668 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a4fefb97c6de6815ca41ab74e1da1d28d7100a8b8b2b2a1a7ec47e898e862bf4-rootfs.mount: Deactivated successfully. May 14 18:09:00.520064 kubelet[2769]: E0514 18:09:00.519992 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 14 18:09:00.528047 containerd[1515]: time="2025-05-14T18:09:00.527985808Z" level=info msg="CreateContainer within sandbox \"ecf9032bd1adf17cd0839da58ac7766515d3316bba789ac77c242957a2661616\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 14 18:09:00.558666 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount532492800.mount: Deactivated successfully. May 14 18:09:00.575825 containerd[1515]: time="2025-05-14T18:09:00.575722337Z" level=info msg="Container 685ad51f1f44514a4b4052420b2c7927d0325efd4c3c30e12dc3c3fedd60cbed: CDI devices from CRI Config.CDIDevices: []" May 14 18:09:00.584764 containerd[1515]: time="2025-05-14T18:09:00.584657977Z" level=info msg="CreateContainer within sandbox \"ecf9032bd1adf17cd0839da58ac7766515d3316bba789ac77c242957a2661616\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"685ad51f1f44514a4b4052420b2c7927d0325efd4c3c30e12dc3c3fedd60cbed\"" May 14 18:09:00.586337 containerd[1515]: time="2025-05-14T18:09:00.586290507Z" level=info msg="StartContainer for \"685ad51f1f44514a4b4052420b2c7927d0325efd4c3c30e12dc3c3fedd60cbed\"" May 14 18:09:00.588980 containerd[1515]: time="2025-05-14T18:09:00.588792424Z" level=info msg="connecting to shim 685ad51f1f44514a4b4052420b2c7927d0325efd4c3c30e12dc3c3fedd60cbed" address="unix:///run/containerd/s/71bf19ebf4512360c045a195f39e905167bb6885e16b9b7c479fc7f6c42a61ff" protocol=ttrpc version=3 May 14 18:09:00.637828 systemd[1]: Started cri-containerd-685ad51f1f44514a4b4052420b2c7927d0325efd4c3c30e12dc3c3fedd60cbed.scope - libcontainer container 685ad51f1f44514a4b4052420b2c7927d0325efd4c3c30e12dc3c3fedd60cbed. 
May 14 18:09:00.738867 containerd[1515]: time="2025-05-14T18:09:00.738612952Z" level=info msg="StartContainer for \"685ad51f1f44514a4b4052420b2c7927d0325efd4c3c30e12dc3c3fedd60cbed\" returns successfully" May 14 18:09:00.776565 kubelet[2769]: I0514 18:09:00.766524 2769 eviction_manager.go:366] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 14 18:09:00.776565 kubelet[2769]: I0514 18:09:00.774506 2769 container_gc.go:88] "Attempting to delete unused containers" May 14 18:09:00.782536 kubelet[2769]: I0514 18:09:00.782490 2769 image_gc_manager.go:404] "Attempting to delete unused images" May 14 18:09:00.833577 kubelet[2769]: I0514 18:09:00.833520 2769 eviction_manager.go:377] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 14 18:09:00.835876 kubelet[2769]: I0514 18:09:00.835819 2769 eviction_manager.go:395] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-8h2lp","kube-system/cilium-operator-599987898-gx2xs","kube-system/kube-controller-manager-ci-4334.0.0-a-9d82e253c5","kube-system/kube-proxy-lg268","kube-system/kube-apiserver-ci-4334.0.0-a-9d82e253c5","kube-system/kube-scheduler-ci-4334.0.0-a-9d82e253c5"] May 14 18:09:00.836054 kubelet[2769]: E0514 18:09:00.835941 2769 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-8h2lp" May 14 18:09:00.836054 kubelet[2769]: E0514 18:09:00.835968 2769 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-599987898-gx2xs" May 14 18:09:00.836054 kubelet[2769]: E0514 18:09:00.835984 2769 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-ci-4334.0.0-a-9d82e253c5" May 14 18:09:00.836054 kubelet[2769]: E0514 18:09:00.835999 2769 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-lg268" May 14 18:09:00.836054 kubelet[2769]: E0514 18:09:00.836013 2769 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-ci-4334.0.0-a-9d82e253c5" May 14 18:09:00.836054 kubelet[2769]: E0514 18:09:00.836027 2769 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-ci-4334.0.0-a-9d82e253c5" May 14 18:09:00.836054 kubelet[2769]: I0514 18:09:00.836042 2769 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" May 14 18:09:00.941315 containerd[1515]: time="2025-05-14T18:09:00.941074035Z" level=info msg="TaskExit event in podsandbox handler container_id:\"685ad51f1f44514a4b4052420b2c7927d0325efd4c3c30e12dc3c3fedd60cbed\" id:\"72508417a7f1884b0197ddcb0e70a9cc0acb43ecb3561db89f985f9e7384e74c\" pid:3452 exited_at:{seconds:1747246140 nanos:940539002}" May 14 18:09:00.970699 kubelet[2769]: I0514 18:09:00.970657 2769 kubelet_node_status.go:497] "Fast updating node status as it just became ready" May 14 18:09:01.564880 kubelet[2769]: E0514 18:09:01.563684 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 14 18:09:01.605234 kubelet[2769]: I0514 18:09:01.604177 2769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-8h2lp" podStartSLOduration=6.971399207 podStartE2EDuration="17.60414466s" podCreationTimestamp="2025-05-14 18:08:44 +0000 UTC" firstStartedPulling="2025-05-14 
18:08:45.655056237 +0000 UTC m=+15.477006755" lastFinishedPulling="2025-05-14 18:08:56.287801686 +0000 UTC m=+26.109752208" observedRunningTime="2025-05-14 18:09:01.601508688 +0000 UTC m=+31.423459229" watchObservedRunningTime="2025-05-14 18:09:01.60414466 +0000 UTC m=+31.426095202" May 14 18:09:02.571107 kubelet[2769]: E0514 18:09:02.571028 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 14 18:09:03.542310 systemd-networkd[1459]: cilium_host: Link UP May 14 18:09:03.542690 systemd-networkd[1459]: cilium_net: Link UP May 14 18:09:03.543012 systemd-networkd[1459]: cilium_net: Gained carrier May 14 18:09:03.548481 systemd-networkd[1459]: cilium_host: Gained carrier May 14 18:09:03.580070 kubelet[2769]: E0514 18:09:03.579810 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 14 18:09:03.719692 systemd-networkd[1459]: cilium_host: Gained IPv6LL May 14 18:09:03.841478 systemd-networkd[1459]: cilium_vxlan: Link UP May 14 18:09:03.841491 systemd-networkd[1459]: cilium_vxlan: Gained carrier May 14 18:09:04.421261 kernel: NET: Registered PF_ALG protocol family May 14 18:09:04.486701 systemd-networkd[1459]: cilium_net: Gained IPv6LL May 14 18:09:05.127512 systemd-networkd[1459]: cilium_vxlan: Gained IPv6LL May 14 18:09:05.540262 systemd-networkd[1459]: lxc_health: Link UP May 14 18:09:05.552273 systemd-networkd[1459]: lxc_health: Gained carrier May 14 18:09:07.436049 kubelet[2769]: E0514 18:09:07.435990 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 14 18:09:07.558506 systemd-networkd[1459]: lxc_health: Gained IPv6LL May 14 18:09:07.586485 kubelet[2769]: E0514 18:09:07.586451 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 14 18:09:08.590257 kubelet[2769]: E0514 18:09:08.589492 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 14 18:09:10.866522 kubelet[2769]: I0514 18:09:10.866457 2769 eviction_manager.go:366] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 14 18:09:10.866522 kubelet[2769]: I0514 18:09:10.866534 2769 container_gc.go:88] "Attempting to delete unused containers" May 14 18:09:10.871735 kubelet[2769]: I0514 18:09:10.871648 2769 image_gc_manager.go:404] "Attempting to delete unused images" May 14 18:09:10.896592 kubelet[2769]: I0514 18:09:10.896543 2769 eviction_manager.go:377] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 14 18:09:10.896797 kubelet[2769]: I0514 18:09:10.896769 2769 eviction_manager.go:395] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-599987898-gx2xs","kube-system/cilium-8h2lp","kube-system/kube-controller-manager-ci-4334.0.0-a-9d82e253c5","kube-system/kube-proxy-lg268","kube-system/kube-apiserver-ci-4334.0.0-a-9d82e253c5","kube-system/kube-scheduler-ci-4334.0.0-a-9d82e253c5"] May 14 18:09:10.896870 kubelet[2769]: E0514 18:09:10.896829 2769 
eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-599987898-gx2xs" May 14 18:09:10.896870 kubelet[2769]: E0514 18:09:10.896853 2769 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-8h2lp" May 14 18:09:10.896870 kubelet[2769]: E0514 18:09:10.896867 2769 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-ci-4334.0.0-a-9d82e253c5" May 14 18:09:10.896987 kubelet[2769]: E0514 18:09:10.896878 2769 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-lg268" May 14 18:09:10.896987 kubelet[2769]: E0514 18:09:10.896886 2769 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-ci-4334.0.0-a-9d82e253c5" May 14 18:09:10.896987 kubelet[2769]: E0514 18:09:10.896894 2769 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-ci-4334.0.0-a-9d82e253c5" May 14 18:09:10.896987 kubelet[2769]: I0514 18:09:10.896904 2769 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" May 14 18:09:20.016006 systemd[1]: Started sshd@7-164.92.104.130:22-139.178.89.65:45764.service - OpenSSH per-connection server daemon (139.178.89.65:45764). May 14 18:09:20.116186 sshd[3893]: Accepted publickey for core from 139.178.89.65 port 45764 ssh2: RSA SHA256:I6v7602y95t0HxsKZunlpQRdbWqTS6jK7hLc8ah5Xaw May 14 18:09:20.118263 sshd-session[3893]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:09:20.125605 systemd-logind[1497]: New session 8 of user core. May 14 18:09:20.135500 systemd[1]: Started session-8.scope - Session 8 of User core. May 14 18:09:20.645287 sshd[3895]: Connection closed by 139.178.89.65 port 45764 May 14 18:09:20.645946 sshd-session[3893]: pam_unix(sshd:session): session closed for user core May 14 18:09:20.653168 systemd[1]: sshd@7-164.92.104.130:22-139.178.89.65:45764.service: Deactivated successfully. May 14 18:09:20.655759 systemd[1]: session-8.scope: Deactivated successfully. May 14 18:09:20.658034 systemd-logind[1497]: Session 8 logged out. Waiting for processes to exit. May 14 18:09:20.659856 systemd-logind[1497]: Removed session 8. 
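The SHA256:I6v76... token in each "Accepted publickey" line above is the unpadded base64 SHA-256 digest of the client's public key blob, the same value ssh-keygen -lf prints. A sketch that generates a throwaway key purely to demonstrate the format, assuming golang.org/x/crypto/ssh:

    package main

    import (
        "crypto/ed25519"
        "crypto/rand"
        "fmt"
        "log"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        // Throwaway key just to show the fingerprint format.
        pub, _, err := ed25519.GenerateKey(rand.Reader)
        if err != nil {
            log.Fatal(err)
        }
        sshPub, err := ssh.NewPublicKey(pub)
        if err != nil {
            log.Fatal(err)
        }
        // Unpadded base64 of SHA-256 over the wire-format key blob,
        // printed as "SHA256:..." just like the sshd lines above.
        fmt.Println(ssh.FingerprintSHA256(sshPub))
    }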
May 14 18:09:20.914788 kubelet[2769]: I0514 18:09:20.914610 2769 eviction_manager.go:366] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 14 18:09:20.914788 kubelet[2769]: I0514 18:09:20.914671 2769 container_gc.go:88] "Attempting to delete unused containers" May 14 18:09:20.920376 kubelet[2769]: I0514 18:09:20.920334 2769 image_gc_manager.go:404] "Attempting to delete unused images" May 14 18:09:20.939396 kubelet[2769]: I0514 18:09:20.939360 2769 eviction_manager.go:377] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 14 18:09:20.939563 kubelet[2769]: I0514 18:09:20.939458 2769 eviction_manager.go:395] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-599987898-gx2xs","kube-system/cilium-8h2lp","kube-system/kube-controller-manager-ci-4334.0.0-a-9d82e253c5","kube-system/kube-proxy-lg268","kube-system/kube-apiserver-ci-4334.0.0-a-9d82e253c5","kube-system/kube-scheduler-ci-4334.0.0-a-9d82e253c5"] May 14 18:09:20.939563 kubelet[2769]: E0514 18:09:20.939502 2769 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-599987898-gx2xs" May 14 18:09:20.939563 kubelet[2769]: E0514 18:09:20.939515 2769 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-8h2lp" May 14 18:09:20.939563 kubelet[2769]: E0514 18:09:20.939528 2769 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-ci-4334.0.0-a-9d82e253c5" May 14 18:09:20.939563 kubelet[2769]: E0514 18:09:20.939536 2769 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-lg268" May 14 18:09:20.939563 kubelet[2769]: E0514 18:09:20.939545 2769 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-ci-4334.0.0-a-9d82e253c5" May 14 18:09:20.939563 kubelet[2769]: E0514 18:09:20.939554 2769 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-ci-4334.0.0-a-9d82e253c5" May 14 18:09:20.939563 kubelet[2769]: I0514 18:09:20.939564 2769 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" May 14 18:09:25.668299 systemd[1]: Started sshd@8-164.92.104.130:22-139.178.89.65:45768.service - OpenSSH per-connection server daemon (139.178.89.65:45768). May 14 18:09:25.746678 sshd[3908]: Accepted publickey for core from 139.178.89.65 port 45768 ssh2: RSA SHA256:I6v7602y95t0HxsKZunlpQRdbWqTS6jK7hLc8ah5Xaw May 14 18:09:25.748914 sshd-session[3908]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:09:25.757056 systemd-logind[1497]: New session 9 of user core. May 14 18:09:25.765557 systemd[1]: Started session-9.scope - Session 9 of User core. May 14 18:09:25.905692 sshd[3910]: Connection closed by 139.178.89.65 port 45768 May 14 18:09:25.906367 sshd-session[3908]: pam_unix(sshd:session): session closed for user core May 14 18:09:25.910456 systemd-logind[1497]: Session 9 logged out. Waiting for processes to exit. May 14 18:09:25.910796 systemd[1]: sshd@8-164.92.104.130:22-139.178.89.65:45768.service: Deactivated successfully. May 14 18:09:25.913471 systemd[1]: session-9.scope: Deactivated successfully. May 14 18:09:25.917504 systemd-logind[1497]: Removed session 9. 
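Roughly every ten seconds (the passes above land at :00, :10, :20, ...) the eviction manager decides ephemeral storage should be reclaimed, ranks the node's six pods, then skips every candidate because all of them are static or system-critical kube-system pods, ending each pass with "unable to evict any pods from the node". The guard is essentially a priority cutoff; a sketch of that filter, with the 2e9 system-critical threshold taken from upstream priority-class defaults:

    package main

    import "fmt"

    // Pod is a pared-down stand-in for the kubelet's view of a pod.
    type Pod struct {
        Name     string
        Static   bool  // static/mirror pods are always treated as critical
        Priority int32 // from the pod's priority class
    }

    // systemCriticalPriority mirrors the cutoff shared by the
    // system-cluster-critical and system-node-critical classes.
    const systemCriticalPriority = 2_000_000_000

    func isCritical(p Pod) bool {
        return p.Static || p.Priority >= systemCriticalPriority
    }

    func main() {
        ranked := []Pod{
            {Name: "kube-system/cilium-operator-599987898-gx2xs", Priority: 2_000_000_000},
            {Name: "kube-system/cilium-8h2lp", Priority: 2_000_001_000},
            {Name: "kube-system/kube-apiserver-ci-4334.0.0-a-9d82e253c5", Static: true},
        }
        evicted := 0
        for _, p := range ranked {
            if isCritical(p) {
                fmt.Println("cannot evict a critical pod:", p.Name)
                continue
            }
            fmt.Println("evicting:", p.Name)
            evicted++
            break // at most one eviction per housekeeping pass
        }
        if evicted == 0 {
            fmt.Println("unable to evict any pods from the node")
        }
    }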
May 14 18:09:30.921909 systemd[1]: Started sshd@9-164.92.104.130:22-139.178.89.65:41896.service - OpenSSH per-connection server daemon (139.178.89.65:41896). May 14 18:09:30.967745 kubelet[2769]: I0514 18:09:30.967239 2769 eviction_manager.go:366] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 14 18:09:30.967745 kubelet[2769]: I0514 18:09:30.967305 2769 container_gc.go:88] "Attempting to delete unused containers" May 14 18:09:30.971946 kubelet[2769]: I0514 18:09:30.970456 2769 image_gc_manager.go:404] "Attempting to delete unused images" May 14 18:09:31.003550 sshd[3925]: Accepted publickey for core from 139.178.89.65 port 41896 ssh2: RSA SHA256:I6v7602y95t0HxsKZunlpQRdbWqTS6jK7hLc8ah5Xaw May 14 18:09:31.006186 sshd-session[3925]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:09:31.021304 systemd-logind[1497]: New session 10 of user core. May 14 18:09:31.026728 systemd[1]: Started session-10.scope - Session 10 of User core. May 14 18:09:31.035677 kubelet[2769]: I0514 18:09:31.034610 2769 eviction_manager.go:377] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 14 18:09:31.035677 kubelet[2769]: I0514 18:09:31.035476 2769 eviction_manager.go:395] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-599987898-gx2xs","kube-system/cilium-8h2lp","kube-system/kube-proxy-lg268","kube-system/kube-controller-manager-ci-4334.0.0-a-9d82e253c5","kube-system/kube-apiserver-ci-4334.0.0-a-9d82e253c5","kube-system/kube-scheduler-ci-4334.0.0-a-9d82e253c5"] May 14 18:09:31.035677 kubelet[2769]: E0514 18:09:31.035557 2769 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-599987898-gx2xs" May 14 18:09:31.035677 kubelet[2769]: E0514 18:09:31.035578 2769 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-8h2lp" May 14 18:09:31.035677 kubelet[2769]: E0514 18:09:31.035591 2769 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-lg268" May 14 18:09:31.035677 kubelet[2769]: E0514 18:09:31.035610 2769 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-ci-4334.0.0-a-9d82e253c5" May 14 18:09:31.035677 kubelet[2769]: E0514 18:09:31.035624 2769 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-ci-4334.0.0-a-9d82e253c5" May 14 18:09:31.035677 kubelet[2769]: E0514 18:09:31.035636 2769 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-ci-4334.0.0-a-9d82e253c5" May 14 18:09:31.035677 kubelet[2769]: I0514 18:09:31.035651 2769 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" May 14 18:09:31.187318 sshd[3927]: Connection closed by 139.178.89.65 port 41896 May 14 18:09:31.188915 sshd-session[3925]: pam_unix(sshd:session): session closed for user core May 14 18:09:31.195654 systemd[1]: sshd@9-164.92.104.130:22-139.178.89.65:41896.service: Deactivated successfully. May 14 18:09:31.199078 systemd[1]: session-10.scope: Deactivated successfully. May 14 18:09:31.204218 systemd-logind[1497]: Session 10 logged out. Waiting for processes to exit. May 14 18:09:31.206861 systemd-logind[1497]: Removed session 10. 
May 14 18:09:36.205718 systemd[1]: Started sshd@10-164.92.104.130:22-139.178.89.65:41904.service - OpenSSH per-connection server daemon (139.178.89.65:41904). May 14 18:09:36.296980 sshd[3940]: Accepted publickey for core from 139.178.89.65 port 41904 ssh2: RSA SHA256:I6v7602y95t0HxsKZunlpQRdbWqTS6jK7hLc8ah5Xaw May 14 18:09:36.300218 sshd-session[3940]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:09:36.309642 systemd-logind[1497]: New session 11 of user core. May 14 18:09:36.316537 systemd[1]: Started session-11.scope - Session 11 of User core. May 14 18:09:36.503674 sshd[3942]: Connection closed by 139.178.89.65 port 41904 May 14 18:09:36.504022 sshd-session[3940]: pam_unix(sshd:session): session closed for user core May 14 18:09:36.518730 systemd[1]: sshd@10-164.92.104.130:22-139.178.89.65:41904.service: Deactivated successfully. May 14 18:09:36.522701 systemd[1]: session-11.scope: Deactivated successfully. May 14 18:09:36.524265 systemd-logind[1497]: Session 11 logged out. Waiting for processes to exit. May 14 18:09:36.530698 systemd[1]: Started sshd@11-164.92.104.130:22-139.178.89.65:32834.service - OpenSSH per-connection server daemon (139.178.89.65:32834). May 14 18:09:36.532153 systemd-logind[1497]: Removed session 11. May 14 18:09:36.614289 sshd[3954]: Accepted publickey for core from 139.178.89.65 port 32834 ssh2: RSA SHA256:I6v7602y95t0HxsKZunlpQRdbWqTS6jK7hLc8ah5Xaw May 14 18:09:36.616475 sshd-session[3954]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:09:36.626413 systemd-logind[1497]: New session 12 of user core. May 14 18:09:36.632544 systemd[1]: Started session-12.scope - Session 12 of User core. May 14 18:09:36.854188 sshd[3956]: Connection closed by 139.178.89.65 port 32834 May 14 18:09:36.858509 sshd-session[3954]: pam_unix(sshd:session): session closed for user core May 14 18:09:36.871469 systemd[1]: sshd@11-164.92.104.130:22-139.178.89.65:32834.service: Deactivated successfully. May 14 18:09:36.877225 systemd[1]: session-12.scope: Deactivated successfully. May 14 18:09:36.880468 systemd-logind[1497]: Session 12 logged out. Waiting for processes to exit. May 14 18:09:36.889452 systemd-logind[1497]: Removed session 12. May 14 18:09:36.895769 systemd[1]: Started sshd@12-164.92.104.130:22-139.178.89.65:32838.service - OpenSSH per-connection server daemon (139.178.89.65:32838). May 14 18:09:37.002233 sshd[3966]: Accepted publickey for core from 139.178.89.65 port 32838 ssh2: RSA SHA256:I6v7602y95t0HxsKZunlpQRdbWqTS6jK7hLc8ah5Xaw May 14 18:09:37.003892 sshd-session[3966]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:09:37.016903 systemd-logind[1497]: New session 13 of user core. May 14 18:09:37.024446 systemd[1]: Started session-13.scope - Session 13 of User core. May 14 18:09:37.210104 sshd[3968]: Connection closed by 139.178.89.65 port 32838 May 14 18:09:37.210973 sshd-session[3966]: pam_unix(sshd:session): session closed for user core May 14 18:09:37.218391 systemd[1]: sshd@12-164.92.104.130:22-139.178.89.65:32838.service: Deactivated successfully. May 14 18:09:37.222911 systemd[1]: session-13.scope: Deactivated successfully. May 14 18:09:37.225354 systemd-logind[1497]: Session 13 logged out. Waiting for processes to exit. May 14 18:09:37.228367 systemd-logind[1497]: Removed session 13. 
May 14 18:09:40.345514 kubelet[2769]: E0514 18:09:40.345453 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 14 18:09:41.059282 kubelet[2769]: I0514 18:09:41.059233 2769 eviction_manager.go:366] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 14 18:09:41.059282 kubelet[2769]: I0514 18:09:41.059296 2769 eviction_manager.go:377] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 14 18:09:41.059507 kubelet[2769]: I0514 18:09:41.059438 2769 eviction_manager.go:395] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-599987898-gx2xs","kube-system/cilium-8h2lp","kube-system/kube-controller-manager-ci-4334.0.0-a-9d82e253c5","kube-system/kube-proxy-lg268","kube-system/kube-apiserver-ci-4334.0.0-a-9d82e253c5","kube-system/kube-scheduler-ci-4334.0.0-a-9d82e253c5"] May 14 18:09:41.059507 kubelet[2769]: E0514 18:09:41.059498 2769 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-599987898-gx2xs" May 14 18:09:41.059586 kubelet[2769]: E0514 18:09:41.059519 2769 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-8h2lp" May 14 18:09:41.059586 kubelet[2769]: E0514 18:09:41.059537 2769 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-ci-4334.0.0-a-9d82e253c5" May 14 18:09:41.059586 kubelet[2769]: E0514 18:09:41.059551 2769 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-lg268" May 14 18:09:41.059586 kubelet[2769]: E0514 18:09:41.059566 2769 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-ci-4334.0.0-a-9d82e253c5" May 14 18:09:41.059586 kubelet[2769]: E0514 18:09:41.059581 2769 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-ci-4334.0.0-a-9d82e253c5" May 14 18:09:41.059767 kubelet[2769]: I0514 18:09:41.059599 2769 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" May 14 18:09:42.229542 systemd[1]: Started sshd@13-164.92.104.130:22-139.178.89.65:32840.service - OpenSSH per-connection server daemon (139.178.89.65:32840). May 14 18:09:42.304835 sshd[3982]: Accepted publickey for core from 139.178.89.65 port 32840 ssh2: RSA SHA256:I6v7602y95t0HxsKZunlpQRdbWqTS6jK7hLc8ah5Xaw May 14 18:09:42.307447 sshd-session[3982]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:09:42.315471 systemd-logind[1497]: New session 14 of user core. May 14 18:09:42.322586 systemd[1]: Started session-14.scope - Session 14 of User core. May 14 18:09:42.480554 sshd[3984]: Connection closed by 139.178.89.65 port 32840 May 14 18:09:42.482114 sshd-session[3982]: pam_unix(sshd:session): session closed for user core May 14 18:09:42.488257 systemd[1]: sshd@13-164.92.104.130:22-139.178.89.65:32840.service: Deactivated successfully. May 14 18:09:42.492091 systemd[1]: session-14.scope: Deactivated successfully. May 14 18:09:42.494023 systemd-logind[1497]: Session 14 logged out. Waiting for processes to exit. May 14 18:09:42.497215 systemd-logind[1497]: Removed session 14. 
May 14 18:09:47.496818 systemd[1]: Started sshd@14-164.92.104.130:22-139.178.89.65:59080.service - OpenSSH per-connection server daemon (139.178.89.65:59080). May 14 18:09:47.574547 sshd[4004]: Accepted publickey for core from 139.178.89.65 port 59080 ssh2: RSA SHA256:I6v7602y95t0HxsKZunlpQRdbWqTS6jK7hLc8ah5Xaw May 14 18:09:47.576372 sshd-session[4004]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:09:47.582364 systemd-logind[1497]: New session 15 of user core. May 14 18:09:47.587438 systemd[1]: Started session-15.scope - Session 15 of User core. May 14 18:09:47.740685 sshd[4006]: Connection closed by 139.178.89.65 port 59080 May 14 18:09:47.741374 sshd-session[4004]: pam_unix(sshd:session): session closed for user core May 14 18:09:47.746683 systemd-logind[1497]: Session 15 logged out. Waiting for processes to exit. May 14 18:09:47.747804 systemd[1]: sshd@14-164.92.104.130:22-139.178.89.65:59080.service: Deactivated successfully. May 14 18:09:47.750625 systemd[1]: session-15.scope: Deactivated successfully. May 14 18:09:47.753400 systemd-logind[1497]: Removed session 15. May 14 18:09:51.077225 kubelet[2769]: I0514 18:09:51.077057 2769 eviction_manager.go:366] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 14 18:09:51.077225 kubelet[2769]: I0514 18:09:51.077104 2769 container_gc.go:88] "Attempting to delete unused containers" May 14 18:09:51.080192 kubelet[2769]: I0514 18:09:51.080096 2769 image_gc_manager.go:404] "Attempting to delete unused images" May 14 18:09:51.094220 kubelet[2769]: I0514 18:09:51.094021 2769 eviction_manager.go:377] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 14 18:09:51.094220 kubelet[2769]: I0514 18:09:51.094145 2769 eviction_manager.go:395] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-599987898-gx2xs","kube-system/cilium-8h2lp","kube-system/kube-controller-manager-ci-4334.0.0-a-9d82e253c5","kube-system/kube-proxy-lg268","kube-system/kube-apiserver-ci-4334.0.0-a-9d82e253c5","kube-system/kube-scheduler-ci-4334.0.0-a-9d82e253c5"] May 14 18:09:51.094220 kubelet[2769]: E0514 18:09:51.094185 2769 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-599987898-gx2xs" May 14 18:09:51.094540 kubelet[2769]: E0514 18:09:51.094452 2769 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-8h2lp" May 14 18:09:51.094540 kubelet[2769]: E0514 18:09:51.094484 2769 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-ci-4334.0.0-a-9d82e253c5" May 14 18:09:51.094540 kubelet[2769]: E0514 18:09:51.094494 2769 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-lg268" May 14 18:09:51.094540 kubelet[2769]: E0514 18:09:51.094503 2769 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-ci-4334.0.0-a-9d82e253c5" May 14 18:09:51.094540 kubelet[2769]: E0514 18:09:51.094513 2769 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-ci-4334.0.0-a-9d82e253c5" May 14 18:09:51.094540 kubelet[2769]: I0514 18:09:51.094527 2769 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" May 14 18:09:52.763444 systemd[1]: Started sshd@15-164.92.104.130:22-139.178.89.65:59084.service - OpenSSH per-connection server daemon 
(139.178.89.65:59084). May 14 18:09:52.844247 sshd[4021]: Accepted publickey for core from 139.178.89.65 port 59084 ssh2: RSA SHA256:I6v7602y95t0HxsKZunlpQRdbWqTS6jK7hLc8ah5Xaw May 14 18:09:52.847068 sshd-session[4021]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:09:52.856131 systemd-logind[1497]: New session 16 of user core. May 14 18:09:52.860528 systemd[1]: Started session-16.scope - Session 16 of User core. May 14 18:09:53.058144 sshd[4023]: Connection closed by 139.178.89.65 port 59084 May 14 18:09:53.059923 sshd-session[4021]: pam_unix(sshd:session): session closed for user core May 14 18:09:53.074814 systemd[1]: sshd@15-164.92.104.130:22-139.178.89.65:59084.service: Deactivated successfully. May 14 18:09:53.078271 systemd[1]: session-16.scope: Deactivated successfully. May 14 18:09:53.080127 systemd-logind[1497]: Session 16 logged out. Waiting for processes to exit. May 14 18:09:53.085615 systemd[1]: Started sshd@16-164.92.104.130:22-139.178.89.65:59088.service - OpenSSH per-connection server daemon (139.178.89.65:59088). May 14 18:09:53.087116 systemd-logind[1497]: Removed session 16. May 14 18:09:53.190920 sshd[4035]: Accepted publickey for core from 139.178.89.65 port 59088 ssh2: RSA SHA256:I6v7602y95t0HxsKZunlpQRdbWqTS6jK7hLc8ah5Xaw May 14 18:09:53.193510 sshd-session[4035]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:09:53.200413 systemd-logind[1497]: New session 17 of user core. May 14 18:09:53.207764 systemd[1]: Started session-17.scope - Session 17 of User core. May 14 18:09:53.590072 sshd[4037]: Connection closed by 139.178.89.65 port 59088 May 14 18:09:53.591692 sshd-session[4035]: pam_unix(sshd:session): session closed for user core May 14 18:09:53.603431 systemd[1]: sshd@16-164.92.104.130:22-139.178.89.65:59088.service: Deactivated successfully. May 14 18:09:53.608073 systemd[1]: session-17.scope: Deactivated successfully. May 14 18:09:53.609883 systemd-logind[1497]: Session 17 logged out. Waiting for processes to exit. May 14 18:09:53.617173 systemd[1]: Started sshd@17-164.92.104.130:22-139.178.89.65:59098.service - OpenSSH per-connection server daemon (139.178.89.65:59098). May 14 18:09:53.619637 systemd-logind[1497]: Removed session 17. May 14 18:09:53.695457 sshd[4047]: Accepted publickey for core from 139.178.89.65 port 59098 ssh2: RSA SHA256:I6v7602y95t0HxsKZunlpQRdbWqTS6jK7hLc8ah5Xaw May 14 18:09:53.698880 sshd-session[4047]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:09:53.707083 systemd-logind[1497]: New session 18 of user core. May 14 18:09:53.716594 systemd[1]: Started session-18.scope - Session 18 of User core. May 14 18:09:55.710117 sshd[4049]: Connection closed by 139.178.89.65 port 59098 May 14 18:09:55.710818 sshd-session[4047]: pam_unix(sshd:session): session closed for user core May 14 18:09:55.729143 systemd[1]: sshd@17-164.92.104.130:22-139.178.89.65:59098.service: Deactivated successfully. May 14 18:09:55.735414 systemd[1]: session-18.scope: Deactivated successfully. May 14 18:09:55.735902 systemd[1]: session-18.scope: Consumed 701ms CPU time, 64.4M memory peak. May 14 18:09:55.737022 systemd-logind[1497]: Session 18 logged out. Waiting for processes to exit. May 14 18:09:55.745815 systemd[1]: Started sshd@18-164.92.104.130:22-139.178.89.65:59100.service - OpenSSH per-connection server daemon (139.178.89.65:59100). May 14 18:09:55.753342 systemd-logind[1497]: Removed session 18. 
May 14 18:09:55.855609 sshd[4067]: Accepted publickey for core from 139.178.89.65 port 59100 ssh2: RSA SHA256:I6v7602y95t0HxsKZunlpQRdbWqTS6jK7hLc8ah5Xaw May 14 18:09:55.857944 sshd-session[4067]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:09:55.866549 systemd-logind[1497]: New session 19 of user core. May 14 18:09:55.876563 systemd[1]: Started session-19.scope - Session 19 of User core. May 14 18:09:56.202593 sshd[4069]: Connection closed by 139.178.89.65 port 59100 May 14 18:09:56.203628 sshd-session[4067]: pam_unix(sshd:session): session closed for user core May 14 18:09:56.217862 systemd[1]: sshd@18-164.92.104.130:22-139.178.89.65:59100.service: Deactivated successfully. May 14 18:09:56.222530 systemd[1]: session-19.scope: Deactivated successfully. May 14 18:09:56.223925 systemd-logind[1497]: Session 19 logged out. Waiting for processes to exit. May 14 18:09:56.229601 systemd[1]: Started sshd@19-164.92.104.130:22-139.178.89.65:59102.service - OpenSSH per-connection server daemon (139.178.89.65:59102). May 14 18:09:56.233928 systemd-logind[1497]: Removed session 19. May 14 18:09:56.312381 sshd[4079]: Accepted publickey for core from 139.178.89.65 port 59102 ssh2: RSA SHA256:I6v7602y95t0HxsKZunlpQRdbWqTS6jK7hLc8ah5Xaw May 14 18:09:56.314504 sshd-session[4079]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:09:56.322320 systemd-logind[1497]: New session 20 of user core. May 14 18:09:56.329573 systemd[1]: Started session-20.scope - Session 20 of User core. May 14 18:09:56.471267 sshd[4081]: Connection closed by 139.178.89.65 port 59102 May 14 18:09:56.472495 sshd-session[4079]: pam_unix(sshd:session): session closed for user core May 14 18:09:56.480029 systemd[1]: sshd@19-164.92.104.130:22-139.178.89.65:59102.service: Deactivated successfully. May 14 18:09:56.483758 systemd[1]: session-20.scope: Deactivated successfully. May 14 18:09:56.485650 systemd-logind[1497]: Session 20 logged out. Waiting for processes to exit. May 14 18:09:56.488265 systemd-logind[1497]: Removed session 20. 
May 14 18:09:57.344645 kubelet[2769]: E0514 18:09:57.344594 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 14 18:10:01.119661 kubelet[2769]: I0514 18:10:01.119601 2769 eviction_manager.go:366] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 14 18:10:01.121024 kubelet[2769]: I0514 18:10:01.120276 2769 eviction_manager.go:377] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 14 18:10:01.121024 kubelet[2769]: I0514 18:10:01.120862 2769 eviction_manager.go:395] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-599987898-gx2xs","kube-system/cilium-8h2lp","kube-system/kube-proxy-lg268","kube-system/kube-controller-manager-ci-4334.0.0-a-9d82e253c5","kube-system/kube-apiserver-ci-4334.0.0-a-9d82e253c5","kube-system/kube-scheduler-ci-4334.0.0-a-9d82e253c5"] May 14 18:10:01.121024 kubelet[2769]: E0514 18:10:01.120918 2769 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-599987898-gx2xs" May 14 18:10:01.121024 kubelet[2769]: E0514 18:10:01.120935 2769 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-8h2lp" May 14 18:10:01.121024 kubelet[2769]: E0514 18:10:01.120950 2769 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-lg268" May 14 18:10:01.121024 kubelet[2769]: E0514 18:10:01.120962 2769 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-ci-4334.0.0-a-9d82e253c5" May 14 18:10:01.121024 kubelet[2769]: E0514 18:10:01.120975 2769 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-ci-4334.0.0-a-9d82e253c5" May 14 18:10:01.121024 kubelet[2769]: E0514 18:10:01.120987 2769 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-ci-4334.0.0-a-9d82e253c5" May 14 18:10:01.121024 kubelet[2769]: I0514 18:10:01.121001 2769 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" May 14 18:10:01.494604 systemd[1]: Started sshd@20-164.92.104.130:22-139.178.89.65:50668.service - OpenSSH per-connection server daemon (139.178.89.65:50668). May 14 18:10:01.586172 sshd[4093]: Accepted publickey for core from 139.178.89.65 port 50668 ssh2: RSA SHA256:I6v7602y95t0HxsKZunlpQRdbWqTS6jK7hLc8ah5Xaw May 14 18:10:01.588903 sshd-session[4093]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:10:01.600581 systemd-logind[1497]: New session 21 of user core. May 14 18:10:01.612962 systemd[1]: Started session-21.scope - Session 21 of User core. May 14 18:10:01.841711 sshd[4095]: Connection closed by 139.178.89.65 port 50668 May 14 18:10:01.844664 sshd-session[4093]: pam_unix(sshd:session): session closed for user core May 14 18:10:01.851143 systemd[1]: sshd@20-164.92.104.130:22-139.178.89.65:50668.service: Deactivated successfully. May 14 18:10:01.859036 systemd[1]: session-21.scope: Deactivated successfully. May 14 18:10:01.864564 systemd-logind[1497]: Session 21 logged out. Waiting for processes to exit. May 14 18:10:01.867509 systemd-logind[1497]: Removed session 21. 
May 14 18:10:06.344449 kubelet[2769]: E0514 18:10:06.344377 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 14 18:10:06.862514 systemd[1]: Started sshd@21-164.92.104.130:22-139.178.89.65:59496.service - OpenSSH per-connection server daemon (139.178.89.65:59496). May 14 18:10:06.954568 sshd[4110]: Accepted publickey for core from 139.178.89.65 port 59496 ssh2: RSA SHA256:I6v7602y95t0HxsKZunlpQRdbWqTS6jK7hLc8ah5Xaw May 14 18:10:06.958909 sshd-session[4110]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:10:06.971245 systemd-logind[1497]: New session 22 of user core. May 14 18:10:06.977530 systemd[1]: Started session-22.scope - Session 22 of User core. May 14 18:10:07.155932 sshd[4112]: Connection closed by 139.178.89.65 port 59496 May 14 18:10:07.156595 sshd-session[4110]: pam_unix(sshd:session): session closed for user core May 14 18:10:07.165412 systemd[1]: sshd@21-164.92.104.130:22-139.178.89.65:59496.service: Deactivated successfully. May 14 18:10:07.169564 systemd[1]: session-22.scope: Deactivated successfully. May 14 18:10:07.171743 systemd-logind[1497]: Session 22 logged out. Waiting for processes to exit. May 14 18:10:07.174128 systemd-logind[1497]: Removed session 22. May 14 18:10:11.146067 kubelet[2769]: I0514 18:10:11.146003 2769 eviction_manager.go:366] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 14 18:10:11.147354 kubelet[2769]: I0514 18:10:11.146309 2769 container_gc.go:88] "Attempting to delete unused containers" May 14 18:10:11.153886 kubelet[2769]: I0514 18:10:11.153736 2769 image_gc_manager.go:404] "Attempting to delete unused images" May 14 18:10:11.177157 kubelet[2769]: I0514 18:10:11.176875 2769 eviction_manager.go:377] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 14 18:10:11.177157 kubelet[2769]: I0514 18:10:11.177017 2769 eviction_manager.go:395] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-599987898-gx2xs","kube-system/cilium-8h2lp","kube-system/kube-controller-manager-ci-4334.0.0-a-9d82e253c5","kube-system/kube-proxy-lg268","kube-system/kube-apiserver-ci-4334.0.0-a-9d82e253c5","kube-system/kube-scheduler-ci-4334.0.0-a-9d82e253c5"] May 14 18:10:11.177157 kubelet[2769]: E0514 18:10:11.177071 2769 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-599987898-gx2xs" May 14 18:10:11.177157 kubelet[2769]: E0514 18:10:11.177086 2769 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-8h2lp" May 14 18:10:11.177157 kubelet[2769]: E0514 18:10:11.177096 2769 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-ci-4334.0.0-a-9d82e253c5" May 14 18:10:11.177157 kubelet[2769]: E0514 18:10:11.177107 2769 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-lg268" May 14 18:10:11.177157 kubelet[2769]: E0514 18:10:11.177116 2769 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-ci-4334.0.0-a-9d82e253c5" May 14 18:10:11.177157 kubelet[2769]: E0514 18:10:11.177124 2769 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-ci-4334.0.0-a-9d82e253c5" May 14 18:10:11.177157 kubelet[2769]: 
I0514 18:10:11.177134 2769 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" May 14 18:10:12.186931 systemd[1]: Started sshd@22-164.92.104.130:22-139.178.89.65:59500.service - OpenSSH per-connection server daemon (139.178.89.65:59500). May 14 18:10:12.265112 sshd[4124]: Accepted publickey for core from 139.178.89.65 port 59500 ssh2: RSA SHA256:I6v7602y95t0HxsKZunlpQRdbWqTS6jK7hLc8ah5Xaw May 14 18:10:12.267789 sshd-session[4124]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:10:12.276250 systemd-logind[1497]: New session 23 of user core. May 14 18:10:12.283533 systemd[1]: Started session-23.scope - Session 23 of User core. May 14 18:10:12.465592 sshd[4126]: Connection closed by 139.178.89.65 port 59500 May 14 18:10:12.467141 sshd-session[4124]: pam_unix(sshd:session): session closed for user core May 14 18:10:12.474177 systemd[1]: sshd@22-164.92.104.130:22-139.178.89.65:59500.service: Deactivated successfully. May 14 18:10:12.479134 systemd[1]: session-23.scope: Deactivated successfully. May 14 18:10:12.484617 systemd-logind[1497]: Session 23 logged out. Waiting for processes to exit. May 14 18:10:12.486815 systemd-logind[1497]: Removed session 23. May 14 18:10:14.345233 kubelet[2769]: E0514 18:10:14.344138 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 14 18:10:14.347344 kubelet[2769]: E0514 18:10:14.347310 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 14 18:10:17.497126 systemd[1]: Started sshd@23-164.92.104.130:22-139.178.89.65:49346.service - OpenSSH per-connection server daemon (139.178.89.65:49346). May 14 18:10:17.590091 sshd[4139]: Accepted publickey for core from 139.178.89.65 port 49346 ssh2: RSA SHA256:I6v7602y95t0HxsKZunlpQRdbWqTS6jK7hLc8ah5Xaw May 14 18:10:17.592455 sshd-session[4139]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:10:17.603241 systemd-logind[1497]: New session 24 of user core. May 14 18:10:17.611633 systemd[1]: Started session-24.scope - Session 24 of User core. May 14 18:10:17.786817 sshd[4141]: Connection closed by 139.178.89.65 port 49346 May 14 18:10:17.790307 sshd-session[4139]: pam_unix(sshd:session): session closed for user core May 14 18:10:17.806850 systemd[1]: sshd@23-164.92.104.130:22-139.178.89.65:49346.service: Deactivated successfully. May 14 18:10:17.812495 systemd[1]: session-24.scope: Deactivated successfully. May 14 18:10:17.814757 systemd-logind[1497]: Session 24 logged out. Waiting for processes to exit. May 14 18:10:17.823706 systemd[1]: Started sshd@24-164.92.104.130:22-139.178.89.65:49348.service - OpenSSH per-connection server daemon (139.178.89.65:49348). May 14 18:10:17.825892 systemd-logind[1497]: Removed session 24. May 14 18:10:17.915268 sshd[4153]: Accepted publickey for core from 139.178.89.65 port 49348 ssh2: RSA SHA256:I6v7602y95t0HxsKZunlpQRdbWqTS6jK7hLc8ah5Xaw May 14 18:10:17.919622 sshd-session[4153]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:10:17.944704 systemd-logind[1497]: New session 25 of user core. May 14 18:10:17.951794 systemd[1]: Started session-25.scope - Session 25 of User core. 
May 14 18:10:19.717680 containerd[1515]: time="2025-05-14T18:10:19.717621066Z" level=info msg="StopContainer for \"0eb3a72e002f85664258db94d38d0dcbd6f8f0a2cf0bf7bd3dfaf0cf830cfda3\" with timeout 30 (s)" May 14 18:10:19.719306 containerd[1515]: time="2025-05-14T18:10:19.718990208Z" level=info msg="Stop container \"0eb3a72e002f85664258db94d38d0dcbd6f8f0a2cf0bf7bd3dfaf0cf830cfda3\" with signal terminated" May 14 18:10:19.725166 containerd[1515]: time="2025-05-14T18:10:19.725104272Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 14 18:10:19.734027 containerd[1515]: time="2025-05-14T18:10:19.733961209Z" level=info msg="TaskExit event in podsandbox handler container_id:\"685ad51f1f44514a4b4052420b2c7927d0325efd4c3c30e12dc3c3fedd60cbed\" id:\"059e6550cbe6bc408a87517f995075565c65cff49e29436f23a1e76662849d35\" pid:4174 exited_at:{seconds:1747246219 nanos:732849452}" May 14 18:10:19.737919 containerd[1515]: time="2025-05-14T18:10:19.737860280Z" level=info msg="StopContainer for \"685ad51f1f44514a4b4052420b2c7927d0325efd4c3c30e12dc3c3fedd60cbed\" with timeout 2 (s)" May 14 18:10:19.738797 containerd[1515]: time="2025-05-14T18:10:19.738733271Z" level=info msg="Stop container \"685ad51f1f44514a4b4052420b2c7927d0325efd4c3c30e12dc3c3fedd60cbed\" with signal terminated" May 14 18:10:19.755936 systemd[1]: cri-containerd-0eb3a72e002f85664258db94d38d0dcbd6f8f0a2cf0bf7bd3dfaf0cf830cfda3.scope: Deactivated successfully. May 14 18:10:19.756308 systemd[1]: cri-containerd-0eb3a72e002f85664258db94d38d0dcbd6f8f0a2cf0bf7bd3dfaf0cf830cfda3.scope: Consumed 513ms CPU time, 27.4M memory peak, 2M read from disk, 4K written to disk. May 14 18:10:19.757992 systemd-networkd[1459]: lxc_health: Link DOWN May 14 18:10:19.758003 systemd-networkd[1459]: lxc_health: Lost carrier May 14 18:10:19.774189 containerd[1515]: time="2025-05-14T18:10:19.773993774Z" level=info msg="received exit event container_id:\"0eb3a72e002f85664258db94d38d0dcbd6f8f0a2cf0bf7bd3dfaf0cf830cfda3\" id:\"0eb3a72e002f85664258db94d38d0dcbd6f8f0a2cf0bf7bd3dfaf0cf830cfda3\" pid:3178 exited_at:{seconds:1747246219 nanos:771192270}" May 14 18:10:19.774629 containerd[1515]: time="2025-05-14T18:10:19.774598319Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0eb3a72e002f85664258db94d38d0dcbd6f8f0a2cf0bf7bd3dfaf0cf830cfda3\" id:\"0eb3a72e002f85664258db94d38d0dcbd6f8f0a2cf0bf7bd3dfaf0cf830cfda3\" pid:3178 exited_at:{seconds:1747246219 nanos:771192270}" May 14 18:10:19.795754 systemd[1]: cri-containerd-685ad51f1f44514a4b4052420b2c7927d0325efd4c3c30e12dc3c3fedd60cbed.scope: Deactivated successfully. May 14 18:10:19.796487 systemd[1]: cri-containerd-685ad51f1f44514a4b4052420b2c7927d0325efd4c3c30e12dc3c3fedd60cbed.scope: Consumed 9.283s CPU time, 152.1M memory peak, 36M read from disk, 13.3M written to disk. 
May 14 18:10:19.801259 containerd[1515]: time="2025-05-14T18:10:19.800925047Z" level=info msg="received exit event container_id:\"685ad51f1f44514a4b4052420b2c7927d0325efd4c3c30e12dc3c3fedd60cbed\" id:\"685ad51f1f44514a4b4052420b2c7927d0325efd4c3c30e12dc3c3fedd60cbed\" pid:3422 exited_at:{seconds:1747246219 nanos:800653181}" May 14 18:10:19.801811 containerd[1515]: time="2025-05-14T18:10:19.801776083Z" level=info msg="TaskExit event in podsandbox handler container_id:\"685ad51f1f44514a4b4052420b2c7927d0325efd4c3c30e12dc3c3fedd60cbed\" id:\"685ad51f1f44514a4b4052420b2c7927d0325efd4c3c30e12dc3c3fedd60cbed\" pid:3422 exited_at:{seconds:1747246219 nanos:800653181}" May 14 18:10:19.838301 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0eb3a72e002f85664258db94d38d0dcbd6f8f0a2cf0bf7bd3dfaf0cf830cfda3-rootfs.mount: Deactivated successfully. May 14 18:10:19.851504 containerd[1515]: time="2025-05-14T18:10:19.851418211Z" level=info msg="StopContainer for \"0eb3a72e002f85664258db94d38d0dcbd6f8f0a2cf0bf7bd3dfaf0cf830cfda3\" returns successfully" May 14 18:10:19.856470 containerd[1515]: time="2025-05-14T18:10:19.856133056Z" level=info msg="StopPodSandbox for \"ab41f4919e3067030bab691ab53c1898092fccba87414838d008cf0b73dded37\"" May 14 18:10:19.856627 containerd[1515]: time="2025-05-14T18:10:19.856587467Z" level=info msg="Container to stop \"0eb3a72e002f85664258db94d38d0dcbd6f8f0a2cf0bf7bd3dfaf0cf830cfda3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 14 18:10:19.862369 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-685ad51f1f44514a4b4052420b2c7927d0325efd4c3c30e12dc3c3fedd60cbed-rootfs.mount: Deactivated successfully. May 14 18:10:19.871899 containerd[1515]: time="2025-05-14T18:10:19.871836339Z" level=info msg="StopContainer for \"685ad51f1f44514a4b4052420b2c7927d0325efd4c3c30e12dc3c3fedd60cbed\" returns successfully" May 14 18:10:19.874875 containerd[1515]: time="2025-05-14T18:10:19.874399791Z" level=info msg="StopPodSandbox for \"ecf9032bd1adf17cd0839da58ac7766515d3316bba789ac77c242957a2661616\"" May 14 18:10:19.874875 containerd[1515]: time="2025-05-14T18:10:19.874509456Z" level=info msg="Container to stop \"68a557a4479d21f84ec27f1758240f9bf7fba44e93c8dfb482201113f6908622\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 14 18:10:19.874875 containerd[1515]: time="2025-05-14T18:10:19.874532474Z" level=info msg="Container to stop \"a4fefb97c6de6815ca41ab74e1da1d28d7100a8b8b2b2a1a7ec47e898e862bf4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 14 18:10:19.874875 containerd[1515]: time="2025-05-14T18:10:19.874547580Z" level=info msg="Container to stop \"a1cee0b6dc84af715112e1083752bc9138cf4a35bc8ce017c4256215438cafa2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 14 18:10:19.874875 containerd[1515]: time="2025-05-14T18:10:19.874560563Z" level=info msg="Container to stop \"9d9280d2972e9a9208c3c1f35636e18251d5d8b4edfb4e67f37d1f0d7cdac833\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 14 18:10:19.874875 containerd[1515]: time="2025-05-14T18:10:19.874572828Z" level=info msg="Container to stop \"685ad51f1f44514a4b4052420b2c7927d0325efd4c3c30e12dc3c3fedd60cbed\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 14 18:10:19.882370 systemd[1]: cri-containerd-ab41f4919e3067030bab691ab53c1898092fccba87414838d008cf0b73dded37.scope: Deactivated successfully. 
May 14 18:10:19.888488 containerd[1515]: time="2025-05-14T18:10:19.888366752Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ab41f4919e3067030bab691ab53c1898092fccba87414838d008cf0b73dded37\" id:\"ab41f4919e3067030bab691ab53c1898092fccba87414838d008cf0b73dded37\" pid:2896 exit_status:137 exited_at:{seconds:1747246219 nanos:886424351}" May 14 18:10:19.896795 systemd[1]: cri-containerd-ecf9032bd1adf17cd0839da58ac7766515d3316bba789ac77c242957a2661616.scope: Deactivated successfully. May 14 18:10:19.948611 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ecf9032bd1adf17cd0839da58ac7766515d3316bba789ac77c242957a2661616-rootfs.mount: Deactivated successfully. May 14 18:10:19.954790 containerd[1515]: time="2025-05-14T18:10:19.954736650Z" level=info msg="shim disconnected" id=ecf9032bd1adf17cd0839da58ac7766515d3316bba789ac77c242957a2661616 namespace=k8s.io May 14 18:10:19.954790 containerd[1515]: time="2025-05-14T18:10:19.954783410Z" level=warning msg="cleaning up after shim disconnected" id=ecf9032bd1adf17cd0839da58ac7766515d3316bba789ac77c242957a2661616 namespace=k8s.io May 14 18:10:19.971278 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ab41f4919e3067030bab691ab53c1898092fccba87414838d008cf0b73dded37-rootfs.mount: Deactivated successfully. May 14 18:10:19.980134 containerd[1515]: time="2025-05-14T18:10:19.954794849Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 14 18:10:19.981911 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ab41f4919e3067030bab691ab53c1898092fccba87414838d008cf0b73dded37-shm.mount: Deactivated successfully. May 14 18:10:19.996473 containerd[1515]: time="2025-05-14T18:10:19.976532321Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ecf9032bd1adf17cd0839da58ac7766515d3316bba789ac77c242957a2661616\" id:\"ecf9032bd1adf17cd0839da58ac7766515d3316bba789ac77c242957a2661616\" pid:2980 exit_status:137 exited_at:{seconds:1747246219 nanos:899675155}" May 14 18:10:19.998043 containerd[1515]: time="2025-05-14T18:10:19.997985959Z" level=info msg="received exit event sandbox_id:\"ecf9032bd1adf17cd0839da58ac7766515d3316bba789ac77c242957a2661616\" exit_status:137 exited_at:{seconds:1747246219 nanos:899675155}" May 14 18:10:20.001426 containerd[1515]: time="2025-05-14T18:10:20.001357873Z" level=info msg="received exit event sandbox_id:\"ab41f4919e3067030bab691ab53c1898092fccba87414838d008cf0b73dded37\" exit_status:137 exited_at:{seconds:1747246219 nanos:886424351}" May 14 18:10:20.002045 containerd[1515]: time="2025-05-14T18:10:20.001931308Z" level=info msg="shim disconnected" id=ab41f4919e3067030bab691ab53c1898092fccba87414838d008cf0b73dded37 namespace=k8s.io May 14 18:10:20.002045 containerd[1515]: time="2025-05-14T18:10:20.001971411Z" level=warning msg="cleaning up after shim disconnected" id=ab41f4919e3067030bab691ab53c1898092fccba87414838d008cf0b73dded37 namespace=k8s.io May 14 18:10:20.002045 containerd[1515]: time="2025-05-14T18:10:20.001983771Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 14 18:10:20.003409 containerd[1515]: time="2025-05-14T18:10:20.003320953Z" level=info msg="TearDown network for sandbox \"ecf9032bd1adf17cd0839da58ac7766515d3316bba789ac77c242957a2661616\" successfully" May 14 18:10:20.003409 containerd[1515]: time="2025-05-14T18:10:20.003356588Z" level=info msg="StopPodSandbox for \"ecf9032bd1adf17cd0839da58ac7766515d3316bba789ac77c242957a2661616\" returns successfully" May 14 18:10:20.003604 containerd[1515]: time="2025-05-14T18:10:20.003480929Z" 
level=info msg="TearDown network for sandbox \"ab41f4919e3067030bab691ab53c1898092fccba87414838d008cf0b73dded37\" successfully" May 14 18:10:20.003604 containerd[1515]: time="2025-05-14T18:10:20.003505308Z" level=info msg="StopPodSandbox for \"ab41f4919e3067030bab691ab53c1898092fccba87414838d008cf0b73dded37\" returns successfully" May 14 18:10:20.172847 kubelet[2769]: I0514 18:10:20.172791 2769 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/12d1ce81-a93d-4fac-a2e1-e28db1aee075-cilium-cgroup\") pod \"12d1ce81-a93d-4fac-a2e1-e28db1aee075\" (UID: \"12d1ce81-a93d-4fac-a2e1-e28db1aee075\") " May 14 18:10:20.173331 kubelet[2769]: I0514 18:10:20.173147 2769 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/12d1ce81-a93d-4fac-a2e1-e28db1aee075-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "12d1ce81-a93d-4fac-a2e1-e28db1aee075" (UID: "12d1ce81-a93d-4fac-a2e1-e28db1aee075"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 18:10:20.178316 kubelet[2769]: I0514 18:10:20.178245 2769 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4fk6\" (UniqueName: \"kubernetes.io/projected/2feb4d62-e49f-4cca-a180-457686b5c7c5-kube-api-access-s4fk6\") pod \"2feb4d62-e49f-4cca-a180-457686b5c7c5\" (UID: \"2feb4d62-e49f-4cca-a180-457686b5c7c5\") " May 14 18:10:20.178316 kubelet[2769]: I0514 18:10:20.178328 2769 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/12d1ce81-a93d-4fac-a2e1-e28db1aee075-clustermesh-secrets\") pod \"12d1ce81-a93d-4fac-a2e1-e28db1aee075\" (UID: \"12d1ce81-a93d-4fac-a2e1-e28db1aee075\") " May 14 18:10:20.178571 kubelet[2769]: I0514 18:10:20.178352 2769 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/12d1ce81-a93d-4fac-a2e1-e28db1aee075-hostproc\") pod \"12d1ce81-a93d-4fac-a2e1-e28db1aee075\" (UID: \"12d1ce81-a93d-4fac-a2e1-e28db1aee075\") " May 14 18:10:20.178571 kubelet[2769]: I0514 18:10:20.178377 2769 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/12d1ce81-a93d-4fac-a2e1-e28db1aee075-host-proc-sys-net\") pod \"12d1ce81-a93d-4fac-a2e1-e28db1aee075\" (UID: \"12d1ce81-a93d-4fac-a2e1-e28db1aee075\") " May 14 18:10:20.178571 kubelet[2769]: I0514 18:10:20.178397 2769 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/12d1ce81-a93d-4fac-a2e1-e28db1aee075-xtables-lock\") pod \"12d1ce81-a93d-4fac-a2e1-e28db1aee075\" (UID: \"12d1ce81-a93d-4fac-a2e1-e28db1aee075\") " May 14 18:10:20.178571 kubelet[2769]: I0514 18:10:20.178428 2769 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8xn5z\" (UniqueName: \"kubernetes.io/projected/12d1ce81-a93d-4fac-a2e1-e28db1aee075-kube-api-access-8xn5z\") pod \"12d1ce81-a93d-4fac-a2e1-e28db1aee075\" (UID: \"12d1ce81-a93d-4fac-a2e1-e28db1aee075\") " May 14 18:10:20.178571 kubelet[2769]: I0514 18:10:20.178449 2769 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/12d1ce81-a93d-4fac-a2e1-e28db1aee075-etc-cni-netd\") pod \"12d1ce81-a93d-4fac-a2e1-e28db1aee075\" (UID: 
\"12d1ce81-a93d-4fac-a2e1-e28db1aee075\") " May 14 18:10:20.178571 kubelet[2769]: I0514 18:10:20.178469 2769 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/12d1ce81-a93d-4fac-a2e1-e28db1aee075-lib-modules\") pod \"12d1ce81-a93d-4fac-a2e1-e28db1aee075\" (UID: \"12d1ce81-a93d-4fac-a2e1-e28db1aee075\") " May 14 18:10:20.178732 kubelet[2769]: I0514 18:10:20.178488 2769 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/12d1ce81-a93d-4fac-a2e1-e28db1aee075-host-proc-sys-kernel\") pod \"12d1ce81-a93d-4fac-a2e1-e28db1aee075\" (UID: \"12d1ce81-a93d-4fac-a2e1-e28db1aee075\") " May 14 18:10:20.178732 kubelet[2769]: I0514 18:10:20.178511 2769 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/12d1ce81-a93d-4fac-a2e1-e28db1aee075-hubble-tls\") pod \"12d1ce81-a93d-4fac-a2e1-e28db1aee075\" (UID: \"12d1ce81-a93d-4fac-a2e1-e28db1aee075\") " May 14 18:10:20.178732 kubelet[2769]: I0514 18:10:20.178534 2769 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/12d1ce81-a93d-4fac-a2e1-e28db1aee075-cilium-config-path\") pod \"12d1ce81-a93d-4fac-a2e1-e28db1aee075\" (UID: \"12d1ce81-a93d-4fac-a2e1-e28db1aee075\") " May 14 18:10:20.178732 kubelet[2769]: I0514 18:10:20.178559 2769 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/12d1ce81-a93d-4fac-a2e1-e28db1aee075-cilium-run\") pod \"12d1ce81-a93d-4fac-a2e1-e28db1aee075\" (UID: \"12d1ce81-a93d-4fac-a2e1-e28db1aee075\") " May 14 18:10:20.178732 kubelet[2769]: I0514 18:10:20.178579 2769 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/12d1ce81-a93d-4fac-a2e1-e28db1aee075-bpf-maps\") pod \"12d1ce81-a93d-4fac-a2e1-e28db1aee075\" (UID: \"12d1ce81-a93d-4fac-a2e1-e28db1aee075\") " May 14 18:10:20.178732 kubelet[2769]: I0514 18:10:20.178598 2769 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/12d1ce81-a93d-4fac-a2e1-e28db1aee075-cni-path\") pod \"12d1ce81-a93d-4fac-a2e1-e28db1aee075\" (UID: \"12d1ce81-a93d-4fac-a2e1-e28db1aee075\") " May 14 18:10:20.178900 kubelet[2769]: I0514 18:10:20.178622 2769 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2feb4d62-e49f-4cca-a180-457686b5c7c5-cilium-config-path\") pod \"2feb4d62-e49f-4cca-a180-457686b5c7c5\" (UID: \"2feb4d62-e49f-4cca-a180-457686b5c7c5\") " May 14 18:10:20.178900 kubelet[2769]: I0514 18:10:20.178680 2769 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/12d1ce81-a93d-4fac-a2e1-e28db1aee075-cilium-cgroup\") on node \"ci-4334.0.0-a-9d82e253c5\" DevicePath \"\"" May 14 18:10:20.179098 kubelet[2769]: I0514 18:10:20.179055 2769 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/12d1ce81-a93d-4fac-a2e1-e28db1aee075-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "12d1ce81-a93d-4fac-a2e1-e28db1aee075" (UID: "12d1ce81-a93d-4fac-a2e1-e28db1aee075"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 18:10:20.182873 kubelet[2769]: I0514 18:10:20.181919 2769 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2feb4d62-e49f-4cca-a180-457686b5c7c5-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "2feb4d62-e49f-4cca-a180-457686b5c7c5" (UID: "2feb4d62-e49f-4cca-a180-457686b5c7c5"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 14 18:10:20.182873 kubelet[2769]: I0514 18:10:20.182022 2769 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/12d1ce81-a93d-4fac-a2e1-e28db1aee075-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "12d1ce81-a93d-4fac-a2e1-e28db1aee075" (UID: "12d1ce81-a93d-4fac-a2e1-e28db1aee075"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 18:10:20.182873 kubelet[2769]: I0514 18:10:20.182053 2769 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/12d1ce81-a93d-4fac-a2e1-e28db1aee075-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "12d1ce81-a93d-4fac-a2e1-e28db1aee075" (UID: "12d1ce81-a93d-4fac-a2e1-e28db1aee075"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 18:10:20.184041 kubelet[2769]: I0514 18:10:20.183987 2769 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2feb4d62-e49f-4cca-a180-457686b5c7c5-kube-api-access-s4fk6" (OuterVolumeSpecName: "kube-api-access-s4fk6") pod "2feb4d62-e49f-4cca-a180-457686b5c7c5" (UID: "2feb4d62-e49f-4cca-a180-457686b5c7c5"). InnerVolumeSpecName "kube-api-access-s4fk6". PluginName "kubernetes.io/projected", VolumeGidValue "" May 14 18:10:20.187259 kubelet[2769]: I0514 18:10:20.186606 2769 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12d1ce81-a93d-4fac-a2e1-e28db1aee075-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "12d1ce81-a93d-4fac-a2e1-e28db1aee075" (UID: "12d1ce81-a93d-4fac-a2e1-e28db1aee075"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" May 14 18:10:20.188186 kubelet[2769]: I0514 18:10:20.188152 2769 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/12d1ce81-a93d-4fac-a2e1-e28db1aee075-hostproc" (OuterVolumeSpecName: "hostproc") pod "12d1ce81-a93d-4fac-a2e1-e28db1aee075" (UID: "12d1ce81-a93d-4fac-a2e1-e28db1aee075"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 18:10:20.188360 kubelet[2769]: I0514 18:10:20.188347 2769 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/12d1ce81-a93d-4fac-a2e1-e28db1aee075-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "12d1ce81-a93d-4fac-a2e1-e28db1aee075" (UID: "12d1ce81-a93d-4fac-a2e1-e28db1aee075"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 18:10:20.188460 kubelet[2769]: I0514 18:10:20.188423 2769 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/12d1ce81-a93d-4fac-a2e1-e28db1aee075-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "12d1ce81-a93d-4fac-a2e1-e28db1aee075" (UID: "12d1ce81-a93d-4fac-a2e1-e28db1aee075"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 18:10:20.189468 kubelet[2769]: I0514 18:10:20.189428 2769 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/12d1ce81-a93d-4fac-a2e1-e28db1aee075-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "12d1ce81-a93d-4fac-a2e1-e28db1aee075" (UID: "12d1ce81-a93d-4fac-a2e1-e28db1aee075"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 14 18:10:20.189590 kubelet[2769]: I0514 18:10:20.189498 2769 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/12d1ce81-a93d-4fac-a2e1-e28db1aee075-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "12d1ce81-a93d-4fac-a2e1-e28db1aee075" (UID: "12d1ce81-a93d-4fac-a2e1-e28db1aee075"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 18:10:20.189590 kubelet[2769]: I0514 18:10:20.189524 2769 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/12d1ce81-a93d-4fac-a2e1-e28db1aee075-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "12d1ce81-a93d-4fac-a2e1-e28db1aee075" (UID: "12d1ce81-a93d-4fac-a2e1-e28db1aee075"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 18:10:20.189590 kubelet[2769]: I0514 18:10:20.189542 2769 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/12d1ce81-a93d-4fac-a2e1-e28db1aee075-cni-path" (OuterVolumeSpecName: "cni-path") pod "12d1ce81-a93d-4fac-a2e1-e28db1aee075" (UID: "12d1ce81-a93d-4fac-a2e1-e28db1aee075"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 18:10:20.189693 kubelet[2769]: I0514 18:10:20.189633 2769 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12d1ce81-a93d-4fac-a2e1-e28db1aee075-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "12d1ce81-a93d-4fac-a2e1-e28db1aee075" (UID: "12d1ce81-a93d-4fac-a2e1-e28db1aee075"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" May 14 18:10:20.191963 kubelet[2769]: I0514 18:10:20.191908 2769 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12d1ce81-a93d-4fac-a2e1-e28db1aee075-kube-api-access-8xn5z" (OuterVolumeSpecName: "kube-api-access-8xn5z") pod "12d1ce81-a93d-4fac-a2e1-e28db1aee075" (UID: "12d1ce81-a93d-4fac-a2e1-e28db1aee075"). InnerVolumeSpecName "kube-api-access-8xn5z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" May 14 18:10:20.279888 kubelet[2769]: I0514 18:10:20.279686 2769 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/12d1ce81-a93d-4fac-a2e1-e28db1aee075-cilium-run\") on node \"ci-4334.0.0-a-9d82e253c5\" DevicePath \"\"" May 14 18:10:20.279888 kubelet[2769]: I0514 18:10:20.279729 2769 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/12d1ce81-a93d-4fac-a2e1-e28db1aee075-bpf-maps\") on node \"ci-4334.0.0-a-9d82e253c5\" DevicePath \"\"" May 14 18:10:20.279888 kubelet[2769]: I0514 18:10:20.279740 2769 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/12d1ce81-a93d-4fac-a2e1-e28db1aee075-cni-path\") on node \"ci-4334.0.0-a-9d82e253c5\" DevicePath \"\"" May 14 18:10:20.279888 kubelet[2769]: I0514 18:10:20.279750 2769 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/12d1ce81-a93d-4fac-a2e1-e28db1aee075-cilium-config-path\") on node \"ci-4334.0.0-a-9d82e253c5\" DevicePath \"\"" May 14 18:10:20.279888 kubelet[2769]: I0514 18:10:20.279763 2769 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2feb4d62-e49f-4cca-a180-457686b5c7c5-cilium-config-path\") on node \"ci-4334.0.0-a-9d82e253c5\" DevicePath \"\"" May 14 18:10:20.279888 kubelet[2769]: I0514 18:10:20.279773 2769 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-s4fk6\" (UniqueName: \"kubernetes.io/projected/2feb4d62-e49f-4cca-a180-457686b5c7c5-kube-api-access-s4fk6\") on node \"ci-4334.0.0-a-9d82e253c5\" DevicePath \"\"" May 14 18:10:20.279888 kubelet[2769]: I0514 18:10:20.279783 2769 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/12d1ce81-a93d-4fac-a2e1-e28db1aee075-clustermesh-secrets\") on node \"ci-4334.0.0-a-9d82e253c5\" DevicePath \"\"" May 14 18:10:20.279888 kubelet[2769]: I0514 18:10:20.279792 2769 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/12d1ce81-a93d-4fac-a2e1-e28db1aee075-hostproc\") on node \"ci-4334.0.0-a-9d82e253c5\" DevicePath \"\"" May 14 18:10:20.280290 kubelet[2769]: I0514 18:10:20.279802 2769 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/12d1ce81-a93d-4fac-a2e1-e28db1aee075-host-proc-sys-net\") on node \"ci-4334.0.0-a-9d82e253c5\" DevicePath \"\"" May 14 18:10:20.280290 kubelet[2769]: I0514 18:10:20.279811 2769 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/12d1ce81-a93d-4fac-a2e1-e28db1aee075-xtables-lock\") on node \"ci-4334.0.0-a-9d82e253c5\" DevicePath \"\"" May 14 18:10:20.280290 kubelet[2769]: I0514 18:10:20.279824 2769 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-8xn5z\" (UniqueName: \"kubernetes.io/projected/12d1ce81-a93d-4fac-a2e1-e28db1aee075-kube-api-access-8xn5z\") on node \"ci-4334.0.0-a-9d82e253c5\" DevicePath \"\"" May 14 18:10:20.280290 kubelet[2769]: I0514 18:10:20.279835 2769 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/12d1ce81-a93d-4fac-a2e1-e28db1aee075-etc-cni-netd\") on node \"ci-4334.0.0-a-9d82e253c5\" DevicePath \"\"" May 14 18:10:20.280290 kubelet[2769]: I0514 18:10:20.279843 
2769 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/12d1ce81-a93d-4fac-a2e1-e28db1aee075-host-proc-sys-kernel\") on node \"ci-4334.0.0-a-9d82e253c5\" DevicePath \"\"" May 14 18:10:20.280290 kubelet[2769]: I0514 18:10:20.279852 2769 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/12d1ce81-a93d-4fac-a2e1-e28db1aee075-hubble-tls\") on node \"ci-4334.0.0-a-9d82e253c5\" DevicePath \"\"" May 14 18:10:20.280290 kubelet[2769]: I0514 18:10:20.279861 2769 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/12d1ce81-a93d-4fac-a2e1-e28db1aee075-lib-modules\") on node \"ci-4334.0.0-a-9d82e253c5\" DevicePath \"\"" May 14 18:10:20.353518 systemd[1]: Removed slice kubepods-burstable-pod12d1ce81_a93d_4fac_a2e1_e28db1aee075.slice - libcontainer container kubepods-burstable-pod12d1ce81_a93d_4fac_a2e1_e28db1aee075.slice. May 14 18:10:20.354038 systemd[1]: kubepods-burstable-pod12d1ce81_a93d_4fac_a2e1_e28db1aee075.slice: Consumed 9.444s CPU time, 152.4M memory peak, 37.1M read from disk, 13.3M written to disk. May 14 18:10:20.356785 systemd[1]: Removed slice kubepods-besteffort-pod2feb4d62_e49f_4cca_a180_457686b5c7c5.slice - libcontainer container kubepods-besteffort-pod2feb4d62_e49f_4cca_a180_457686b5c7c5.slice. May 14 18:10:20.357125 systemd[1]: kubepods-besteffort-pod2feb4d62_e49f_4cca_a180_457686b5c7c5.slice: Consumed 547ms CPU time, 27.7M memory peak, 2M read from disk, 4K written to disk. May 14 18:10:20.505589 kubelet[2769]: E0514 18:10:20.504895 2769 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 14 18:10:20.824279 kubelet[2769]: I0514 18:10:20.823677 2769 scope.go:117] "RemoveContainer" containerID="0eb3a72e002f85664258db94d38d0dcbd6f8f0a2cf0bf7bd3dfaf0cf830cfda3" May 14 18:10:20.832919 containerd[1515]: time="2025-05-14T18:10:20.832876431Z" level=info msg="RemoveContainer for \"0eb3a72e002f85664258db94d38d0dcbd6f8f0a2cf0bf7bd3dfaf0cf830cfda3\"" May 14 18:10:20.840254 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ecf9032bd1adf17cd0839da58ac7766515d3316bba789ac77c242957a2661616-shm.mount: Deactivated successfully. May 14 18:10:20.840431 systemd[1]: var-lib-kubelet-pods-12d1ce81\x2da93d\x2d4fac\x2da2e1\x2de28db1aee075-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8xn5z.mount: Deactivated successfully. May 14 18:10:20.840553 systemd[1]: var-lib-kubelet-pods-2feb4d62\x2de49f\x2d4cca\x2da180\x2d457686b5c7c5-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ds4fk6.mount: Deactivated successfully. May 14 18:10:20.840645 systemd[1]: var-lib-kubelet-pods-12d1ce81\x2da93d\x2d4fac\x2da2e1\x2de28db1aee075-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 14 18:10:20.840726 systemd[1]: var-lib-kubelet-pods-12d1ce81\x2da93d\x2d4fac\x2da2e1\x2de28db1aee075-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
May 14 18:10:20.850409 containerd[1515]: time="2025-05-14T18:10:20.850330875Z" level=info msg="RemoveContainer for \"0eb3a72e002f85664258db94d38d0dcbd6f8f0a2cf0bf7bd3dfaf0cf830cfda3\" returns successfully"
May 14 18:10:20.851153 kubelet[2769]: I0514 18:10:20.850952 2769 scope.go:117] "RemoveContainer" containerID="0eb3a72e002f85664258db94d38d0dcbd6f8f0a2cf0bf7bd3dfaf0cf830cfda3"
May 14 18:10:20.869557 containerd[1515]: time="2025-05-14T18:10:20.851828935Z" level=error msg="ContainerStatus for \"0eb3a72e002f85664258db94d38d0dcbd6f8f0a2cf0bf7bd3dfaf0cf830cfda3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0eb3a72e002f85664258db94d38d0dcbd6f8f0a2cf0bf7bd3dfaf0cf830cfda3\": not found"
May 14 18:10:20.871490 kubelet[2769]: E0514 18:10:20.871332 2769 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0eb3a72e002f85664258db94d38d0dcbd6f8f0a2cf0bf7bd3dfaf0cf830cfda3\": not found" containerID="0eb3a72e002f85664258db94d38d0dcbd6f8f0a2cf0bf7bd3dfaf0cf830cfda3"
May 14 18:10:20.872647 kubelet[2769]: I0514 18:10:20.872479 2769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0eb3a72e002f85664258db94d38d0dcbd6f8f0a2cf0bf7bd3dfaf0cf830cfda3"} err="failed to get container status \"0eb3a72e002f85664258db94d38d0dcbd6f8f0a2cf0bf7bd3dfaf0cf830cfda3\": rpc error: code = NotFound desc = an error occurred when try to find container \"0eb3a72e002f85664258db94d38d0dcbd6f8f0a2cf0bf7bd3dfaf0cf830cfda3\": not found"
May 14 18:10:20.872842 kubelet[2769]: I0514 18:10:20.872826 2769 scope.go:117] "RemoveContainer" containerID="685ad51f1f44514a4b4052420b2c7927d0325efd4c3c30e12dc3c3fedd60cbed"
May 14 18:10:20.880535 containerd[1515]: time="2025-05-14T18:10:20.879341966Z" level=info msg="RemoveContainer for \"685ad51f1f44514a4b4052420b2c7927d0325efd4c3c30e12dc3c3fedd60cbed\""
May 14 18:10:20.888985 containerd[1515]: time="2025-05-14T18:10:20.888014688Z" level=info msg="RemoveContainer for \"685ad51f1f44514a4b4052420b2c7927d0325efd4c3c30e12dc3c3fedd60cbed\" returns successfully"
May 14 18:10:20.891111 kubelet[2769]: I0514 18:10:20.891067 2769 scope.go:117] "RemoveContainer" containerID="a4fefb97c6de6815ca41ab74e1da1d28d7100a8b8b2b2a1a7ec47e898e862bf4"
May 14 18:10:20.893936 containerd[1515]: time="2025-05-14T18:10:20.893887485Z" level=info msg="RemoveContainer for \"a4fefb97c6de6815ca41ab74e1da1d28d7100a8b8b2b2a1a7ec47e898e862bf4\""
May 14 18:10:20.902917 containerd[1515]: time="2025-05-14T18:10:20.902797622Z" level=info msg="RemoveContainer for \"a4fefb97c6de6815ca41ab74e1da1d28d7100a8b8b2b2a1a7ec47e898e862bf4\" returns successfully"
May 14 18:10:20.903489 kubelet[2769]: I0514 18:10:20.903309 2769 scope.go:117] "RemoveContainer" containerID="9d9280d2972e9a9208c3c1f35636e18251d5d8b4edfb4e67f37d1f0d7cdac833"
May 14 18:10:20.907541 containerd[1515]: time="2025-05-14T18:10:20.907469375Z" level=info msg="RemoveContainer for \"9d9280d2972e9a9208c3c1f35636e18251d5d8b4edfb4e67f37d1f0d7cdac833\""
May 14 18:10:20.913515 containerd[1515]: time="2025-05-14T18:10:20.913441001Z" level=info msg="RemoveContainer for \"9d9280d2972e9a9208c3c1f35636e18251d5d8b4edfb4e67f37d1f0d7cdac833\" returns successfully"
May 14 18:10:20.913873 kubelet[2769]: I0514 18:10:20.913776 2769 scope.go:117] "RemoveContainer" containerID="a1cee0b6dc84af715112e1083752bc9138cf4a35bc8ce017c4256215438cafa2"
May 14 18:10:20.916445 containerd[1515]: time="2025-05-14T18:10:20.916400776Z" level=info msg="RemoveContainer for \"a1cee0b6dc84af715112e1083752bc9138cf4a35bc8ce017c4256215438cafa2\""
May 14 18:10:20.921389 containerd[1515]: time="2025-05-14T18:10:20.921297075Z" level=info msg="RemoveContainer for \"a1cee0b6dc84af715112e1083752bc9138cf4a35bc8ce017c4256215438cafa2\" returns successfully"
May 14 18:10:20.923658 kubelet[2769]: I0514 18:10:20.923487 2769 scope.go:117] "RemoveContainer" containerID="68a557a4479d21f84ec27f1758240f9bf7fba44e93c8dfb482201113f6908622"
May 14 18:10:20.928013 containerd[1515]: time="2025-05-14T18:10:20.927956158Z" level=info msg="RemoveContainer for \"68a557a4479d21f84ec27f1758240f9bf7fba44e93c8dfb482201113f6908622\""
May 14 18:10:20.932866 containerd[1515]: time="2025-05-14T18:10:20.932779195Z" level=info msg="RemoveContainer for \"68a557a4479d21f84ec27f1758240f9bf7fba44e93c8dfb482201113f6908622\" returns successfully"
May 14 18:10:20.933266 kubelet[2769]: I0514 18:10:20.933156 2769 scope.go:117] "RemoveContainer" containerID="685ad51f1f44514a4b4052420b2c7927d0325efd4c3c30e12dc3c3fedd60cbed"
May 14 18:10:20.933908 containerd[1515]: time="2025-05-14T18:10:20.933871802Z" level=error msg="ContainerStatus for \"685ad51f1f44514a4b4052420b2c7927d0325efd4c3c30e12dc3c3fedd60cbed\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"685ad51f1f44514a4b4052420b2c7927d0325efd4c3c30e12dc3c3fedd60cbed\": not found"
May 14 18:10:20.934166 kubelet[2769]: E0514 18:10:20.934056 2769 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"685ad51f1f44514a4b4052420b2c7927d0325efd4c3c30e12dc3c3fedd60cbed\": not found" containerID="685ad51f1f44514a4b4052420b2c7927d0325efd4c3c30e12dc3c3fedd60cbed"
May 14 18:10:20.934166 kubelet[2769]: I0514 18:10:20.934088 2769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"685ad51f1f44514a4b4052420b2c7927d0325efd4c3c30e12dc3c3fedd60cbed"} err="failed to get container status \"685ad51f1f44514a4b4052420b2c7927d0325efd4c3c30e12dc3c3fedd60cbed\": rpc error: code = NotFound desc = an error occurred when try to find container \"685ad51f1f44514a4b4052420b2c7927d0325efd4c3c30e12dc3c3fedd60cbed\": not found"
May 14 18:10:20.934166 kubelet[2769]: I0514 18:10:20.934115 2769 scope.go:117] "RemoveContainer" containerID="a4fefb97c6de6815ca41ab74e1da1d28d7100a8b8b2b2a1a7ec47e898e862bf4"
May 14 18:10:20.934772 containerd[1515]: time="2025-05-14T18:10:20.934715979Z" level=error msg="ContainerStatus for \"a4fefb97c6de6815ca41ab74e1da1d28d7100a8b8b2b2a1a7ec47e898e862bf4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a4fefb97c6de6815ca41ab74e1da1d28d7100a8b8b2b2a1a7ec47e898e862bf4\": not found"
May 14 18:10:20.935467 kubelet[2769]: E0514 18:10:20.934929 2769 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a4fefb97c6de6815ca41ab74e1da1d28d7100a8b8b2b2a1a7ec47e898e862bf4\": not found" containerID="a4fefb97c6de6815ca41ab74e1da1d28d7100a8b8b2b2a1a7ec47e898e862bf4"
May 14 18:10:20.935467 kubelet[2769]: I0514 18:10:20.934974 2769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a4fefb97c6de6815ca41ab74e1da1d28d7100a8b8b2b2a1a7ec47e898e862bf4"} err="failed to get container status \"a4fefb97c6de6815ca41ab74e1da1d28d7100a8b8b2b2a1a7ec47e898e862bf4\": rpc error: code = NotFound desc = an error occurred when try to find container \"a4fefb97c6de6815ca41ab74e1da1d28d7100a8b8b2b2a1a7ec47e898e862bf4\": not found"
May 14 18:10:20.935467 kubelet[2769]: I0514 18:10:20.935018 2769 scope.go:117] "RemoveContainer" containerID="9d9280d2972e9a9208c3c1f35636e18251d5d8b4edfb4e67f37d1f0d7cdac833"
May 14 18:10:20.936011 containerd[1515]: time="2025-05-14T18:10:20.935953416Z" level=error msg="ContainerStatus for \"9d9280d2972e9a9208c3c1f35636e18251d5d8b4edfb4e67f37d1f0d7cdac833\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9d9280d2972e9a9208c3c1f35636e18251d5d8b4edfb4e67f37d1f0d7cdac833\": not found"
May 14 18:10:20.936335 kubelet[2769]: E0514 18:10:20.936144 2769 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9d9280d2972e9a9208c3c1f35636e18251d5d8b4edfb4e67f37d1f0d7cdac833\": not found" containerID="9d9280d2972e9a9208c3c1f35636e18251d5d8b4edfb4e67f37d1f0d7cdac833"
May 14 18:10:20.936471 kubelet[2769]: I0514 18:10:20.936344 2769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9d9280d2972e9a9208c3c1f35636e18251d5d8b4edfb4e67f37d1f0d7cdac833"} err="failed to get container status \"9d9280d2972e9a9208c3c1f35636e18251d5d8b4edfb4e67f37d1f0d7cdac833\": rpc error: code = NotFound desc = an error occurred when try to find container \"9d9280d2972e9a9208c3c1f35636e18251d5d8b4edfb4e67f37d1f0d7cdac833\": not found"
May 14 18:10:20.936471 kubelet[2769]: I0514 18:10:20.936403 2769 scope.go:117] "RemoveContainer" containerID="a1cee0b6dc84af715112e1083752bc9138cf4a35bc8ce017c4256215438cafa2"
May 14 18:10:20.936718 containerd[1515]: time="2025-05-14T18:10:20.936666493Z" level=error msg="ContainerStatus for \"a1cee0b6dc84af715112e1083752bc9138cf4a35bc8ce017c4256215438cafa2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a1cee0b6dc84af715112e1083752bc9138cf4a35bc8ce017c4256215438cafa2\": not found"
May 14 18:10:20.936904 kubelet[2769]: E0514 18:10:20.936882 2769 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a1cee0b6dc84af715112e1083752bc9138cf4a35bc8ce017c4256215438cafa2\": not found" containerID="a1cee0b6dc84af715112e1083752bc9138cf4a35bc8ce017c4256215438cafa2"
May 14 18:10:20.937035 kubelet[2769]: I0514 18:10:20.936990 2769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a1cee0b6dc84af715112e1083752bc9138cf4a35bc8ce017c4256215438cafa2"} err="failed to get container status \"a1cee0b6dc84af715112e1083752bc9138cf4a35bc8ce017c4256215438cafa2\": rpc error: code = NotFound desc = an error occurred when try to find container \"a1cee0b6dc84af715112e1083752bc9138cf4a35bc8ce017c4256215438cafa2\": not found"
May 14 18:10:20.937326 kubelet[2769]: I0514 18:10:20.937125 2769 scope.go:117] "RemoveContainer" containerID="68a557a4479d21f84ec27f1758240f9bf7fba44e93c8dfb482201113f6908622"
May 14 18:10:20.937870 containerd[1515]: time="2025-05-14T18:10:20.937820989Z" level=error msg="ContainerStatus for \"68a557a4479d21f84ec27f1758240f9bf7fba44e93c8dfb482201113f6908622\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"68a557a4479d21f84ec27f1758240f9bf7fba44e93c8dfb482201113f6908622\": not found"
May 14 18:10:20.938048 kubelet[2769]: E0514 18:10:20.938028 2769 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"68a557a4479d21f84ec27f1758240f9bf7fba44e93c8dfb482201113f6908622\": not found" containerID="68a557a4479d21f84ec27f1758240f9bf7fba44e93c8dfb482201113f6908622"
May 14 18:10:20.938134 kubelet[2769]: I0514 18:10:20.938116 2769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"68a557a4479d21f84ec27f1758240f9bf7fba44e93c8dfb482201113f6908622"} err="failed to get container status \"68a557a4479d21f84ec27f1758240f9bf7fba44e93c8dfb482201113f6908622\": rpc error: code = NotFound desc = an error occurred when try to find container \"68a557a4479d21f84ec27f1758240f9bf7fba44e93c8dfb482201113f6908622\": not found"
May 14 18:10:21.195005 kubelet[2769]: I0514 18:10:21.194837 2769 eviction_manager.go:366] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
May 14 18:10:21.195005 kubelet[2769]: I0514 18:10:21.194894 2769 container_gc.go:88] "Attempting to delete unused containers"
May 14 18:10:21.201000 containerd[1515]: time="2025-05-14T18:10:21.200931884Z" level=info msg="StopPodSandbox for \"ab41f4919e3067030bab691ab53c1898092fccba87414838d008cf0b73dded37\""
May 14 18:10:21.201581 containerd[1515]: time="2025-05-14T18:10:21.201425202Z" level=info msg="TearDown network for sandbox \"ab41f4919e3067030bab691ab53c1898092fccba87414838d008cf0b73dded37\" successfully"
May 14 18:10:21.201581 containerd[1515]: time="2025-05-14T18:10:21.201560071Z" level=info msg="StopPodSandbox for \"ab41f4919e3067030bab691ab53c1898092fccba87414838d008cf0b73dded37\" returns successfully"
May 14 18:10:21.203264 containerd[1515]: time="2025-05-14T18:10:21.203097617Z" level=info msg="RemovePodSandbox for \"ab41f4919e3067030bab691ab53c1898092fccba87414838d008cf0b73dded37\""
May 14 18:10:21.203545 containerd[1515]: time="2025-05-14T18:10:21.203512641Z" level=info msg="Forcibly stopping sandbox \"ab41f4919e3067030bab691ab53c1898092fccba87414838d008cf0b73dded37\""
May 14 18:10:21.203692 containerd[1515]: time="2025-05-14T18:10:21.203674467Z" level=info msg="TearDown network for sandbox \"ab41f4919e3067030bab691ab53c1898092fccba87414838d008cf0b73dded37\" successfully"
May 14 18:10:21.205545 containerd[1515]: time="2025-05-14T18:10:21.205488378Z" level=info msg="Ensure that sandbox ab41f4919e3067030bab691ab53c1898092fccba87414838d008cf0b73dded37 in task-service has been cleanup successfully"
May 14 18:10:21.209509 containerd[1515]: time="2025-05-14T18:10:21.209295614Z" level=info msg="RemovePodSandbox \"ab41f4919e3067030bab691ab53c1898092fccba87414838d008cf0b73dded37\" returns successfully"
May 14 18:10:21.210321 containerd[1515]: time="2025-05-14T18:10:21.210275950Z" level=info msg="StopPodSandbox for \"ecf9032bd1adf17cd0839da58ac7766515d3316bba789ac77c242957a2661616\""
May 14 18:10:21.210464 containerd[1515]: time="2025-05-14T18:10:21.210431205Z" level=info msg="TearDown network for sandbox \"ecf9032bd1adf17cd0839da58ac7766515d3316bba789ac77c242957a2661616\" successfully"
May 14 18:10:21.210464 containerd[1515]: time="2025-05-14T18:10:21.210446352Z" level=info msg="StopPodSandbox for \"ecf9032bd1adf17cd0839da58ac7766515d3316bba789ac77c242957a2661616\" returns successfully"
May 14 18:10:21.210883 containerd[1515]: time="2025-05-14T18:10:21.210855135Z" level=info msg="RemovePodSandbox for \"ecf9032bd1adf17cd0839da58ac7766515d3316bba789ac77c242957a2661616\""
May 14 18:10:21.210883 containerd[1515]: time="2025-05-14T18:10:21.210882121Z" level=info msg="Forcibly stopping sandbox \"ecf9032bd1adf17cd0839da58ac7766515d3316bba789ac77c242957a2661616\""
May 14 18:10:21.211029 containerd[1515]: time="2025-05-14T18:10:21.210965345Z" level=info msg="TearDown network for sandbox \"ecf9032bd1adf17cd0839da58ac7766515d3316bba789ac77c242957a2661616\" successfully"
May 14 18:10:21.212411 containerd[1515]: time="2025-05-14T18:10:21.212345819Z" level=info msg="Ensure that sandbox ecf9032bd1adf17cd0839da58ac7766515d3316bba789ac77c242957a2661616 in task-service has been cleanup successfully"
May 14 18:10:21.214551 containerd[1515]: time="2025-05-14T18:10:21.214509499Z" level=info msg="RemovePodSandbox \"ecf9032bd1adf17cd0839da58ac7766515d3316bba789ac77c242957a2661616\" returns successfully"
May 14 18:10:21.215675 kubelet[2769]: I0514 18:10:21.215633 2769 image_gc_manager.go:404] "Attempting to delete unused images"
May 14 18:10:21.233545 kubelet[2769]: I0514 18:10:21.233494 2769 eviction_manager.go:377] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
May 14 18:10:21.233765 kubelet[2769]: I0514 18:10:21.233604 2769 eviction_manager.go:395] "Eviction manager: pods ranked for eviction" pods=["kube-system/kube-controller-manager-ci-4334.0.0-a-9d82e253c5","kube-system/kube-proxy-lg268","kube-system/kube-apiserver-ci-4334.0.0-a-9d82e253c5","kube-system/kube-scheduler-ci-4334.0.0-a-9d82e253c5"]
May 14 18:10:21.233765 kubelet[2769]: E0514 18:10:21.233660 2769 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-ci-4334.0.0-a-9d82e253c5"
May 14 18:10:21.233765 kubelet[2769]: E0514 18:10:21.233680 2769 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-lg268"
May 14 18:10:21.233765 kubelet[2769]: E0514 18:10:21.233694 2769 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-ci-4334.0.0-a-9d82e253c5"
May 14 18:10:21.233765 kubelet[2769]: E0514 18:10:21.233709 2769 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-ci-4334.0.0-a-9d82e253c5"
May 14 18:10:21.233765 kubelet[2769]: I0514 18:10:21.233726 2769 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node"
May 14 18:10:21.574723 sshd[4155]: Connection closed by 139.178.89.65 port 49348
May 14 18:10:21.575564 sshd-session[4153]: pam_unix(sshd:session): session closed for user core
May 14 18:10:21.591443 systemd[1]: sshd@24-164.92.104.130:22-139.178.89.65:49348.service: Deactivated successfully.
May 14 18:10:21.596617 systemd[1]: session-25.scope: Deactivated successfully.
May 14 18:10:21.598490 systemd-logind[1497]: Session 25 logged out. Waiting for processes to exit.
May 14 18:10:21.606529 systemd[1]: Started sshd@25-164.92.104.130:22-139.178.89.65:49356.service - OpenSSH per-connection server daemon (139.178.89.65:49356).
May 14 18:10:21.608394 systemd-logind[1497]: Removed session 25.
May 14 18:10:21.696879 sshd[4304]: Accepted publickey for core from 139.178.89.65 port 49356 ssh2: RSA SHA256:I6v7602y95t0HxsKZunlpQRdbWqTS6jK7hLc8ah5Xaw
May 14 18:10:21.699797 sshd-session[4304]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:10:21.712640 systemd-logind[1497]: New session 26 of user core.
May 14 18:10:21.720640 systemd[1]: Started session-26.scope - Session 26 of User core.
May 14 18:10:22.352219 kubelet[2769]: I0514 18:10:22.352138 2769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12d1ce81-a93d-4fac-a2e1-e28db1aee075" path="/var/lib/kubelet/pods/12d1ce81-a93d-4fac-a2e1-e28db1aee075/volumes"
May 14 18:10:22.353240 kubelet[2769]: I0514 18:10:22.353155 2769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2feb4d62-e49f-4cca-a180-457686b5c7c5" path="/var/lib/kubelet/pods/2feb4d62-e49f-4cca-a180-457686b5c7c5/volumes"
May 14 18:10:22.644327 kubelet[2769]: I0514 18:10:22.641795 2769 setters.go:580] "Node became not ready" node="ci-4334.0.0-a-9d82e253c5" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-14T18:10:22Z","lastTransitionTime":"2025-05-14T18:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
May 14 18:10:23.214246 sshd[4306]: Connection closed by 139.178.89.65 port 49356
May 14 18:10:23.213484 sshd-session[4304]: pam_unix(sshd:session): session closed for user core
May 14 18:10:23.234130 systemd[1]: sshd@25-164.92.104.130:22-139.178.89.65:49356.service: Deactivated successfully.
May 14 18:10:23.242721 systemd[1]: session-26.scope: Deactivated successfully.
May 14 18:10:23.243451 systemd[1]: session-26.scope: Consumed 1.298s CPU time, 24.6M memory peak.
May 14 18:10:23.248407 systemd-logind[1497]: Session 26 logged out. Waiting for processes to exit.
May 14 18:10:23.256884 systemd[1]: Started sshd@26-164.92.104.130:22-139.178.89.65:49362.service - OpenSSH per-connection server daemon (139.178.89.65:49362).
May 14 18:10:23.263396 systemd-logind[1497]: Removed session 26.
May 14 18:10:23.280033 kubelet[2769]: I0514 18:10:23.279969 2769 topology_manager.go:215] "Topology Admit Handler" podUID="680a6b06-cb02-4795-88a7-2c064a49e7dc" podNamespace="kube-system" podName="cilium-8w8l2"
May 14 18:10:23.280743 kubelet[2769]: E0514 18:10:23.280653 2769 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="12d1ce81-a93d-4fac-a2e1-e28db1aee075" containerName="clean-cilium-state"
May 14 18:10:23.280743 kubelet[2769]: E0514 18:10:23.280681 2769 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2feb4d62-e49f-4cca-a180-457686b5c7c5" containerName="cilium-operator"
May 14 18:10:23.281154 kubelet[2769]: E0514 18:10:23.281048 2769 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="12d1ce81-a93d-4fac-a2e1-e28db1aee075" containerName="mount-cgroup"
May 14 18:10:23.281154 kubelet[2769]: E0514 18:10:23.281073 2769 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="12d1ce81-a93d-4fac-a2e1-e28db1aee075" containerName="apply-sysctl-overwrites"
May 14 18:10:23.281154 kubelet[2769]: E0514 18:10:23.281082 2769 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="12d1ce81-a93d-4fac-a2e1-e28db1aee075" containerName="mount-bpf-fs"
May 14 18:10:23.281154 kubelet[2769]: E0514 18:10:23.281090 2769 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="12d1ce81-a93d-4fac-a2e1-e28db1aee075" containerName="cilium-agent"
May 14 18:10:23.289437 kubelet[2769]: I0514 18:10:23.289368 2769 memory_manager.go:354] "RemoveStaleState removing state" podUID="2feb4d62-e49f-4cca-a180-457686b5c7c5" containerName="cilium-operator"
May 14 18:10:23.289776 kubelet[2769]: I0514 18:10:23.289622 2769 memory_manager.go:354] "RemoveStaleState removing state" podUID="12d1ce81-a93d-4fac-a2e1-e28db1aee075" containerName="cilium-agent"
May 14 18:10:23.341989 systemd[1]: Created slice kubepods-burstable-pod680a6b06_cb02_4795_88a7_2c064a49e7dc.slice - libcontainer container kubepods-burstable-pod680a6b06_cb02_4795_88a7_2c064a49e7dc.slice.
May 14 18:10:23.382044 sshd[4316]: Accepted publickey for core from 139.178.89.65 port 49362 ssh2: RSA SHA256:I6v7602y95t0HxsKZunlpQRdbWqTS6jK7hLc8ah5Xaw
May 14 18:10:23.384950 sshd-session[4316]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:10:23.398700 systemd-logind[1497]: New session 27 of user core.
May 14 18:10:23.406843 kubelet[2769]: I0514 18:10:23.406793 2769 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/680a6b06-cb02-4795-88a7-2c064a49e7dc-clustermesh-secrets\") pod \"cilium-8w8l2\" (UID: \"680a6b06-cb02-4795-88a7-2c064a49e7dc\") " pod="kube-system/cilium-8w8l2"
May 14 18:10:23.406973 systemd[1]: Started session-27.scope - Session 27 of User core.
May 14 18:10:23.410919 kubelet[2769]: I0514 18:10:23.410862 2769 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/680a6b06-cb02-4795-88a7-2c064a49e7dc-cilium-run\") pod \"cilium-8w8l2\" (UID: \"680a6b06-cb02-4795-88a7-2c064a49e7dc\") " pod="kube-system/cilium-8w8l2"
May 14 18:10:23.411087 kubelet[2769]: I0514 18:10:23.410929 2769 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/680a6b06-cb02-4795-88a7-2c064a49e7dc-etc-cni-netd\") pod \"cilium-8w8l2\" (UID: \"680a6b06-cb02-4795-88a7-2c064a49e7dc\") " pod="kube-system/cilium-8w8l2"
May 14 18:10:23.411087 kubelet[2769]: I0514 18:10:23.410969 2769 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szpqz\" (UniqueName: \"kubernetes.io/projected/680a6b06-cb02-4795-88a7-2c064a49e7dc-kube-api-access-szpqz\") pod \"cilium-8w8l2\" (UID: \"680a6b06-cb02-4795-88a7-2c064a49e7dc\") " pod="kube-system/cilium-8w8l2"
May 14 18:10:23.411087 kubelet[2769]: I0514 18:10:23.411078 2769 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/680a6b06-cb02-4795-88a7-2c064a49e7dc-host-proc-sys-kernel\") pod \"cilium-8w8l2\" (UID: \"680a6b06-cb02-4795-88a7-2c064a49e7dc\") " pod="kube-system/cilium-8w8l2"
May 14 18:10:23.411982 kubelet[2769]: I0514 18:10:23.411116 2769 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/680a6b06-cb02-4795-88a7-2c064a49e7dc-bpf-maps\") pod \"cilium-8w8l2\" (UID: \"680a6b06-cb02-4795-88a7-2c064a49e7dc\") " pod="kube-system/cilium-8w8l2"
May 14 18:10:23.411982 kubelet[2769]: I0514 18:10:23.411143 2769 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/680a6b06-cb02-4795-88a7-2c064a49e7dc-cni-path\") pod \"cilium-8w8l2\" (UID: \"680a6b06-cb02-4795-88a7-2c064a49e7dc\") " pod="kube-system/cilium-8w8l2"
May 14 18:10:23.411982 kubelet[2769]: I0514 18:10:23.411169 2769 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/680a6b06-cb02-4795-88a7-2c064a49e7dc-hostproc\") pod \"cilium-8w8l2\" (UID: \"680a6b06-cb02-4795-88a7-2c064a49e7dc\") " pod="kube-system/cilium-8w8l2"
May 14 18:10:23.412152 kubelet[2769]: I0514 18:10:23.412115 2769 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/680a6b06-cb02-4795-88a7-2c064a49e7dc-xtables-lock\") pod \"cilium-8w8l2\" (UID: \"680a6b06-cb02-4795-88a7-2c064a49e7dc\") " pod="kube-system/cilium-8w8l2"
May 14 18:10:23.413807 kubelet[2769]: I0514 18:10:23.412193 2769 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/680a6b06-cb02-4795-88a7-2c064a49e7dc-hubble-tls\") pod \"cilium-8w8l2\" (UID: \"680a6b06-cb02-4795-88a7-2c064a49e7dc\") " pod="kube-system/cilium-8w8l2"
May 14 18:10:23.413807 kubelet[2769]: I0514 18:10:23.413384 2769 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/680a6b06-cb02-4795-88a7-2c064a49e7dc-cilium-config-path\") pod \"cilium-8w8l2\" (UID: \"680a6b06-cb02-4795-88a7-2c064a49e7dc\") " pod="kube-system/cilium-8w8l2"
May 14 18:10:23.413807 kubelet[2769]: I0514 18:10:23.413424 2769 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/680a6b06-cb02-4795-88a7-2c064a49e7dc-lib-modules\") pod \"cilium-8w8l2\" (UID: \"680a6b06-cb02-4795-88a7-2c064a49e7dc\") " pod="kube-system/cilium-8w8l2"
May 14 18:10:23.413807 kubelet[2769]: I0514 18:10:23.413456 2769 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/680a6b06-cb02-4795-88a7-2c064a49e7dc-host-proc-sys-net\") pod \"cilium-8w8l2\" (UID: \"680a6b06-cb02-4795-88a7-2c064a49e7dc\") " pod="kube-system/cilium-8w8l2"
May 14 18:10:23.413807 kubelet[2769]: I0514 18:10:23.413549 2769 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/680a6b06-cb02-4795-88a7-2c064a49e7dc-cilium-cgroup\") pod \"cilium-8w8l2\" (UID: \"680a6b06-cb02-4795-88a7-2c064a49e7dc\") " pod="kube-system/cilium-8w8l2"
May 14 18:10:23.413807 kubelet[2769]: I0514 18:10:23.413627 2769 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/680a6b06-cb02-4795-88a7-2c064a49e7dc-cilium-ipsec-secrets\") pod \"cilium-8w8l2\" (UID: \"680a6b06-cb02-4795-88a7-2c064a49e7dc\") " pod="kube-system/cilium-8w8l2"
May 14 18:10:23.478788 sshd[4318]: Connection closed by 139.178.89.65 port 49362
May 14 18:10:23.479588 sshd-session[4316]: pam_unix(sshd:session): session closed for user core
May 14 18:10:23.495595 systemd[1]: sshd@26-164.92.104.130:22-139.178.89.65:49362.service: Deactivated successfully.
May 14 18:10:23.499437 systemd[1]: session-27.scope: Deactivated successfully.
May 14 18:10:23.501941 systemd-logind[1497]: Session 27 logged out. Waiting for processes to exit.
May 14 18:10:23.507988 systemd[1]: Started sshd@27-164.92.104.130:22-139.178.89.65:49370.service - OpenSSH per-connection server daemon (139.178.89.65:49370).
May 14 18:10:23.510348 systemd-logind[1497]: Removed session 27.
May 14 18:10:23.640181 sshd[4325]: Accepted publickey for core from 139.178.89.65 port 49370 ssh2: RSA SHA256:I6v7602y95t0HxsKZunlpQRdbWqTS6jK7hLc8ah5Xaw
May 14 18:10:23.644743 sshd-session[4325]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:10:23.653834 kubelet[2769]: E0514 18:10:23.653701 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 14 18:10:23.655486 containerd[1515]: time="2025-05-14T18:10:23.654558557Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8w8l2,Uid:680a6b06-cb02-4795-88a7-2c064a49e7dc,Namespace:kube-system,Attempt:0,}"
May 14 18:10:23.660215 systemd-logind[1497]: New session 28 of user core.
May 14 18:10:23.664548 systemd[1]: Started session-28.scope - Session 28 of User core.
May 14 18:10:23.701872 containerd[1515]: time="2025-05-14T18:10:23.701813859Z" level=info msg="connecting to shim 980110a33007dd69ab3c64550cf95306050d34a3386c96cac5bec83c5c0834e7" address="unix:///run/containerd/s/44b15805d2d8ffc50bb63f3f6c27b9b0467b8548f25a04a303797ed3a4d43583" namespace=k8s.io protocol=ttrpc version=3
May 14 18:10:23.748896 systemd[1]: Started cri-containerd-980110a33007dd69ab3c64550cf95306050d34a3386c96cac5bec83c5c0834e7.scope - libcontainer container 980110a33007dd69ab3c64550cf95306050d34a3386c96cac5bec83c5c0834e7.
May 14 18:10:23.824643 containerd[1515]: time="2025-05-14T18:10:23.824503314Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8w8l2,Uid:680a6b06-cb02-4795-88a7-2c064a49e7dc,Namespace:kube-system,Attempt:0,} returns sandbox id \"980110a33007dd69ab3c64550cf95306050d34a3386c96cac5bec83c5c0834e7\""
May 14 18:10:23.828043 kubelet[2769]: E0514 18:10:23.827995 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 14 18:10:23.838512 containerd[1515]: time="2025-05-14T18:10:23.838418809Z" level=info msg="CreateContainer within sandbox \"980110a33007dd69ab3c64550cf95306050d34a3386c96cac5bec83c5c0834e7\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 14 18:10:23.851210 containerd[1515]: time="2025-05-14T18:10:23.850931681Z" level=info msg="Container 9f37b0afb383f3eab9319f911b952835dcdd89f5cd7c3f3cee57461d1b2d1e57: CDI devices from CRI Config.CDIDevices: []"
May 14 18:10:23.866233 containerd[1515]: time="2025-05-14T18:10:23.865940122Z" level=info msg="CreateContainer within sandbox \"980110a33007dd69ab3c64550cf95306050d34a3386c96cac5bec83c5c0834e7\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9f37b0afb383f3eab9319f911b952835dcdd89f5cd7c3f3cee57461d1b2d1e57\""
May 14 18:10:23.870499 containerd[1515]: time="2025-05-14T18:10:23.870449490Z" level=info msg="StartContainer for \"9f37b0afb383f3eab9319f911b952835dcdd89f5cd7c3f3cee57461d1b2d1e57\""
May 14 18:10:23.875937 containerd[1515]: time="2025-05-14T18:10:23.875757174Z" level=info msg="connecting to shim 9f37b0afb383f3eab9319f911b952835dcdd89f5cd7c3f3cee57461d1b2d1e57" address="unix:///run/containerd/s/44b15805d2d8ffc50bb63f3f6c27b9b0467b8548f25a04a303797ed3a4d43583" protocol=ttrpc version=3
May 14 18:10:23.941928 systemd[1]: Started cri-containerd-9f37b0afb383f3eab9319f911b952835dcdd89f5cd7c3f3cee57461d1b2d1e57.scope - libcontainer container 9f37b0afb383f3eab9319f911b952835dcdd89f5cd7c3f3cee57461d1b2d1e57.
May 14 18:10:24.053228 containerd[1515]: time="2025-05-14T18:10:24.053159791Z" level=info msg="StartContainer for \"9f37b0afb383f3eab9319f911b952835dcdd89f5cd7c3f3cee57461d1b2d1e57\" returns successfully"
May 14 18:10:24.072975 systemd[1]: cri-containerd-9f37b0afb383f3eab9319f911b952835dcdd89f5cd7c3f3cee57461d1b2d1e57.scope: Deactivated successfully.
May 14 18:10:24.080086 containerd[1515]: time="2025-05-14T18:10:24.079894664Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9f37b0afb383f3eab9319f911b952835dcdd89f5cd7c3f3cee57461d1b2d1e57\" id:\"9f37b0afb383f3eab9319f911b952835dcdd89f5cd7c3f3cee57461d1b2d1e57\" pid:4397 exited_at:{seconds:1747246224 nanos:78419278}"
May 14 18:10:24.080476 containerd[1515]: time="2025-05-14T18:10:24.079971598Z" level=info msg="received exit event container_id:\"9f37b0afb383f3eab9319f911b952835dcdd89f5cd7c3f3cee57461d1b2d1e57\" id:\"9f37b0afb383f3eab9319f911b952835dcdd89f5cd7c3f3cee57461d1b2d1e57\" pid:4397 exited_at:{seconds:1747246224 nanos:78419278}"
May 14 18:10:24.884653 kubelet[2769]: E0514 18:10:24.884612 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 14 18:10:24.889066 containerd[1515]: time="2025-05-14T18:10:24.889023242Z" level=info msg="CreateContainer within sandbox \"980110a33007dd69ab3c64550cf95306050d34a3386c96cac5bec83c5c0834e7\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 14 18:10:24.928512 containerd[1515]: time="2025-05-14T18:10:24.928435905Z" level=info msg="Container 1edc2cab60a83761d202e15ca53a47c1ee5c534ed488d8c1b2cf86afac563c8d: CDI devices from CRI Config.CDIDevices: []"
May 14 18:10:24.936517 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount153954902.mount: Deactivated successfully.
May 14 18:10:24.947852 containerd[1515]: time="2025-05-14T18:10:24.947767386Z" level=info msg="CreateContainer within sandbox \"980110a33007dd69ab3c64550cf95306050d34a3386c96cac5bec83c5c0834e7\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"1edc2cab60a83761d202e15ca53a47c1ee5c534ed488d8c1b2cf86afac563c8d\""
May 14 18:10:24.951970 containerd[1515]: time="2025-05-14T18:10:24.951424793Z" level=info msg="StartContainer for \"1edc2cab60a83761d202e15ca53a47c1ee5c534ed488d8c1b2cf86afac563c8d\""
May 14 18:10:24.954764 containerd[1515]: time="2025-05-14T18:10:24.954679533Z" level=info msg="connecting to shim 1edc2cab60a83761d202e15ca53a47c1ee5c534ed488d8c1b2cf86afac563c8d" address="unix:///run/containerd/s/44b15805d2d8ffc50bb63f3f6c27b9b0467b8548f25a04a303797ed3a4d43583" protocol=ttrpc version=3
May 14 18:10:25.010607 systemd[1]: Started cri-containerd-1edc2cab60a83761d202e15ca53a47c1ee5c534ed488d8c1b2cf86afac563c8d.scope - libcontainer container 1edc2cab60a83761d202e15ca53a47c1ee5c534ed488d8c1b2cf86afac563c8d.
May 14 18:10:25.072103 containerd[1515]: time="2025-05-14T18:10:25.072050652Z" level=info msg="StartContainer for \"1edc2cab60a83761d202e15ca53a47c1ee5c534ed488d8c1b2cf86afac563c8d\" returns successfully"
May 14 18:10:25.084869 systemd[1]: cri-containerd-1edc2cab60a83761d202e15ca53a47c1ee5c534ed488d8c1b2cf86afac563c8d.scope: Deactivated successfully.
May 14 18:10:25.087945 containerd[1515]: time="2025-05-14T18:10:25.087743819Z" level=info msg="received exit event container_id:\"1edc2cab60a83761d202e15ca53a47c1ee5c534ed488d8c1b2cf86afac563c8d\" id:\"1edc2cab60a83761d202e15ca53a47c1ee5c534ed488d8c1b2cf86afac563c8d\" pid:4443 exited_at:{seconds:1747246225 nanos:87288593}"
May 14 18:10:25.089604 containerd[1515]: time="2025-05-14T18:10:25.089545641Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1edc2cab60a83761d202e15ca53a47c1ee5c534ed488d8c1b2cf86afac563c8d\" id:\"1edc2cab60a83761d202e15ca53a47c1ee5c534ed488d8c1b2cf86afac563c8d\" pid:4443 exited_at:{seconds:1747246225 nanos:87288593}"
May 14 18:10:25.132701 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1edc2cab60a83761d202e15ca53a47c1ee5c534ed488d8c1b2cf86afac563c8d-rootfs.mount: Deactivated successfully.
May 14 18:10:25.508382 kubelet[2769]: E0514 18:10:25.508289 2769 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 14 18:10:25.893014 kubelet[2769]: E0514 18:10:25.892499 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 14 18:10:25.900273 containerd[1515]: time="2025-05-14T18:10:25.899583222Z" level=info msg="CreateContainer within sandbox \"980110a33007dd69ab3c64550cf95306050d34a3386c96cac5bec83c5c0834e7\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 14 18:10:25.914567 containerd[1515]: time="2025-05-14T18:10:25.914510405Z" level=info msg="Container b39cdd64b2dbe1d67b7f44dc8ca937948eb74ba5699c83cc171fc3ecfc49bb35: CDI devices from CRI Config.CDIDevices: []"
May 14 18:10:25.922033 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount104366409.mount: Deactivated successfully.
May 14 18:10:25.944045 containerd[1515]: time="2025-05-14T18:10:25.943926728Z" level=info msg="CreateContainer within sandbox \"980110a33007dd69ab3c64550cf95306050d34a3386c96cac5bec83c5c0834e7\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b39cdd64b2dbe1d67b7f44dc8ca937948eb74ba5699c83cc171fc3ecfc49bb35\""
May 14 18:10:25.945174 containerd[1515]: time="2025-05-14T18:10:25.945129097Z" level=info msg="StartContainer for \"b39cdd64b2dbe1d67b7f44dc8ca937948eb74ba5699c83cc171fc3ecfc49bb35\""
May 14 18:10:25.948793 containerd[1515]: time="2025-05-14T18:10:25.948633319Z" level=info msg="connecting to shim b39cdd64b2dbe1d67b7f44dc8ca937948eb74ba5699c83cc171fc3ecfc49bb35" address="unix:///run/containerd/s/44b15805d2d8ffc50bb63f3f6c27b9b0467b8548f25a04a303797ed3a4d43583" protocol=ttrpc version=3
May 14 18:10:25.987276 systemd[1]: Started cri-containerd-b39cdd64b2dbe1d67b7f44dc8ca937948eb74ba5699c83cc171fc3ecfc49bb35.scope - libcontainer container b39cdd64b2dbe1d67b7f44dc8ca937948eb74ba5699c83cc171fc3ecfc49bb35.
May 14 18:10:26.065096 containerd[1515]: time="2025-05-14T18:10:26.065031334Z" level=info msg="StartContainer for \"b39cdd64b2dbe1d67b7f44dc8ca937948eb74ba5699c83cc171fc3ecfc49bb35\" returns successfully"
May 14 18:10:26.071712 systemd[1]: cri-containerd-b39cdd64b2dbe1d67b7f44dc8ca937948eb74ba5699c83cc171fc3ecfc49bb35.scope: Deactivated successfully.
May 14 18:10:26.075853 containerd[1515]: time="2025-05-14T18:10:26.075276384Z" level=info msg="received exit event container_id:\"b39cdd64b2dbe1d67b7f44dc8ca937948eb74ba5699c83cc171fc3ecfc49bb35\" id:\"b39cdd64b2dbe1d67b7f44dc8ca937948eb74ba5699c83cc171fc3ecfc49bb35\" pid:4487 exited_at:{seconds:1747246226 nanos:74599309}"
May 14 18:10:26.076163 containerd[1515]: time="2025-05-14T18:10:26.076098286Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b39cdd64b2dbe1d67b7f44dc8ca937948eb74ba5699c83cc171fc3ecfc49bb35\" id:\"b39cdd64b2dbe1d67b7f44dc8ca937948eb74ba5699c83cc171fc3ecfc49bb35\" pid:4487 exited_at:{seconds:1747246226 nanos:74599309}"
May 14 18:10:26.116042 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b39cdd64b2dbe1d67b7f44dc8ca937948eb74ba5699c83cc171fc3ecfc49bb35-rootfs.mount: Deactivated successfully.
May 14 18:10:26.897802 kubelet[2769]: E0514 18:10:26.897741 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 14 18:10:26.902481 containerd[1515]: time="2025-05-14T18:10:26.902431930Z" level=info msg="CreateContainer within sandbox \"980110a33007dd69ab3c64550cf95306050d34a3386c96cac5bec83c5c0834e7\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 14 18:10:26.920239 containerd[1515]: time="2025-05-14T18:10:26.919600089Z" level=info msg="Container 42ed9af29a1911e7dbf97bc652b3fe9ffdac32ba6a64dfff31f0f937ff162ed1: CDI devices from CRI Config.CDIDevices: []"
May 14 18:10:26.930278 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3906160503.mount: Deactivated successfully.
May 14 18:10:26.943623 containerd[1515]: time="2025-05-14T18:10:26.943489423Z" level=info msg="CreateContainer within sandbox \"980110a33007dd69ab3c64550cf95306050d34a3386c96cac5bec83c5c0834e7\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"42ed9af29a1911e7dbf97bc652b3fe9ffdac32ba6a64dfff31f0f937ff162ed1\""
May 14 18:10:26.946040 containerd[1515]: time="2025-05-14T18:10:26.945732314Z" level=info msg="StartContainer for \"42ed9af29a1911e7dbf97bc652b3fe9ffdac32ba6a64dfff31f0f937ff162ed1\""
May 14 18:10:26.949471 containerd[1515]: time="2025-05-14T18:10:26.949408428Z" level=info msg="connecting to shim 42ed9af29a1911e7dbf97bc652b3fe9ffdac32ba6a64dfff31f0f937ff162ed1" address="unix:///run/containerd/s/44b15805d2d8ffc50bb63f3f6c27b9b0467b8548f25a04a303797ed3a4d43583" protocol=ttrpc version=3
May 14 18:10:26.996044 systemd[1]: Started cri-containerd-42ed9af29a1911e7dbf97bc652b3fe9ffdac32ba6a64dfff31f0f937ff162ed1.scope - libcontainer container 42ed9af29a1911e7dbf97bc652b3fe9ffdac32ba6a64dfff31f0f937ff162ed1.
May 14 18:10:27.079406 systemd[1]: cri-containerd-42ed9af29a1911e7dbf97bc652b3fe9ffdac32ba6a64dfff31f0f937ff162ed1.scope: Deactivated successfully.
May 14 18:10:27.083133 containerd[1515]: time="2025-05-14T18:10:27.082960018Z" level=info msg="TaskExit event in podsandbox handler container_id:\"42ed9af29a1911e7dbf97bc652b3fe9ffdac32ba6a64dfff31f0f937ff162ed1\" id:\"42ed9af29a1911e7dbf97bc652b3fe9ffdac32ba6a64dfff31f0f937ff162ed1\" pid:4528 exited_at:{seconds:1747246227 nanos:82060839}"
May 14 18:10:27.084364 containerd[1515]: time="2025-05-14T18:10:27.084318323Z" level=info msg="received exit event container_id:\"42ed9af29a1911e7dbf97bc652b3fe9ffdac32ba6a64dfff31f0f937ff162ed1\" id:\"42ed9af29a1911e7dbf97bc652b3fe9ffdac32ba6a64dfff31f0f937ff162ed1\" pid:4528 exited_at:{seconds:1747246227 nanos:82060839}"
May 14 18:10:27.100715 containerd[1515]: time="2025-05-14T18:10:27.100653805Z" level=info msg="StartContainer for \"42ed9af29a1911e7dbf97bc652b3fe9ffdac32ba6a64dfff31f0f937ff162ed1\" returns successfully"
May 14 18:10:27.136049 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-42ed9af29a1911e7dbf97bc652b3fe9ffdac32ba6a64dfff31f0f937ff162ed1-rootfs.mount: Deactivated successfully.
May 14 18:10:27.908482 kubelet[2769]: E0514 18:10:27.907659 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 14 18:10:27.919510 containerd[1515]: time="2025-05-14T18:10:27.919447206Z" level=info msg="CreateContainer within sandbox \"980110a33007dd69ab3c64550cf95306050d34a3386c96cac5bec83c5c0834e7\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 14 18:10:27.943413 containerd[1515]: time="2025-05-14T18:10:27.941172307Z" level=info msg="Container be30937cd36ae0c918ce643f80e85276a88222fc9b3b197b55f53cf29dcb2a7a: CDI devices from CRI Config.CDIDevices: []"
May 14 18:10:27.948749 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2542916936.mount: Deactivated successfully.
May 14 18:10:27.960839 containerd[1515]: time="2025-05-14T18:10:27.960771535Z" level=info msg="CreateContainer within sandbox \"980110a33007dd69ab3c64550cf95306050d34a3386c96cac5bec83c5c0834e7\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"be30937cd36ae0c918ce643f80e85276a88222fc9b3b197b55f53cf29dcb2a7a\""
May 14 18:10:27.964455 containerd[1515]: time="2025-05-14T18:10:27.963657332Z" level=info msg="StartContainer for \"be30937cd36ae0c918ce643f80e85276a88222fc9b3b197b55f53cf29dcb2a7a\""
May 14 18:10:27.967838 containerd[1515]: time="2025-05-14T18:10:27.967733815Z" level=info msg="connecting to shim be30937cd36ae0c918ce643f80e85276a88222fc9b3b197b55f53cf29dcb2a7a" address="unix:///run/containerd/s/44b15805d2d8ffc50bb63f3f6c27b9b0467b8548f25a04a303797ed3a4d43583" protocol=ttrpc version=3
May 14 18:10:28.005029 systemd[1]: Started cri-containerd-be30937cd36ae0c918ce643f80e85276a88222fc9b3b197b55f53cf29dcb2a7a.scope - libcontainer container be30937cd36ae0c918ce643f80e85276a88222fc9b3b197b55f53cf29dcb2a7a.
May 14 18:10:28.191793 containerd[1515]: time="2025-05-14T18:10:28.190909719Z" level=info msg="StartContainer for \"be30937cd36ae0c918ce643f80e85276a88222fc9b3b197b55f53cf29dcb2a7a\" returns successfully"
May 14 18:10:28.377875 containerd[1515]: time="2025-05-14T18:10:28.377815089Z" level=info msg="TaskExit event in podsandbox handler container_id:\"be30937cd36ae0c918ce643f80e85276a88222fc9b3b197b55f53cf29dcb2a7a\" id:\"1c99a87cd21ca790d0e6452c2bd3d4cc3878873e66b2a4f41845b8f7c8364625\" pid:4596 exited_at:{seconds:1747246228 nanos:376559894}"
May 14 18:10:28.931503 kubelet[2769]: E0514 18:10:28.931423 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 14 18:10:28.952277 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx))
May 14 18:10:29.932604 kubelet[2769]: E0514 18:10:29.932446 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 14 18:10:30.681477 containerd[1515]: time="2025-05-14T18:10:30.681416295Z" level=info msg="TaskExit event in podsandbox handler container_id:\"be30937cd36ae0c918ce643f80e85276a88222fc9b3b197b55f53cf29dcb2a7a\" id:\"59db59465309dd5b80323d2b4b18fcddd8122f5cfeb42bed977bb3400e79c49d\" pid:4693 exit_status:1 exited_at:{seconds:1747246230 nanos:680869194}"
May 14 18:10:30.956149 kubelet[2769]: E0514 18:10:30.955981 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 14 18:10:31.276920 kubelet[2769]: I0514 18:10:31.276864 2769 eviction_manager.go:366] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
May 14 18:10:31.277260 kubelet[2769]: I0514 18:10:31.276941 2769 eviction_manager.go:377] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
May 14 18:10:31.278246 kubelet[2769]: I0514 18:10:31.277907 2769 eviction_manager.go:395] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-8w8l2","kube-system/kube-controller-manager-ci-4334.0.0-a-9d82e253c5","kube-system/kube-proxy-lg268","kube-system/kube-apiserver-ci-4334.0.0-a-9d82e253c5","kube-system/kube-scheduler-ci-4334.0.0-a-9d82e253c5"]
May 14 18:10:31.278246 kubelet[2769]: E0514 18:10:31.277985 2769 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-8w8l2"
May 14 18:10:31.278246 kubelet[2769]: E0514 18:10:31.278011 2769 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-ci-4334.0.0-a-9d82e253c5"
May 14 18:10:31.278246 kubelet[2769]: E0514 18:10:31.278026 2769 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-lg268"
May 14 18:10:31.278246 kubelet[2769]: E0514 18:10:31.278043 2769 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-ci-4334.0.0-a-9d82e253c5"
May 14 18:10:31.278246 kubelet[2769]: E0514 18:10:31.278055 2769 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-ci-4334.0.0-a-9d82e253c5"
May 14 18:10:31.278246 kubelet[2769]: I0514 18:10:31.278068 2769 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node"
May 14 18:10:32.921573 containerd[1515]: time="2025-05-14T18:10:32.918490768Z" level=info msg="TaskExit event in podsandbox handler container_id:\"be30937cd36ae0c918ce643f80e85276a88222fc9b3b197b55f53cf29dcb2a7a\" id:\"777793805e51bca2173e9d1b5605475afcd71efe1780c8af6521543069e32fe7\" pid:4989 exit_status:1 exited_at:{seconds:1747246232 nanos:917918694}"
May 14 18:10:33.569969 systemd-networkd[1459]: lxc_health: Link UP
May 14 18:10:33.584332 systemd-networkd[1459]: lxc_health: Gained carrier
May 14 18:10:33.665417 kubelet[2769]: E0514 18:10:33.665374 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 14 18:10:33.709186 kubelet[2769]: I0514 18:10:33.709108 2769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-8w8l2" podStartSLOduration=10.709085053 podStartE2EDuration="10.709085053s" podCreationTimestamp="2025-05-14 18:10:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 18:10:28.971600303 +0000 UTC m=+118.793550842" watchObservedRunningTime="2025-05-14 18:10:33.709085053 +0000 UTC m=+123.531035597"
May 14 18:10:33.957255 kubelet[2769]: E0514 18:10:33.956775 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 14 18:10:34.960716 kubelet[2769]: E0514 18:10:34.960314 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 14 18:10:35.240482 containerd[1515]: time="2025-05-14T18:10:35.240052888Z" level=info msg="TaskExit event in podsandbox handler container_id:\"be30937cd36ae0c918ce643f80e85276a88222fc9b3b197b55f53cf29dcb2a7a\" id:\"be508d7e7b3dce21bd467372116dce800ca27ed9bf9e35d4b34e6e9a170cebc0\" pid:5140 exited_at:{seconds:1747246235 nanos:239359962}"
May 14 18:10:35.559270 systemd-networkd[1459]: lxc_health: Gained IPv6LL
May 14 18:10:37.404173 containerd[1515]: time="2025-05-14T18:10:37.404128156Z" level=info msg="TaskExit event in podsandbox handler container_id:\"be30937cd36ae0c918ce643f80e85276a88222fc9b3b197b55f53cf29dcb2a7a\" id:\"8f5d1a53f13a24a7d8e20fdc9c59c69bf3531717dfbe606cd238d41ca5de58f5\" pid:5167 exited_at:{seconds:1747246237 nanos:403406005}"
May 14 18:10:39.571672 containerd[1515]: time="2025-05-14T18:10:39.571609476Z" level=info msg="TaskExit event in podsandbox handler container_id:\"be30937cd36ae0c918ce643f80e85276a88222fc9b3b197b55f53cf29dcb2a7a\" id:\"2dc9a6216863a518863e8124419e887bd8a3ef68eabff746020e5b8545ee8cad\" pid:5193 exited_at:{seconds:1747246239 nanos:571149113}"
May 14 18:10:39.578241 kubelet[2769]: E0514 18:10:39.576746 2769 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:53760->127.0.0.1:43007: write tcp 127.0.0.1:53760->127.0.0.1:43007: write: broken pipe
May 14 18:10:39.578840 kubelet[2769]: E0514 18:10:39.576697 2769 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 127.0.0.1:53760->127.0.0.1:43007: read tcp 127.0.0.1:53760->127.0.0.1:43007: read: connection reset by peer
May 14 18:10:39.582596 sshd[4332]: Connection closed by 139.178.89.65 port 49370
May 14 18:10:39.583963 sshd-session[4325]: pam_unix(sshd:session): session closed for user core
May 14 18:10:39.596313 systemd-logind[1497]: Session 28 logged out. Waiting for processes to exit.
May 14 18:10:39.597457 systemd[1]: sshd@27-164.92.104.130:22-139.178.89.65:49370.service: Deactivated successfully.
May 14 18:10:39.601676 systemd[1]: session-28.scope: Deactivated successfully.
May 14 18:10:39.605978 systemd-logind[1497]: Removed session 28.
May 14 18:10:41.293880 kubelet[2769]: I0514 18:10:41.293830 2769 eviction_manager.go:366] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
May 14 18:10:41.293880 kubelet[2769]: I0514 18:10:41.293895 2769 container_gc.go:88] "Attempting to delete unused containers"
May 14 18:10:41.296894 kubelet[2769]: I0514 18:10:41.296837 2769 image_gc_manager.go:404] "Attempting to delete unused images"
May 14 18:10:41.299272 kubelet[2769]: I0514 18:10:41.299141 2769 image_gc_manager.go:460] "Removing image to free bytes" imageID="sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899" size=57236178 runtimeHandler=""
May 14 18:10:41.300167 containerd[1515]: time="2025-05-14T18:10:41.300131681Z" level=info msg="RemoveImage \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\""
May 14 18:10:41.301224 containerd[1515]: time="2025-05-14T18:10:41.301120282Z" level=info msg="ImageDelete event name:\"registry.k8s.io/etcd:3.5.12-0\""
May 14 18:10:41.302092 containerd[1515]: time="2025-05-14T18:10:41.302036949Z" level=info msg="ImageDelete event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\""
May 14 18:10:41.302740 containerd[1515]: time="2025-05-14T18:10:41.302698916Z" level=info msg="RemoveImage \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" returns successfully"
May 14 18:10:41.302875 containerd[1515]: time="2025-05-14T18:10:41.302859881Z" level=info msg="ImageDelete event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\""
May 14 18:10:41.303145 kubelet[2769]: I0514 18:10:41.303109 2769 image_gc_manager.go:460] "Removing image to free bytes" imageID="sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" size=18182961 runtimeHandler=""
May 14 18:10:41.303483 containerd[1515]: time="2025-05-14T18:10:41.303464112Z" level=info msg="RemoveImage \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
May 14 18:10:41.304276 containerd[1515]: time="2025-05-14T18:10:41.304192561Z" level=info msg="ImageDelete event name:\"registry.k8s.io/coredns/coredns:v1.11.1\""
May 14 18:10:41.304656 containerd[1515]: time="2025-05-14T18:10:41.304606816Z" level=info msg="ImageDelete event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\""
May 14 18:10:41.304971 containerd[1515]: time="2025-05-14T18:10:41.304949508Z" level=info msg="RemoveImage \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" returns successfully"
May 14 18:10:41.305071 containerd[1515]: time="2025-05-14T18:10:41.305043750Z" level=info msg="ImageDelete event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
May 14 18:10:41.305348 kubelet[2769]: I0514 18:10:41.305324 2769 image_gc_manager.go:460] "Removing image to free bytes" imageID="sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c" size=321520 runtimeHandler=""
May 14 18:10:41.305645 containerd[1515]: time="2025-05-14T18:10:41.305622669Z" level=info msg="RemoveImage \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
May 14 18:10:41.306451 containerd[1515]: time="2025-05-14T18:10:41.306402733Z" level=info msg="ImageDelete event name:\"registry.k8s.io/pause:3.9\""
May 14 18:10:41.306845 containerd[1515]: time="2025-05-14T18:10:41.306815895Z" level=info msg="ImageDelete event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\""
May 14 18:10:41.309845 containerd[1515]: time="2025-05-14T18:10:41.309808233Z" level=info msg="RemoveImage \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" returns successfully"
May 14 18:10:41.310224 containerd[1515]: time="2025-05-14T18:10:41.310093052Z" level=info msg="ImageDelete event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
May 14 18:10:41.325752 kubelet[2769]: I0514 18:10:41.325682 2769 eviction_manager.go:377] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
May 14 18:10:41.326022 kubelet[2769]: I0514 18:10:41.325998 2769 eviction_manager.go:395] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-8w8l2","kube-system/kube-controller-manager-ci-4334.0.0-a-9d82e253c5","kube-system/kube-proxy-lg268","kube-system/kube-apiserver-ci-4334.0.0-a-9d82e253c5","kube-system/kube-scheduler-ci-4334.0.0-a-9d82e253c5"]
May 14 18:10:41.326177 kubelet[2769]: E0514 18:10:41.326054 2769 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-8w8l2"
May 14 18:10:41.326177 kubelet[2769]: E0514 18:10:41.326078 2769 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-ci-4334.0.0-a-9d82e253c5"
May 14 18:10:41.326177 kubelet[2769]: E0514 18:10:41.326122 2769 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-lg268"
May 14 18:10:41.326177 kubelet[2769]: E0514 18:10:41.326134 2769 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-ci-4334.0.0-a-9d82e253c5"
May 14 18:10:41.326177 kubelet[2769]: E0514 18:10:41.326142 2769 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-ci-4334.0.0-a-9d82e253c5"
May 14 18:10:41.326177 kubelet[2769]: I0514 18:10:41.326152 2769 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node"