Jun 21 05:27:56.918964 kernel: Linux version 6.12.34-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Fri Jun 20 23:59:04 -00 2025 Jun 21 05:27:56.919000 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=d3c0be6f64121476b0313f5d7d7bbd73e21bc1a219aacd38b8006b291898eca1 Jun 21 05:27:56.919011 kernel: BIOS-provided physical RAM map: Jun 21 05:27:56.919017 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jun 21 05:27:56.919024 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jun 21 05:27:56.919031 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jun 21 05:27:56.919039 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable Jun 21 05:27:56.919050 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved Jun 21 05:27:56.919060 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jun 21 05:27:56.919066 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jun 21 05:27:56.919085 kernel: NX (Execute Disable) protection: active Jun 21 05:27:56.919093 kernel: APIC: Static calls initialized Jun 21 05:27:56.919100 kernel: SMBIOS 2.8 present. Jun 21 05:27:56.919107 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017 Jun 21 05:27:56.919118 kernel: DMI: Memory slots populated: 1/1 Jun 21 05:27:56.919126 kernel: Hypervisor detected: KVM Jun 21 05:27:56.919137 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jun 21 05:27:56.919145 kernel: kvm-clock: using sched offset of 4441425930 cycles Jun 21 05:27:56.919153 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jun 21 05:27:56.919161 kernel: tsc: Detected 2494.172 MHz processor Jun 21 05:27:56.919169 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jun 21 05:27:56.919178 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jun 21 05:27:56.919186 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000 Jun 21 05:27:56.919197 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Jun 21 05:27:56.919205 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jun 21 05:27:56.919213 kernel: ACPI: Early table checksum verification disabled Jun 21 05:27:56.919221 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS ) Jun 21 05:27:56.919237 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jun 21 05:27:56.919246 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jun 21 05:27:56.919253 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001) Jun 21 05:27:56.919261 kernel: ACPI: FACS 0x000000007FFE0000 000040 Jun 21 05:27:56.919269 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jun 21 05:27:56.919283 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jun 21 05:27:56.919293 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jun 21 05:27:56.919308 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 
BXPC 00000001) Jun 21 05:27:56.919317 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd] Jun 21 05:27:56.919327 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769] Jun 21 05:27:56.919335 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f] Jun 21 05:27:56.919343 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d] Jun 21 05:27:56.919351 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895] Jun 21 05:27:56.919365 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d] Jun 21 05:27:56.919373 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985] Jun 21 05:27:56.919382 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Jun 21 05:27:56.919390 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Jun 21 05:27:56.919399 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00001000-0x7ffdafff] Jun 21 05:27:56.919408 kernel: NODE_DATA(0) allocated [mem 0x7ffd3dc0-0x7ffdafff] Jun 21 05:27:56.919419 kernel: Zone ranges: Jun 21 05:27:56.919427 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jun 21 05:27:56.919435 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff] Jun 21 05:27:56.919443 kernel: Normal empty Jun 21 05:27:56.919451 kernel: Device empty Jun 21 05:27:56.919459 kernel: Movable zone start for each node Jun 21 05:27:56.919469 kernel: Early memory node ranges Jun 21 05:27:56.919478 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jun 21 05:27:56.919487 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff] Jun 21 05:27:56.919497 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff] Jun 21 05:27:56.919506 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jun 21 05:27:56.919514 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jun 21 05:27:56.919522 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges Jun 21 05:27:56.919530 kernel: ACPI: PM-Timer IO Port: 0x608 Jun 21 05:27:56.919539 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jun 21 05:27:56.919550 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jun 21 05:27:56.919560 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jun 21 05:27:56.919571 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jun 21 05:27:56.919582 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jun 21 05:27:56.919596 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jun 21 05:27:56.919610 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jun 21 05:27:56.919625 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jun 21 05:27:56.919636 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jun 21 05:27:56.922721 kernel: TSC deadline timer available Jun 21 05:27:56.922752 kernel: CPU topo: Max. logical packages: 1 Jun 21 05:27:56.922763 kernel: CPU topo: Max. logical dies: 1 Jun 21 05:27:56.922772 kernel: CPU topo: Max. dies per package: 1 Jun 21 05:27:56.922788 kernel: CPU topo: Max. threads per core: 1 Jun 21 05:27:56.922797 kernel: CPU topo: Num. cores per package: 2 Jun 21 05:27:56.922805 kernel: CPU topo: Num. 
threads per package: 2 Jun 21 05:27:56.922814 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs Jun 21 05:27:56.922823 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jun 21 05:27:56.922832 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices Jun 21 05:27:56.922840 kernel: Booting paravirtualized kernel on KVM Jun 21 05:27:56.922849 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jun 21 05:27:56.922858 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jun 21 05:27:56.922867 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576 Jun 21 05:27:56.922878 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152 Jun 21 05:27:56.922886 kernel: pcpu-alloc: [0] 0 1 Jun 21 05:27:56.922895 kernel: kvm-guest: PV spinlocks disabled, no host support Jun 21 05:27:56.922905 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=d3c0be6f64121476b0313f5d7d7bbd73e21bc1a219aacd38b8006b291898eca1 Jun 21 05:27:56.922917 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jun 21 05:27:56.922928 kernel: random: crng init done Jun 21 05:27:56.922937 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jun 21 05:27:56.922945 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jun 21 05:27:56.922957 kernel: Fallback order for Node 0: 0 Jun 21 05:27:56.922966 kernel: Built 1 zonelists, mobility grouping on. Total pages: 524153 Jun 21 05:27:56.922974 kernel: Policy zone: DMA32 Jun 21 05:27:56.922982 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jun 21 05:27:56.922991 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jun 21 05:27:56.923000 kernel: Kernel/User page tables isolation: enabled Jun 21 05:27:56.923013 kernel: ftrace: allocating 40093 entries in 157 pages Jun 21 05:27:56.923022 kernel: ftrace: allocated 157 pages with 5 groups Jun 21 05:27:56.923031 kernel: Dynamic Preempt: voluntary Jun 21 05:27:56.923042 kernel: rcu: Preemptible hierarchical RCU implementation. Jun 21 05:27:56.923052 kernel: rcu: RCU event tracing is enabled. Jun 21 05:27:56.923061 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jun 21 05:27:56.923070 kernel: Trampoline variant of Tasks RCU enabled. Jun 21 05:27:56.923078 kernel: Rude variant of Tasks RCU enabled. Jun 21 05:27:56.923087 kernel: Tracing variant of Tasks RCU enabled. Jun 21 05:27:56.923095 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jun 21 05:27:56.923104 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jun 21 05:27:56.923113 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jun 21 05:27:56.923144 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jun 21 05:27:56.923154 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
Jun 21 05:27:56.923163 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Jun 21 05:27:56.923172 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jun 21 05:27:56.923180 kernel: Console: colour VGA+ 80x25 Jun 21 05:27:56.923189 kernel: printk: legacy console [tty0] enabled Jun 21 05:27:56.923198 kernel: printk: legacy console [ttyS0] enabled Jun 21 05:27:56.923206 kernel: ACPI: Core revision 20240827 Jun 21 05:27:56.923215 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jun 21 05:27:56.923235 kernel: APIC: Switch to symmetric I/O mode setup Jun 21 05:27:56.923244 kernel: x2apic enabled Jun 21 05:27:56.923253 kernel: APIC: Switched APIC routing to: physical x2apic Jun 21 05:27:56.923264 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jun 21 05:27:56.923277 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f3b868b6c, max_idle_ns: 440795251212 ns Jun 21 05:27:56.923286 kernel: Calibrating delay loop (skipped) preset value.. 4988.34 BogoMIPS (lpj=2494172) Jun 21 05:27:56.923295 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Jun 21 05:27:56.923304 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Jun 21 05:27:56.923313 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jun 21 05:27:56.923325 kernel: Spectre V2 : Mitigation: Retpolines Jun 21 05:27:56.923334 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jun 21 05:27:56.923343 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Jun 21 05:27:56.923352 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jun 21 05:27:56.923361 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jun 21 05:27:56.923371 kernel: MDS: Mitigation: Clear CPU buffers Jun 21 05:27:56.923380 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jun 21 05:27:56.923392 kernel: ITS: Mitigation: Aligned branch/return thunks Jun 21 05:27:56.923401 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jun 21 05:27:56.923410 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jun 21 05:27:56.923419 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jun 21 05:27:56.923428 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jun 21 05:27:56.923437 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Jun 21 05:27:56.923446 kernel: Freeing SMP alternatives memory: 32K Jun 21 05:27:56.923457 kernel: pid_max: default: 32768 minimum: 301 Jun 21 05:27:56.923466 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jun 21 05:27:56.923478 kernel: landlock: Up and running. Jun 21 05:27:56.923487 kernel: SELinux: Initializing. Jun 21 05:27:56.923496 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jun 21 05:27:56.923507 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jun 21 05:27:56.923522 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1) Jun 21 05:27:56.923535 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only. Jun 21 05:27:56.923548 kernel: signal: max sigframe size: 1776 Jun 21 05:27:56.923564 kernel: rcu: Hierarchical SRCU implementation. Jun 21 05:27:56.923611 kernel: rcu: Max phase no-delay instances is 400. 
Jun 21 05:27:56.923630 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Jun 21 05:27:56.923691 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jun 21 05:27:56.923705 kernel: smp: Bringing up secondary CPUs ... Jun 21 05:27:56.923717 kernel: smpboot: x86: Booting SMP configuration: Jun 21 05:27:56.923735 kernel: .... node #0, CPUs: #1 Jun 21 05:27:56.923748 kernel: smp: Brought up 1 node, 2 CPUs Jun 21 05:27:56.923761 kernel: smpboot: Total of 2 processors activated (9976.68 BogoMIPS) Jun 21 05:27:56.923775 kernel: Memory: 1966904K/2096612K available (14336K kernel code, 2430K rwdata, 9956K rodata, 54424K init, 2544K bss, 125144K reserved, 0K cma-reserved) Jun 21 05:27:56.923788 kernel: devtmpfs: initialized Jun 21 05:27:56.923816 kernel: x86/mm: Memory block size: 128MB Jun 21 05:27:56.923830 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jun 21 05:27:56.923843 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jun 21 05:27:56.923856 kernel: pinctrl core: initialized pinctrl subsystem Jun 21 05:27:56.923869 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jun 21 05:27:56.923882 kernel: audit: initializing netlink subsys (disabled) Jun 21 05:27:56.923896 kernel: audit: type=2000 audit(1750483673.510:1): state=initialized audit_enabled=0 res=1 Jun 21 05:27:56.923909 kernel: thermal_sys: Registered thermal governor 'step_wise' Jun 21 05:27:56.923924 kernel: thermal_sys: Registered thermal governor 'user_space' Jun 21 05:27:56.923943 kernel: cpuidle: using governor menu Jun 21 05:27:56.923963 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jun 21 05:27:56.923976 kernel: dca service started, version 1.12.1 Jun 21 05:27:56.923989 kernel: PCI: Using configuration type 1 for base access Jun 21 05:27:56.924003 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jun 21 05:27:56.924016 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jun 21 05:27:56.924030 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jun 21 05:27:56.924044 kernel: ACPI: Added _OSI(Module Device) Jun 21 05:27:56.924058 kernel: ACPI: Added _OSI(Processor Device) Jun 21 05:27:56.924077 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jun 21 05:27:56.924090 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jun 21 05:27:56.924105 kernel: ACPI: Interpreter enabled Jun 21 05:27:56.924119 kernel: ACPI: PM: (supports S0 S5) Jun 21 05:27:56.924129 kernel: ACPI: Using IOAPIC for interrupt routing Jun 21 05:27:56.924140 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jun 21 05:27:56.924154 kernel: PCI: Using E820 reservations for host bridge windows Jun 21 05:27:56.924167 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Jun 21 05:27:56.924180 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jun 21 05:27:56.924540 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Jun 21 05:27:56.925593 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Jun 21 05:27:56.925731 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Jun 21 05:27:56.925744 kernel: acpiphp: Slot [3] registered Jun 21 05:27:56.925754 kernel: acpiphp: Slot [4] registered Jun 21 05:27:56.925764 kernel: acpiphp: Slot [5] registered Jun 21 05:27:56.925773 kernel: acpiphp: Slot [6] registered Jun 21 05:27:56.925788 kernel: acpiphp: Slot [7] registered Jun 21 05:27:56.925796 kernel: acpiphp: Slot [8] registered Jun 21 05:27:56.925808 kernel: acpiphp: Slot [9] registered Jun 21 05:27:56.925822 kernel: acpiphp: Slot [10] registered Jun 21 05:27:56.925837 kernel: acpiphp: Slot [11] registered Jun 21 05:27:56.925848 kernel: acpiphp: Slot [12] registered Jun 21 05:27:56.925860 kernel: acpiphp: Slot [13] registered Jun 21 05:27:56.925872 kernel: acpiphp: Slot [14] registered Jun 21 05:27:56.925884 kernel: acpiphp: Slot [15] registered Jun 21 05:27:56.925896 kernel: acpiphp: Slot [16] registered Jun 21 05:27:56.925912 kernel: acpiphp: Slot [17] registered Jun 21 05:27:56.925925 kernel: acpiphp: Slot [18] registered Jun 21 05:27:56.925938 kernel: acpiphp: Slot [19] registered Jun 21 05:27:56.925950 kernel: acpiphp: Slot [20] registered Jun 21 05:27:56.925959 kernel: acpiphp: Slot [21] registered Jun 21 05:27:56.925968 kernel: acpiphp: Slot [22] registered Jun 21 05:27:56.925976 kernel: acpiphp: Slot [23] registered Jun 21 05:27:56.925991 kernel: acpiphp: Slot [24] registered Jun 21 05:27:56.926004 kernel: acpiphp: Slot [25] registered Jun 21 05:27:56.926021 kernel: acpiphp: Slot [26] registered Jun 21 05:27:56.926034 kernel: acpiphp: Slot [27] registered Jun 21 05:27:56.926046 kernel: acpiphp: Slot [28] registered Jun 21 05:27:56.926059 kernel: acpiphp: Slot [29] registered Jun 21 05:27:56.926072 kernel: acpiphp: Slot [30] registered Jun 21 05:27:56.926084 kernel: acpiphp: Slot [31] registered Jun 21 05:27:56.926096 kernel: PCI host bridge to bus 0000:00 Jun 21 05:27:56.926299 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jun 21 05:27:56.926429 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jun 21 05:27:56.926532 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jun 21 05:27:56.926614 kernel: pci_bus 0000:00: 
root bus resource [mem 0x80000000-0xfebfffff window] Jun 21 05:27:56.926725 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window] Jun 21 05:27:56.926851 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jun 21 05:27:56.927084 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint Jun 21 05:27:56.927261 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint Jun 21 05:27:56.927437 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint Jun 21 05:27:56.927578 kernel: pci 0000:00:01.1: BAR 4 [io 0xc1e0-0xc1ef] Jun 21 05:27:56.927714 kernel: pci 0000:00:01.1: BAR 0 [io 0x01f0-0x01f7]: legacy IDE quirk Jun 21 05:27:56.927859 kernel: pci 0000:00:01.1: BAR 1 [io 0x03f6]: legacy IDE quirk Jun 21 05:27:56.928009 kernel: pci 0000:00:01.1: BAR 2 [io 0x0170-0x0177]: legacy IDE quirk Jun 21 05:27:56.928138 kernel: pci 0000:00:01.1: BAR 3 [io 0x0376]: legacy IDE quirk Jun 21 05:27:56.928277 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint Jun 21 05:27:56.928413 kernel: pci 0000:00:01.2: BAR 4 [io 0xc180-0xc19f] Jun 21 05:27:56.928526 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint Jun 21 05:27:56.928694 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI Jun 21 05:27:56.928828 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB Jun 21 05:27:56.928977 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint Jun 21 05:27:56.929119 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref] Jun 21 05:27:56.929249 kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref] Jun 21 05:27:56.929366 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfebf0000-0xfebf0fff] Jun 21 05:27:56.929464 kernel: pci 0000:00:02.0: ROM [mem 0xfebe0000-0xfebeffff pref] Jun 21 05:27:56.929563 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jun 21 05:27:56.929747 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Jun 21 05:27:56.929954 kernel: pci 0000:00:03.0: BAR 0 [io 0xc1a0-0xc1bf] Jun 21 05:27:56.930123 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebf1000-0xfebf1fff] Jun 21 05:27:56.930256 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref] Jun 21 05:27:56.930461 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Jun 21 05:27:56.930735 kernel: pci 0000:00:04.0: BAR 0 [io 0xc1c0-0xc1df] Jun 21 05:27:56.930906 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebf2000-0xfebf2fff] Jun 21 05:27:56.931031 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref] Jun 21 05:27:56.931158 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000 conventional PCI endpoint Jun 21 05:27:56.931301 kernel: pci 0000:00:05.0: BAR 0 [io 0xc100-0xc13f] Jun 21 05:27:56.931403 kernel: pci 0000:00:05.0: BAR 1 [mem 0xfebf3000-0xfebf3fff] Jun 21 05:27:56.931506 kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref] Jun 21 05:27:56.931670 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint Jun 21 05:27:56.931782 kernel: pci 0000:00:06.0: BAR 0 [io 0xc000-0xc07f] Jun 21 05:27:56.931937 kernel: pci 0000:00:06.0: BAR 1 [mem 0xfebf4000-0xfebf4fff] Jun 21 05:27:56.932063 kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref] Jun 21 05:27:56.932239 kernel: pci 0000:00:07.0: 
[1af4:1001] type 00 class 0x010000 conventional PCI endpoint Jun 21 05:27:56.932417 kernel: pci 0000:00:07.0: BAR 0 [io 0xc080-0xc0ff] Jun 21 05:27:56.932581 kernel: pci 0000:00:07.0: BAR 1 [mem 0xfebf5000-0xfebf5fff] Jun 21 05:27:56.932830 kernel: pci 0000:00:07.0: BAR 4 [mem 0xfe814000-0xfe817fff 64bit pref] Jun 21 05:27:56.933026 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint Jun 21 05:27:56.933241 kernel: pci 0000:00:08.0: BAR 0 [io 0xc140-0xc17f] Jun 21 05:27:56.933420 kernel: pci 0000:00:08.0: BAR 4 [mem 0xfe818000-0xfe81bfff 64bit pref] Jun 21 05:27:56.933443 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jun 21 05:27:56.933461 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jun 21 05:27:56.933478 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jun 21 05:27:56.933495 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jun 21 05:27:56.933512 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Jun 21 05:27:56.933530 kernel: iommu: Default domain type: Translated Jun 21 05:27:56.933547 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jun 21 05:27:56.933565 kernel: PCI: Using ACPI for IRQ routing Jun 21 05:27:56.933587 kernel: PCI: pci_cache_line_size set to 64 bytes Jun 21 05:27:56.933604 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jun 21 05:27:56.933621 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff] Jun 21 05:27:56.933832 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Jun 21 05:27:56.933962 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Jun 21 05:27:56.934157 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jun 21 05:27:56.934183 kernel: vgaarb: loaded Jun 21 05:27:56.934201 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jun 21 05:27:56.934218 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jun 21 05:27:56.934242 kernel: clocksource: Switched to clocksource kvm-clock Jun 21 05:27:56.934259 kernel: VFS: Disk quotas dquot_6.6.0 Jun 21 05:27:56.934275 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jun 21 05:27:56.934292 kernel: pnp: PnP ACPI init Jun 21 05:27:56.934307 kernel: pnp: PnP ACPI: found 4 devices Jun 21 05:27:56.934325 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jun 21 05:27:56.934340 kernel: NET: Registered PF_INET protocol family Jun 21 05:27:56.934356 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Jun 21 05:27:56.934373 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Jun 21 05:27:56.934394 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jun 21 05:27:56.934425 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Jun 21 05:27:56.934441 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Jun 21 05:27:56.934456 kernel: TCP: Hash tables configured (established 16384 bind 16384) Jun 21 05:27:56.934471 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Jun 21 05:27:56.934486 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Jun 21 05:27:56.934501 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jun 21 05:27:56.934515 kernel: NET: Registered PF_XDP protocol family Jun 21 05:27:56.934725 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jun 21 05:27:56.934883 kernel: pci_bus 
0000:00: resource 5 [io 0x0d00-0xffff window] Jun 21 05:27:56.935030 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jun 21 05:27:56.935199 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Jun 21 05:27:56.935358 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window] Jun 21 05:27:56.935501 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Jun 21 05:27:56.937755 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jun 21 05:27:56.937791 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Jun 21 05:27:56.937996 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x720 took 28227 usecs Jun 21 05:27:56.938014 kernel: PCI: CLS 0 bytes, default 64 Jun 21 05:27:56.938032 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jun 21 05:27:56.938043 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f3b868b6c, max_idle_ns: 440795251212 ns Jun 21 05:27:56.938054 kernel: Initialise system trusted keyrings Jun 21 05:27:56.938065 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jun 21 05:27:56.938075 kernel: Key type asymmetric registered Jun 21 05:27:56.938085 kernel: Asymmetric key parser 'x509' registered Jun 21 05:27:56.938095 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jun 21 05:27:56.938111 kernel: io scheduler mq-deadline registered Jun 21 05:27:56.938121 kernel: io scheduler kyber registered Jun 21 05:27:56.938131 kernel: io scheduler bfq registered Jun 21 05:27:56.938141 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jun 21 05:27:56.938152 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10 Jun 21 05:27:56.938162 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Jun 21 05:27:56.938172 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Jun 21 05:27:56.938182 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jun 21 05:27:56.938193 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jun 21 05:27:56.938205 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jun 21 05:27:56.938216 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jun 21 05:27:56.938226 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jun 21 05:27:56.938537 kernel: rtc_cmos 00:03: RTC can wake from S4 Jun 21 05:27:56.938602 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jun 21 05:27:56.938786 kernel: rtc_cmos 00:03: registered as rtc0 Jun 21 05:27:56.938935 kernel: rtc_cmos 00:03: setting system clock to 2025-06-21T05:27:56 UTC (1750483676) Jun 21 05:27:56.939078 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Jun 21 05:27:56.939106 kernel: intel_pstate: CPU model not supported Jun 21 05:27:56.939123 kernel: NET: Registered PF_INET6 protocol family Jun 21 05:27:56.939140 kernel: Segment Routing with IPv6 Jun 21 05:27:56.939157 kernel: In-situ OAM (IOAM) with IPv6 Jun 21 05:27:56.939174 kernel: NET: Registered PF_PACKET protocol family Jun 21 05:27:56.939191 kernel: Key type dns_resolver registered Jun 21 05:27:56.939208 kernel: IPI shorthand broadcast: enabled Jun 21 05:27:56.939225 kernel: sched_clock: Marking stable (3459003845, 110824122)->(3589467081, -19639114) Jun 21 05:27:56.939242 kernel: registered taskstats version 1 Jun 21 05:27:56.939263 kernel: Loading compiled-in X.509 certificates Jun 21 05:27:56.939280 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.34-flatcar: ec4617d162e00e1890f71f252cdf44036a7b66f7' Jun 
21 05:27:56.939297 kernel: Demotion targets for Node 0: null Jun 21 05:27:56.939315 kernel: Key type .fscrypt registered Jun 21 05:27:56.939331 kernel: Key type fscrypt-provisioning registered Jun 21 05:27:56.939352 kernel: ima: No TPM chip found, activating TPM-bypass! Jun 21 05:27:56.939390 kernel: ima: Allocated hash algorithm: sha1 Jun 21 05:27:56.939411 kernel: ima: No architecture policies found Jun 21 05:27:56.939430 kernel: clk: Disabling unused clocks Jun 21 05:27:56.939445 kernel: Warning: unable to open an initial console. Jun 21 05:27:56.939461 kernel: Freeing unused kernel image (initmem) memory: 54424K Jun 21 05:27:56.939480 kernel: Write protecting the kernel read-only data: 24576k Jun 21 05:27:56.939498 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K Jun 21 05:27:56.939516 kernel: Run /init as init process Jun 21 05:27:56.939534 kernel: with arguments: Jun 21 05:27:56.939550 kernel: /init Jun 21 05:27:56.939565 kernel: with environment: Jun 21 05:27:56.939580 kernel: HOME=/ Jun 21 05:27:56.939590 kernel: TERM=linux Jun 21 05:27:56.939601 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jun 21 05:27:56.939620 systemd[1]: Successfully made /usr/ read-only. Jun 21 05:27:56.939639 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jun 21 05:27:56.940098 systemd[1]: Detected virtualization kvm. Jun 21 05:27:56.940121 systemd[1]: Detected architecture x86-64. Jun 21 05:27:56.940141 systemd[1]: Running in initrd. Jun 21 05:27:56.940167 systemd[1]: No hostname configured, using default hostname. Jun 21 05:27:56.940184 systemd[1]: Hostname set to . Jun 21 05:27:56.940198 systemd[1]: Initializing machine ID from VM UUID. Jun 21 05:27:56.940212 systemd[1]: Queued start job for default target initrd.target. Jun 21 05:27:56.940227 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 21 05:27:56.940243 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 21 05:27:56.940260 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jun 21 05:27:56.940323 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jun 21 05:27:56.940348 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jun 21 05:27:56.940368 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jun 21 05:27:56.940389 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jun 21 05:27:56.940411 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jun 21 05:27:56.940433 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 21 05:27:56.940451 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 21 05:27:56.940470 systemd[1]: Reached target paths.target - Path Units. Jun 21 05:27:56.940489 systemd[1]: Reached target slices.target - Slice Units. Jun 21 05:27:56.940507 systemd[1]: Reached target swap.target - Swaps. 
Jun 21 05:27:56.940539 systemd[1]: Reached target timers.target - Timer Units. Jun 21 05:27:56.940559 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jun 21 05:27:56.940578 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 21 05:27:56.940600 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jun 21 05:27:56.940618 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jun 21 05:27:56.940635 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jun 21 05:27:56.940720 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 21 05:27:56.940737 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jun 21 05:27:56.940755 systemd[1]: Reached target sockets.target - Socket Units. Jun 21 05:27:56.940774 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jun 21 05:27:56.940792 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 21 05:27:56.940811 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jun 21 05:27:56.940836 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jun 21 05:27:56.940855 systemd[1]: Starting systemd-fsck-usr.service... Jun 21 05:27:56.940873 systemd[1]: Starting systemd-journald.service - Journal Service... Jun 21 05:27:56.940892 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 21 05:27:56.940910 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 21 05:27:56.940929 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jun 21 05:27:56.940959 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jun 21 05:27:56.941053 systemd-journald[212]: Collecting audit messages is disabled. Jun 21 05:27:56.941106 systemd[1]: Finished systemd-fsck-usr.service. Jun 21 05:27:56.941126 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jun 21 05:27:56.941146 systemd-journald[212]: Journal started Jun 21 05:27:56.941179 systemd-journald[212]: Runtime Journal (/run/log/journal/29396245793f49f58b631bb596c80888) is 4.9M, max 39.5M, 34.6M free. Jun 21 05:27:56.943718 systemd[1]: Started systemd-journald.service - Journal Service. Jun 21 05:27:56.941727 systemd-modules-load[213]: Inserted module 'overlay' Jun 21 05:27:56.956641 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jun 21 05:27:56.961943 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jun 21 05:27:56.996529 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jun 21 05:27:56.996575 kernel: Bridge firewalling registered Jun 21 05:27:56.996360 systemd-modules-load[213]: Inserted module 'br_netfilter' Jun 21 05:27:56.997335 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 21 05:27:57.001983 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 21 05:27:57.006926 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jun 21 05:27:57.008921 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Jun 21 05:27:57.011828 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jun 21 05:27:57.017361 systemd-tmpfiles[226]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jun 21 05:27:57.025704 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jun 21 05:27:57.039663 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 21 05:27:57.044758 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 21 05:27:57.047935 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jun 21 05:27:57.049341 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 21 05:27:57.053894 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jun 21 05:27:57.084692 dracut-cmdline[251]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=d3c0be6f64121476b0313f5d7d7bbd73e21bc1a219aacd38b8006b291898eca1 Jun 21 05:27:57.110319 systemd-resolved[250]: Positive Trust Anchors: Jun 21 05:27:57.111199 systemd-resolved[250]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 21 05:27:57.111264 systemd-resolved[250]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jun 21 05:27:57.118362 systemd-resolved[250]: Defaulting to hostname 'linux'. Jun 21 05:27:57.120670 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 21 05:27:57.121718 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 21 05:27:57.210785 kernel: SCSI subsystem initialized Jun 21 05:27:57.222748 kernel: Loading iSCSI transport class v2.0-870. Jun 21 05:27:57.236726 kernel: iscsi: registered transport (tcp) Jun 21 05:27:57.268975 kernel: iscsi: registered transport (qla4xxx) Jun 21 05:27:57.269118 kernel: QLogic iSCSI HBA Driver Jun 21 05:27:57.302612 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jun 21 05:27:57.345890 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jun 21 05:27:57.349290 systemd[1]: Reached target network-pre.target - Preparation for Network. Jun 21 05:27:57.432789 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jun 21 05:27:57.436021 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... 
Jun 21 05:27:57.505774 kernel: raid6: avx2x4 gen() 17216 MB/s Jun 21 05:27:57.522770 kernel: raid6: avx2x2 gen() 18227 MB/s Jun 21 05:27:57.540402 kernel: raid6: avx2x1 gen() 15984 MB/s Jun 21 05:27:57.540544 kernel: raid6: using algorithm avx2x2 gen() 18227 MB/s Jun 21 05:27:57.557976 kernel: raid6: .... xor() 15745 MB/s, rmw enabled Jun 21 05:27:57.558100 kernel: raid6: using avx2x2 recovery algorithm Jun 21 05:27:57.585708 kernel: xor: automatically using best checksumming function avx Jun 21 05:27:57.814724 kernel: Btrfs loaded, zoned=no, fsverity=no Jun 21 05:27:57.825774 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jun 21 05:27:57.830502 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 21 05:27:57.871718 systemd-udevd[460]: Using default interface naming scheme 'v255'. Jun 21 05:27:57.880693 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 21 05:27:57.886114 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jun 21 05:27:57.926785 dracut-pre-trigger[464]: rd.md=0: removing MD RAID activation Jun 21 05:27:57.974711 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jun 21 05:27:57.978329 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 21 05:27:58.058735 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 21 05:27:58.062188 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jun 21 05:27:58.180003 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues Jun 21 05:27:58.186125 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Jun 21 05:27:58.197673 kernel: cryptd: max_cpu_qlen set to 1000 Jun 21 05:27:58.197752 kernel: virtio_scsi virtio3: 2/0/0 default/read/poll queues Jun 21 05:27:58.215190 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jun 21 05:27:58.215298 kernel: scsi host0: Virtio SCSI HBA Jun 21 05:27:58.215366 kernel: GPT:9289727 != 125829119 Jun 21 05:27:58.215398 kernel: GPT:Alternate GPT header not at the end of the disk. Jun 21 05:27:58.215420 kernel: GPT:9289727 != 125829119 Jun 21 05:27:58.215438 kernel: GPT: Use GNU Parted to correct GPT errors. Jun 21 05:27:58.215462 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jun 21 05:27:58.218692 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues Jun 21 05:27:58.221889 kernel: virtio_blk virtio5: [vdb] 976 512-byte logical blocks (500 kB/488 KiB) Jun 21 05:27:58.239809 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 21 05:27:58.240020 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 21 05:27:58.242314 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jun 21 05:27:58.247026 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 21 05:27:58.248236 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jun 21 05:27:58.284999 kernel: AES CTR mode by8 optimization enabled Jun 21 05:27:58.286693 kernel: libata version 3.00 loaded. 
Jun 21 05:27:58.308690 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Jun 21 05:27:58.339969 kernel: ata_piix 0000:00:01.1: version 2.13 Jun 21 05:27:58.359830 kernel: scsi host1: ata_piix Jun 21 05:27:58.360682 kernel: ACPI: bus type USB registered Jun 21 05:27:58.360746 kernel: usbcore: registered new interface driver usbfs Jun 21 05:27:58.360767 kernel: usbcore: registered new interface driver hub Jun 21 05:27:58.360784 kernel: usbcore: registered new device driver usb Jun 21 05:27:58.363701 kernel: scsi host2: ata_piix Jun 21 05:27:58.364031 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14 lpm-pol 0 Jun 21 05:27:58.364073 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15 lpm-pol 0 Jun 21 05:27:58.426436 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jun 21 05:27:58.427487 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 21 05:27:58.460577 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jun 21 05:27:58.471126 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jun 21 05:27:58.471741 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jun 21 05:27:58.485000 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jun 21 05:27:58.486984 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jun 21 05:27:58.530948 disk-uuid[605]: Primary Header is updated. Jun 21 05:27:58.530948 disk-uuid[605]: Secondary Entries is updated. Jun 21 05:27:58.530948 disk-uuid[605]: Secondary Header is updated. Jun 21 05:27:58.544704 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jun 21 05:27:58.549765 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller Jun 21 05:27:58.550128 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1 Jun 21 05:27:58.551674 kernel: uhci_hcd 0000:00:01.2: detected 2 ports Jun 21 05:27:58.554680 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180 Jun 21 05:27:58.556692 kernel: hub 1-0:1.0: USB hub found Jun 21 05:27:58.560707 kernel: hub 1-0:1.0: 2 ports detected Jun 21 05:27:58.709896 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jun 21 05:27:58.720499 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jun 21 05:27:58.721221 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 21 05:27:58.722229 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 21 05:27:58.724336 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jun 21 05:27:58.754544 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jun 21 05:27:59.562715 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jun 21 05:27:59.564930 disk-uuid[606]: The operation has completed successfully. Jun 21 05:27:59.636923 systemd[1]: disk-uuid.service: Deactivated successfully. Jun 21 05:27:59.637150 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jun 21 05:27:59.673064 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jun 21 05:27:59.709161 sh[630]: Success Jun 21 05:27:59.734731 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. 
Duplicate IMA measurements will not be recorded in the IMA log. Jun 21 05:27:59.734872 kernel: device-mapper: uevent: version 1.0.3 Jun 21 05:27:59.736388 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jun 21 05:27:59.749688 kernel: device-mapper: verity: sha256 using shash "sha256-avx2" Jun 21 05:27:59.830864 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jun 21 05:27:59.833783 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jun 21 05:27:59.850870 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jun 21 05:27:59.865691 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' Jun 21 05:27:59.868762 kernel: BTRFS: device fsid bfb8168c-5be0-428c-83e7-820ccaf1f8e9 devid 1 transid 41 /dev/mapper/usr (253:0) scanned by mount (642) Jun 21 05:27:59.872107 kernel: BTRFS info (device dm-0): first mount of filesystem bfb8168c-5be0-428c-83e7-820ccaf1f8e9 Jun 21 05:27:59.872223 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jun 21 05:27:59.872245 kernel: BTRFS info (device dm-0): using free-space-tree Jun 21 05:27:59.883682 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jun 21 05:27:59.885363 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jun 21 05:27:59.886313 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jun 21 05:27:59.888915 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jun 21 05:27:59.891891 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jun 21 05:27:59.933818 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 (254:6) scanned by mount (675) Jun 21 05:27:59.937225 kernel: BTRFS info (device vda6): first mount of filesystem 57d2b200-37a8-4067-8765-910d3ed0182c Jun 21 05:27:59.937356 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jun 21 05:27:59.937371 kernel: BTRFS info (device vda6): using free-space-tree Jun 21 05:27:59.952805 kernel: BTRFS info (device vda6): last unmount of filesystem 57d2b200-37a8-4067-8765-910d3ed0182c Jun 21 05:27:59.955766 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jun 21 05:27:59.958979 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jun 21 05:28:00.112372 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 21 05:28:00.117860 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 21 05:28:00.193365 systemd-networkd[813]: lo: Link UP Jun 21 05:28:00.194099 systemd-networkd[813]: lo: Gained carrier Jun 21 05:28:00.209845 systemd-networkd[813]: Enumeration completed Jun 21 05:28:00.210779 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 21 05:28:00.211546 systemd[1]: Reached target network.target - Network. Jun 21 05:28:00.216004 systemd-networkd[813]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. Jun 21 05:28:00.216019 systemd-networkd[813]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network. Jun 21 05:28:00.217742 systemd-networkd[813]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jun 21 05:28:00.217749 systemd-networkd[813]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 21 05:28:00.222330 systemd-networkd[813]: eth0: Link UP Jun 21 05:28:00.222343 systemd-networkd[813]: eth0: Gained carrier Jun 21 05:28:00.222467 systemd-networkd[813]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. Jun 21 05:28:00.227326 systemd-networkd[813]: eth1: Link UP Jun 21 05:28:00.227341 systemd-networkd[813]: eth1: Gained carrier Jun 21 05:28:00.227371 systemd-networkd[813]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 21 05:28:00.243843 systemd-networkd[813]: eth0: DHCPv4 address 64.23.242.202/20, gateway 64.23.240.1 acquired from 169.254.169.253 Jun 21 05:28:00.250875 systemd-networkd[813]: eth1: DHCPv4 address 10.124.0.20/20 acquired from 169.254.169.253 Jun 21 05:28:00.251165 ignition[722]: Ignition 2.21.0 Jun 21 05:28:00.251178 ignition[722]: Stage: fetch-offline Jun 21 05:28:00.251242 ignition[722]: no configs at "/usr/lib/ignition/base.d" Jun 21 05:28:00.251257 ignition[722]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jun 21 05:28:00.251423 ignition[722]: parsed url from cmdline: "" Jun 21 05:28:00.251429 ignition[722]: no config URL provided Jun 21 05:28:00.251439 ignition[722]: reading system config file "/usr/lib/ignition/user.ign" Jun 21 05:28:00.251461 ignition[722]: no config at "/usr/lib/ignition/user.ign" Jun 21 05:28:00.256938 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jun 21 05:28:00.251472 ignition[722]: failed to fetch config: resource requires networking Jun 21 05:28:00.252968 ignition[722]: Ignition finished successfully Jun 21 05:28:00.262094 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jun 21 05:28:00.305785 ignition[823]: Ignition 2.21.0 Jun 21 05:28:00.305807 ignition[823]: Stage: fetch Jun 21 05:28:00.306115 ignition[823]: no configs at "/usr/lib/ignition/base.d" Jun 21 05:28:00.306131 ignition[823]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jun 21 05:28:00.306286 ignition[823]: parsed url from cmdline: "" Jun 21 05:28:00.306292 ignition[823]: no config URL provided Jun 21 05:28:00.306300 ignition[823]: reading system config file "/usr/lib/ignition/user.ign" Jun 21 05:28:00.306329 ignition[823]: no config at "/usr/lib/ignition/user.ign" Jun 21 05:28:00.306403 ignition[823]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1 Jun 21 05:28:00.340114 ignition[823]: GET result: OK Jun 21 05:28:00.340947 ignition[823]: parsing config with SHA512: fc77ac32d7bb2cf46eebb08edfe2f2d0edba3a8922b7593761eebce8bb8abfba059c6c39da1664d6b64a80bc4683123ebd998874127d46b50b8debcd28cfd441 Jun 21 05:28:00.346448 unknown[823]: fetched base config from "system" Jun 21 05:28:00.346463 unknown[823]: fetched base config from "system" Jun 21 05:28:00.346830 ignition[823]: fetch: fetch complete Jun 21 05:28:00.346470 unknown[823]: fetched user config from "digitalocean" Jun 21 05:28:00.346836 ignition[823]: fetch: fetch passed Jun 21 05:28:00.346915 ignition[823]: Ignition finished successfully Jun 21 05:28:00.352296 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jun 21 05:28:00.355598 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Jun 21 05:28:00.402691 ignition[829]: Ignition 2.21.0 Jun 21 05:28:00.402706 ignition[829]: Stage: kargs Jun 21 05:28:00.403033 ignition[829]: no configs at "/usr/lib/ignition/base.d" Jun 21 05:28:00.403048 ignition[829]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jun 21 05:28:00.404983 ignition[829]: kargs: kargs passed Jun 21 05:28:00.405069 ignition[829]: Ignition finished successfully Jun 21 05:28:00.408836 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jun 21 05:28:00.411394 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jun 21 05:28:00.457120 ignition[836]: Ignition 2.21.0 Jun 21 05:28:00.457155 ignition[836]: Stage: disks Jun 21 05:28:00.457811 ignition[836]: no configs at "/usr/lib/ignition/base.d" Jun 21 05:28:00.457836 ignition[836]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jun 21 05:28:00.461787 ignition[836]: disks: disks passed Jun 21 05:28:00.461885 ignition[836]: Ignition finished successfully Jun 21 05:28:00.464147 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jun 21 05:28:00.464956 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jun 21 05:28:00.465374 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jun 21 05:28:00.466503 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 21 05:28:00.467389 systemd[1]: Reached target sysinit.target - System Initialization. Jun 21 05:28:00.468150 systemd[1]: Reached target basic.target - Basic System. Jun 21 05:28:00.470229 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jun 21 05:28:00.506412 systemd-fsck[845]: ROOT: clean, 15/553520 files, 52789/553472 blocks Jun 21 05:28:00.508986 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jun 21 05:28:00.511751 systemd[1]: Mounting sysroot.mount - /sysroot... Jun 21 05:28:00.663712 kernel: EXT4-fs (vda9): mounted filesystem 6d18c974-0fd6-4e4a-98cf-62524fcf9e99 r/w with ordered data mode. Quota mode: none. Jun 21 05:28:00.665369 systemd[1]: Mounted sysroot.mount - /sysroot. Jun 21 05:28:00.666557 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jun 21 05:28:00.669106 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 21 05:28:00.672162 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jun 21 05:28:00.677930 systemd[1]: Starting flatcar-afterburn-network.service - Flatcar Afterburn network service... Jun 21 05:28:00.685500 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jun 21 05:28:00.687282 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jun 21 05:28:00.688427 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jun 21 05:28:00.707672 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 (254:6) scanned by mount (853) Jun 21 05:28:00.708781 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jun 21 05:28:00.712808 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Jun 21 05:28:00.723379 kernel: BTRFS info (device vda6): first mount of filesystem 57d2b200-37a8-4067-8765-910d3ed0182c Jun 21 05:28:00.723489 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jun 21 05:28:00.723515 kernel: BTRFS info (device vda6): using free-space-tree Jun 21 05:28:00.750314 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jun 21 05:28:00.807077 initrd-setup-root[885]: cut: /sysroot/etc/passwd: No such file or directory Jun 21 05:28:00.830713 initrd-setup-root[892]: cut: /sysroot/etc/group: No such file or directory Jun 21 05:28:00.833109 coreos-metadata[855]: Jun 21 05:28:00.832 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jun 21 05:28:00.835204 coreos-metadata[856]: Jun 21 05:28:00.834 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jun 21 05:28:00.842678 initrd-setup-root[899]: cut: /sysroot/etc/shadow: No such file or directory Jun 21 05:28:00.845361 coreos-metadata[856]: Jun 21 05:28:00.845 INFO Fetch successful Jun 21 05:28:00.852500 initrd-setup-root[906]: cut: /sysroot/etc/gshadow: No such file or directory Jun 21 05:28:00.856090 coreos-metadata[855]: Jun 21 05:28:00.853 INFO Fetch successful Jun 21 05:28:00.856778 coreos-metadata[856]: Jun 21 05:28:00.856 INFO wrote hostname ci-4372.0.0-d-47135505f9 to /sysroot/etc/hostname Jun 21 05:28:00.857870 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jun 21 05:28:00.869823 systemd[1]: flatcar-afterburn-network.service: Deactivated successfully. Jun 21 05:28:00.870045 systemd[1]: Finished flatcar-afterburn-network.service - Flatcar Afterburn network service. Jun 21 05:28:01.083394 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jun 21 05:28:01.087813 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jun 21 05:28:01.090972 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jun 21 05:28:01.134689 kernel: BTRFS info (device vda6): last unmount of filesystem 57d2b200-37a8-4067-8765-910d3ed0182c Jun 21 05:28:01.134901 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jun 21 05:28:01.168002 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jun 21 05:28:01.197157 ignition[976]: INFO : Ignition 2.21.0 Jun 21 05:28:01.200432 ignition[976]: INFO : Stage: mount Jun 21 05:28:01.202492 ignition[976]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 21 05:28:01.205898 ignition[976]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jun 21 05:28:01.209003 ignition[976]: INFO : mount: mount passed Jun 21 05:28:01.209715 ignition[976]: INFO : Ignition finished successfully Jun 21 05:28:01.212569 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jun 21 05:28:01.218603 systemd[1]: Starting ignition-files.service - Ignition (files)... Jun 21 05:28:01.256379 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 21 05:28:01.288173 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 (254:6) scanned by mount (987) Jun 21 05:28:01.292938 kernel: BTRFS info (device vda6): first mount of filesystem 57d2b200-37a8-4067-8765-910d3ed0182c Jun 21 05:28:01.293073 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jun 21 05:28:01.293097 kernel: BTRFS info (device vda6): using free-space-tree Jun 21 05:28:01.309528 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jun 21 05:28:01.367957 ignition[1004]: INFO : Ignition 2.21.0 Jun 21 05:28:01.367957 ignition[1004]: INFO : Stage: files Jun 21 05:28:01.370307 ignition[1004]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 21 05:28:01.370307 ignition[1004]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jun 21 05:28:01.373962 ignition[1004]: DEBUG : files: compiled without relabeling support, skipping Jun 21 05:28:01.376609 ignition[1004]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jun 21 05:28:01.376609 ignition[1004]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jun 21 05:28:01.380878 ignition[1004]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jun 21 05:28:01.382231 ignition[1004]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jun 21 05:28:01.383583 unknown[1004]: wrote ssh authorized keys file for user: core Jun 21 05:28:01.384809 ignition[1004]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jun 21 05:28:01.389925 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jun 21 05:28:01.391531 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jun 21 05:28:01.469469 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jun 21 05:28:01.578702 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jun 21 05:28:01.578702 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jun 21 05:28:01.578702 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jun 21 05:28:01.885364 systemd-networkd[813]: eth1: Gained IPv6LL Jun 21 05:28:02.044639 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jun 21 05:28:02.141194 systemd-networkd[813]: eth0: Gained IPv6LL Jun 21 05:28:02.280819 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jun 21 05:28:02.280819 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jun 21 05:28:02.288056 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jun 21 05:28:02.288056 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jun 21 05:28:02.288056 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jun 21 05:28:02.288056 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 21 05:28:02.288056 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 21 05:28:02.288056 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 21 05:28:02.288056 
ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 21 05:28:02.288056 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jun 21 05:28:02.288056 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jun 21 05:28:02.288056 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jun 21 05:28:02.288056 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jun 21 05:28:02.288056 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jun 21 05:28:02.288056 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 Jun 21 05:28:02.949907 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jun 21 05:28:03.309310 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jun 21 05:28:03.309310 ignition[1004]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jun 21 05:28:03.311838 ignition[1004]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 21 05:28:03.311838 ignition[1004]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 21 05:28:03.311838 ignition[1004]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jun 21 05:28:03.311838 ignition[1004]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Jun 21 05:28:03.316564 ignition[1004]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Jun 21 05:28:03.316564 ignition[1004]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Jun 21 05:28:03.316564 ignition[1004]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Jun 21 05:28:03.316564 ignition[1004]: INFO : files: files passed Jun 21 05:28:03.316564 ignition[1004]: INFO : Ignition finished successfully Jun 21 05:28:03.315066 systemd[1]: Finished ignition-files.service - Ignition (files). Jun 21 05:28:03.319842 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jun 21 05:28:03.321471 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jun 21 05:28:03.343134 systemd[1]: ignition-quench.service: Deactivated successfully. Jun 21 05:28:03.343806 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Jun 21 05:28:03.352378 initrd-setup-root-after-ignition[1033]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 21 05:28:03.352378 initrd-setup-root-after-ignition[1033]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jun 21 05:28:03.354369 initrd-setup-root-after-ignition[1037]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 21 05:28:03.356308 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 21 05:28:03.357570 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jun 21 05:28:03.359899 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jun 21 05:28:03.426802 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jun 21 05:28:03.426990 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jun 21 05:28:03.428574 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jun 21 05:28:03.429104 systemd[1]: Reached target initrd.target - Initrd Default Target. Jun 21 05:28:03.430017 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jun 21 05:28:03.431189 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jun 21 05:28:03.454022 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 21 05:28:03.456602 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jun 21 05:28:03.487797 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jun 21 05:28:03.489303 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 21 05:28:03.489979 systemd[1]: Stopped target timers.target - Timer Units. Jun 21 05:28:03.490564 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jun 21 05:28:03.491921 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 21 05:28:03.493057 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jun 21 05:28:03.494098 systemd[1]: Stopped target basic.target - Basic System. Jun 21 05:28:03.494858 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jun 21 05:28:03.495525 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jun 21 05:28:03.496491 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jun 21 05:28:03.497278 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jun 21 05:28:03.498492 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jun 21 05:28:03.499299 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jun 21 05:28:03.500310 systemd[1]: Stopped target sysinit.target - System Initialization. Jun 21 05:28:03.501300 systemd[1]: Stopped target local-fs.target - Local File Systems. Jun 21 05:28:03.502254 systemd[1]: Stopped target swap.target - Swaps. Jun 21 05:28:03.503194 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jun 21 05:28:03.503507 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jun 21 05:28:03.504974 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jun 21 05:28:03.506159 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 21 05:28:03.506849 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. 
Jun 21 05:28:03.507057 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 21 05:28:03.507689 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jun 21 05:28:03.507988 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jun 21 05:28:03.508902 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jun 21 05:28:03.509159 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 21 05:28:03.509852 systemd[1]: ignition-files.service: Deactivated successfully. Jun 21 05:28:03.510017 systemd[1]: Stopped ignition-files.service - Ignition (files). Jun 21 05:28:03.511036 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jun 21 05:28:03.511193 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jun 21 05:28:03.513788 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jun 21 05:28:03.514171 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jun 21 05:28:03.514381 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jun 21 05:28:03.521588 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jun 21 05:28:03.523453 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jun 21 05:28:03.524509 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jun 21 05:28:03.525960 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jun 21 05:28:03.526703 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jun 21 05:28:03.532826 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jun 21 05:28:03.534073 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jun 21 05:28:03.563200 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jun 21 05:28:03.567627 systemd[1]: sysroot-boot.service: Deactivated successfully. Jun 21 05:28:03.568388 ignition[1057]: INFO : Ignition 2.21.0 Jun 21 05:28:03.568388 ignition[1057]: INFO : Stage: umount Jun 21 05:28:03.568388 ignition[1057]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 21 05:28:03.568388 ignition[1057]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jun 21 05:28:03.568786 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jun 21 05:28:03.574454 ignition[1057]: INFO : umount: umount passed Jun 21 05:28:03.574454 ignition[1057]: INFO : Ignition finished successfully Jun 21 05:28:03.576318 systemd[1]: ignition-mount.service: Deactivated successfully. Jun 21 05:28:03.576528 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jun 21 05:28:03.578459 systemd[1]: ignition-disks.service: Deactivated successfully. Jun 21 05:28:03.578634 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jun 21 05:28:03.579513 systemd[1]: ignition-kargs.service: Deactivated successfully. Jun 21 05:28:03.579586 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jun 21 05:28:03.580060 systemd[1]: ignition-fetch.service: Deactivated successfully. Jun 21 05:28:03.580107 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jun 21 05:28:03.580791 systemd[1]: Stopped target network.target - Network. Jun 21 05:28:03.581270 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jun 21 05:28:03.581330 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). 
Jun 21 05:28:03.582239 systemd[1]: Stopped target paths.target - Path Units. Jun 21 05:28:03.583576 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jun 21 05:28:03.587756 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 21 05:28:03.588225 systemd[1]: Stopped target slices.target - Slice Units. Jun 21 05:28:03.589232 systemd[1]: Stopped target sockets.target - Socket Units. Jun 21 05:28:03.590037 systemd[1]: iscsid.socket: Deactivated successfully. Jun 21 05:28:03.590099 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jun 21 05:28:03.590756 systemd[1]: iscsiuio.socket: Deactivated successfully. Jun 21 05:28:03.590800 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 21 05:28:03.591315 systemd[1]: ignition-setup.service: Deactivated successfully. Jun 21 05:28:03.591390 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jun 21 05:28:03.591923 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jun 21 05:28:03.591973 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jun 21 05:28:03.592738 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jun 21 05:28:03.592797 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jun 21 05:28:03.593634 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jun 21 05:28:03.594262 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jun 21 05:28:03.604235 systemd[1]: systemd-resolved.service: Deactivated successfully. Jun 21 05:28:03.604410 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jun 21 05:28:03.609537 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jun 21 05:28:03.610510 systemd[1]: systemd-networkd.service: Deactivated successfully. Jun 21 05:28:03.610674 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jun 21 05:28:03.612420 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jun 21 05:28:03.614067 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jun 21 05:28:03.614739 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jun 21 05:28:03.614810 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jun 21 05:28:03.617086 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jun 21 05:28:03.617550 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jun 21 05:28:03.617637 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 21 05:28:03.618244 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jun 21 05:28:03.618308 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jun 21 05:28:03.619983 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jun 21 05:28:03.620038 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jun 21 05:28:03.620678 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jun 21 05:28:03.620748 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jun 21 05:28:03.623917 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 21 05:28:03.628557 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. 
Jun 21 05:28:03.628724 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jun 21 05:28:03.636164 systemd[1]: systemd-udevd.service: Deactivated successfully. Jun 21 05:28:03.636970 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 21 05:28:03.638552 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jun 21 05:28:03.639047 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jun 21 05:28:03.639948 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jun 21 05:28:03.639989 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jun 21 05:28:03.640326 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jun 21 05:28:03.640376 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jun 21 05:28:03.640907 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jun 21 05:28:03.640954 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jun 21 05:28:03.643182 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jun 21 05:28:03.643258 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 21 05:28:03.647159 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jun 21 05:28:03.647590 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jun 21 05:28:03.647677 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jun 21 05:28:03.650873 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jun 21 05:28:03.650957 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 21 05:28:03.651962 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 21 05:28:03.652035 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 21 05:28:03.656642 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Jun 21 05:28:03.657459 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jun 21 05:28:03.657528 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jun 21 05:28:03.658178 systemd[1]: network-cleanup.service: Deactivated successfully. Jun 21 05:28:03.666247 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jun 21 05:28:03.675054 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jun 21 05:28:03.675213 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jun 21 05:28:03.677051 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jun 21 05:28:03.680003 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jun 21 05:28:03.705637 systemd[1]: Switching root. Jun 21 05:28:03.747010 systemd-journald[212]: Journal stopped Jun 21 05:28:05.216149 systemd-journald[212]: Received SIGTERM from PID 1 (systemd). 
Jun 21 05:28:05.216250 kernel: SELinux: policy capability network_peer_controls=1 Jun 21 05:28:05.216267 kernel: SELinux: policy capability open_perms=1 Jun 21 05:28:05.216287 kernel: SELinux: policy capability extended_socket_class=1 Jun 21 05:28:05.216299 kernel: SELinux: policy capability always_check_network=0 Jun 21 05:28:05.216311 kernel: SELinux: policy capability cgroup_seclabel=1 Jun 21 05:28:05.216323 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jun 21 05:28:05.216335 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jun 21 05:28:05.216353 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jun 21 05:28:05.216372 kernel: SELinux: policy capability userspace_initial_context=0 Jun 21 05:28:05.216384 kernel: audit: type=1403 audit(1750483683.946:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jun 21 05:28:05.216400 systemd[1]: Successfully loaded SELinux policy in 58.777ms. Jun 21 05:28:05.216422 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 13.050ms. Jun 21 05:28:05.216437 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jun 21 05:28:05.216455 systemd[1]: Detected virtualization kvm. Jun 21 05:28:05.216467 systemd[1]: Detected architecture x86-64. Jun 21 05:28:05.216483 systemd[1]: Detected first boot. Jun 21 05:28:05.216496 systemd[1]: Hostname set to . Jun 21 05:28:05.216508 systemd[1]: Initializing machine ID from VM UUID. Jun 21 05:28:05.216521 zram_generator::config[1102]: No configuration found. Jun 21 05:28:05.216540 kernel: Guest personality initialized and is inactive Jun 21 05:28:05.216556 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Jun 21 05:28:05.216573 kernel: Initialized host personality Jun 21 05:28:05.216589 kernel: NET: Registered PF_VSOCK protocol family Jun 21 05:28:05.216607 systemd[1]: Populated /etc with preset unit settings. Jun 21 05:28:05.216621 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jun 21 05:28:05.216635 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jun 21 05:28:05.216660 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jun 21 05:28:05.216673 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jun 21 05:28:05.216686 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jun 21 05:28:05.216698 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jun 21 05:28:05.216710 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jun 21 05:28:05.216723 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jun 21 05:28:05.216739 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jun 21 05:28:05.216752 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jun 21 05:28:05.216766 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jun 21 05:28:05.216778 systemd[1]: Created slice user.slice - User and Session Slice. Jun 21 05:28:05.216791 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Jun 21 05:28:05.216804 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 21 05:28:05.216818 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jun 21 05:28:05.216845 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jun 21 05:28:05.216858 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jun 21 05:28:05.216871 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jun 21 05:28:05.216888 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jun 21 05:28:05.216901 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 21 05:28:05.216913 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 21 05:28:05.216926 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jun 21 05:28:05.216939 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jun 21 05:28:05.216958 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jun 21 05:28:05.216971 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jun 21 05:28:05.216983 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 21 05:28:05.216996 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 21 05:28:05.217009 systemd[1]: Reached target slices.target - Slice Units. Jun 21 05:28:05.217021 systemd[1]: Reached target swap.target - Swaps. Jun 21 05:28:05.217034 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jun 21 05:28:05.217046 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jun 21 05:28:05.217061 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jun 21 05:28:05.217079 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jun 21 05:28:05.217092 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 21 05:28:05.217104 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jun 21 05:28:05.217117 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jun 21 05:28:05.217130 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jun 21 05:28:05.217142 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jun 21 05:28:05.217154 systemd[1]: Mounting media.mount - External Media Directory... Jun 21 05:28:05.217167 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 21 05:28:05.217180 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jun 21 05:28:05.217198 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jun 21 05:28:05.217211 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jun 21 05:28:05.217224 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jun 21 05:28:05.217237 systemd[1]: Reached target machines.target - Containers. Jun 21 05:28:05.217249 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... 
Jun 21 05:28:05.217261 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 21 05:28:05.217274 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 21 05:28:05.217286 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jun 21 05:28:05.217305 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 21 05:28:05.217317 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 21 05:28:05.217329 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 21 05:28:05.217343 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jun 21 05:28:05.217355 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 21 05:28:05.217368 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jun 21 05:28:05.217381 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jun 21 05:28:05.217393 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jun 21 05:28:05.217405 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jun 21 05:28:05.217424 systemd[1]: Stopped systemd-fsck-usr.service. Jun 21 05:28:05.217437 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jun 21 05:28:05.217449 systemd[1]: Starting systemd-journald.service - Journal Service... Jun 21 05:28:05.217468 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 21 05:28:05.217481 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jun 21 05:28:05.217500 kernel: loop: module loaded Jun 21 05:28:05.217513 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jun 21 05:28:05.217525 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jun 21 05:28:05.217538 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 21 05:28:05.217551 systemd[1]: verity-setup.service: Deactivated successfully. Jun 21 05:28:05.217569 systemd[1]: Stopped verity-setup.service. Jun 21 05:28:05.217583 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 21 05:28:05.217595 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jun 21 05:28:05.217607 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jun 21 05:28:05.217620 systemd[1]: Mounted media.mount - External Media Directory. Jun 21 05:28:05.217634 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jun 21 05:28:05.225712 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jun 21 05:28:05.225812 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jun 21 05:28:05.225829 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jun 21 05:28:05.225863 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jun 21 05:28:05.225877 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. 
Jun 21 05:28:05.225890 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 21 05:28:05.225903 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 21 05:28:05.225917 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 21 05:28:05.225930 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 21 05:28:05.225944 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 21 05:28:05.225958 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 21 05:28:05.225976 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 21 05:28:05.225989 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jun 21 05:28:05.226003 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 21 05:28:05.226018 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 21 05:28:05.226032 kernel: fuse: init (API version 7.41) Jun 21 05:28:05.226047 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jun 21 05:28:05.226060 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jun 21 05:28:05.226073 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 21 05:28:05.226092 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jun 21 05:28:05.226106 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jun 21 05:28:05.226125 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 21 05:28:05.226138 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jun 21 05:28:05.226152 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 21 05:28:05.226165 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jun 21 05:28:05.226178 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jun 21 05:28:05.226191 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jun 21 05:28:05.226205 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jun 21 05:28:05.226218 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jun 21 05:28:05.226237 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jun 21 05:28:05.226250 systemd[1]: Reached target network-pre.target - Preparation for Network. Jun 21 05:28:05.226266 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jun 21 05:28:05.226352 systemd-journald[1175]: Collecting audit messages is disabled. Jun 21 05:28:05.226386 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jun 21 05:28:05.226400 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jun 21 05:28:05.226415 systemd-journald[1175]: Journal started Jun 21 05:28:05.226449 systemd-journald[1175]: Runtime Journal (/run/log/journal/29396245793f49f58b631bb596c80888) is 4.9M, max 39.5M, 34.6M free. Jun 21 05:28:04.746124 systemd[1]: Queued start job for default target multi-user.target. 
Jun 21 05:28:04.769837 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jun 21 05:28:04.770537 systemd[1]: systemd-journald.service: Deactivated successfully. Jun 21 05:28:05.244490 systemd[1]: Started systemd-journald.service - Journal Service. Jun 21 05:28:05.254852 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jun 21 05:28:05.281535 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jun 21 05:28:05.290279 kernel: ACPI: bus type drm_connector registered Jun 21 05:28:05.305147 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jun 21 05:28:05.315812 kernel: loop0: detected capacity change from 0 to 221472 Jun 21 05:28:05.313999 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jun 21 05:28:05.314951 systemd[1]: modprobe@drm.service: Deactivated successfully. Jun 21 05:28:05.315182 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 21 05:28:05.341420 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jun 21 05:28:05.345091 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jun 21 05:28:05.350203 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 21 05:28:05.407048 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jun 21 05:28:05.407365 systemd-journald[1175]: Time spent on flushing to /var/log/journal/29396245793f49f58b631bb596c80888 is 64.962ms for 1017 entries. Jun 21 05:28:05.407365 systemd-journald[1175]: System Journal (/var/log/journal/29396245793f49f58b631bb596c80888) is 8M, max 195.6M, 187.6M free. Jun 21 05:28:05.484310 systemd-journald[1175]: Received client request to flush runtime journal. Jun 21 05:28:05.484382 kernel: loop1: detected capacity change from 0 to 8 Jun 21 05:28:05.484409 kernel: loop2: detected capacity change from 0 to 113872 Jun 21 05:28:05.421635 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jun 21 05:28:05.487771 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jun 21 05:28:05.497584 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jun 21 05:28:05.503490 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jun 21 05:28:05.557338 kernel: loop3: detected capacity change from 0 to 146240 Jun 21 05:28:05.556543 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 21 05:28:05.587107 systemd-tmpfiles[1246]: ACLs are not supported, ignoring. Jun 21 05:28:05.587732 systemd-tmpfiles[1246]: ACLs are not supported, ignoring. Jun 21 05:28:05.605759 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 21 05:28:05.639697 kernel: loop4: detected capacity change from 0 to 221472 Jun 21 05:28:05.686768 kernel: loop5: detected capacity change from 0 to 8 Jun 21 05:28:05.695846 kernel: loop6: detected capacity change from 0 to 113872 Jun 21 05:28:05.722724 kernel: loop7: detected capacity change from 0 to 146240 Jun 21 05:28:05.765167 (sd-merge)[1251]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'. Jun 21 05:28:05.765850 (sd-merge)[1251]: Merged extensions into '/usr'. Jun 21 05:28:05.773165 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. 
Jun 21 05:28:05.786561 systemd[1]: Reload requested from client PID 1197 ('systemd-sysext') (unit systemd-sysext.service)... Jun 21 05:28:05.788712 systemd[1]: Reloading... Jun 21 05:28:06.047580 zram_generator::config[1281]: No configuration found. Jun 21 05:28:06.137135 ldconfig[1190]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jun 21 05:28:06.242112 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 21 05:28:06.338040 systemd[1]: Reloading finished in 548 ms. Jun 21 05:28:06.352862 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jun 21 05:28:06.359263 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jun 21 05:28:06.371940 systemd[1]: Starting ensure-sysext.service... Jun 21 05:28:06.375928 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jun 21 05:28:06.413479 systemd[1]: Reload requested from client PID 1321 ('systemctl') (unit ensure-sysext.service)... Jun 21 05:28:06.413502 systemd[1]: Reloading... Jun 21 05:28:06.453060 systemd-tmpfiles[1322]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jun 21 05:28:06.453111 systemd-tmpfiles[1322]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jun 21 05:28:06.453479 systemd-tmpfiles[1322]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jun 21 05:28:06.453776 systemd-tmpfiles[1322]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jun 21 05:28:06.454731 systemd-tmpfiles[1322]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jun 21 05:28:06.455012 systemd-tmpfiles[1322]: ACLs are not supported, ignoring. Jun 21 05:28:06.455072 systemd-tmpfiles[1322]: ACLs are not supported, ignoring. Jun 21 05:28:06.464135 systemd-tmpfiles[1322]: Detected autofs mount point /boot during canonicalization of boot. Jun 21 05:28:06.464151 systemd-tmpfiles[1322]: Skipping /boot Jun 21 05:28:06.496461 systemd-tmpfiles[1322]: Detected autofs mount point /boot during canonicalization of boot. Jun 21 05:28:06.496480 systemd-tmpfiles[1322]: Skipping /boot Jun 21 05:28:06.579846 zram_generator::config[1349]: No configuration found. Jun 21 05:28:06.713039 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 21 05:28:06.814876 systemd[1]: Reloading finished in 400 ms. Jun 21 05:28:06.838176 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jun 21 05:28:06.849969 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jun 21 05:28:06.857785 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jun 21 05:28:06.863864 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jun 21 05:28:06.869308 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jun 21 05:28:06.875766 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Jun 21 05:28:06.880180 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 21 05:28:06.883226 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jun 21 05:28:06.895311 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 21 05:28:06.895551 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 21 05:28:06.904202 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 21 05:28:06.914980 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 21 05:28:06.918642 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 21 05:28:06.919488 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 21 05:28:06.919660 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jun 21 05:28:06.919774 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 21 05:28:06.929272 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jun 21 05:28:06.932510 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 21 05:28:06.932737 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 21 05:28:06.932916 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 21 05:28:06.933003 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jun 21 05:28:06.933088 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 21 05:28:06.938049 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 21 05:28:06.938364 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 21 05:28:06.946230 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 21 05:28:06.948081 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 21 05:28:06.948246 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jun 21 05:28:06.948387 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 21 05:28:06.954767 systemd[1]: Finished ensure-sysext.service. Jun 21 05:28:06.970155 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... 
Jun 21 05:28:06.972734 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jun 21 05:28:06.974119 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jun 21 05:28:06.988100 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 21 05:28:06.989782 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 21 05:28:06.999736 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jun 21 05:28:07.000497 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 21 05:28:07.000714 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 21 05:28:07.001271 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 21 05:28:07.012536 systemd-udevd[1398]: Using default interface naming scheme 'v255'. Jun 21 05:28:07.034784 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 21 05:28:07.035037 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 21 05:28:07.036720 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jun 21 05:28:07.039451 systemd[1]: modprobe@drm.service: Deactivated successfully. Jun 21 05:28:07.039751 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 21 05:28:07.041318 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 21 05:28:07.045644 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jun 21 05:28:07.081017 augenrules[1434]: No rules Jun 21 05:28:07.083320 systemd[1]: audit-rules.service: Deactivated successfully. Jun 21 05:28:07.083607 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jun 21 05:28:07.091299 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 21 05:28:07.093848 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jun 21 05:28:07.101045 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 21 05:28:07.102868 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jun 21 05:28:07.352136 systemd-networkd[1449]: lo: Link UP Jun 21 05:28:07.352156 systemd-networkd[1449]: lo: Gained carrier Jun 21 05:28:07.353325 systemd-networkd[1449]: Enumeration completed Jun 21 05:28:07.353529 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 21 05:28:07.360148 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jun 21 05:28:07.365281 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jun 21 05:28:07.387611 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jun 21 05:28:07.388191 systemd[1]: Reached target time-set.target - System Time Set. Jun 21 05:28:07.423792 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jun 21 05:28:07.466188 systemd-resolved[1397]: Positive Trust Anchors: Jun 21 05:28:07.466205 systemd-resolved[1397]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 21 05:28:07.466247 systemd-resolved[1397]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jun 21 05:28:07.475295 systemd-resolved[1397]: Using system hostname 'ci-4372.0.0-d-47135505f9'. Jun 21 05:28:07.478374 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 21 05:28:07.478989 systemd[1]: Reached target network.target - Network. Jun 21 05:28:07.479738 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 21 05:28:07.480770 systemd[1]: Reached target sysinit.target - System Initialization. Jun 21 05:28:07.481596 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jun 21 05:28:07.482776 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jun 21 05:28:07.483532 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jun 21 05:28:07.484602 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jun 21 05:28:07.486349 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jun 21 05:28:07.486779 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jun 21 05:28:07.487248 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jun 21 05:28:07.487330 systemd[1]: Reached target paths.target - Path Units. Jun 21 05:28:07.488598 systemd[1]: Reached target timers.target - Timer Units. Jun 21 05:28:07.490335 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jun 21 05:28:07.494916 systemd[1]: Starting docker.socket - Docker Socket for the API... Jun 21 05:28:07.502899 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jun 21 05:28:07.504454 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jun 21 05:28:07.505486 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jun 21 05:28:07.515934 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jun 21 05:28:07.518990 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jun 21 05:28:07.522087 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jun 21 05:28:07.525531 systemd[1]: Reached target sockets.target - Socket Units. Jun 21 05:28:07.526252 systemd[1]: Reached target basic.target - Basic System. Jun 21 05:28:07.527730 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jun 21 05:28:07.527804 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jun 21 05:28:07.530800 systemd[1]: Starting containerd.service - containerd container runtime... 
Jun 21 05:28:07.534428 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jun 21 05:28:07.539003 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jun 21 05:28:07.545103 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jun 21 05:28:07.550164 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jun 21 05:28:07.555123 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jun 21 05:28:07.555708 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jun 21 05:28:07.560028 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jun 21 05:28:07.572870 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jun 21 05:28:07.583112 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jun 21 05:28:07.587966 extend-filesystems[1487]: Found /dev/vda6 Jun 21 05:28:07.594063 extend-filesystems[1487]: Found /dev/vda9 Jun 21 05:28:07.597027 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jun 21 05:28:07.598081 extend-filesystems[1487]: Checking size of /dev/vda9 Jun 21 05:28:07.619693 jq[1486]: false Jun 21 05:28:07.617970 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jun 21 05:28:07.622995 extend-filesystems[1487]: Resized partition /dev/vda9 Jun 21 05:28:07.630912 extend-filesystems[1505]: resize2fs 1.47.2 (1-Jan-2025) Jun 21 05:28:07.632196 systemd[1]: Starting systemd-logind.service - User Login Management... Jun 21 05:28:07.636377 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jun 21 05:28:07.636827 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks Jun 21 05:28:07.640834 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jun 21 05:28:07.648707 systemd[1]: Starting update-engine.service - Update Engine... Jun 21 05:28:07.663729 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jun 21 05:28:07.671743 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jun 21 05:28:07.674163 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jun 21 05:28:07.674530 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jun 21 05:28:07.714794 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jun 21 05:28:07.715307 oslogin_cache_refresh[1488]: Refreshing passwd entry cache Jun 21 05:28:07.717133 google_oslogin_nss_cache[1488]: oslogin_cache_refresh[1488]: Refreshing passwd entry cache Jun 21 05:28:07.726995 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jun 21 05:28:07.727310 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jun 21 05:28:07.745688 google_oslogin_nss_cache[1488]: oslogin_cache_refresh[1488]: Failure getting users, quitting Jun 21 05:28:07.745688 google_oslogin_nss_cache[1488]: oslogin_cache_refresh[1488]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. 
Jun 21 05:28:07.745688 google_oslogin_nss_cache[1488]: oslogin_cache_refresh[1488]: Refreshing group entry cache Jun 21 05:28:07.744622 oslogin_cache_refresh[1488]: Failure getting users, quitting Jun 21 05:28:07.744682 oslogin_cache_refresh[1488]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jun 21 05:28:07.744762 oslogin_cache_refresh[1488]: Refreshing group entry cache Jun 21 05:28:07.752706 google_oslogin_nss_cache[1488]: oslogin_cache_refresh[1488]: Failure getting groups, quitting Jun 21 05:28:07.752706 google_oslogin_nss_cache[1488]: oslogin_cache_refresh[1488]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jun 21 05:28:07.747286 oslogin_cache_refresh[1488]: Failure getting groups, quitting Jun 21 05:28:07.747302 oslogin_cache_refresh[1488]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jun 21 05:28:07.758601 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jun 21 05:28:07.759757 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jun 21 05:28:07.777113 jq[1510]: true Jun 21 05:28:07.788172 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Jun 21 05:28:07.796690 update_engine[1509]: I20250621 05:28:07.796540 1509 main.cc:92] Flatcar Update Engine starting Jun 21 05:28:07.804529 tar[1513]: linux-amd64/helm Jun 21 05:28:07.812272 systemd[1]: motdgen.service: Deactivated successfully. Jun 21 05:28:07.820162 extend-filesystems[1505]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jun 21 05:28:07.820162 extend-filesystems[1505]: old_desc_blocks = 1, new_desc_blocks = 8 Jun 21 05:28:07.820162 extend-filesystems[1505]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Jun 21 05:28:07.812565 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jun 21 05:28:07.828111 dbus-daemon[1484]: [system] SELinux support is enabled Jun 21 05:28:07.832987 extend-filesystems[1487]: Resized filesystem in /dev/vda9 Jun 21 05:28:07.813450 systemd[1]: extend-filesystems.service: Deactivated successfully. Jun 21 05:28:07.813770 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jun 21 05:28:07.821055 systemd[1]: Condition check resulted in dev-disk-by\x2dlabel-config\x2d2.device - /dev/disk/by-label/config-2 being skipped. Jun 21 05:28:07.823152 systemd[1]: Mounting media-configdrive.mount - /media/configdrive... Jun 21 05:28:07.823542 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jun 21 05:28:07.828430 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jun 21 05:28:07.832929 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jun 21 05:28:07.832969 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jun 21 05:28:07.835712 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). 
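The resize2fs output above grows /dev/vda9 online from 553472 to 15121403 blocks at a 4k block size. A quick arithmetic sketch of what those block counts mean in bytes; all numbers are taken from the log lines, nothing else is assumed.

# Convert the ext4 block counts reported by resize2fs into sizes.
BLOCK_SIZE = 4096            # "(4k) blocks" per the log line
old_blocks = 553_472
new_blocks = 15_121_403

def gib(blocks: int) -> float:
    return blocks * BLOCK_SIZE / 2**30

print(f"before resize: {gib(old_blocks):.2f} GiB")   # ~2.11 GiB
print(f"after  resize: {gib(new_blocks):.2f} GiB")   # ~57.68 GiB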
Jun 21 05:28:07.844709 coreos-metadata[1483]: Jun 21 05:28:07.841 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jun 21 05:28:07.844709 coreos-metadata[1483]: Jun 21 05:28:07.841 INFO Failed to fetch: error sending request for url (http://169.254.169.254/metadata/v1.json) Jun 21 05:28:07.849379 systemd-networkd[1449]: eth1: Configuring with /run/systemd/network/10-aa:54:22:91:28:22.network. Jun 21 05:28:07.851270 (ntainerd)[1528]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jun 21 05:28:07.871242 systemd-networkd[1449]: eth1: Link UP Jun 21 05:28:07.871617 systemd-networkd[1449]: eth1: Gained carrier Jun 21 05:28:07.877709 systemd[1]: Started update-engine.service - Update Engine. Jun 21 05:28:07.878750 jq[1532]: true Jun 21 05:28:07.882123 update_engine[1509]: I20250621 05:28:07.882030 1509 update_check_scheduler.cc:74] Next update check in 8m27s Jun 21 05:28:07.889410 systemd-timesyncd[1412]: Network configuration changed, trying to establish connection. Jun 21 05:28:07.925043 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jun 21 05:28:07.942570 kernel: ISO 9660 Extensions: RRIP_1991A Jun 21 05:28:07.955419 systemd[1]: Mounted media-configdrive.mount - /media/configdrive. Jun 21 05:28:07.963979 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean). Jun 21 05:28:07.964712 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jun 21 05:28:08.047720 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jun 21 05:28:08.058685 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jun 21 05:28:08.138679 systemd-networkd[1449]: eth0: Configuring with /run/systemd/network/10-5e:4c:93:c9:6a:2a.network. Jun 21 05:28:08.139868 systemd-networkd[1449]: eth0: Link UP Jun 21 05:28:08.141017 systemd-networkd[1449]: eth0: Gained carrier Jun 21 05:28:08.227608 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jun 21 05:28:08.247762 bash[1566]: Updated "/home/core/.ssh/authorized_keys" Jun 21 05:28:08.251619 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jun 21 05:28:08.262021 systemd[1]: Starting sshkeys.service... Jun 21 05:28:08.280720 systemd-logind[1504]: New seat seat0. Jun 21 05:28:08.282942 systemd[1]: Started systemd-logind.service - User Login Management. Jun 21 05:28:08.338523 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jun 21 05:28:08.344116 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
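The first coreos-metadata fetch of http://169.254.169.254/metadata/v1.json fails because eth1 has only just gained carrier; the agent simply retries, and attempt #2 succeeds further down. A hedged sketch of that fetch-with-retry pattern using only the Python standard library; the URL comes from the log, while the retry count, delay, and timeout are illustrative assumptions.

import json
import time
import urllib.request
from urllib.error import URLError

METADATA_URL = "http://169.254.169.254/metadata/v1.json"  # from the log

def fetch_metadata(attempts: int = 5, delay: float = 1.0) -> dict:
    """Fetch droplet metadata, retrying while the network comes up."""
    for attempt in range(1, attempts + 1):
        try:
            with urllib.request.urlopen(METADATA_URL, timeout=2) as resp:
                return json.load(resp)
        except (URLError, OSError) as exc:
            print(f"attempt #{attempt} failed: {exc}")
            time.sleep(delay)
    raise RuntimeError("metadata service unreachable")

# Example (only meaningful on a DigitalOcean droplet):
# meta = fetch_metadata(); print(meta.get("hostname"))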
Jun 21 05:28:08.426108 coreos-metadata[1578]: Jun 21 05:28:08.425 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jun 21 05:28:08.442107 coreos-metadata[1578]: Jun 21 05:28:08.440 INFO Fetch successful Jun 21 05:28:08.462825 unknown[1578]: wrote ssh authorized keys file for user: core Jun 21 05:28:08.512825 locksmithd[1539]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jun 21 05:28:08.554707 update-ssh-keys[1583]: Updated "/home/core/.ssh/authorized_keys" Jun 21 05:28:08.557609 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jun 21 05:28:08.565144 systemd[1]: Finished sshkeys.service. Jun 21 05:28:08.585884 sshd_keygen[1529]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jun 21 05:28:08.611725 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Jun 21 05:28:08.614677 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jun 21 05:28:08.625718 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jun 21 05:28:08.625821 kernel: mousedev: PS/2 mouse device common for all mice Jun 21 05:28:08.632709 kernel: ACPI: button: Power Button [PWRF] Jun 21 05:28:08.718905 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jun 21 05:28:08.721952 containerd[1528]: time="2025-06-21T05:28:08Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jun 21 05:28:08.725189 systemd[1]: Starting issuegen.service - Generate /run/issue... Jun 21 05:28:08.731342 containerd[1528]: time="2025-06-21T05:28:08.728757082Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Jun 21 05:28:08.756926 systemd[1]: issuegen.service: Deactivated successfully. Jun 21 05:28:08.757218 systemd[1]: Finished issuegen.service - Generate /run/issue. Jun 21 05:28:08.763772 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
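The sshkeys agent above ends by rewriting /home/core/.ssh/authorized_keys from the metadata it fetched. A minimal Python sketch of that final write step only (the fetch is omitted); the username and file path come from the log, while the key content and the exact ownership/permission policy are assumptions.

import os
import pwd
from pathlib import Path

def write_authorized_keys(user: str, keys: list[str]) -> None:
    """Write ~user/.ssh/authorized_keys with conventional permissions."""
    info = pwd.getpwnam(user)
    ssh_dir = Path(info.pw_dir) / ".ssh"
    ssh_dir.mkdir(mode=0o700, exist_ok=True)
    path = ssh_dir / "authorized_keys"
    path.write_text("\n".join(keys) + "\n")
    os.chmod(path, 0o600)                      # keys must not be world-readable
    os.chown(path, info.pw_uid, info.pw_gid)   # owned by the target user
    os.chown(ssh_dir, info.pw_uid, info.pw_gid)

# write_authorized_keys("core", ["ssh-ed25519 AAAA... user@host"])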
Jun 21 05:28:08.810390 containerd[1528]: time="2025-06-21T05:28:08.809314989Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="14.933µs" Jun 21 05:28:08.810390 containerd[1528]: time="2025-06-21T05:28:08.809360003Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jun 21 05:28:08.810390 containerd[1528]: time="2025-06-21T05:28:08.809383238Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jun 21 05:28:08.810390 containerd[1528]: time="2025-06-21T05:28:08.809550367Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jun 21 05:28:08.810390 containerd[1528]: time="2025-06-21T05:28:08.809567037Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jun 21 05:28:08.810390 containerd[1528]: time="2025-06-21T05:28:08.809599337Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jun 21 05:28:08.814343 containerd[1528]: time="2025-06-21T05:28:08.813034457Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jun 21 05:28:08.814343 containerd[1528]: time="2025-06-21T05:28:08.813084570Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jun 21 05:28:08.814343 containerd[1528]: time="2025-06-21T05:28:08.813428315Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jun 21 05:28:08.814343 containerd[1528]: time="2025-06-21T05:28:08.813446359Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jun 21 05:28:08.814343 containerd[1528]: time="2025-06-21T05:28:08.813459802Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jun 21 05:28:08.814343 containerd[1528]: time="2025-06-21T05:28:08.813470316Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jun 21 05:28:08.814343 containerd[1528]: time="2025-06-21T05:28:08.813559451Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jun 21 05:28:08.815244 containerd[1528]: time="2025-06-21T05:28:08.815207628Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jun 21 05:28:08.818967 containerd[1528]: time="2025-06-21T05:28:08.817151988Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jun 21 05:28:08.818967 containerd[1528]: time="2025-06-21T05:28:08.817186586Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jun 21 05:28:08.818967 containerd[1528]: time="2025-06-21T05:28:08.817266227Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jun 21 05:28:08.819227 containerd[1528]: 
time="2025-06-21T05:28:08.819192394Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jun 21 05:28:08.820144 containerd[1528]: time="2025-06-21T05:28:08.819406928Z" level=info msg="metadata content store policy set" policy=shared Jun 21 05:28:08.829690 containerd[1528]: time="2025-06-21T05:28:08.828599525Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jun 21 05:28:08.829690 containerd[1528]: time="2025-06-21T05:28:08.828697423Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jun 21 05:28:08.829690 containerd[1528]: time="2025-06-21T05:28:08.828716092Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jun 21 05:28:08.829690 containerd[1528]: time="2025-06-21T05:28:08.828732138Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jun 21 05:28:08.829690 containerd[1528]: time="2025-06-21T05:28:08.828750156Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jun 21 05:28:08.829690 containerd[1528]: time="2025-06-21T05:28:08.828761499Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jun 21 05:28:08.829690 containerd[1528]: time="2025-06-21T05:28:08.828775074Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jun 21 05:28:08.829690 containerd[1528]: time="2025-06-21T05:28:08.828789075Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jun 21 05:28:08.829690 containerd[1528]: time="2025-06-21T05:28:08.828807674Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jun 21 05:28:08.829690 containerd[1528]: time="2025-06-21T05:28:08.828851406Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jun 21 05:28:08.829690 containerd[1528]: time="2025-06-21T05:28:08.828865747Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jun 21 05:28:08.829690 containerd[1528]: time="2025-06-21T05:28:08.828896477Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jun 21 05:28:08.829690 containerd[1528]: time="2025-06-21T05:28:08.829096022Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jun 21 05:28:08.829690 containerd[1528]: time="2025-06-21T05:28:08.829122998Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jun 21 05:28:08.830081 containerd[1528]: time="2025-06-21T05:28:08.829147786Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jun 21 05:28:08.830081 containerd[1528]: time="2025-06-21T05:28:08.829161863Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jun 21 05:28:08.830081 containerd[1528]: time="2025-06-21T05:28:08.829173531Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jun 21 05:28:08.830081 containerd[1528]: time="2025-06-21T05:28:08.829184055Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jun 21 05:28:08.830081 containerd[1528]: 
time="2025-06-21T05:28:08.829208214Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jun 21 05:28:08.830081 containerd[1528]: time="2025-06-21T05:28:08.829221303Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jun 21 05:28:08.830081 containerd[1528]: time="2025-06-21T05:28:08.829235113Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jun 21 05:28:08.830081 containerd[1528]: time="2025-06-21T05:28:08.829247201Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jun 21 05:28:08.830081 containerd[1528]: time="2025-06-21T05:28:08.829258092Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jun 21 05:28:08.830081 containerd[1528]: time="2025-06-21T05:28:08.829407961Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jun 21 05:28:08.830081 containerd[1528]: time="2025-06-21T05:28:08.829426354Z" level=info msg="Start snapshots syncer" Jun 21 05:28:08.830081 containerd[1528]: time="2025-06-21T05:28:08.829452041Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jun 21 05:28:08.834576 containerd[1528]: time="2025-06-21T05:28:08.833086847Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jun 21 05:28:08.834576 containerd[1528]: time="2025-06-21T05:28:08.834129235Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jun 21 05:28:08.839506 containerd[1528]: time="2025-06-21T05:28:08.838067155Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 
Jun 21 05:28:08.839506 containerd[1528]: time="2025-06-21T05:28:08.838871820Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jun 21 05:28:08.839506 containerd[1528]: time="2025-06-21T05:28:08.838918092Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jun 21 05:28:08.839506 containerd[1528]: time="2025-06-21T05:28:08.838931348Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jun 21 05:28:08.839506 containerd[1528]: time="2025-06-21T05:28:08.838943813Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jun 21 05:28:08.839506 containerd[1528]: time="2025-06-21T05:28:08.838958260Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jun 21 05:28:08.839506 containerd[1528]: time="2025-06-21T05:28:08.838970054Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jun 21 05:28:08.839506 containerd[1528]: time="2025-06-21T05:28:08.838996215Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jun 21 05:28:08.839506 containerd[1528]: time="2025-06-21T05:28:08.839036963Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jun 21 05:28:08.839506 containerd[1528]: time="2025-06-21T05:28:08.839048209Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jun 21 05:28:08.839506 containerd[1528]: time="2025-06-21T05:28:08.839058654Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jun 21 05:28:08.839875 coreos-metadata[1483]: Jun 21 05:28:08.839 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #2 Jun 21 05:28:08.845898 containerd[1528]: time="2025-06-21T05:28:08.844512130Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jun 21 05:28:08.845898 containerd[1528]: time="2025-06-21T05:28:08.844594058Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jun 21 05:28:08.845898 containerd[1528]: time="2025-06-21T05:28:08.844610105Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jun 21 05:28:08.845898 containerd[1528]: time="2025-06-21T05:28:08.844621909Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jun 21 05:28:08.845898 containerd[1528]: time="2025-06-21T05:28:08.844630499Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jun 21 05:28:08.845898 containerd[1528]: time="2025-06-21T05:28:08.844640830Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jun 21 05:28:08.845898 containerd[1528]: time="2025-06-21T05:28:08.844669138Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jun 21 05:28:08.845898 containerd[1528]: time="2025-06-21T05:28:08.845332106Z" level=info msg="runtime interface created" Jun 21 05:28:08.845898 containerd[1528]: time="2025-06-21T05:28:08.845357678Z" level=info msg="created NRI interface" Jun 21 05:28:08.845898 
containerd[1528]: time="2025-06-21T05:28:08.845377968Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jun 21 05:28:08.845898 containerd[1528]: time="2025-06-21T05:28:08.845406185Z" level=info msg="Connect containerd service" Jun 21 05:28:08.845898 containerd[1528]: time="2025-06-21T05:28:08.845470458Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jun 21 05:28:08.854894 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Jun 21 05:28:08.854981 coreos-metadata[1483]: Jun 21 05:28:08.852 INFO Fetch successful Jun 21 05:28:08.857054 containerd[1528]: time="2025-06-21T05:28:08.857008688Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jun 21 05:28:08.862037 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Jun 21 05:28:08.874326 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jun 21 05:28:08.887056 systemd[1]: Started getty@tty1.service - Getty on tty1. Jun 21 05:28:08.896244 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jun 21 05:28:08.897347 systemd[1]: Reached target getty.target - Login Prompts. Jun 21 05:28:08.939447 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jun 21 05:28:08.957082 kernel: Console: switching to colour dummy device 80x25 Jun 21 05:28:08.956632 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jun 21 05:28:08.962729 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jun 21 05:28:08.962827 kernel: [drm] features: -context_init Jun 21 05:28:08.976959 kernel: [drm] number of scanouts: 1 Jun 21 05:28:08.977066 kernel: [drm] number of cap sets: 0 Jun 21 05:28:09.025694 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0 Jun 21 05:28:09.196677 containerd[1528]: time="2025-06-21T05:28:09.196574448Z" level=info msg="Start subscribing containerd event" Jun 21 05:28:09.197692 containerd[1528]: time="2025-06-21T05:28:09.197060853Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jun 21 05:28:09.197692 containerd[1528]: time="2025-06-21T05:28:09.197162876Z" level=info msg=serving... address=/run/containerd/containerd.sock Jun 21 05:28:09.200578 containerd[1528]: time="2025-06-21T05:28:09.200527173Z" level=info msg="Start recovering state" Jun 21 05:28:09.200742 containerd[1528]: time="2025-06-21T05:28:09.200720916Z" level=info msg="Start event monitor" Jun 21 05:28:09.200780 containerd[1528]: time="2025-06-21T05:28:09.200753826Z" level=info msg="Start cni network conf syncer for default" Jun 21 05:28:09.200780 containerd[1528]: time="2025-06-21T05:28:09.200770053Z" level=info msg="Start streaming server" Jun 21 05:28:09.200857 containerd[1528]: time="2025-06-21T05:28:09.200794403Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jun 21 05:28:09.200857 containerd[1528]: time="2025-06-21T05:28:09.200807674Z" level=info msg="runtime interface starting up..." Jun 21 05:28:09.200857 containerd[1528]: time="2025-06-21T05:28:09.200817993Z" level=info msg="starting plugins..." 
Jun 21 05:28:09.200857 containerd[1528]: time="2025-06-21T05:28:09.200836983Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jun 21 05:28:09.201258 containerd[1528]: time="2025-06-21T05:28:09.201035372Z" level=info msg="containerd successfully booted in 0.480959s" Jun 21 05:28:09.201169 systemd[1]: Started containerd.service - containerd container runtime. Jun 21 05:28:09.247262 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 21 05:28:09.693851 systemd-networkd[1449]: eth1: Gained IPv6LL Jun 21 05:28:09.703462 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jun 21 05:28:09.705081 systemd[1]: Reached target network-online.target - Network is Online. Jun 21 05:28:09.712100 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 21 05:28:09.717245 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jun 21 05:28:09.758131 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 21 05:28:09.816505 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 21 05:28:09.816684 systemd-logind[1504]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jun 21 05:28:09.817569 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 21 05:28:09.819559 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jun 21 05:28:09.824852 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 21 05:28:09.829785 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jun 21 05:28:09.843926 systemd-logind[1504]: Watching system buttons on /dev/input/event2 (Power Button) Jun 21 05:28:09.864349 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jun 21 05:28:09.887988 systemd-networkd[1449]: eth0: Gained IPv6LL Jun 21 05:28:09.951026 kernel: EDAC MC: Ver: 3.0.0 Jun 21 05:28:09.971750 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 21 05:28:09.981591 tar[1513]: linux-amd64/LICENSE Jun 21 05:28:09.982149 tar[1513]: linux-amd64/README.md Jun 21 05:28:10.001243 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jun 21 05:28:10.369195 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jun 21 05:28:10.373008 systemd[1]: Started sshd@0-64.23.242.202:22-139.178.68.195:41976.service - OpenSSH per-connection server daemon (139.178.68.195:41976). Jun 21 05:28:10.477869 sshd[1666]: Accepted publickey for core from 139.178.68.195 port 41976 ssh2: RSA SHA256:esrwHbjCvD8R4I7sQRiHa5Rpu9l1igA0BMtQzkIUH4o Jun 21 05:28:10.481144 sshd-session[1666]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 05:28:10.499005 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jun 21 05:28:10.501852 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jun 21 05:28:10.506943 systemd-logind[1504]: New session 1 of user core. Jun 21 05:28:10.553840 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jun 21 05:28:10.559035 systemd[1]: Starting user@500.service - User Manager for UID 500... Jun 21 05:28:10.576694 (systemd)[1670]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jun 21 05:28:10.583036 systemd-logind[1504]: New session c1 of user core. 
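The "SHA256:esrwHbj..." value in the publickey accept line above is OpenSSH's key fingerprint: the unpadded base64 encoding of the SHA-256 digest of the raw public key blob. A short sketch of reproducing such a fingerprint from an authorized_keys-style line; the commented-out key below is a placeholder, so it will not reproduce the fingerprint seen in this log.

import base64
import hashlib

def ssh_fingerprint(pubkey_line: str) -> str:
    """Return the OpenSSH SHA256 fingerprint of an authorized_keys line."""
    # Field 2 of "ssh-rsa AAAA... comment" is the base64-encoded key blob.
    blob = base64.b64decode(pubkey_line.split()[1])
    digest = hashlib.sha256(blob).digest()
    return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

# Placeholder key, not the one from this log:
# print(ssh_fingerprint("ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAA... core"))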
Jun 21 05:28:10.789522 systemd[1670]: Queued start job for default target default.target. Jun 21 05:28:10.797047 systemd[1670]: Created slice app.slice - User Application Slice. Jun 21 05:28:10.797107 systemd[1670]: Reached target paths.target - Paths. Jun 21 05:28:10.797165 systemd[1670]: Reached target timers.target - Timers. Jun 21 05:28:10.801833 systemd[1670]: Starting dbus.socket - D-Bus User Message Bus Socket... Jun 21 05:28:10.823129 systemd[1670]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jun 21 05:28:10.824296 systemd[1670]: Reached target sockets.target - Sockets. Jun 21 05:28:10.824398 systemd[1670]: Reached target basic.target - Basic System. Jun 21 05:28:10.824472 systemd[1670]: Reached target default.target - Main User Target. Jun 21 05:28:10.824537 systemd[1670]: Startup finished in 228ms. Jun 21 05:28:10.824854 systemd[1]: Started user@500.service - User Manager for UID 500. Jun 21 05:28:10.831979 systemd[1]: Started session-1.scope - Session 1 of User core. Jun 21 05:28:10.916103 systemd[1]: Started sshd@1-64.23.242.202:22-139.178.68.195:41988.service - OpenSSH per-connection server daemon (139.178.68.195:41988). Jun 21 05:28:11.008016 sshd[1681]: Accepted publickey for core from 139.178.68.195 port 41988 ssh2: RSA SHA256:esrwHbjCvD8R4I7sQRiHa5Rpu9l1igA0BMtQzkIUH4o Jun 21 05:28:11.011096 sshd-session[1681]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 05:28:11.023087 systemd-logind[1504]: New session 2 of user core. Jun 21 05:28:11.036002 systemd[1]: Started session-2.scope - Session 2 of User core. Jun 21 05:28:11.042569 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 21 05:28:11.044237 systemd[1]: Reached target multi-user.target - Multi-User System. Jun 21 05:28:11.047321 systemd[1]: Startup finished in 3.552s (kernel) + 7.290s (initrd) + 7.157s (userspace) = 17.999s. Jun 21 05:28:11.056564 (kubelet)[1688]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 21 05:28:11.119042 sshd[1689]: Connection closed by 139.178.68.195 port 41988 Jun 21 05:28:11.118898 sshd-session[1681]: pam_unix(sshd:session): session closed for user core Jun 21 05:28:11.133794 systemd[1]: sshd@1-64.23.242.202:22-139.178.68.195:41988.service: Deactivated successfully. Jun 21 05:28:11.138395 systemd[1]: session-2.scope: Deactivated successfully. Jun 21 05:28:11.141176 systemd-logind[1504]: Session 2 logged out. Waiting for processes to exit. Jun 21 05:28:11.148828 systemd[1]: Started sshd@2-64.23.242.202:22-139.178.68.195:42000.service - OpenSSH per-connection server daemon (139.178.68.195:42000). Jun 21 05:28:11.150433 systemd-logind[1504]: Removed session 2. Jun 21 05:28:11.222834 sshd[1699]: Accepted publickey for core from 139.178.68.195 port 42000 ssh2: RSA SHA256:esrwHbjCvD8R4I7sQRiHa5Rpu9l1igA0BMtQzkIUH4o Jun 21 05:28:11.226093 sshd-session[1699]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 05:28:11.242455 systemd-logind[1504]: New session 3 of user core. Jun 21 05:28:11.247966 systemd[1]: Started session-3.scope - Session 3 of User core. Jun 21 05:28:11.314820 sshd[1705]: Connection closed by 139.178.68.195 port 42000 Jun 21 05:28:11.319502 sshd-session[1699]: pam_unix(sshd:session): session closed for user core Jun 21 05:28:11.332305 systemd[1]: sshd@2-64.23.242.202:22-139.178.68.195:42000.service: Deactivated successfully. 
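The startup summary above is simply the sum of the three boot phases; a one-line check with the figures from the log:

phases = {"kernel": 3.552, "initrd": 7.290, "userspace": 7.157}  # seconds, from the log
print(f"total boot time: {sum(phases.values()):.3f}s")           # 17.999s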
Jun 21 05:28:11.337539 systemd[1]: session-3.scope: Deactivated successfully. Jun 21 05:28:11.341947 systemd-logind[1504]: Session 3 logged out. Waiting for processes to exit. Jun 21 05:28:11.347110 systemd[1]: Started sshd@3-64.23.242.202:22-139.178.68.195:42010.service - OpenSSH per-connection server daemon (139.178.68.195:42010). Jun 21 05:28:11.351058 systemd-logind[1504]: Removed session 3. Jun 21 05:28:11.424818 sshd[1711]: Accepted publickey for core from 139.178.68.195 port 42010 ssh2: RSA SHA256:esrwHbjCvD8R4I7sQRiHa5Rpu9l1igA0BMtQzkIUH4o Jun 21 05:28:11.427403 sshd-session[1711]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 05:28:11.438968 systemd-logind[1504]: New session 4 of user core. Jun 21 05:28:11.442937 systemd[1]: Started session-4.scope - Session 4 of User core. Jun 21 05:28:11.514566 sshd[1713]: Connection closed by 139.178.68.195 port 42010 Jun 21 05:28:11.515964 sshd-session[1711]: pam_unix(sshd:session): session closed for user core Jun 21 05:28:11.530387 systemd[1]: sshd@3-64.23.242.202:22-139.178.68.195:42010.service: Deactivated successfully. Jun 21 05:28:11.533724 systemd[1]: session-4.scope: Deactivated successfully. Jun 21 05:28:11.537740 systemd-logind[1504]: Session 4 logged out. Waiting for processes to exit. Jun 21 05:28:11.543752 systemd[1]: Started sshd@4-64.23.242.202:22-139.178.68.195:42018.service - OpenSSH per-connection server daemon (139.178.68.195:42018). Jun 21 05:28:11.546476 systemd-logind[1504]: Removed session 4. Jun 21 05:28:11.619220 sshd[1719]: Accepted publickey for core from 139.178.68.195 port 42018 ssh2: RSA SHA256:esrwHbjCvD8R4I7sQRiHa5Rpu9l1igA0BMtQzkIUH4o Jun 21 05:28:11.621406 sshd-session[1719]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 05:28:11.637094 systemd-logind[1504]: New session 5 of user core. Jun 21 05:28:11.639508 systemd[1]: Started session-5.scope - Session 5 of User core. Jun 21 05:28:11.718139 sudo[1723]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jun 21 05:28:11.718756 sudo[1723]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jun 21 05:28:11.738028 sudo[1723]: pam_unix(sudo:session): session closed for user root Jun 21 05:28:11.741640 sshd[1721]: Connection closed by 139.178.68.195 port 42018 Jun 21 05:28:11.742310 sshd-session[1719]: pam_unix(sshd:session): session closed for user core Jun 21 05:28:11.757836 systemd[1]: sshd@4-64.23.242.202:22-139.178.68.195:42018.service: Deactivated successfully. Jun 21 05:28:11.760843 systemd[1]: session-5.scope: Deactivated successfully. Jun 21 05:28:11.764039 systemd-logind[1504]: Session 5 logged out. Waiting for processes to exit. Jun 21 05:28:11.768995 systemd[1]: Started sshd@5-64.23.242.202:22-139.178.68.195:42032.service - OpenSSH per-connection server daemon (139.178.68.195:42032). Jun 21 05:28:11.771941 systemd-logind[1504]: Removed session 5. Jun 21 05:28:11.835217 sshd[1729]: Accepted publickey for core from 139.178.68.195 port 42032 ssh2: RSA SHA256:esrwHbjCvD8R4I7sQRiHa5Rpu9l1igA0BMtQzkIUH4o Jun 21 05:28:11.837583 sshd-session[1729]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 05:28:11.847386 systemd-logind[1504]: New session 6 of user core. Jun 21 05:28:11.850896 systemd[1]: Started session-6.scope - Session 6 of User core. 
Jun 21 05:28:11.906077 kubelet[1688]: E0621 05:28:11.904949 1688 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 21 05:28:11.909594 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 21 05:28:11.910029 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 21 05:28:11.910806 systemd[1]: kubelet.service: Consumed 1.394s CPU time, 263M memory peak. Jun 21 05:28:11.918884 sudo[1733]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jun 21 05:28:11.919316 sudo[1733]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jun 21 05:28:11.925842 sudo[1733]: pam_unix(sudo:session): session closed for user root Jun 21 05:28:11.934199 sudo[1732]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jun 21 05:28:11.934608 sudo[1732]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jun 21 05:28:11.951547 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jun 21 05:28:12.010607 augenrules[1756]: No rules Jun 21 05:28:12.012998 systemd[1]: audit-rules.service: Deactivated successfully. Jun 21 05:28:12.013351 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jun 21 05:28:12.015232 sudo[1732]: pam_unix(sudo:session): session closed for user root Jun 21 05:28:12.018485 sshd[1731]: Connection closed by 139.178.68.195 port 42032 Jun 21 05:28:12.019167 sshd-session[1729]: pam_unix(sshd:session): session closed for user core Jun 21 05:28:12.033266 systemd[1]: sshd@5-64.23.242.202:22-139.178.68.195:42032.service: Deactivated successfully. Jun 21 05:28:12.036417 systemd[1]: session-6.scope: Deactivated successfully. Jun 21 05:28:12.037753 systemd-logind[1504]: Session 6 logged out. Waiting for processes to exit. Jun 21 05:28:12.043261 systemd[1]: Started sshd@6-64.23.242.202:22-139.178.68.195:42036.service - OpenSSH per-connection server daemon (139.178.68.195:42036). Jun 21 05:28:12.044984 systemd-logind[1504]: Removed session 6. Jun 21 05:28:12.110037 sshd[1765]: Accepted publickey for core from 139.178.68.195 port 42036 ssh2: RSA SHA256:esrwHbjCvD8R4I7sQRiHa5Rpu9l1igA0BMtQzkIUH4o Jun 21 05:28:12.112338 sshd-session[1765]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 05:28:12.120851 systemd-logind[1504]: New session 7 of user core. Jun 21 05:28:12.126971 systemd[1]: Started session-7.scope - Session 7 of User core. Jun 21 05:28:12.188247 sudo[1768]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jun 21 05:28:12.189428 sudo[1768]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jun 21 05:28:12.674993 systemd[1]: Starting docker.service - Docker Application Container Engine... 
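The kubelet exit above is the normal pre-bootstrap state: /var/lib/kubelet/config.yaml does not exist until kubeadm (or whatever provisions this node) writes it, so the unit fails and systemd schedules a restart. A hedged sketch of the kind of minimal KubeletConfiguration that ends up at that path; the field values are illustrative assumptions, not what this node will actually receive.

from pathlib import Path

# Illustrative KubeletConfiguration; real clusters generate this via kubeadm
# or their own provisioning, with cluster-specific values.
KUBELET_CONFIG = """\
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd        # matches the cgroup driver reported later in this log
clusterDNS:
  - 10.96.0.10               # assumed service-CIDR DNS address
clusterDomain: cluster.local
"""

path = Path("/var/lib/kubelet/config.yaml")   # path from the error message
print(f"would write {path}:\n{KUBELET_CONFIG}")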
Jun 21 05:28:12.697294 (dockerd)[1786]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jun 21 05:28:13.022645 dockerd[1786]: time="2025-06-21T05:28:13.022266942Z" level=info msg="Starting up" Jun 21 05:28:13.027829 dockerd[1786]: time="2025-06-21T05:28:13.027296786Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jun 21 05:28:13.080136 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport187214125-merged.mount: Deactivated successfully. Jun 21 05:28:13.158161 dockerd[1786]: time="2025-06-21T05:28:13.157891817Z" level=info msg="Loading containers: start." Jun 21 05:28:13.172692 kernel: Initializing XFRM netlink socket Jun 21 05:28:13.495963 systemd-networkd[1449]: docker0: Link UP Jun 21 05:28:13.499379 dockerd[1786]: time="2025-06-21T05:28:13.499340803Z" level=info msg="Loading containers: done." Jun 21 05:28:13.519110 dockerd[1786]: time="2025-06-21T05:28:13.519014636Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jun 21 05:28:13.519329 dockerd[1786]: time="2025-06-21T05:28:13.519161029Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Jun 21 05:28:13.519389 dockerd[1786]: time="2025-06-21T05:28:13.519347651Z" level=info msg="Initializing buildkit" Jun 21 05:28:13.521939 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2373461319-merged.mount: Deactivated successfully. Jun 21 05:28:13.546585 dockerd[1786]: time="2025-06-21T05:28:13.546516444Z" level=info msg="Completed buildkit initialization" Jun 21 05:28:13.558088 dockerd[1786]: time="2025-06-21T05:28:13.558026145Z" level=info msg="Daemon has completed initialization" Jun 21 05:28:13.558402 dockerd[1786]: time="2025-06-21T05:28:13.558158152Z" level=info msg="API listen on /run/docker.sock" Jun 21 05:28:13.558624 systemd[1]: Started docker.service - Docker Application Container Engine. Jun 21 05:28:14.705963 systemd-timesyncd[1412]: Contacted time server 74.6.168.73:123 (1.flatcar.pool.ntp.org). Jun 21 05:28:14.706373 systemd-resolved[1397]: Clock change detected. Flushing caches. Jun 21 05:28:14.706694 systemd-timesyncd[1412]: Initial clock synchronization to Sat 2025-06-21 05:28:14.705559 UTC. Jun 21 05:28:14.945709 containerd[1528]: time="2025-06-21T05:28:14.945628159Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\"" Jun 21 05:28:15.482845 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3772058369.mount: Deactivated successfully. 
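Once the daemon reports "API listen on /run/docker.sock", the engine can be queried over that socket. A small sketch using the Docker SDK for Python; the SDK itself is an assumption (pip install docker), since the log only shows the daemon side.

import docker  # Docker SDK for Python -- assumed to be installed

# from_env() picks up /var/run/docker.sock (or DOCKER_HOST) and queries the daemon.
client = docker.from_env()
print("engine version :", client.version().get("Version"))
print("storage driver :", client.info().get("Driver"))  # overlay2 per the log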
Jun 21 05:28:16.800103 containerd[1528]: time="2025-06-21T05:28:16.800001795Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:28:16.802175 containerd[1528]: time="2025-06-21T05:28:16.802106026Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.10: active requests=0, bytes read=28077744" Jun 21 05:28:16.803565 containerd[1528]: time="2025-06-21T05:28:16.803471785Z" level=info msg="ImageCreate event name:\"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:28:16.806381 containerd[1528]: time="2025-06-21T05:28:16.806269720Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:28:16.808495 containerd[1528]: time="2025-06-21T05:28:16.807717539Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.10\" with image id \"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\", size \"28074544\" in 1.862023417s" Jun 21 05:28:16.808495 containerd[1528]: time="2025-06-21T05:28:16.807782049Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\" returns image reference \"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\"" Jun 21 05:28:16.808882 containerd[1528]: time="2025-06-21T05:28:16.808850246Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\"" Jun 21 05:28:18.351345 containerd[1528]: time="2025-06-21T05:28:18.350976946Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:28:18.353277 containerd[1528]: time="2025-06-21T05:28:18.353218820Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.10: active requests=0, bytes read=24713294" Jun 21 05:28:18.354888 containerd[1528]: time="2025-06-21T05:28:18.354081369Z" level=info msg="ImageCreate event name:\"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:28:18.358064 containerd[1528]: time="2025-06-21T05:28:18.357993151Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:28:18.361263 containerd[1528]: time="2025-06-21T05:28:18.361192048Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.10\" with image id \"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\", size \"26315128\" in 1.552298332s" Jun 21 05:28:18.361263 containerd[1528]: time="2025-06-21T05:28:18.361249935Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\" returns image reference \"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\"" Jun 21 
05:28:18.363083 containerd[1528]: time="2025-06-21T05:28:18.363027195Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\"" Jun 21 05:28:19.564273 containerd[1528]: time="2025-06-21T05:28:19.564215105Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:28:19.566326 containerd[1528]: time="2025-06-21T05:28:19.566271578Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.10: active requests=0, bytes read=18783671" Jun 21 05:28:19.567333 containerd[1528]: time="2025-06-21T05:28:19.567278563Z" level=info msg="ImageCreate event name:\"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:28:19.570186 containerd[1528]: time="2025-06-21T05:28:19.570099614Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:28:19.571396 containerd[1528]: time="2025-06-21T05:28:19.571227528Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.10\" with image id \"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\", size \"20385523\" in 1.207947839s" Jun 21 05:28:19.571396 containerd[1528]: time="2025-06-21T05:28:19.571272916Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\" returns image reference \"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\"" Jun 21 05:28:19.571820 containerd[1528]: time="2025-06-21T05:28:19.571746550Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\"" Jun 21 05:28:20.794120 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3565386605.mount: Deactivated successfully. 
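The pull lines above report both the bytes read and the wall-clock time, so the effective transfer rate is a one-line calculation (figures from the kube-scheduler pull; "bytes read" is the compressed stream, so this is network throughput, not unpacked image size).

bytes_read = 18_783_671     # from "stop pulling ... bytes read=18783671"
seconds = 1.207947839       # from "... in 1.207947839s"
print(f"~{bytes_read / seconds / 1e6:.1f} MB/s")   # ≈ 15.6 MB/s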
Jun 21 05:28:21.496342 containerd[1528]: time="2025-06-21T05:28:21.495901416Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:28:21.497901 containerd[1528]: time="2025-06-21T05:28:21.497732928Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.10: active requests=0, bytes read=30383943" Jun 21 05:28:21.498284 containerd[1528]: time="2025-06-21T05:28:21.498241915Z" level=info msg="ImageCreate event name:\"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:28:21.500458 containerd[1528]: time="2025-06-21T05:28:21.500407705Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:28:21.501478 containerd[1528]: time="2025-06-21T05:28:21.500977288Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.10\" with image id \"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\", repo tag \"registry.k8s.io/kube-proxy:v1.31.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\", size \"30382962\" in 1.929191885s" Jun 21 05:28:21.501478 containerd[1528]: time="2025-06-21T05:28:21.501020647Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\" returns image reference \"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\"" Jun 21 05:28:21.502581 containerd[1528]: time="2025-06-21T05:28:21.502442364Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jun 21 05:28:21.505187 systemd-resolved[1397]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. Jun 21 05:28:22.038681 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1263651460.mount: Deactivated successfully. Jun 21 05:28:22.644379 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jun 21 05:28:22.648280 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 21 05:28:22.891274 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 21 05:28:22.903866 (kubelet)[2127]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 21 05:28:23.004603 kubelet[2127]: E0621 05:28:23.004454 2127 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 21 05:28:23.015396 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 21 05:28:23.015733 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 21 05:28:23.017140 systemd[1]: kubelet.service: Consumed 244ms CPU time, 109.6M memory peak. 
Jun 21 05:28:23.299157 containerd[1528]: time="2025-06-21T05:28:23.299020293Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:28:23.300987 containerd[1528]: time="2025-06-21T05:28:23.300939046Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Jun 21 05:28:23.302262 containerd[1528]: time="2025-06-21T05:28:23.302144537Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:28:23.305033 containerd[1528]: time="2025-06-21T05:28:23.304950181Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:28:23.306439 containerd[1528]: time="2025-06-21T05:28:23.306270760Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.803754401s" Jun 21 05:28:23.306439 containerd[1528]: time="2025-06-21T05:28:23.306340500Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jun 21 05:28:23.307677 containerd[1528]: time="2025-06-21T05:28:23.307616299Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jun 21 05:28:23.769206 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2135704531.mount: Deactivated successfully. 
Jun 21 05:28:23.774376 containerd[1528]: time="2025-06-21T05:28:23.774281423Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 21 05:28:23.775415 containerd[1528]: time="2025-06-21T05:28:23.775369065Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jun 21 05:28:23.778014 containerd[1528]: time="2025-06-21T05:28:23.777965250Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 21 05:28:23.779182 containerd[1528]: time="2025-06-21T05:28:23.779141447Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 21 05:28:23.781096 containerd[1528]: time="2025-06-21T05:28:23.781060555Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 473.410098ms" Jun 21 05:28:23.781199 containerd[1528]: time="2025-06-21T05:28:23.781109837Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jun 21 05:28:23.781742 containerd[1528]: time="2025-06-21T05:28:23.781696186Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jun 21 05:28:24.246787 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount349436944.mount: Deactivated successfully. Jun 21 05:28:24.577469 systemd-resolved[1397]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2. 
Jun 21 05:28:25.945380 containerd[1528]: time="2025-06-21T05:28:25.945287959Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:28:25.946531 containerd[1528]: time="2025-06-21T05:28:25.946480720Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780013" Jun 21 05:28:25.947534 containerd[1528]: time="2025-06-21T05:28:25.947489694Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:28:25.951494 containerd[1528]: time="2025-06-21T05:28:25.950826554Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:28:25.952203 containerd[1528]: time="2025-06-21T05:28:25.952158958Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 2.170210171s" Jun 21 05:28:25.952203 containerd[1528]: time="2025-06-21T05:28:25.952201347Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Jun 21 05:28:28.812264 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 21 05:28:28.813083 systemd[1]: kubelet.service: Consumed 244ms CPU time, 109.6M memory peak. Jun 21 05:28:28.816067 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 21 05:28:28.849229 systemd[1]: Reload requested from client PID 2220 ('systemctl') (unit session-7.scope)... Jun 21 05:28:28.849466 systemd[1]: Reloading... Jun 21 05:28:28.975432 zram_generator::config[2259]: No configuration found. Jun 21 05:28:29.185190 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 21 05:28:29.356564 systemd[1]: Reloading finished in 506 ms. Jun 21 05:28:29.436342 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jun 21 05:28:29.436479 systemd[1]: kubelet.service: Failed with result 'signal'. Jun 21 05:28:29.436783 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 21 05:28:29.436839 systemd[1]: kubelet.service: Consumed 154ms CPU time, 98.3M memory peak. Jun 21 05:28:29.439596 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 21 05:28:29.608860 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 21 05:28:29.627382 (kubelet)[2317]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jun 21 05:28:29.691220 kubelet[2317]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 21 05:28:29.691220 kubelet[2317]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. 
Image garbage collector will get sandbox image information from CRI. Jun 21 05:28:29.691220 kubelet[2317]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 21 05:28:29.691220 kubelet[2317]: I0621 05:28:29.690751 2317 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 21 05:28:29.968068 kubelet[2317]: I0621 05:28:29.967602 2317 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jun 21 05:28:29.968068 kubelet[2317]: I0621 05:28:29.967657 2317 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 21 05:28:29.968068 kubelet[2317]: I0621 05:28:29.967955 2317 server.go:934] "Client rotation is on, will bootstrap in background" Jun 21 05:28:30.007341 kubelet[2317]: E0621 05:28:30.006267 2317 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://64.23.242.202:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 64.23.242.202:6443: connect: connection refused" logger="UnhandledError" Jun 21 05:28:30.008733 kubelet[2317]: I0621 05:28:30.008684 2317 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 21 05:28:30.021829 kubelet[2317]: I0621 05:28:30.021780 2317 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jun 21 05:28:30.029536 kubelet[2317]: I0621 05:28:30.029499 2317 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jun 21 05:28:30.031979 kubelet[2317]: I0621 05:28:30.031939 2317 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jun 21 05:28:30.032434 kubelet[2317]: I0621 05:28:30.032386 2317 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 21 05:28:30.032736 kubelet[2317]: I0621 05:28:30.032534 2317 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4372.0.0-d-47135505f9","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jun 21 05:28:30.032956 kubelet[2317]: I0621 05:28:30.032946 2317 topology_manager.go:138] "Creating topology manager with none policy" Jun 21 05:28:30.033007 kubelet[2317]: I0621 05:28:30.033000 2317 container_manager_linux.go:300] "Creating device plugin manager" Jun 21 05:28:30.033173 kubelet[2317]: I0621 05:28:30.033162 2317 state_mem.go:36] "Initialized new in-memory state store" Jun 21 05:28:30.036125 kubelet[2317]: I0621 05:28:30.036079 2317 kubelet.go:408] "Attempting to sync node with API server" Jun 21 05:28:30.036334 kubelet[2317]: I0621 05:28:30.036317 2317 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 21 05:28:30.036446 kubelet[2317]: I0621 05:28:30.036433 2317 kubelet.go:314] "Adding apiserver pod source" Jun 21 05:28:30.036540 kubelet[2317]: I0621 05:28:30.036531 2317 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 21 05:28:30.041106 kubelet[2317]: W0621 05:28:30.041041 2317 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://64.23.242.202:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4372.0.0-d-47135505f9&limit=500&resourceVersion=0": dial tcp 64.23.242.202:6443: connect: connection refused Jun 21 05:28:30.041106 kubelet[2317]: E0621 05:28:30.041107 2317 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://64.23.242.202:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4372.0.0-d-47135505f9&limit=500&resourceVersion=0\": dial tcp 64.23.242.202:6443: connect: connection refused" logger="UnhandledError" Jun 21 05:28:30.042073 kubelet[2317]: W0621 05:28:30.042020 2317 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://64.23.242.202:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 64.23.242.202:6443: connect: connection refused Jun 21 05:28:30.042197 kubelet[2317]: E0621 05:28:30.042077 2317 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://64.23.242.202:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 64.23.242.202:6443: connect: connection refused" logger="UnhandledError" Jun 21 05:28:30.042197 kubelet[2317]: I0621 05:28:30.042188 2317 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jun 21 05:28:30.045653 kubelet[2317]: I0621 05:28:30.045617 2317 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jun 21 05:28:30.045857 kubelet[2317]: W0621 05:28:30.045789 2317 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jun 21 05:28:30.047368 kubelet[2317]: I0621 05:28:30.046591 2317 server.go:1274] "Started kubelet" Jun 21 05:28:30.047904 kubelet[2317]: I0621 05:28:30.047864 2317 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jun 21 05:28:30.048927 kubelet[2317]: I0621 05:28:30.048897 2317 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 21 05:28:30.050025 kubelet[2317]: I0621 05:28:30.049880 2317 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 21 05:28:30.050025 kubelet[2317]: I0621 05:28:30.049997 2317 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jun 21 05:28:30.058745 kubelet[2317]: I0621 05:28:30.058144 2317 server.go:449] "Adding debug handlers to kubelet server" Jun 21 05:28:30.061802 kubelet[2317]: I0621 05:28:30.061680 2317 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jun 21 05:28:30.064850 kubelet[2317]: I0621 05:28:30.064814 2317 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jun 21 05:28:30.065292 kubelet[2317]: E0621 05:28:30.065267 2317 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4372.0.0-d-47135505f9\" not found" Jun 21 05:28:30.066610 kubelet[2317]: I0621 05:28:30.066563 2317 volume_manager.go:289] "Starting Kubelet Volume Manager" Jun 21 05:28:30.067066 kubelet[2317]: I0621 05:28:30.067045 2317 reconciler.go:26] "Reconciler: start to sync state" Jun 21 05:28:30.071708 kubelet[2317]: E0621 05:28:30.070229 2317 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://64.23.242.202:6443/api/v1/namespaces/default/events\": dial tcp 64.23.242.202:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4372.0.0-d-47135505f9.184af7a8ea6ee533 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4372.0.0-d-47135505f9,UID:ci-4372.0.0-d-47135505f9,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4372.0.0-d-47135505f9,},FirstTimestamp:2025-06-21 05:28:30.046561587 +0000 UTC m=+0.412427575,LastTimestamp:2025-06-21 05:28:30.046561587 +0000 UTC m=+0.412427575,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4372.0.0-d-47135505f9,}" Jun 21 05:28:30.073348 kubelet[2317]: E0621 05:28:30.073262 2317 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.23.242.202:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4372.0.0-d-47135505f9?timeout=10s\": dial tcp 64.23.242.202:6443: connect: connection refused" interval="200ms" Jun 21 05:28:30.075378 kubelet[2317]: I0621 05:28:30.075320 2317 factory.go:221] Registration of the systemd container factory successfully Jun 21 05:28:30.077891 kubelet[2317]: I0621 05:28:30.077156 2317 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jun 21 05:28:30.079917 kubelet[2317]: I0621 05:28:30.079882 2317 factory.go:221] Registration of the containerd container factory successfully Jun 21 05:28:30.090726 kubelet[2317]: W0621 05:28:30.090686 2317 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://64.23.242.202:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 64.23.242.202:6443: connect: connection refused Jun 21 05:28:30.091287 kubelet[2317]: E0621 05:28:30.091250 2317 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://64.23.242.202:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 64.23.242.202:6443: connect: connection refused" logger="UnhandledError" Jun 21 05:28:30.102621 kubelet[2317]: I0621 05:28:30.102486 2317 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jun 21 05:28:30.109757 kubelet[2317]: E0621 05:28:30.109712 2317 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 21 05:28:30.109908 kubelet[2317]: I0621 05:28:30.109803 2317 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jun 21 05:28:30.109908 kubelet[2317]: I0621 05:28:30.109829 2317 status_manager.go:217] "Starting to sync pod status with apiserver" Jun 21 05:28:30.109908 kubelet[2317]: I0621 05:28:30.109861 2317 kubelet.go:2321] "Starting kubelet main sync loop" Jun 21 05:28:30.109985 kubelet[2317]: E0621 05:28:30.109927 2317 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 21 05:28:30.117677 kubelet[2317]: W0621 05:28:30.117613 2317 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://64.23.242.202:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 64.23.242.202:6443: connect: connection refused Jun 21 05:28:30.117677 kubelet[2317]: E0621 05:28:30.117679 2317 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://64.23.242.202:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 64.23.242.202:6443: connect: connection refused" logger="UnhandledError" Jun 21 05:28:30.123980 kubelet[2317]: I0621 05:28:30.123942 2317 cpu_manager.go:214] "Starting CPU manager" policy="none" Jun 21 05:28:30.123980 kubelet[2317]: I0621 05:28:30.123961 2317 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jun 21 05:28:30.123980 kubelet[2317]: I0621 05:28:30.123984 2317 state_mem.go:36] "Initialized new in-memory state store" Jun 21 05:28:30.126449 kubelet[2317]: I0621 05:28:30.126408 2317 policy_none.go:49] "None policy: Start" Jun 21 05:28:30.127402 kubelet[2317]: I0621 05:28:30.127380 2317 memory_manager.go:170] "Starting memorymanager" policy="None" Jun 21 05:28:30.127484 kubelet[2317]: I0621 05:28:30.127418 2317 state_mem.go:35] "Initializing new in-memory state store" Jun 21 05:28:30.137428 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jun 21 05:28:30.159313 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jun 21 05:28:30.165550 kubelet[2317]: E0621 05:28:30.165504 2317 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4372.0.0-d-47135505f9\" not found" Jun 21 05:28:30.174660 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jun 21 05:28:30.177054 kubelet[2317]: I0621 05:28:30.176827 2317 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 21 05:28:30.177177 kubelet[2317]: I0621 05:28:30.177071 2317 eviction_manager.go:189] "Eviction manager: starting control loop" Jun 21 05:28:30.177177 kubelet[2317]: I0621 05:28:30.177094 2317 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jun 21 05:28:30.178291 kubelet[2317]: I0621 05:28:30.177834 2317 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 21 05:28:30.183030 kubelet[2317]: E0621 05:28:30.183003 2317 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4372.0.0-d-47135505f9\" not found" Jun 21 05:28:30.222423 systemd[1]: Created slice kubepods-burstable-podc01bd59b88ff1246d6e04066dc51de04.slice - libcontainer container kubepods-burstable-podc01bd59b88ff1246d6e04066dc51de04.slice. 
Jun 21 05:28:30.236258 systemd[1]: Created slice kubepods-burstable-pod877615f28fdfc9f192d36a28879733c3.slice - libcontainer container kubepods-burstable-pod877615f28fdfc9f192d36a28879733c3.slice. Jun 21 05:28:30.242477 systemd[1]: Created slice kubepods-burstable-pod9c91829c6b8ded1d2c2517f460878460.slice - libcontainer container kubepods-burstable-pod9c91829c6b8ded1d2c2517f460878460.slice. Jun 21 05:28:30.269067 kubelet[2317]: I0621 05:28:30.268984 2317 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/877615f28fdfc9f192d36a28879733c3-ca-certs\") pod \"kube-controller-manager-ci-4372.0.0-d-47135505f9\" (UID: \"877615f28fdfc9f192d36a28879733c3\") " pod="kube-system/kube-controller-manager-ci-4372.0.0-d-47135505f9" Jun 21 05:28:30.269067 kubelet[2317]: I0621 05:28:30.269046 2317 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/877615f28fdfc9f192d36a28879733c3-flexvolume-dir\") pod \"kube-controller-manager-ci-4372.0.0-d-47135505f9\" (UID: \"877615f28fdfc9f192d36a28879733c3\") " pod="kube-system/kube-controller-manager-ci-4372.0.0-d-47135505f9" Jun 21 05:28:30.269067 kubelet[2317]: I0621 05:28:30.269067 2317 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9c91829c6b8ded1d2c2517f460878460-kubeconfig\") pod \"kube-scheduler-ci-4372.0.0-d-47135505f9\" (UID: \"9c91829c6b8ded1d2c2517f460878460\") " pod="kube-system/kube-scheduler-ci-4372.0.0-d-47135505f9" Jun 21 05:28:30.269067 kubelet[2317]: I0621 05:28:30.269095 2317 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/877615f28fdfc9f192d36a28879733c3-kubeconfig\") pod \"kube-controller-manager-ci-4372.0.0-d-47135505f9\" (UID: \"877615f28fdfc9f192d36a28879733c3\") " pod="kube-system/kube-controller-manager-ci-4372.0.0-d-47135505f9" Jun 21 05:28:30.269448 kubelet[2317]: I0621 05:28:30.269119 2317 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/877615f28fdfc9f192d36a28879733c3-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4372.0.0-d-47135505f9\" (UID: \"877615f28fdfc9f192d36a28879733c3\") " pod="kube-system/kube-controller-manager-ci-4372.0.0-d-47135505f9" Jun 21 05:28:30.269448 kubelet[2317]: I0621 05:28:30.269145 2317 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c01bd59b88ff1246d6e04066dc51de04-ca-certs\") pod \"kube-apiserver-ci-4372.0.0-d-47135505f9\" (UID: \"c01bd59b88ff1246d6e04066dc51de04\") " pod="kube-system/kube-apiserver-ci-4372.0.0-d-47135505f9" Jun 21 05:28:30.269448 kubelet[2317]: I0621 05:28:30.269182 2317 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c01bd59b88ff1246d6e04066dc51de04-k8s-certs\") pod \"kube-apiserver-ci-4372.0.0-d-47135505f9\" (UID: \"c01bd59b88ff1246d6e04066dc51de04\") " pod="kube-system/kube-apiserver-ci-4372.0.0-d-47135505f9" Jun 21 05:28:30.269448 kubelet[2317]: I0621 05:28:30.269208 2317 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c01bd59b88ff1246d6e04066dc51de04-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4372.0.0-d-47135505f9\" (UID: \"c01bd59b88ff1246d6e04066dc51de04\") " pod="kube-system/kube-apiserver-ci-4372.0.0-d-47135505f9" Jun 21 05:28:30.269448 kubelet[2317]: I0621 05:28:30.269233 2317 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/877615f28fdfc9f192d36a28879733c3-k8s-certs\") pod \"kube-controller-manager-ci-4372.0.0-d-47135505f9\" (UID: \"877615f28fdfc9f192d36a28879733c3\") " pod="kube-system/kube-controller-manager-ci-4372.0.0-d-47135505f9" Jun 21 05:28:30.274704 kubelet[2317]: E0621 05:28:30.274636 2317 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.23.242.202:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4372.0.0-d-47135505f9?timeout=10s\": dial tcp 64.23.242.202:6443: connect: connection refused" interval="400ms" Jun 21 05:28:30.278954 kubelet[2317]: I0621 05:28:30.278917 2317 kubelet_node_status.go:72] "Attempting to register node" node="ci-4372.0.0-d-47135505f9" Jun 21 05:28:30.279743 kubelet[2317]: E0621 05:28:30.279707 2317 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://64.23.242.202:6443/api/v1/nodes\": dial tcp 64.23.242.202:6443: connect: connection refused" node="ci-4372.0.0-d-47135505f9" Jun 21 05:28:30.482111 kubelet[2317]: I0621 05:28:30.481964 2317 kubelet_node_status.go:72] "Attempting to register node" node="ci-4372.0.0-d-47135505f9" Jun 21 05:28:30.482466 kubelet[2317]: E0621 05:28:30.482427 2317 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://64.23.242.202:6443/api/v1/nodes\": dial tcp 64.23.242.202:6443: connect: connection refused" node="ci-4372.0.0-d-47135505f9" Jun 21 05:28:30.534174 kubelet[2317]: E0621 05:28:30.534066 2317 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 21 05:28:30.535778 containerd[1528]: time="2025-06-21T05:28:30.535723041Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4372.0.0-d-47135505f9,Uid:c01bd59b88ff1246d6e04066dc51de04,Namespace:kube-system,Attempt:0,}" Jun 21 05:28:30.546547 kubelet[2317]: E0621 05:28:30.545912 2317 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 21 05:28:30.546547 kubelet[2317]: E0621 05:28:30.546329 2317 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 21 05:28:30.550844 containerd[1528]: time="2025-06-21T05:28:30.550794608Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4372.0.0-d-47135505f9,Uid:877615f28fdfc9f192d36a28879733c3,Namespace:kube-system,Attempt:0,}" Jun 21 05:28:30.551788 containerd[1528]: time="2025-06-21T05:28:30.551751677Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4372.0.0-d-47135505f9,Uid:9c91829c6b8ded1d2c2517f460878460,Namespace:kube-system,Attempt:0,}" Jun 21 05:28:30.675330 kubelet[2317]: E0621 05:28:30.675223 2317 controller.go:145] "Failed to ensure lease exists, will retry" 
err="Get \"https://64.23.242.202:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4372.0.0-d-47135505f9?timeout=10s\": dial tcp 64.23.242.202:6443: connect: connection refused" interval="800ms" Jun 21 05:28:30.679510 containerd[1528]: time="2025-06-21T05:28:30.679404217Z" level=info msg="connecting to shim 112d5917ad43322ea86e06a90d58111a493ab56e0275e0e50aaeba620f87fdd9" address="unix:///run/containerd/s/1d47d4e0c4efb2289ac4b4a585d95d9b2ad992a20734183be28eebbd3aa38544" namespace=k8s.io protocol=ttrpc version=3 Jun 21 05:28:30.680901 containerd[1528]: time="2025-06-21T05:28:30.679409145Z" level=info msg="connecting to shim 89f7737a574650cdd84e939b856440add3fc705f1764df7d17bbbe98b1b1c17e" address="unix:///run/containerd/s/fb2a1ebb31da8b370fde64350702f799c62e8f31fdffc873bd47ce1a7669b2a7" namespace=k8s.io protocol=ttrpc version=3 Jun 21 05:28:30.682653 containerd[1528]: time="2025-06-21T05:28:30.682597928Z" level=info msg="connecting to shim b1c1157da5d1064f76471660402baf9aa92169a76363a5a820cc5c14a1a918a8" address="unix:///run/containerd/s/7219aee680392fc0828ee9affa5aba16e8759a2715592b7a04bc6b666c63eec7" namespace=k8s.io protocol=ttrpc version=3 Jun 21 05:28:30.802576 systemd[1]: Started cri-containerd-112d5917ad43322ea86e06a90d58111a493ab56e0275e0e50aaeba620f87fdd9.scope - libcontainer container 112d5917ad43322ea86e06a90d58111a493ab56e0275e0e50aaeba620f87fdd9. Jun 21 05:28:30.817558 systemd[1]: Started cri-containerd-89f7737a574650cdd84e939b856440add3fc705f1764df7d17bbbe98b1b1c17e.scope - libcontainer container 89f7737a574650cdd84e939b856440add3fc705f1764df7d17bbbe98b1b1c17e. Jun 21 05:28:30.819748 systemd[1]: Started cri-containerd-b1c1157da5d1064f76471660402baf9aa92169a76363a5a820cc5c14a1a918a8.scope - libcontainer container b1c1157da5d1064f76471660402baf9aa92169a76363a5a820cc5c14a1a918a8. 
Jun 21 05:28:30.884970 kubelet[2317]: I0621 05:28:30.884928 2317 kubelet_node_status.go:72] "Attempting to register node" node="ci-4372.0.0-d-47135505f9" Jun 21 05:28:30.885762 kubelet[2317]: E0621 05:28:30.885442 2317 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://64.23.242.202:6443/api/v1/nodes\": dial tcp 64.23.242.202:6443: connect: connection refused" node="ci-4372.0.0-d-47135505f9" Jun 21 05:28:30.949953 containerd[1528]: time="2025-06-21T05:28:30.949913842Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4372.0.0-d-47135505f9,Uid:9c91829c6b8ded1d2c2517f460878460,Namespace:kube-system,Attempt:0,} returns sandbox id \"b1c1157da5d1064f76471660402baf9aa92169a76363a5a820cc5c14a1a918a8\"" Jun 21 05:28:30.951962 kubelet[2317]: E0621 05:28:30.951764 2317 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 21 05:28:30.955224 containerd[1528]: time="2025-06-21T05:28:30.955180111Z" level=info msg="CreateContainer within sandbox \"b1c1157da5d1064f76471660402baf9aa92169a76363a5a820cc5c14a1a918a8\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jun 21 05:28:30.977152 containerd[1528]: time="2025-06-21T05:28:30.977094883Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4372.0.0-d-47135505f9,Uid:c01bd59b88ff1246d6e04066dc51de04,Namespace:kube-system,Attempt:0,} returns sandbox id \"89f7737a574650cdd84e939b856440add3fc705f1764df7d17bbbe98b1b1c17e\"" Jun 21 05:28:30.979246 kubelet[2317]: E0621 05:28:30.979210 2317 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 21 05:28:30.986963 containerd[1528]: time="2025-06-21T05:28:30.986912840Z" level=info msg="Container c4fd40345adc27a7d8cc28f3e336040a7df758d1cfddd7b41999c1b5129346ae: CDI devices from CRI Config.CDIDevices: []" Jun 21 05:28:30.989422 containerd[1528]: time="2025-06-21T05:28:30.989241692Z" level=info msg="CreateContainer within sandbox \"89f7737a574650cdd84e939b856440add3fc705f1764df7d17bbbe98b1b1c17e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jun 21 05:28:30.991249 kubelet[2317]: W0621 05:28:30.991068 2317 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://64.23.242.202:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 64.23.242.202:6443: connect: connection refused Jun 21 05:28:30.992337 kubelet[2317]: E0621 05:28:30.992286 2317 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://64.23.242.202:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 64.23.242.202:6443: connect: connection refused" logger="UnhandledError" Jun 21 05:28:30.994729 containerd[1528]: time="2025-06-21T05:28:30.994688714Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4372.0.0-d-47135505f9,Uid:877615f28fdfc9f192d36a28879733c3,Namespace:kube-system,Attempt:0,} returns sandbox id \"112d5917ad43322ea86e06a90d58111a493ab56e0275e0e50aaeba620f87fdd9\"" Jun 21 05:28:30.996945 kubelet[2317]: E0621 05:28:30.996284 2317 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers 
have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 21 05:28:31.003677 containerd[1528]: time="2025-06-21T05:28:31.003586793Z" level=info msg="CreateContainer within sandbox \"b1c1157da5d1064f76471660402baf9aa92169a76363a5a820cc5c14a1a918a8\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"c4fd40345adc27a7d8cc28f3e336040a7df758d1cfddd7b41999c1b5129346ae\"" Jun 21 05:28:31.008838 containerd[1528]: time="2025-06-21T05:28:31.008481531Z" level=info msg="CreateContainer within sandbox \"112d5917ad43322ea86e06a90d58111a493ab56e0275e0e50aaeba620f87fdd9\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jun 21 05:28:31.010386 containerd[1528]: time="2025-06-21T05:28:31.009880444Z" level=info msg="StartContainer for \"c4fd40345adc27a7d8cc28f3e336040a7df758d1cfddd7b41999c1b5129346ae\"" Jun 21 05:28:31.012376 containerd[1528]: time="2025-06-21T05:28:31.012270602Z" level=info msg="connecting to shim c4fd40345adc27a7d8cc28f3e336040a7df758d1cfddd7b41999c1b5129346ae" address="unix:///run/containerd/s/7219aee680392fc0828ee9affa5aba16e8759a2715592b7a04bc6b666c63eec7" protocol=ttrpc version=3 Jun 21 05:28:31.017731 containerd[1528]: time="2025-06-21T05:28:31.017666103Z" level=info msg="Container 7f417f68780791681f5bdc8faad6d3a3c156dc072632ea07dcab5e7f4cc9c41c: CDI devices from CRI Config.CDIDevices: []" Jun 21 05:28:31.027216 containerd[1528]: time="2025-06-21T05:28:31.027165767Z" level=info msg="CreateContainer within sandbox \"89f7737a574650cdd84e939b856440add3fc705f1764df7d17bbbe98b1b1c17e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"7f417f68780791681f5bdc8faad6d3a3c156dc072632ea07dcab5e7f4cc9c41c\"" Jun 21 05:28:31.031349 containerd[1528]: time="2025-06-21T05:28:31.030667132Z" level=info msg="StartContainer for \"7f417f68780791681f5bdc8faad6d3a3c156dc072632ea07dcab5e7f4cc9c41c\"" Jun 21 05:28:31.033481 containerd[1528]: time="2025-06-21T05:28:31.033438410Z" level=info msg="connecting to shim 7f417f68780791681f5bdc8faad6d3a3c156dc072632ea07dcab5e7f4cc9c41c" address="unix:///run/containerd/s/fb2a1ebb31da8b370fde64350702f799c62e8f31fdffc873bd47ce1a7669b2a7" protocol=ttrpc version=3 Jun 21 05:28:31.033860 containerd[1528]: time="2025-06-21T05:28:31.033756245Z" level=info msg="Container b6b56a22b15519d637f15e5bff8c0ec27483a69980eb149ed2dae4aac42c11ce: CDI devices from CRI Config.CDIDevices: []" Jun 21 05:28:31.045088 containerd[1528]: time="2025-06-21T05:28:31.045035217Z" level=info msg="CreateContainer within sandbox \"112d5917ad43322ea86e06a90d58111a493ab56e0275e0e50aaeba620f87fdd9\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b6b56a22b15519d637f15e5bff8c0ec27483a69980eb149ed2dae4aac42c11ce\"" Jun 21 05:28:31.046975 containerd[1528]: time="2025-06-21T05:28:31.046927971Z" level=info msg="StartContainer for \"b6b56a22b15519d637f15e5bff8c0ec27483a69980eb149ed2dae4aac42c11ce\"" Jun 21 05:28:31.050347 containerd[1528]: time="2025-06-21T05:28:31.050274565Z" level=info msg="connecting to shim b6b56a22b15519d637f15e5bff8c0ec27483a69980eb149ed2dae4aac42c11ce" address="unix:///run/containerd/s/1d47d4e0c4efb2289ac4b4a585d95d9b2ad992a20734183be28eebbd3aa38544" protocol=ttrpc version=3 Jun 21 05:28:31.063620 systemd[1]: Started cri-containerd-c4fd40345adc27a7d8cc28f3e336040a7df758d1cfddd7b41999c1b5129346ae.scope - libcontainer container c4fd40345adc27a7d8cc28f3e336040a7df758d1cfddd7b41999c1b5129346ae. 
Jun 21 05:28:31.068783 kubelet[2317]: W0621 05:28:31.068708 2317 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://64.23.242.202:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4372.0.0-d-47135505f9&limit=500&resourceVersion=0": dial tcp 64.23.242.202:6443: connect: connection refused Jun 21 05:28:31.069696 kubelet[2317]: E0621 05:28:31.069638 2317 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://64.23.242.202:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4372.0.0-d-47135505f9&limit=500&resourceVersion=0\": dial tcp 64.23.242.202:6443: connect: connection refused" logger="UnhandledError" Jun 21 05:28:31.086595 systemd[1]: Started cri-containerd-7f417f68780791681f5bdc8faad6d3a3c156dc072632ea07dcab5e7f4cc9c41c.scope - libcontainer container 7f417f68780791681f5bdc8faad6d3a3c156dc072632ea07dcab5e7f4cc9c41c. Jun 21 05:28:31.112653 systemd[1]: Started cri-containerd-b6b56a22b15519d637f15e5bff8c0ec27483a69980eb149ed2dae4aac42c11ce.scope - libcontainer container b6b56a22b15519d637f15e5bff8c0ec27483a69980eb149ed2dae4aac42c11ce. Jun 21 05:28:31.241516 kubelet[2317]: W0621 05:28:31.241273 2317 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://64.23.242.202:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 64.23.242.202:6443: connect: connection refused Jun 21 05:28:31.241516 kubelet[2317]: E0621 05:28:31.241442 2317 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://64.23.242.202:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 64.23.242.202:6443: connect: connection refused" logger="UnhandledError" Jun 21 05:28:31.247140 containerd[1528]: time="2025-06-21T05:28:31.247068777Z" level=info msg="StartContainer for \"c4fd40345adc27a7d8cc28f3e336040a7df758d1cfddd7b41999c1b5129346ae\" returns successfully" Jun 21 05:28:31.252700 kubelet[2317]: W0621 05:28:31.252457 2317 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://64.23.242.202:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 64.23.242.202:6443: connect: connection refused Jun 21 05:28:31.253251 kubelet[2317]: E0621 05:28:31.252886 2317 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://64.23.242.202:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 64.23.242.202:6443: connect: connection refused" logger="UnhandledError" Jun 21 05:28:31.267625 containerd[1528]: time="2025-06-21T05:28:31.267548372Z" level=info msg="StartContainer for \"b6b56a22b15519d637f15e5bff8c0ec27483a69980eb149ed2dae4aac42c11ce\" returns successfully" Jun 21 05:28:31.275618 containerd[1528]: time="2025-06-21T05:28:31.275563479Z" level=info msg="StartContainer for \"7f417f68780791681f5bdc8faad6d3a3c156dc072632ea07dcab5e7f4cc9c41c\" returns successfully" Jun 21 05:28:31.688335 kubelet[2317]: I0621 05:28:31.687231 2317 kubelet_node_status.go:72] "Attempting to register node" node="ci-4372.0.0-d-47135505f9" Jun 21 05:28:32.162210 kubelet[2317]: E0621 05:28:32.161935 2317 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 21 05:28:32.170139 kubelet[2317]: E0621 05:28:32.170089 2317 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 21 05:28:32.171075 kubelet[2317]: E0621 05:28:32.170970 2317 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 21 05:28:33.179358 kubelet[2317]: E0621 05:28:33.176107 2317 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 21 05:28:33.179358 kubelet[2317]: E0621 05:28:33.176905 2317 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 21 05:28:33.182712 kubelet[2317]: E0621 05:28:33.182668 2317 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 21 05:28:33.889689 kubelet[2317]: E0621 05:28:33.889624 2317 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4372.0.0-d-47135505f9\" not found" node="ci-4372.0.0-d-47135505f9" Jun 21 05:28:33.932873 kubelet[2317]: E0621 05:28:33.932724 2317 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4372.0.0-d-47135505f9.184af7a8ea6ee533 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4372.0.0-d-47135505f9,UID:ci-4372.0.0-d-47135505f9,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4372.0.0-d-47135505f9,},FirstTimestamp:2025-06-21 05:28:30.046561587 +0000 UTC m=+0.412427575,LastTimestamp:2025-06-21 05:28:30.046561587 +0000 UTC m=+0.412427575,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4372.0.0-d-47135505f9,}" Jun 21 05:28:33.984450 kubelet[2317]: I0621 05:28:33.984386 2317 kubelet_node_status.go:75] "Successfully registered node" node="ci-4372.0.0-d-47135505f9" Jun 21 05:28:34.016354 kubelet[2317]: E0621 05:28:34.013776 2317 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4372.0.0-d-47135505f9.184af7a8ee31ddac default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4372.0.0-d-47135505f9,UID:ci-4372.0.0-d-47135505f9,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:ci-4372.0.0-d-47135505f9,},FirstTimestamp:2025-06-21 05:28:30.109670828 +0000 UTC m=+0.475536838,LastTimestamp:2025-06-21 05:28:30.109670828 +0000 UTC m=+0.475536838,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4372.0.0-d-47135505f9,}" Jun 21 05:28:34.044468 kubelet[2317]: I0621 05:28:34.044412 2317 
apiserver.go:52] "Watching apiserver" Jun 21 05:28:34.066348 kubelet[2317]: I0621 05:28:34.065778 2317 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jun 21 05:28:34.188745 kubelet[2317]: E0621 05:28:34.188549 2317 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4372.0.0-d-47135505f9\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4372.0.0-d-47135505f9" Jun 21 05:28:34.190938 kubelet[2317]: E0621 05:28:34.190802 2317 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 21 05:28:35.205580 kubelet[2317]: W0621 05:28:35.205514 2317 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 21 05:28:35.206107 kubelet[2317]: E0621 05:28:35.205917 2317 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 21 05:28:36.068239 systemd[1]: Reload requested from client PID 2584 ('systemctl') (unit session-7.scope)... Jun 21 05:28:36.068256 systemd[1]: Reloading... Jun 21 05:28:36.152988 kubelet[2317]: W0621 05:28:36.151433 2317 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 21 05:28:36.152988 kubelet[2317]: E0621 05:28:36.151784 2317 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 21 05:28:36.181229 kubelet[2317]: E0621 05:28:36.181181 2317 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 21 05:28:36.182136 kubelet[2317]: E0621 05:28:36.182024 2317 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 21 05:28:36.240381 zram_generator::config[2633]: No configuration found. Jun 21 05:28:36.359911 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 21 05:28:36.511994 systemd[1]: Reloading finished in 443 ms. Jun 21 05:28:36.558323 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jun 21 05:28:36.574777 systemd[1]: kubelet.service: Deactivated successfully. Jun 21 05:28:36.575087 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 21 05:28:36.575182 systemd[1]: kubelet.service: Consumed 931ms CPU time, 125.9M memory peak. Jun 21 05:28:36.577450 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 21 05:28:36.767221 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jun 21 05:28:36.777868 (kubelet)[2678]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jun 21 05:28:36.862633 kubelet[2678]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 21 05:28:36.862633 kubelet[2678]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jun 21 05:28:36.862633 kubelet[2678]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 21 05:28:36.863377 kubelet[2678]: I0621 05:28:36.863289 2678 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 21 05:28:36.872640 kubelet[2678]: I0621 05:28:36.872598 2678 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jun 21 05:28:36.873378 kubelet[2678]: I0621 05:28:36.872807 2678 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 21 05:28:36.873378 kubelet[2678]: I0621 05:28:36.873081 2678 server.go:934] "Client rotation is on, will bootstrap in background" Jun 21 05:28:36.875284 kubelet[2678]: I0621 05:28:36.875240 2678 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jun 21 05:28:36.882398 kubelet[2678]: I0621 05:28:36.881959 2678 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 21 05:28:36.888312 kubelet[2678]: I0621 05:28:36.888267 2678 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jun 21 05:28:36.894596 kubelet[2678]: I0621 05:28:36.894545 2678 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jun 21 05:28:36.894907 kubelet[2678]: I0621 05:28:36.894872 2678 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jun 21 05:28:36.895134 kubelet[2678]: I0621 05:28:36.895082 2678 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 21 05:28:36.895420 kubelet[2678]: I0621 05:28:36.895156 2678 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4372.0.0-d-47135505f9","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jun 21 05:28:36.895591 kubelet[2678]: I0621 05:28:36.895430 2678 topology_manager.go:138] "Creating topology manager with none policy" Jun 21 05:28:36.895591 kubelet[2678]: I0621 05:28:36.895442 2678 container_manager_linux.go:300] "Creating device plugin manager" Jun 21 05:28:36.895591 kubelet[2678]: I0621 05:28:36.895482 2678 state_mem.go:36] "Initialized new in-memory state store" Jun 21 05:28:36.895834 kubelet[2678]: I0621 05:28:36.895816 2678 kubelet.go:408] "Attempting to sync node with API server" Jun 21 05:28:36.895918 kubelet[2678]: I0621 05:28:36.895840 2678 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 21 05:28:36.895918 kubelet[2678]: I0621 05:28:36.895901 2678 kubelet.go:314] "Adding apiserver pod source" Jun 21 05:28:36.895918 kubelet[2678]: I0621 05:28:36.895913 2678 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 21 05:28:36.899659 kubelet[2678]: I0621 05:28:36.899629 2678 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jun 21 05:28:36.900100 kubelet[2678]: I0621 05:28:36.900078 2678 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jun 21 05:28:36.902604 kubelet[2678]: I0621 05:28:36.902573 2678 server.go:1274] "Started kubelet" Jun 21 05:28:36.906477 kubelet[2678]: I0621 05:28:36.906442 2678 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 21 05:28:36.920366 kubelet[2678]: 
I0621 05:28:36.919667 2678 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jun 21 05:28:36.927663 kubelet[2678]: I0621 05:28:36.927603 2678 server.go:449] "Adding debug handlers to kubelet server" Jun 21 05:28:36.930737 kubelet[2678]: I0621 05:28:36.922119 2678 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jun 21 05:28:36.932608 kubelet[2678]: I0621 05:28:36.920874 2678 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jun 21 05:28:36.932854 kubelet[2678]: I0621 05:28:36.932835 2678 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 21 05:28:36.932903 kubelet[2678]: E0621 05:28:36.925049 2678 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4372.0.0-d-47135505f9\" not found" Jun 21 05:28:36.932933 kubelet[2678]: I0621 05:28:36.924838 2678 volume_manager.go:289] "Starting Kubelet Volume Manager" Jun 21 05:28:36.933342 kubelet[2678]: I0621 05:28:36.924858 2678 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jun 21 05:28:36.933480 kubelet[2678]: I0621 05:28:36.933466 2678 reconciler.go:26] "Reconciler: start to sync state" Jun 21 05:28:36.935334 kubelet[2678]: I0621 05:28:36.935014 2678 factory.go:221] Registration of the systemd container factory successfully Jun 21 05:28:36.936185 kubelet[2678]: I0621 05:28:36.935633 2678 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jun 21 05:28:36.938832 kubelet[2678]: E0621 05:28:36.938805 2678 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 21 05:28:36.943725 kubelet[2678]: I0621 05:28:36.943572 2678 factory.go:221] Registration of the containerd container factory successfully Jun 21 05:28:36.947601 kubelet[2678]: I0621 05:28:36.947391 2678 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jun 21 05:28:36.951879 kubelet[2678]: I0621 05:28:36.951435 2678 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jun 21 05:28:36.951879 kubelet[2678]: I0621 05:28:36.951475 2678 status_manager.go:217] "Starting to sync pod status with apiserver" Jun 21 05:28:36.951879 kubelet[2678]: I0621 05:28:36.951505 2678 kubelet.go:2321] "Starting kubelet main sync loop" Jun 21 05:28:36.951879 kubelet[2678]: E0621 05:28:36.951561 2678 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 21 05:28:37.007406 kubelet[2678]: I0621 05:28:37.007369 2678 cpu_manager.go:214] "Starting CPU manager" policy="none" Jun 21 05:28:37.007406 kubelet[2678]: I0621 05:28:37.007390 2678 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jun 21 05:28:37.007406 kubelet[2678]: I0621 05:28:37.007420 2678 state_mem.go:36] "Initialized new in-memory state store" Jun 21 05:28:37.007684 kubelet[2678]: I0621 05:28:37.007654 2678 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jun 21 05:28:37.007733 kubelet[2678]: I0621 05:28:37.007670 2678 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jun 21 05:28:37.007733 kubelet[2678]: I0621 05:28:37.007698 2678 policy_none.go:49] "None policy: Start" Jun 21 05:28:37.008781 kubelet[2678]: I0621 05:28:37.008751 2678 memory_manager.go:170] "Starting memorymanager" policy="None" Jun 21 05:28:37.008781 kubelet[2678]: I0621 05:28:37.008785 2678 state_mem.go:35] "Initializing new in-memory state store" Jun 21 05:28:37.009037 kubelet[2678]: I0621 05:28:37.009015 2678 state_mem.go:75] "Updated machine memory state" Jun 21 05:28:37.015370 kubelet[2678]: I0621 05:28:37.014607 2678 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 21 05:28:37.015370 kubelet[2678]: I0621 05:28:37.014804 2678 eviction_manager.go:189] "Eviction manager: starting control loop" Jun 21 05:28:37.015370 kubelet[2678]: I0621 05:28:37.014816 2678 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jun 21 05:28:37.015591 kubelet[2678]: I0621 05:28:37.015400 2678 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 21 05:28:37.064947 kubelet[2678]: W0621 05:28:37.063873 2678 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 21 05:28:37.068570 kubelet[2678]: W0621 05:28:37.068525 2678 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 21 05:28:37.070046 kubelet[2678]: E0621 05:28:37.069999 2678 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4372.0.0-d-47135505f9\" already exists" pod="kube-system/kube-controller-manager-ci-4372.0.0-d-47135505f9" Jun 21 05:28:37.070385 kubelet[2678]: W0621 05:28:37.070346 2678 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 21 05:28:37.070500 kubelet[2678]: E0621 05:28:37.070410 2678 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4372.0.0-d-47135505f9\" already exists" pod="kube-system/kube-scheduler-ci-4372.0.0-d-47135505f9" Jun 21 05:28:37.092554 sudo[2710]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jun 21 05:28:37.092963 sudo[2710]: 
pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jun 21 05:28:37.117522 kubelet[2678]: I0621 05:28:37.116620 2678 kubelet_node_status.go:72] "Attempting to register node" node="ci-4372.0.0-d-47135505f9" Jun 21 05:28:37.129080 kubelet[2678]: I0621 05:28:37.129039 2678 kubelet_node_status.go:111] "Node was previously registered" node="ci-4372.0.0-d-47135505f9" Jun 21 05:28:37.129231 kubelet[2678]: I0621 05:28:37.129123 2678 kubelet_node_status.go:75] "Successfully registered node" node="ci-4372.0.0-d-47135505f9" Jun 21 05:28:37.138050 kubelet[2678]: I0621 05:28:37.138003 2678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c01bd59b88ff1246d6e04066dc51de04-k8s-certs\") pod \"kube-apiserver-ci-4372.0.0-d-47135505f9\" (UID: \"c01bd59b88ff1246d6e04066dc51de04\") " pod="kube-system/kube-apiserver-ci-4372.0.0-d-47135505f9" Jun 21 05:28:37.138247 kubelet[2678]: I0621 05:28:37.138048 2678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/877615f28fdfc9f192d36a28879733c3-ca-certs\") pod \"kube-controller-manager-ci-4372.0.0-d-47135505f9\" (UID: \"877615f28fdfc9f192d36a28879733c3\") " pod="kube-system/kube-controller-manager-ci-4372.0.0-d-47135505f9" Jun 21 05:28:37.138247 kubelet[2678]: I0621 05:28:37.138114 2678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/877615f28fdfc9f192d36a28879733c3-flexvolume-dir\") pod \"kube-controller-manager-ci-4372.0.0-d-47135505f9\" (UID: \"877615f28fdfc9f192d36a28879733c3\") " pod="kube-system/kube-controller-manager-ci-4372.0.0-d-47135505f9" Jun 21 05:28:37.138247 kubelet[2678]: I0621 05:28:37.138151 2678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9c91829c6b8ded1d2c2517f460878460-kubeconfig\") pod \"kube-scheduler-ci-4372.0.0-d-47135505f9\" (UID: \"9c91829c6b8ded1d2c2517f460878460\") " pod="kube-system/kube-scheduler-ci-4372.0.0-d-47135505f9" Jun 21 05:28:37.138247 kubelet[2678]: I0621 05:28:37.138173 2678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c01bd59b88ff1246d6e04066dc51de04-ca-certs\") pod \"kube-apiserver-ci-4372.0.0-d-47135505f9\" (UID: \"c01bd59b88ff1246d6e04066dc51de04\") " pod="kube-system/kube-apiserver-ci-4372.0.0-d-47135505f9" Jun 21 05:28:37.138247 kubelet[2678]: I0621 05:28:37.138190 2678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/877615f28fdfc9f192d36a28879733c3-k8s-certs\") pod \"kube-controller-manager-ci-4372.0.0-d-47135505f9\" (UID: \"877615f28fdfc9f192d36a28879733c3\") " pod="kube-system/kube-controller-manager-ci-4372.0.0-d-47135505f9" Jun 21 05:28:37.138431 kubelet[2678]: I0621 05:28:37.138234 2678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/877615f28fdfc9f192d36a28879733c3-kubeconfig\") pod \"kube-controller-manager-ci-4372.0.0-d-47135505f9\" (UID: \"877615f28fdfc9f192d36a28879733c3\") " pod="kube-system/kube-controller-manager-ci-4372.0.0-d-47135505f9" Jun 21 05:28:37.138431 kubelet[2678]: I0621 
05:28:37.138250 2678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/877615f28fdfc9f192d36a28879733c3-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4372.0.0-d-47135505f9\" (UID: \"877615f28fdfc9f192d36a28879733c3\") " pod="kube-system/kube-controller-manager-ci-4372.0.0-d-47135505f9" Jun 21 05:28:37.138431 kubelet[2678]: I0621 05:28:37.138267 2678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c01bd59b88ff1246d6e04066dc51de04-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4372.0.0-d-47135505f9\" (UID: \"c01bd59b88ff1246d6e04066dc51de04\") " pod="kube-system/kube-apiserver-ci-4372.0.0-d-47135505f9" Jun 21 05:28:37.366257 kubelet[2678]: E0621 05:28:37.365675 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 21 05:28:37.370787 kubelet[2678]: E0621 05:28:37.370710 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 21 05:28:37.370787 kubelet[2678]: E0621 05:28:37.370726 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 21 05:28:37.722913 sudo[2710]: pam_unix(sudo:session): session closed for user root Jun 21 05:28:37.898182 kubelet[2678]: I0621 05:28:37.898092 2678 apiserver.go:52] "Watching apiserver" Jun 21 05:28:37.934354 kubelet[2678]: I0621 05:28:37.934257 2678 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jun 21 05:28:37.982430 kubelet[2678]: E0621 05:28:37.981704 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 21 05:28:37.983764 kubelet[2678]: E0621 05:28:37.983733 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 21 05:28:37.992337 kubelet[2678]: W0621 05:28:37.992282 2678 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 21 05:28:37.992546 kubelet[2678]: E0621 05:28:37.992389 2678 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4372.0.0-d-47135505f9\" already exists" pod="kube-system/kube-apiserver-ci-4372.0.0-d-47135505f9" Jun 21 05:28:37.993176 kubelet[2678]: E0621 05:28:37.992621 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 21 05:28:38.048478 kubelet[2678]: I0621 05:28:38.048379 2678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4372.0.0-d-47135505f9" podStartSLOduration=1.048354319 podStartE2EDuration="1.048354319s" podCreationTimestamp="2025-06-21 05:28:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-21 05:28:38.034672847 +0000 UTC m=+1.247331092" watchObservedRunningTime="2025-06-21 05:28:38.048354319 +0000 UTC m=+1.261012556" Jun 21 05:28:38.048732 kubelet[2678]: I0621 05:28:38.048534 2678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4372.0.0-d-47135505f9" podStartSLOduration=3.048523224 podStartE2EDuration="3.048523224s" podCreationTimestamp="2025-06-21 05:28:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-21 05:28:38.04833547 +0000 UTC m=+1.260993710" watchObservedRunningTime="2025-06-21 05:28:38.048523224 +0000 UTC m=+1.261181466" Jun 21 05:28:38.127334 kubelet[2678]: I0621 05:28:38.126372 2678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4372.0.0-d-47135505f9" podStartSLOduration=2.126338909 podStartE2EDuration="2.126338909s" podCreationTimestamp="2025-06-21 05:28:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-21 05:28:38.063957554 +0000 UTC m=+1.276615799" watchObservedRunningTime="2025-06-21 05:28:38.126338909 +0000 UTC m=+1.338997162" Jun 21 05:28:38.985014 kubelet[2678]: E0621 05:28:38.984902 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 21 05:28:38.986048 kubelet[2678]: E0621 05:28:38.985254 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 21 05:28:39.397018 kubelet[2678]: E0621 05:28:39.396879 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 21 05:28:39.938773 sudo[1768]: pam_unix(sudo:session): session closed for user root Jun 21 05:28:39.942742 sshd[1767]: Connection closed by 139.178.68.195 port 42036 Jun 21 05:28:39.943906 sshd-session[1765]: pam_unix(sshd:session): session closed for user core Jun 21 05:28:39.951380 systemd-logind[1504]: Session 7 logged out. Waiting for processes to exit. Jun 21 05:28:39.952187 systemd[1]: sshd@6-64.23.242.202:22-139.178.68.195:42036.service: Deactivated successfully. Jun 21 05:28:39.956215 systemd[1]: session-7.scope: Deactivated successfully. Jun 21 05:28:39.956907 systemd[1]: session-7.scope: Consumed 5.779s CPU time, 221.9M memory peak. Jun 21 05:28:39.961497 systemd-logind[1504]: Removed session 7. Jun 21 05:28:39.987095 kubelet[2678]: E0621 05:28:39.987042 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 21 05:28:42.734552 kubelet[2678]: I0621 05:28:42.734514 2678 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jun 21 05:28:42.735600 containerd[1528]: time="2025-06-21T05:28:42.735484590Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jun 21 05:28:42.735968 kubelet[2678]: I0621 05:28:42.735756 2678 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jun 21 05:28:42.947720 systemd[1]: Created slice kubepods-besteffort-pode10b9d5c_6484_4dea_9fc4_83c1257c772f.slice - libcontainer container kubepods-besteffort-pode10b9d5c_6484_4dea_9fc4_83c1257c772f.slice. Jun 21 05:28:42.969987 systemd[1]: Created slice kubepods-burstable-pod3a892538_ae7a_4d13_bf49_8f7bc0eb7436.slice - libcontainer container kubepods-burstable-pod3a892538_ae7a_4d13_bf49_8f7bc0eb7436.slice. Jun 21 05:28:42.983910 kubelet[2678]: I0621 05:28:42.982345 2678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3a892538-ae7a-4d13-bf49-8f7bc0eb7436-bpf-maps\") pod \"cilium-tt6kx\" (UID: \"3a892538-ae7a-4d13-bf49-8f7bc0eb7436\") " pod="kube-system/cilium-tt6kx" Jun 21 05:28:42.983910 kubelet[2678]: I0621 05:28:42.982421 2678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3a892538-ae7a-4d13-bf49-8f7bc0eb7436-cilium-config-path\") pod \"cilium-tt6kx\" (UID: \"3a892538-ae7a-4d13-bf49-8f7bc0eb7436\") " pod="kube-system/cilium-tt6kx" Jun 21 05:28:42.983910 kubelet[2678]: I0621 05:28:42.982465 2678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksvqh\" (UniqueName: \"kubernetes.io/projected/e10b9d5c-6484-4dea-9fc4-83c1257c772f-kube-api-access-ksvqh\") pod \"kube-proxy-7ghzm\" (UID: \"e10b9d5c-6484-4dea-9fc4-83c1257c772f\") " pod="kube-system/kube-proxy-7ghzm" Jun 21 05:28:42.983910 kubelet[2678]: I0621 05:28:42.982497 2678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3a892538-ae7a-4d13-bf49-8f7bc0eb7436-host-proc-sys-net\") pod \"cilium-tt6kx\" (UID: \"3a892538-ae7a-4d13-bf49-8f7bc0eb7436\") " pod="kube-system/cilium-tt6kx" Jun 21 05:28:42.983910 kubelet[2678]: I0621 05:28:42.982524 2678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3a892538-ae7a-4d13-bf49-8f7bc0eb7436-hostproc\") pod \"cilium-tt6kx\" (UID: \"3a892538-ae7a-4d13-bf49-8f7bc0eb7436\") " pod="kube-system/cilium-tt6kx" Jun 21 05:28:42.983910 kubelet[2678]: I0621 05:28:42.982573 2678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3a892538-ae7a-4d13-bf49-8f7bc0eb7436-cilium-run\") pod \"cilium-tt6kx\" (UID: \"3a892538-ae7a-4d13-bf49-8f7bc0eb7436\") " pod="kube-system/cilium-tt6kx" Jun 21 05:28:42.984396 kubelet[2678]: I0621 05:28:42.982603 2678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3a892538-ae7a-4d13-bf49-8f7bc0eb7436-hubble-tls\") pod \"cilium-tt6kx\" (UID: \"3a892538-ae7a-4d13-bf49-8f7bc0eb7436\") " pod="kube-system/cilium-tt6kx" Jun 21 05:28:42.984396 kubelet[2678]: I0621 05:28:42.982634 2678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e10b9d5c-6484-4dea-9fc4-83c1257c772f-kube-proxy\") pod \"kube-proxy-7ghzm\" (UID: \"e10b9d5c-6484-4dea-9fc4-83c1257c772f\") " 
pod="kube-system/kube-proxy-7ghzm" Jun 21 05:28:42.984396 kubelet[2678]: I0621 05:28:42.982662 2678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e10b9d5c-6484-4dea-9fc4-83c1257c772f-xtables-lock\") pod \"kube-proxy-7ghzm\" (UID: \"e10b9d5c-6484-4dea-9fc4-83c1257c772f\") " pod="kube-system/kube-proxy-7ghzm" Jun 21 05:28:42.984396 kubelet[2678]: I0621 05:28:42.982720 2678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3a892538-ae7a-4d13-bf49-8f7bc0eb7436-cni-path\") pod \"cilium-tt6kx\" (UID: \"3a892538-ae7a-4d13-bf49-8f7bc0eb7436\") " pod="kube-system/cilium-tt6kx" Jun 21 05:28:42.984396 kubelet[2678]: I0621 05:28:42.982776 2678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3a892538-ae7a-4d13-bf49-8f7bc0eb7436-host-proc-sys-kernel\") pod \"cilium-tt6kx\" (UID: \"3a892538-ae7a-4d13-bf49-8f7bc0eb7436\") " pod="kube-system/cilium-tt6kx" Jun 21 05:28:42.984396 kubelet[2678]: I0621 05:28:42.982843 2678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2q4qt\" (UniqueName: \"kubernetes.io/projected/3a892538-ae7a-4d13-bf49-8f7bc0eb7436-kube-api-access-2q4qt\") pod \"cilium-tt6kx\" (UID: \"3a892538-ae7a-4d13-bf49-8f7bc0eb7436\") " pod="kube-system/cilium-tt6kx" Jun 21 05:28:42.986103 kubelet[2678]: I0621 05:28:42.982874 2678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3a892538-ae7a-4d13-bf49-8f7bc0eb7436-etc-cni-netd\") pod \"cilium-tt6kx\" (UID: \"3a892538-ae7a-4d13-bf49-8f7bc0eb7436\") " pod="kube-system/cilium-tt6kx" Jun 21 05:28:42.986103 kubelet[2678]: I0621 05:28:42.982902 2678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3a892538-ae7a-4d13-bf49-8f7bc0eb7436-clustermesh-secrets\") pod \"cilium-tt6kx\" (UID: \"3a892538-ae7a-4d13-bf49-8f7bc0eb7436\") " pod="kube-system/cilium-tt6kx" Jun 21 05:28:42.986103 kubelet[2678]: I0621 05:28:42.982950 2678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e10b9d5c-6484-4dea-9fc4-83c1257c772f-lib-modules\") pod \"kube-proxy-7ghzm\" (UID: \"e10b9d5c-6484-4dea-9fc4-83c1257c772f\") " pod="kube-system/kube-proxy-7ghzm" Jun 21 05:28:42.986103 kubelet[2678]: I0621 05:28:42.982981 2678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3a892538-ae7a-4d13-bf49-8f7bc0eb7436-cilium-cgroup\") pod \"cilium-tt6kx\" (UID: \"3a892538-ae7a-4d13-bf49-8f7bc0eb7436\") " pod="kube-system/cilium-tt6kx" Jun 21 05:28:42.986103 kubelet[2678]: I0621 05:28:42.983011 2678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3a892538-ae7a-4d13-bf49-8f7bc0eb7436-lib-modules\") pod \"cilium-tt6kx\" (UID: \"3a892538-ae7a-4d13-bf49-8f7bc0eb7436\") " pod="kube-system/cilium-tt6kx" Jun 21 05:28:42.986103 kubelet[2678]: I0621 05:28:42.983038 2678 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3a892538-ae7a-4d13-bf49-8f7bc0eb7436-xtables-lock\") pod \"cilium-tt6kx\" (UID: \"3a892538-ae7a-4d13-bf49-8f7bc0eb7436\") " pod="kube-system/cilium-tt6kx" Jun 21 05:28:43.106357 kubelet[2678]: E0621 05:28:43.105056 2678 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jun 21 05:28:43.106357 kubelet[2678]: E0621 05:28:43.105115 2678 projected.go:194] Error preparing data for projected volume kube-api-access-ksvqh for pod kube-system/kube-proxy-7ghzm: configmap "kube-root-ca.crt" not found Jun 21 05:28:43.106357 kubelet[2678]: E0621 05:28:43.105221 2678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e10b9d5c-6484-4dea-9fc4-83c1257c772f-kube-api-access-ksvqh podName:e10b9d5c-6484-4dea-9fc4-83c1257c772f nodeName:}" failed. No retries permitted until 2025-06-21 05:28:43.60518698 +0000 UTC m=+6.817845217 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-ksvqh" (UniqueName: "kubernetes.io/projected/e10b9d5c-6484-4dea-9fc4-83c1257c772f-kube-api-access-ksvqh") pod "kube-proxy-7ghzm" (UID: "e10b9d5c-6484-4dea-9fc4-83c1257c772f") : configmap "kube-root-ca.crt" not found Jun 21 05:28:43.106357 kubelet[2678]: E0621 05:28:43.105898 2678 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jun 21 05:28:43.106357 kubelet[2678]: E0621 05:28:43.105922 2678 projected.go:194] Error preparing data for projected volume kube-api-access-2q4qt for pod kube-system/cilium-tt6kx: configmap "kube-root-ca.crt" not found Jun 21 05:28:43.106357 kubelet[2678]: E0621 05:28:43.105966 2678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3a892538-ae7a-4d13-bf49-8f7bc0eb7436-kube-api-access-2q4qt podName:3a892538-ae7a-4d13-bf49-8f7bc0eb7436 nodeName:}" failed. No retries permitted until 2025-06-21 05:28:43.605949898 +0000 UTC m=+6.818608121 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-2q4qt" (UniqueName: "kubernetes.io/projected/3a892538-ae7a-4d13-bf49-8f7bc0eb7436-kube-api-access-2q4qt") pod "cilium-tt6kx" (UID: "3a892538-ae7a-4d13-bf49-8f7bc0eb7436") : configmap "kube-root-ca.crt" not found Jun 21 05:28:43.690529 kubelet[2678]: E0621 05:28:43.690446 2678 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jun 21 05:28:43.690529 kubelet[2678]: E0621 05:28:43.690531 2678 projected.go:194] Error preparing data for projected volume kube-api-access-2q4qt for pod kube-system/cilium-tt6kx: configmap "kube-root-ca.crt" not found Jun 21 05:28:43.690864 kubelet[2678]: E0621 05:28:43.690618 2678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3a892538-ae7a-4d13-bf49-8f7bc0eb7436-kube-api-access-2q4qt podName:3a892538-ae7a-4d13-bf49-8f7bc0eb7436 nodeName:}" failed. No retries permitted until 2025-06-21 05:28:44.690594172 +0000 UTC m=+7.903252410 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-2q4qt" (UniqueName: "kubernetes.io/projected/3a892538-ae7a-4d13-bf49-8f7bc0eb7436-kube-api-access-2q4qt") pod "cilium-tt6kx" (UID: "3a892538-ae7a-4d13-bf49-8f7bc0eb7436") : configmap "kube-root-ca.crt" not found Jun 21 05:28:43.691529 kubelet[2678]: E0621 05:28:43.691484 2678 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jun 21 05:28:43.691529 kubelet[2678]: E0621 05:28:43.691531 2678 projected.go:194] Error preparing data for projected volume kube-api-access-ksvqh for pod kube-system/kube-proxy-7ghzm: configmap "kube-root-ca.crt" not found Jun 21 05:28:43.691720 kubelet[2678]: E0621 05:28:43.691604 2678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e10b9d5c-6484-4dea-9fc4-83c1257c772f-kube-api-access-ksvqh podName:e10b9d5c-6484-4dea-9fc4-83c1257c772f nodeName:}" failed. No retries permitted until 2025-06-21 05:28:44.691576968 +0000 UTC m=+7.904235223 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-ksvqh" (UniqueName: "kubernetes.io/projected/e10b9d5c-6484-4dea-9fc4-83c1257c772f-kube-api-access-ksvqh") pod "kube-proxy-7ghzm" (UID: "e10b9d5c-6484-4dea-9fc4-83c1257c772f") : configmap "kube-root-ca.crt" not found Jun 21 05:28:43.836833 systemd[1]: Created slice kubepods-besteffort-pod4d5b46bc_8771_46d9_821a_5cb1cb3655d8.slice - libcontainer container kubepods-besteffort-pod4d5b46bc_8771_46d9_821a_5cb1cb3655d8.slice. Jun 21 05:28:43.892064 kubelet[2678]: I0621 05:28:43.891983 2678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2tcg\" (UniqueName: \"kubernetes.io/projected/4d5b46bc-8771-46d9-821a-5cb1cb3655d8-kube-api-access-c2tcg\") pod \"cilium-operator-5d85765b45-8d8nq\" (UID: \"4d5b46bc-8771-46d9-821a-5cb1cb3655d8\") " pod="kube-system/cilium-operator-5d85765b45-8d8nq" Jun 21 05:28:43.892064 kubelet[2678]: I0621 05:28:43.892070 2678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4d5b46bc-8771-46d9-821a-5cb1cb3655d8-cilium-config-path\") pod \"cilium-operator-5d85765b45-8d8nq\" (UID: \"4d5b46bc-8771-46d9-821a-5cb1cb3655d8\") " pod="kube-system/cilium-operator-5d85765b45-8d8nq" Jun 21 05:28:44.144579 kubelet[2678]: E0621 05:28:44.144459 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 21 05:28:44.145834 containerd[1528]: time="2025-06-21T05:28:44.145525281Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-8d8nq,Uid:4d5b46bc-8771-46d9-821a-5cb1cb3655d8,Namespace:kube-system,Attempt:0,}" Jun 21 05:28:44.189122 containerd[1528]: time="2025-06-21T05:28:44.188830547Z" level=info msg="connecting to shim 02102bbec33283547fe27f623daf3153246f8069571354eb64b433faad3345a1" address="unix:///run/containerd/s/f5521de6f104bd5dba59bb233bcf043e5db22920d8650cb8a9f648822fc0f193" namespace=k8s.io protocol=ttrpc version=3 Jun 21 05:28:44.243708 systemd[1]: Started cri-containerd-02102bbec33283547fe27f623daf3153246f8069571354eb64b433faad3345a1.scope - libcontainer container 02102bbec33283547fe27f623daf3153246f8069571354eb64b433faad3345a1. 
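(Annotation.) The MountVolume.SetUp failures above are retried with a growing delay: the first "No retries permitted until" entry is 500ms after the failure and the next is 1s, i.e. a doubling backoff. A small illustrative sketch of that pattern follows; the cap and step count are made-up illustration values, not taken from the log.

    # Illustrative doubling backoff like the durationBeforeRetry values above
    # (500ms, then 1s). The cap is an assumed illustration value.
    from datetime import timedelta

    def backoff_schedule(initial=timedelta(milliseconds=500),
                         factor=2,
                         cap=timedelta(minutes=2),
                         steps=6):
        delay = initial
        schedule = []
        for _ in range(steps):
            schedule.append(delay)
            delay = min(delay * factor, cap)
        return schedule

    print([str(d) for d in backoff_schedule()])
    # ['0:00:00.500000', '0:00:01', '0:00:02', '0:00:04', '0:00:08', '0:00:16']
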
Jun 21 05:28:44.320632 containerd[1528]: time="2025-06-21T05:28:44.320534742Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-8d8nq,Uid:4d5b46bc-8771-46d9-821a-5cb1cb3655d8,Namespace:kube-system,Attempt:0,} returns sandbox id \"02102bbec33283547fe27f623daf3153246f8069571354eb64b433faad3345a1\"" Jun 21 05:28:44.323271 kubelet[2678]: E0621 05:28:44.323218 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 21 05:28:44.327155 containerd[1528]: time="2025-06-21T05:28:44.327104472Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jun 21 05:28:44.331609 systemd-resolved[1397]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.2. Jun 21 05:28:44.764502 kubelet[2678]: E0621 05:28:44.764451 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 21 05:28:44.765336 containerd[1528]: time="2025-06-21T05:28:44.765221622Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7ghzm,Uid:e10b9d5c-6484-4dea-9fc4-83c1257c772f,Namespace:kube-system,Attempt:0,}" Jun 21 05:28:44.777858 kubelet[2678]: E0621 05:28:44.777044 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 21 05:28:44.778451 containerd[1528]: time="2025-06-21T05:28:44.778197784Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tt6kx,Uid:3a892538-ae7a-4d13-bf49-8f7bc0eb7436,Namespace:kube-system,Attempt:0,}" Jun 21 05:28:44.802457 containerd[1528]: time="2025-06-21T05:28:44.802292436Z" level=info msg="connecting to shim e8c40c82acb7a1bedc443a772e8ee123b9fac55e67ca2ca91e231cd739f30666" address="unix:///run/containerd/s/cdc6068746d53fcc9599aeba44206d63989608866c80daa3e2624a486c790cb6" namespace=k8s.io protocol=ttrpc version=3 Jun 21 05:28:44.835574 containerd[1528]: time="2025-06-21T05:28:44.835432069Z" level=info msg="connecting to shim f7115590d1ac76298bf0345bf8b52d4fe923c98708f6ba19f2646a8dccc5dd0b" address="unix:///run/containerd/s/3e9551efbecbba6a255406c0627d9595f6d574fea3ec76765200b8fc5ab6ead6" namespace=k8s.io protocol=ttrpc version=3 Jun 21 05:28:44.867874 systemd[1]: Started cri-containerd-e8c40c82acb7a1bedc443a772e8ee123b9fac55e67ca2ca91e231cd739f30666.scope - libcontainer container e8c40c82acb7a1bedc443a772e8ee123b9fac55e67ca2ca91e231cd739f30666. Jun 21 05:28:44.896688 systemd[1]: Started cri-containerd-f7115590d1ac76298bf0345bf8b52d4fe923c98708f6ba19f2646a8dccc5dd0b.scope - libcontainer container f7115590d1ac76298bf0345bf8b52d4fe923c98708f6ba19f2646a8dccc5dd0b. 
Jun 21 05:28:44.962275 containerd[1528]: time="2025-06-21T05:28:44.962154696Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7ghzm,Uid:e10b9d5c-6484-4dea-9fc4-83c1257c772f,Namespace:kube-system,Attempt:0,} returns sandbox id \"e8c40c82acb7a1bedc443a772e8ee123b9fac55e67ca2ca91e231cd739f30666\"" Jun 21 05:28:44.963425 kubelet[2678]: E0621 05:28:44.963390 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 21 05:28:44.971936 containerd[1528]: time="2025-06-21T05:28:44.970981054Z" level=info msg="CreateContainer within sandbox \"e8c40c82acb7a1bedc443a772e8ee123b9fac55e67ca2ca91e231cd739f30666\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jun 21 05:28:44.986386 containerd[1528]: time="2025-06-21T05:28:44.986328267Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tt6kx,Uid:3a892538-ae7a-4d13-bf49-8f7bc0eb7436,Namespace:kube-system,Attempt:0,} returns sandbox id \"f7115590d1ac76298bf0345bf8b52d4fe923c98708f6ba19f2646a8dccc5dd0b\"" Jun 21 05:28:44.988383 kubelet[2678]: E0621 05:28:44.988348 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 21 05:28:44.992703 containerd[1528]: time="2025-06-21T05:28:44.992637498Z" level=info msg="Container 1c767fcbe78f8da37e34ab468e8623d6ee7cb43a5849abb63e7d138f6822fbdc: CDI devices from CRI Config.CDIDevices: []" Jun 21 05:28:45.012643 containerd[1528]: time="2025-06-21T05:28:45.012544411Z" level=info msg="CreateContainer within sandbox \"e8c40c82acb7a1bedc443a772e8ee123b9fac55e67ca2ca91e231cd739f30666\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"1c767fcbe78f8da37e34ab468e8623d6ee7cb43a5849abb63e7d138f6822fbdc\"" Jun 21 05:28:45.013654 containerd[1528]: time="2025-06-21T05:28:45.013584685Z" level=info msg="StartContainer for \"1c767fcbe78f8da37e34ab468e8623d6ee7cb43a5849abb63e7d138f6822fbdc\"" Jun 21 05:28:45.022567 containerd[1528]: time="2025-06-21T05:28:45.021630291Z" level=info msg="connecting to shim 1c767fcbe78f8da37e34ab468e8623d6ee7cb43a5849abb63e7d138f6822fbdc" address="unix:///run/containerd/s/cdc6068746d53fcc9599aeba44206d63989608866c80daa3e2624a486c790cb6" protocol=ttrpc version=3 Jun 21 05:28:45.055681 systemd[1]: Started cri-containerd-1c767fcbe78f8da37e34ab468e8623d6ee7cb43a5849abb63e7d138f6822fbdc.scope - libcontainer container 1c767fcbe78f8da37e34ab468e8623d6ee7cb43a5849abb63e7d138f6822fbdc. Jun 21 05:28:45.123440 containerd[1528]: time="2025-06-21T05:28:45.123140628Z" level=info msg="StartContainer for \"1c767fcbe78f8da37e34ab468e8623d6ee7cb43a5849abb63e7d138f6822fbdc\" returns successfully" Jun 21 05:28:45.745203 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3527422205.mount: Deactivated successfully. 
Jun 21 05:28:46.032705 kubelet[2678]: E0621 05:28:46.032080 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 21 05:28:46.047524 kubelet[2678]: I0621 05:28:46.047436 2678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-7ghzm" podStartSLOduration=4.047407192 podStartE2EDuration="4.047407192s" podCreationTimestamp="2025-06-21 05:28:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-21 05:28:46.046433447 +0000 UTC m=+9.259091691" watchObservedRunningTime="2025-06-21 05:28:46.047407192 +0000 UTC m=+9.260065432" Jun 21 05:28:47.033246 kubelet[2678]: E0621 05:28:47.033200 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 21 05:28:48.289824 kubelet[2678]: E0621 05:28:48.289774 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 21 05:28:48.303340 containerd[1528]: time="2025-06-21T05:28:48.303068638Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:28:48.306109 containerd[1528]: time="2025-06-21T05:28:48.305282287Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jun 21 05:28:48.308040 containerd[1528]: time="2025-06-21T05:28:48.307979186Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:28:48.311424 containerd[1528]: time="2025-06-21T05:28:48.311320755Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 3.983926801s" Jun 21 05:28:48.311424 containerd[1528]: time="2025-06-21T05:28:48.311384113Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jun 21 05:28:48.316832 containerd[1528]: time="2025-06-21T05:28:48.316744514Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jun 21 05:28:48.323931 containerd[1528]: time="2025-06-21T05:28:48.323844221Z" level=info msg="CreateContainer within sandbox \"02102bbec33283547fe27f623daf3153246f8069571354eb64b433faad3345a1\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jun 21 05:28:48.338344 containerd[1528]: time="2025-06-21T05:28:48.336993013Z" level=info msg="Container 25e15a28d16b7224aabc04e8f0c858776a61a6275b6dbc2230577dc753312bf6: CDI 
devices from CRI Config.CDIDevices: []" Jun 21 05:28:48.344471 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1474020979.mount: Deactivated successfully. Jun 21 05:28:48.347965 containerd[1528]: time="2025-06-21T05:28:48.347886699Z" level=info msg="CreateContainer within sandbox \"02102bbec33283547fe27f623daf3153246f8069571354eb64b433faad3345a1\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"25e15a28d16b7224aabc04e8f0c858776a61a6275b6dbc2230577dc753312bf6\"" Jun 21 05:28:48.349281 containerd[1528]: time="2025-06-21T05:28:48.349193809Z" level=info msg="StartContainer for \"25e15a28d16b7224aabc04e8f0c858776a61a6275b6dbc2230577dc753312bf6\"" Jun 21 05:28:48.350714 containerd[1528]: time="2025-06-21T05:28:48.350600941Z" level=info msg="connecting to shim 25e15a28d16b7224aabc04e8f0c858776a61a6275b6dbc2230577dc753312bf6" address="unix:///run/containerd/s/f5521de6f104bd5dba59bb233bcf043e5db22920d8650cb8a9f648822fc0f193" protocol=ttrpc version=3 Jun 21 05:28:48.383641 systemd[1]: Started cri-containerd-25e15a28d16b7224aabc04e8f0c858776a61a6275b6dbc2230577dc753312bf6.scope - libcontainer container 25e15a28d16b7224aabc04e8f0c858776a61a6275b6dbc2230577dc753312bf6. Jun 21 05:28:48.438387 containerd[1528]: time="2025-06-21T05:28:48.438293755Z" level=info msg="StartContainer for \"25e15a28d16b7224aabc04e8f0c858776a61a6275b6dbc2230577dc753312bf6\" returns successfully" Jun 21 05:28:48.698417 kubelet[2678]: E0621 05:28:48.698364 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 21 05:28:49.047321 kubelet[2678]: E0621 05:28:49.046832 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 21 05:28:49.412193 kubelet[2678]: E0621 05:28:49.411551 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 21 05:28:49.479223 kubelet[2678]: I0621 05:28:49.479134 2678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-8d8nq" podStartSLOduration=2.489163854 podStartE2EDuration="6.479113957s" podCreationTimestamp="2025-06-21 05:28:43 +0000 UTC" firstStartedPulling="2025-06-21 05:28:44.326034198 +0000 UTC m=+7.538692430" lastFinishedPulling="2025-06-21 05:28:48.315984291 +0000 UTC m=+11.528642533" observedRunningTime="2025-06-21 05:28:49.191153124 +0000 UTC m=+12.403811363" watchObservedRunningTime="2025-06-21 05:28:49.479113957 +0000 UTC m=+12.691772201" Jun 21 05:28:50.049207 kubelet[2678]: E0621 05:28:50.049165 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 21 05:28:53.535333 update_engine[1509]: I20250621 05:28:53.534613 1509 update_attempter.cc:509] Updating boot flags... Jun 21 05:28:54.227091 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2781887329.mount: Deactivated successfully. 
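(Annotation.) The "Observed pod startup duration" entries above can be reproduced from the timestamps they carry: for kube-proxy-7ghzm, podCreationTimestamp is 05:28:42 and watchObservedRunningTime is 05:28:46.047407192, which matches the reported podStartSLOduration of 4.047407192s; for the cilium-operator pod the roughly 3.99s spent pulling the image appears to be excluded, so the SLO figure (about 2.49s) is smaller than the end-to-end figure (about 6.48s). That exclusion is inferred from the numbers, not stated in the log. A worked check in Python, with timestamps truncated to microseconds:

    # Worked check of the startup-duration figures logged above.
    from datetime import datetime

    fmt = "%Y-%m-%d %H:%M:%S.%f"
    created  = datetime.strptime("2025-06-21 05:28:42.000000", fmt)
    observed = datetime.strptime("2025-06-21 05:28:46.047407", fmt)
    print((observed - created).total_seconds())   # ~4.047407, matching podStartSLOduration

    # cilium-operator: pull time appears to be excluded from the SLO figure.
    pull_start = datetime.strptime("2025-06-21 05:28:44.326034", fmt)
    pull_end   = datetime.strptime("2025-06-21 05:28:48.315984", fmt)
    e2e = 6.479113957
    print(e2e - (pull_end - pull_start).total_seconds())  # ~2.489, close to the reported value
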
Jun 21 05:28:56.673137 containerd[1528]: time="2025-06-21T05:28:56.673068994Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:28:56.675084 containerd[1528]: time="2025-06-21T05:28:56.675024690Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jun 21 05:28:56.675875 containerd[1528]: time="2025-06-21T05:28:56.675750843Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:28:56.677128 containerd[1528]: time="2025-06-21T05:28:56.676842798Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 8.360039562s" Jun 21 05:28:56.677128 containerd[1528]: time="2025-06-21T05:28:56.676901169Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jun 21 05:28:56.679848 containerd[1528]: time="2025-06-21T05:28:56.679803905Z" level=info msg="CreateContainer within sandbox \"f7115590d1ac76298bf0345bf8b52d4fe923c98708f6ba19f2646a8dccc5dd0b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jun 21 05:28:56.747322 containerd[1528]: time="2025-06-21T05:28:56.746207818Z" level=info msg="Container 45be10e72f92db15bb911ea51db8f6518cdcb1f31983bc88523a27a151797f7e: CDI devices from CRI Config.CDIDevices: []" Jun 21 05:28:56.761131 containerd[1528]: time="2025-06-21T05:28:56.761067392Z" level=info msg="CreateContainer within sandbox \"f7115590d1ac76298bf0345bf8b52d4fe923c98708f6ba19f2646a8dccc5dd0b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"45be10e72f92db15bb911ea51db8f6518cdcb1f31983bc88523a27a151797f7e\"" Jun 21 05:28:56.764334 containerd[1528]: time="2025-06-21T05:28:56.762506004Z" level=info msg="StartContainer for \"45be10e72f92db15bb911ea51db8f6518cdcb1f31983bc88523a27a151797f7e\"" Jun 21 05:28:56.764898 containerd[1528]: time="2025-06-21T05:28:56.764847297Z" level=info msg="connecting to shim 45be10e72f92db15bb911ea51db8f6518cdcb1f31983bc88523a27a151797f7e" address="unix:///run/containerd/s/3e9551efbecbba6a255406c0627d9595f6d574fea3ec76765200b8fc5ab6ead6" protocol=ttrpc version=3 Jun 21 05:28:56.797604 systemd[1]: Started cri-containerd-45be10e72f92db15bb911ea51db8f6518cdcb1f31983bc88523a27a151797f7e.scope - libcontainer container 45be10e72f92db15bb911ea51db8f6518cdcb1f31983bc88523a27a151797f7e. Jun 21 05:28:56.841334 containerd[1528]: time="2025-06-21T05:28:56.841232013Z" level=info msg="StartContainer for \"45be10e72f92db15bb911ea51db8f6518cdcb1f31983bc88523a27a151797f7e\" returns successfully" Jun 21 05:28:56.858899 systemd[1]: cri-containerd-45be10e72f92db15bb911ea51db8f6518cdcb1f31983bc88523a27a151797f7e.scope: Deactivated successfully. 
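(Annotation.) The pull records above also give a rough sense of effective download rate: the cilium image reports 166,730,503 bytes read over 8.360039562s (about 20 MB/s) and the earlier operator image 18,904,197 bytes over about 3.98s (about 4.7 MB/s). A quick check, assuming the "bytes read" counter spans the whole pull window, which the log does not state:

    # Rough effective pull rate from the byte counts and durations logged above.
    pulls = {
        "cilium-operator": (18_904_197, 3.983926801),
        "cilium":          (166_730_503, 8.360039562),
    }
    for name, (nbytes, secs) in pulls.items():
        print(f"{name}: {nbytes / secs / 1e6:.1f} MB/s")
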
Jun 21 05:28:56.975349 containerd[1528]: time="2025-06-21T05:28:56.974240152Z" level=info msg="TaskExit event in podsandbox handler container_id:\"45be10e72f92db15bb911ea51db8f6518cdcb1f31983bc88523a27a151797f7e\" id:\"45be10e72f92db15bb911ea51db8f6518cdcb1f31983bc88523a27a151797f7e\" pid:3154 exited_at:{seconds:1750483736 nanos:864952557}" Jun 21 05:28:56.979947 containerd[1528]: time="2025-06-21T05:28:56.979734842Z" level=info msg="received exit event container_id:\"45be10e72f92db15bb911ea51db8f6518cdcb1f31983bc88523a27a151797f7e\" id:\"45be10e72f92db15bb911ea51db8f6518cdcb1f31983bc88523a27a151797f7e\" pid:3154 exited_at:{seconds:1750483736 nanos:864952557}" Jun 21 05:28:57.027373 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-45be10e72f92db15bb911ea51db8f6518cdcb1f31983bc88523a27a151797f7e-rootfs.mount: Deactivated successfully. Jun 21 05:28:57.092654 kubelet[2678]: E0621 05:28:57.092553 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 21 05:28:57.099505 containerd[1528]: time="2025-06-21T05:28:57.099274933Z" level=info msg="CreateContainer within sandbox \"f7115590d1ac76298bf0345bf8b52d4fe923c98708f6ba19f2646a8dccc5dd0b\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jun 21 05:28:57.108188 containerd[1528]: time="2025-06-21T05:28:57.108141466Z" level=info msg="Container b24ccd6c9ec8d040e81d335690e0b51adfac2e1c9d7ef10fa2b33ad1a5dce9a3: CDI devices from CRI Config.CDIDevices: []" Jun 21 05:28:57.124388 containerd[1528]: time="2025-06-21T05:28:57.124238559Z" level=info msg="CreateContainer within sandbox \"f7115590d1ac76298bf0345bf8b52d4fe923c98708f6ba19f2646a8dccc5dd0b\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"b24ccd6c9ec8d040e81d335690e0b51adfac2e1c9d7ef10fa2b33ad1a5dce9a3\"" Jun 21 05:28:57.125263 containerd[1528]: time="2025-06-21T05:28:57.125221596Z" level=info msg="StartContainer for \"b24ccd6c9ec8d040e81d335690e0b51adfac2e1c9d7ef10fa2b33ad1a5dce9a3\"" Jun 21 05:28:57.127108 containerd[1528]: time="2025-06-21T05:28:57.127068289Z" level=info msg="connecting to shim b24ccd6c9ec8d040e81d335690e0b51adfac2e1c9d7ef10fa2b33ad1a5dce9a3" address="unix:///run/containerd/s/3e9551efbecbba6a255406c0627d9595f6d574fea3ec76765200b8fc5ab6ead6" protocol=ttrpc version=3 Jun 21 05:28:57.162589 systemd[1]: Started cri-containerd-b24ccd6c9ec8d040e81d335690e0b51adfac2e1c9d7ef10fa2b33ad1a5dce9a3.scope - libcontainer container b24ccd6c9ec8d040e81d335690e0b51adfac2e1c9d7ef10fa2b33ad1a5dce9a3. Jun 21 05:28:57.208436 containerd[1528]: time="2025-06-21T05:28:57.208290053Z" level=info msg="StartContainer for \"b24ccd6c9ec8d040e81d335690e0b51adfac2e1c9d7ef10fa2b33ad1a5dce9a3\" returns successfully" Jun 21 05:28:57.225867 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jun 21 05:28:57.226597 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jun 21 05:28:57.227282 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jun 21 05:28:57.230006 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 21 05:28:57.235514 systemd[1]: cri-containerd-b24ccd6c9ec8d040e81d335690e0b51adfac2e1c9d7ef10fa2b33ad1a5dce9a3.scope: Deactivated successfully. 
Jun 21 05:28:57.237783 containerd[1528]: time="2025-06-21T05:28:57.237740170Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b24ccd6c9ec8d040e81d335690e0b51adfac2e1c9d7ef10fa2b33ad1a5dce9a3\" id:\"b24ccd6c9ec8d040e81d335690e0b51adfac2e1c9d7ef10fa2b33ad1a5dce9a3\" pid:3199 exited_at:{seconds:1750483737 nanos:237259099}" Jun 21 05:28:57.238036 containerd[1528]: time="2025-06-21T05:28:57.237956137Z" level=info msg="received exit event container_id:\"b24ccd6c9ec8d040e81d335690e0b51adfac2e1c9d7ef10fa2b33ad1a5dce9a3\" id:\"b24ccd6c9ec8d040e81d335690e0b51adfac2e1c9d7ef10fa2b33ad1a5dce9a3\" pid:3199 exited_at:{seconds:1750483737 nanos:237259099}" Jun 21 05:28:57.277046 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 21 05:28:58.098667 kubelet[2678]: E0621 05:28:58.098620 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 21 05:28:58.105273 containerd[1528]: time="2025-06-21T05:28:58.105172998Z" level=info msg="CreateContainer within sandbox \"f7115590d1ac76298bf0345bf8b52d4fe923c98708f6ba19f2646a8dccc5dd0b\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jun 21 05:28:58.128324 containerd[1528]: time="2025-06-21T05:28:58.127763495Z" level=info msg="Container ef7e5f640033dccb5e408953847b2b10266742339fb5e4d206bad072c3062772: CDI devices from CRI Config.CDIDevices: []" Jun 21 05:28:58.133790 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4240378276.mount: Deactivated successfully. Jun 21 05:28:58.140436 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3010285295.mount: Deactivated successfully. Jun 21 05:28:58.149999 containerd[1528]: time="2025-06-21T05:28:58.149900787Z" level=info msg="CreateContainer within sandbox \"f7115590d1ac76298bf0345bf8b52d4fe923c98708f6ba19f2646a8dccc5dd0b\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ef7e5f640033dccb5e408953847b2b10266742339fb5e4d206bad072c3062772\"" Jun 21 05:28:58.151609 containerd[1528]: time="2025-06-21T05:28:58.151574718Z" level=info msg="StartContainer for \"ef7e5f640033dccb5e408953847b2b10266742339fb5e4d206bad072c3062772\"" Jun 21 05:28:58.154913 containerd[1528]: time="2025-06-21T05:28:58.154844139Z" level=info msg="connecting to shim ef7e5f640033dccb5e408953847b2b10266742339fb5e4d206bad072c3062772" address="unix:///run/containerd/s/3e9551efbecbba6a255406c0627d9595f6d574fea3ec76765200b8fc5ab6ead6" protocol=ttrpc version=3 Jun 21 05:28:58.186670 systemd[1]: Started cri-containerd-ef7e5f640033dccb5e408953847b2b10266742339fb5e4d206bad072c3062772.scope - libcontainer container ef7e5f640033dccb5e408953847b2b10266742339fb5e4d206bad072c3062772. Jun 21 05:28:58.250793 systemd[1]: cri-containerd-ef7e5f640033dccb5e408953847b2b10266742339fb5e4d206bad072c3062772.scope: Deactivated successfully. 
Jun 21 05:28:58.253558 containerd[1528]: time="2025-06-21T05:28:58.253397023Z" level=info msg="received exit event container_id:\"ef7e5f640033dccb5e408953847b2b10266742339fb5e4d206bad072c3062772\" id:\"ef7e5f640033dccb5e408953847b2b10266742339fb5e4d206bad072c3062772\" pid:3246 exited_at:{seconds:1750483738 nanos:252068592}" Jun 21 05:28:58.254831 containerd[1528]: time="2025-06-21T05:28:58.253824835Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ef7e5f640033dccb5e408953847b2b10266742339fb5e4d206bad072c3062772\" id:\"ef7e5f640033dccb5e408953847b2b10266742339fb5e4d206bad072c3062772\" pid:3246 exited_at:{seconds:1750483738 nanos:252068592}" Jun 21 05:28:58.254831 containerd[1528]: time="2025-06-21T05:28:58.254071639Z" level=info msg="StartContainer for \"ef7e5f640033dccb5e408953847b2b10266742339fb5e4d206bad072c3062772\" returns successfully" Jun 21 05:28:58.746055 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ef7e5f640033dccb5e408953847b2b10266742339fb5e4d206bad072c3062772-rootfs.mount: Deactivated successfully. Jun 21 05:28:59.105026 kubelet[2678]: E0621 05:28:59.104963 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 21 05:28:59.109941 containerd[1528]: time="2025-06-21T05:28:59.109888695Z" level=info msg="CreateContainer within sandbox \"f7115590d1ac76298bf0345bf8b52d4fe923c98708f6ba19f2646a8dccc5dd0b\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jun 21 05:28:59.133633 containerd[1528]: time="2025-06-21T05:28:59.133485007Z" level=info msg="Container 8f9e19426d1bcece875bc3898d36fef50ba9e4400b256c50da9b74b3c86fb3d3: CDI devices from CRI Config.CDIDevices: []" Jun 21 05:28:59.142913 containerd[1528]: time="2025-06-21T05:28:59.142468662Z" level=info msg="CreateContainer within sandbox \"f7115590d1ac76298bf0345bf8b52d4fe923c98708f6ba19f2646a8dccc5dd0b\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"8f9e19426d1bcece875bc3898d36fef50ba9e4400b256c50da9b74b3c86fb3d3\"" Jun 21 05:28:59.145433 containerd[1528]: time="2025-06-21T05:28:59.143686211Z" level=info msg="StartContainer for \"8f9e19426d1bcece875bc3898d36fef50ba9e4400b256c50da9b74b3c86fb3d3\"" Jun 21 05:28:59.145859 containerd[1528]: time="2025-06-21T05:28:59.145772725Z" level=info msg="connecting to shim 8f9e19426d1bcece875bc3898d36fef50ba9e4400b256c50da9b74b3c86fb3d3" address="unix:///run/containerd/s/3e9551efbecbba6a255406c0627d9595f6d574fea3ec76765200b8fc5ab6ead6" protocol=ttrpc version=3 Jun 21 05:28:59.198688 systemd[1]: Started cri-containerd-8f9e19426d1bcece875bc3898d36fef50ba9e4400b256c50da9b74b3c86fb3d3.scope - libcontainer container 8f9e19426d1bcece875bc3898d36fef50ba9e4400b256c50da9b74b3c86fb3d3. Jun 21 05:28:59.233504 systemd[1]: cri-containerd-8f9e19426d1bcece875bc3898d36fef50ba9e4400b256c50da9b74b3c86fb3d3.scope: Deactivated successfully. 
Jun 21 05:28:59.237103 containerd[1528]: time="2025-06-21T05:28:59.237030082Z" level=info msg="received exit event container_id:\"8f9e19426d1bcece875bc3898d36fef50ba9e4400b256c50da9b74b3c86fb3d3\" id:\"8f9e19426d1bcece875bc3898d36fef50ba9e4400b256c50da9b74b3c86fb3d3\" pid:3284 exited_at:{seconds:1750483739 nanos:236510972}" Jun 21 05:28:59.237558 containerd[1528]: time="2025-06-21T05:28:59.237437273Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8f9e19426d1bcece875bc3898d36fef50ba9e4400b256c50da9b74b3c86fb3d3\" id:\"8f9e19426d1bcece875bc3898d36fef50ba9e4400b256c50da9b74b3c86fb3d3\" pid:3284 exited_at:{seconds:1750483739 nanos:236510972}" Jun 21 05:28:59.249439 containerd[1528]: time="2025-06-21T05:28:59.249286300Z" level=info msg="StartContainer for \"8f9e19426d1bcece875bc3898d36fef50ba9e4400b256c50da9b74b3c86fb3d3\" returns successfully" Jun 21 05:28:59.272133 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8f9e19426d1bcece875bc3898d36fef50ba9e4400b256c50da9b74b3c86fb3d3-rootfs.mount: Deactivated successfully. Jun 21 05:29:00.111580 kubelet[2678]: E0621 05:29:00.111383 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 21 05:29:00.118091 containerd[1528]: time="2025-06-21T05:29:00.117773170Z" level=info msg="CreateContainer within sandbox \"f7115590d1ac76298bf0345bf8b52d4fe923c98708f6ba19f2646a8dccc5dd0b\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jun 21 05:29:00.134923 containerd[1528]: time="2025-06-21T05:29:00.134828222Z" level=info msg="Container e5340475fde70979872be8b5a3d451f30f1219eb58cd347647f1b7888a61c7d2: CDI devices from CRI Config.CDIDevices: []" Jun 21 05:29:00.155792 containerd[1528]: time="2025-06-21T05:29:00.155611818Z" level=info msg="CreateContainer within sandbox \"f7115590d1ac76298bf0345bf8b52d4fe923c98708f6ba19f2646a8dccc5dd0b\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e5340475fde70979872be8b5a3d451f30f1219eb58cd347647f1b7888a61c7d2\"" Jun 21 05:29:00.157065 containerd[1528]: time="2025-06-21T05:29:00.156994743Z" level=info msg="StartContainer for \"e5340475fde70979872be8b5a3d451f30f1219eb58cd347647f1b7888a61c7d2\"" Jun 21 05:29:00.159875 containerd[1528]: time="2025-06-21T05:29:00.158905892Z" level=info msg="connecting to shim e5340475fde70979872be8b5a3d451f30f1219eb58cd347647f1b7888a61c7d2" address="unix:///run/containerd/s/3e9551efbecbba6a255406c0627d9595f6d574fea3ec76765200b8fc5ab6ead6" protocol=ttrpc version=3 Jun 21 05:29:00.192765 systemd[1]: Started cri-containerd-e5340475fde70979872be8b5a3d451f30f1219eb58cd347647f1b7888a61c7d2.scope - libcontainer container e5340475fde70979872be8b5a3d451f30f1219eb58cd347647f1b7888a61c7d2. 
Jun 21 05:29:00.248680 containerd[1528]: time="2025-06-21T05:29:00.248614797Z" level=info msg="StartContainer for \"e5340475fde70979872be8b5a3d451f30f1219eb58cd347647f1b7888a61c7d2\" returns successfully" Jun 21 05:29:00.415927 containerd[1528]: time="2025-06-21T05:29:00.413073374Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e5340475fde70979872be8b5a3d451f30f1219eb58cd347647f1b7888a61c7d2\" id:\"0e09464519b97dea1923376c0b7b64ed38b213af9f858bf6f52df23a31bc084e\" pid:3353 exited_at:{seconds:1750483740 nanos:408020200}" Jun 21 05:29:00.523974 kubelet[2678]: I0621 05:29:00.523934 2678 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jun 21 05:29:00.580958 systemd[1]: Created slice kubepods-burstable-pod9ad298b5_9803_4ed9_b093_b5662165969f.slice - libcontainer container kubepods-burstable-pod9ad298b5_9803_4ed9_b093_b5662165969f.slice. Jun 21 05:29:00.589488 systemd[1]: Created slice kubepods-burstable-podb5c5904a_3188_4523_884f_9b46364d5b47.slice - libcontainer container kubepods-burstable-podb5c5904a_3188_4523_884f_9b46364d5b47.slice. Jun 21 05:29:00.611114 kubelet[2678]: I0621 05:29:00.611057 2678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5xwv\" (UniqueName: \"kubernetes.io/projected/9ad298b5-9803-4ed9-b093-b5662165969f-kube-api-access-f5xwv\") pod \"coredns-7c65d6cfc9-95qfq\" (UID: \"9ad298b5-9803-4ed9-b093-b5662165969f\") " pod="kube-system/coredns-7c65d6cfc9-95qfq" Jun 21 05:29:00.611676 kubelet[2678]: I0621 05:29:00.611621 2678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b5c5904a-3188-4523-884f-9b46364d5b47-config-volume\") pod \"coredns-7c65d6cfc9-h7xkm\" (UID: \"b5c5904a-3188-4523-884f-9b46364d5b47\") " pod="kube-system/coredns-7c65d6cfc9-h7xkm" Jun 21 05:29:00.612089 kubelet[2678]: I0621 05:29:00.612045 2678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96985\" (UniqueName: \"kubernetes.io/projected/b5c5904a-3188-4523-884f-9b46364d5b47-kube-api-access-96985\") pod \"coredns-7c65d6cfc9-h7xkm\" (UID: \"b5c5904a-3188-4523-884f-9b46364d5b47\") " pod="kube-system/coredns-7c65d6cfc9-h7xkm" Jun 21 05:29:00.612395 kubelet[2678]: I0621 05:29:00.612369 2678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9ad298b5-9803-4ed9-b093-b5662165969f-config-volume\") pod \"coredns-7c65d6cfc9-95qfq\" (UID: \"9ad298b5-9803-4ed9-b093-b5662165969f\") " pod="kube-system/coredns-7c65d6cfc9-95qfq" Jun 21 05:29:00.888382 kubelet[2678]: E0621 05:29:00.888074 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 21 05:29:00.889480 containerd[1528]: time="2025-06-21T05:29:00.889203187Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-95qfq,Uid:9ad298b5-9803-4ed9-b093-b5662165969f,Namespace:kube-system,Attempt:0,}" Jun 21 05:29:00.894465 kubelet[2678]: E0621 05:29:00.894386 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 21 05:29:00.896179 containerd[1528]: time="2025-06-21T05:29:00.896129905Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-h7xkm,Uid:b5c5904a-3188-4523-884f-9b46364d5b47,Namespace:kube-system,Attempt:0,}" Jun 21 05:29:01.125939 kubelet[2678]: E0621 05:29:01.125867 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 21 05:29:02.130185 kubelet[2678]: E0621 05:29:02.129827 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 21 05:29:02.732876 systemd-networkd[1449]: cilium_host: Link UP Jun 21 05:29:02.738541 systemd-networkd[1449]: cilium_net: Link UP Jun 21 05:29:02.739550 systemd-networkd[1449]: cilium_host: Gained carrier Jun 21 05:29:02.739767 systemd-networkd[1449]: cilium_net: Gained carrier Jun 21 05:29:02.963217 systemd-networkd[1449]: cilium_vxlan: Link UP Jun 21 05:29:02.963228 systemd-networkd[1449]: cilium_vxlan: Gained carrier Jun 21 05:29:03.016666 systemd-networkd[1449]: cilium_host: Gained IPv6LL Jun 21 05:29:03.132602 kubelet[2678]: E0621 05:29:03.132564 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 21 05:29:03.438402 kernel: NET: Registered PF_ALG protocol family Jun 21 05:29:03.616631 systemd-networkd[1449]: cilium_net: Gained IPv6LL Jun 21 05:29:04.192503 systemd-networkd[1449]: cilium_vxlan: Gained IPv6LL Jun 21 05:29:04.335687 systemd-networkd[1449]: lxc_health: Link UP Jun 21 05:29:04.336018 systemd-networkd[1449]: lxc_health: Gained carrier Jun 21 05:29:04.780134 kubelet[2678]: E0621 05:29:04.779857 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 21 05:29:04.805600 kubelet[2678]: I0621 05:29:04.805520 2678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-tt6kx" podStartSLOduration=11.117820873 podStartE2EDuration="22.805493481s" podCreationTimestamp="2025-06-21 05:28:42 +0000 UTC" firstStartedPulling="2025-06-21 05:28:44.990684314 +0000 UTC m=+8.203342552" lastFinishedPulling="2025-06-21 05:28:56.678356926 +0000 UTC m=+19.891015160" observedRunningTime="2025-06-21 05:29:01.156254443 +0000 UTC m=+24.368912690" watchObservedRunningTime="2025-06-21 05:29:04.805493481 +0000 UTC m=+28.018151726" Jun 21 05:29:04.996336 kernel: eth0: renamed from tmpef1d9 Jun 21 05:29:04.994050 systemd-networkd[1449]: lxc05600444d44d: Link UP Jun 21 05:29:04.999720 systemd-networkd[1449]: lxc05600444d44d: Gained carrier Jun 21 05:29:05.057333 kernel: eth0: renamed from tmpb3c3b Jun 21 05:29:05.061463 systemd-networkd[1449]: lxc488d95fa82f1: Link UP Jun 21 05:29:05.064953 systemd-networkd[1449]: lxc488d95fa82f1: Gained carrier Jun 21 05:29:05.664568 systemd-networkd[1449]: lxc_health: Gained IPv6LL Jun 21 05:29:06.240609 systemd-networkd[1449]: lxc488d95fa82f1: Gained IPv6LL Jun 21 05:29:06.368575 systemd-networkd[1449]: lxc05600444d44d: Gained IPv6LL Jun 21 05:29:10.160379 kubelet[2678]: I0621 05:29:10.159966 2678 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 21 05:29:10.162443 kubelet[2678]: E0621 05:29:10.162294 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 21 05:29:10.710844 containerd[1528]: time="2025-06-21T05:29:10.710159537Z" level=info msg="connecting to shim ef1d94b84588af8323d1192841b5ff324366612ed60d3988e9117fe07e50ff7a" address="unix:///run/containerd/s/8244299296b44aef62d941b50c40cb8b39537a7c347973c098a357b16ea8f0b2" namespace=k8s.io protocol=ttrpc version=3 Jun 21 05:29:10.771624 systemd[1]: Started cri-containerd-ef1d94b84588af8323d1192841b5ff324366612ed60d3988e9117fe07e50ff7a.scope - libcontainer container ef1d94b84588af8323d1192841b5ff324366612ed60d3988e9117fe07e50ff7a. Jun 21 05:29:10.784046 containerd[1528]: time="2025-06-21T05:29:10.783980657Z" level=info msg="connecting to shim b3c3bd39909258a1916c6f6eb4f792d43ae9b86a0743d35c15660b5db35beb94" address="unix:///run/containerd/s/f2c656dfcdcbe94c7ad7b50cb73f7ea38e39ae462c117a9b6ec8fb4982682831" namespace=k8s.io protocol=ttrpc version=3 Jun 21 05:29:10.851656 systemd[1]: Started cri-containerd-b3c3bd39909258a1916c6f6eb4f792d43ae9b86a0743d35c15660b5db35beb94.scope - libcontainer container b3c3bd39909258a1916c6f6eb4f792d43ae9b86a0743d35c15660b5db35beb94. Jun 21 05:29:10.919341 containerd[1528]: time="2025-06-21T05:29:10.917007568Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-h7xkm,Uid:b5c5904a-3188-4523-884f-9b46364d5b47,Namespace:kube-system,Attempt:0,} returns sandbox id \"ef1d94b84588af8323d1192841b5ff324366612ed60d3988e9117fe07e50ff7a\"" Jun 21 05:29:10.920481 kubelet[2678]: E0621 05:29:10.920443 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 21 05:29:10.927725 containerd[1528]: time="2025-06-21T05:29:10.927663102Z" level=info msg="CreateContainer within sandbox \"ef1d94b84588af8323d1192841b5ff324366612ed60d3988e9117fe07e50ff7a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 21 05:29:10.979927 containerd[1528]: time="2025-06-21T05:29:10.978662860Z" level=info msg="Container e48af0287e271a8f40a7cb6f41a7fc0003322e3fce32d3210baf4d59e7cc67a4: CDI devices from CRI Config.CDIDevices: []" Jun 21 05:29:10.984993 containerd[1528]: time="2025-06-21T05:29:10.984910756Z" level=info msg="CreateContainer within sandbox \"ef1d94b84588af8323d1192841b5ff324366612ed60d3988e9117fe07e50ff7a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e48af0287e271a8f40a7cb6f41a7fc0003322e3fce32d3210baf4d59e7cc67a4\"" Jun 21 05:29:10.987278 containerd[1528]: time="2025-06-21T05:29:10.987233254Z" level=info msg="StartContainer for \"e48af0287e271a8f40a7cb6f41a7fc0003322e3fce32d3210baf4d59e7cc67a4\"" Jun 21 05:29:10.989944 containerd[1528]: time="2025-06-21T05:29:10.989899691Z" level=info msg="connecting to shim e48af0287e271a8f40a7cb6f41a7fc0003322e3fce32d3210baf4d59e7cc67a4" address="unix:///run/containerd/s/8244299296b44aef62d941b50c40cb8b39537a7c347973c098a357b16ea8f0b2" protocol=ttrpc version=3 Jun 21 05:29:10.991769 containerd[1528]: time="2025-06-21T05:29:10.991719095Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-95qfq,Uid:9ad298b5-9803-4ed9-b093-b5662165969f,Namespace:kube-system,Attempt:0,} returns sandbox id \"b3c3bd39909258a1916c6f6eb4f792d43ae9b86a0743d35c15660b5db35beb94\"" Jun 21 05:29:10.993740 kubelet[2678]: E0621 05:29:10.993579 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 21 05:29:10.998470 containerd[1528]: time="2025-06-21T05:29:10.998396105Z" level=info msg="CreateContainer within sandbox \"b3c3bd39909258a1916c6f6eb4f792d43ae9b86a0743d35c15660b5db35beb94\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 21 05:29:11.024241 containerd[1528]: time="2025-06-21T05:29:11.024200698Z" level=info msg="Container 51063d053c9c7df9c3ca39280c9f534343a01ea576cbb6f69263dba1a4f57ee6: CDI devices from CRI Config.CDIDevices: []" Jun 21 05:29:11.028649 systemd[1]: Started cri-containerd-e48af0287e271a8f40a7cb6f41a7fc0003322e3fce32d3210baf4d59e7cc67a4.scope - libcontainer container e48af0287e271a8f40a7cb6f41a7fc0003322e3fce32d3210baf4d59e7cc67a4. Jun 21 05:29:11.033960 containerd[1528]: time="2025-06-21T05:29:11.033567654Z" level=info msg="CreateContainer within sandbox \"b3c3bd39909258a1916c6f6eb4f792d43ae9b86a0743d35c15660b5db35beb94\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"51063d053c9c7df9c3ca39280c9f534343a01ea576cbb6f69263dba1a4f57ee6\"" Jun 21 05:29:11.034888 containerd[1528]: time="2025-06-21T05:29:11.034833390Z" level=info msg="StartContainer for \"51063d053c9c7df9c3ca39280c9f534343a01ea576cbb6f69263dba1a4f57ee6\"" Jun 21 05:29:11.036649 containerd[1528]: time="2025-06-21T05:29:11.036609713Z" level=info msg="connecting to shim 51063d053c9c7df9c3ca39280c9f534343a01ea576cbb6f69263dba1a4f57ee6" address="unix:///run/containerd/s/f2c656dfcdcbe94c7ad7b50cb73f7ea38e39ae462c117a9b6ec8fb4982682831" protocol=ttrpc version=3 Jun 21 05:29:11.073903 systemd[1]: Started cri-containerd-51063d053c9c7df9c3ca39280c9f534343a01ea576cbb6f69263dba1a4f57ee6.scope - libcontainer container 51063d053c9c7df9c3ca39280c9f534343a01ea576cbb6f69263dba1a4f57ee6. 
Jun 21 05:29:11.119992 containerd[1528]: time="2025-06-21T05:29:11.119945504Z" level=info msg="StartContainer for \"e48af0287e271a8f40a7cb6f41a7fc0003322e3fce32d3210baf4d59e7cc67a4\" returns successfully" Jun 21 05:29:11.136074 containerd[1528]: time="2025-06-21T05:29:11.135941477Z" level=info msg="StartContainer for \"51063d053c9c7df9c3ca39280c9f534343a01ea576cbb6f69263dba1a4f57ee6\" returns successfully" Jun 21 05:29:11.161253 kubelet[2678]: E0621 05:29:11.161157 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 21 05:29:11.167649 kubelet[2678]: E0621 05:29:11.167526 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 21 05:29:11.168531 kubelet[2678]: E0621 05:29:11.168490 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 21 05:29:11.193376 kubelet[2678]: I0621 05:29:11.193048 2678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-h7xkm" podStartSLOduration=28.193021851 podStartE2EDuration="28.193021851s" podCreationTimestamp="2025-06-21 05:28:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-21 05:29:11.187134115 +0000 UTC m=+34.399792362" watchObservedRunningTime="2025-06-21 05:29:11.193021851 +0000 UTC m=+34.405680095" Jun 21 05:29:11.217602 kubelet[2678]: I0621 05:29:11.217275 2678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-95qfq" podStartSLOduration=28.217250897 podStartE2EDuration="28.217250897s" podCreationTimestamp="2025-06-21 05:28:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-21 05:29:11.216433673 +0000 UTC m=+34.429091916" watchObservedRunningTime="2025-06-21 05:29:11.217250897 +0000 UTC m=+34.429909143" Jun 21 05:29:12.170543 kubelet[2678]: E0621 05:29:12.170493 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 21 05:29:12.172967 kubelet[2678]: E0621 05:29:12.172925 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 21 05:29:13.173102 kubelet[2678]: E0621 05:29:13.173006 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 21 05:29:13.173925 kubelet[2678]: E0621 05:29:13.173526 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 21 05:29:24.784493 systemd[1]: Started sshd@7-64.23.242.202:22-139.178.68.195:46436.service - OpenSSH per-connection server daemon (139.178.68.195:46436). 
Jun 21 05:29:24.882000 sshd[3996]: Accepted publickey for core from 139.178.68.195 port 46436 ssh2: RSA SHA256:esrwHbjCvD8R4I7sQRiHa5Rpu9l1igA0BMtQzkIUH4o Jun 21 05:29:24.884056 sshd-session[3996]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 05:29:24.891203 systemd-logind[1504]: New session 8 of user core. Jun 21 05:29:24.897741 systemd[1]: Started session-8.scope - Session 8 of User core. Jun 21 05:29:25.522569 sshd[3999]: Connection closed by 139.178.68.195 port 46436 Jun 21 05:29:25.524135 sshd-session[3996]: pam_unix(sshd:session): session closed for user core Jun 21 05:29:25.529811 systemd[1]: sshd@7-64.23.242.202:22-139.178.68.195:46436.service: Deactivated successfully. Jun 21 05:29:25.534058 systemd[1]: session-8.scope: Deactivated successfully. Jun 21 05:29:25.535665 systemd-logind[1504]: Session 8 logged out. Waiting for processes to exit. Jun 21 05:29:25.538235 systemd-logind[1504]: Removed session 8. Jun 21 05:29:30.545785 systemd[1]: Started sshd@8-64.23.242.202:22-139.178.68.195:46440.service - OpenSSH per-connection server daemon (139.178.68.195:46440). Jun 21 05:29:30.634379 sshd[4012]: Accepted publickey for core from 139.178.68.195 port 46440 ssh2: RSA SHA256:esrwHbjCvD8R4I7sQRiHa5Rpu9l1igA0BMtQzkIUH4o Jun 21 05:29:30.636977 sshd-session[4012]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 05:29:30.644390 systemd-logind[1504]: New session 9 of user core. Jun 21 05:29:30.652665 systemd[1]: Started session-9.scope - Session 9 of User core. Jun 21 05:29:30.832257 sshd[4014]: Connection closed by 139.178.68.195 port 46440 Jun 21 05:29:30.833255 sshd-session[4012]: pam_unix(sshd:session): session closed for user core Jun 21 05:29:30.838206 systemd[1]: sshd@8-64.23.242.202:22-139.178.68.195:46440.service: Deactivated successfully. Jun 21 05:29:30.841208 systemd[1]: session-9.scope: Deactivated successfully. Jun 21 05:29:30.846239 systemd-logind[1504]: Session 9 logged out. Waiting for processes to exit. Jun 21 05:29:30.847920 systemd-logind[1504]: Removed session 9. Jun 21 05:29:35.850078 systemd[1]: Started sshd@9-64.23.242.202:22-139.178.68.195:48152.service - OpenSSH per-connection server daemon (139.178.68.195:48152). Jun 21 05:29:35.929136 sshd[4029]: Accepted publickey for core from 139.178.68.195 port 48152 ssh2: RSA SHA256:esrwHbjCvD8R4I7sQRiHa5Rpu9l1igA0BMtQzkIUH4o Jun 21 05:29:35.931872 sshd-session[4029]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 05:29:35.940191 systemd-logind[1504]: New session 10 of user core. Jun 21 05:29:35.944644 systemd[1]: Started session-10.scope - Session 10 of User core. Jun 21 05:29:36.098338 sshd[4031]: Connection closed by 139.178.68.195 port 48152 Jun 21 05:29:36.099084 sshd-session[4029]: pam_unix(sshd:session): session closed for user core Jun 21 05:29:36.105027 systemd[1]: sshd@9-64.23.242.202:22-139.178.68.195:48152.service: Deactivated successfully. Jun 21 05:29:36.110057 systemd[1]: session-10.scope: Deactivated successfully. Jun 21 05:29:36.112832 systemd-logind[1504]: Session 10 logged out. Waiting for processes to exit. Jun 21 05:29:36.115799 systemd-logind[1504]: Removed session 10. Jun 21 05:29:41.116184 systemd[1]: Started sshd@10-64.23.242.202:22-139.178.68.195:48156.service - OpenSSH per-connection server daemon (139.178.68.195:48156). 
Jun 21 05:29:41.187998 sshd[4048]: Accepted publickey for core from 139.178.68.195 port 48156 ssh2: RSA SHA256:esrwHbjCvD8R4I7sQRiHa5Rpu9l1igA0BMtQzkIUH4o Jun 21 05:29:41.190619 sshd-session[4048]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 05:29:41.199488 systemd-logind[1504]: New session 11 of user core. Jun 21 05:29:41.203748 systemd[1]: Started session-11.scope - Session 11 of User core. Jun 21 05:29:41.397999 sshd[4050]: Connection closed by 139.178.68.195 port 48156 Jun 21 05:29:41.398667 sshd-session[4048]: pam_unix(sshd:session): session closed for user core Jun 21 05:29:41.406678 systemd[1]: sshd@10-64.23.242.202:22-139.178.68.195:48156.service: Deactivated successfully. Jun 21 05:29:41.411186 systemd[1]: session-11.scope: Deactivated successfully. Jun 21 05:29:41.415737 systemd-logind[1504]: Session 11 logged out. Waiting for processes to exit. Jun 21 05:29:41.418193 systemd-logind[1504]: Removed session 11. Jun 21 05:29:46.414436 systemd[1]: Started sshd@11-64.23.242.202:22-139.178.68.195:56678.service - OpenSSH per-connection server daemon (139.178.68.195:56678). Jun 21 05:29:46.503076 sshd[4065]: Accepted publickey for core from 139.178.68.195 port 56678 ssh2: RSA SHA256:esrwHbjCvD8R4I7sQRiHa5Rpu9l1igA0BMtQzkIUH4o Jun 21 05:29:46.505170 sshd-session[4065]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 05:29:46.512627 systemd-logind[1504]: New session 12 of user core. Jun 21 05:29:46.525780 systemd[1]: Started session-12.scope - Session 12 of User core. Jun 21 05:29:46.675967 sshd[4067]: Connection closed by 139.178.68.195 port 56678 Jun 21 05:29:46.676636 sshd-session[4065]: pam_unix(sshd:session): session closed for user core Jun 21 05:29:46.689476 systemd[1]: sshd@11-64.23.242.202:22-139.178.68.195:56678.service: Deactivated successfully. Jun 21 05:29:46.691960 systemd[1]: session-12.scope: Deactivated successfully. Jun 21 05:29:46.693219 systemd-logind[1504]: Session 12 logged out. Waiting for processes to exit. Jun 21 05:29:46.699163 systemd[1]: Started sshd@12-64.23.242.202:22-139.178.68.195:56694.service - OpenSSH per-connection server daemon (139.178.68.195:56694). Jun 21 05:29:46.700818 systemd-logind[1504]: Removed session 12. Jun 21 05:29:46.764080 sshd[4080]: Accepted publickey for core from 139.178.68.195 port 56694 ssh2: RSA SHA256:esrwHbjCvD8R4I7sQRiHa5Rpu9l1igA0BMtQzkIUH4o Jun 21 05:29:46.766144 sshd-session[4080]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 05:29:46.774734 systemd-logind[1504]: New session 13 of user core. Jun 21 05:29:46.783595 systemd[1]: Started session-13.scope - Session 13 of User core. Jun 21 05:29:47.001401 sshd[4082]: Connection closed by 139.178.68.195 port 56694 Jun 21 05:29:47.002654 sshd-session[4080]: pam_unix(sshd:session): session closed for user core Jun 21 05:29:47.013954 systemd[1]: sshd@12-64.23.242.202:22-139.178.68.195:56694.service: Deactivated successfully. Jun 21 05:29:47.018203 systemd[1]: session-13.scope: Deactivated successfully. Jun 21 05:29:47.021444 systemd-logind[1504]: Session 13 logged out. Waiting for processes to exit. Jun 21 05:29:47.025557 systemd[1]: Started sshd@13-64.23.242.202:22-139.178.68.195:56704.service - OpenSSH per-connection server daemon (139.178.68.195:56704). Jun 21 05:29:47.030278 systemd-logind[1504]: Removed session 13. 
Jun 21 05:29:47.116038 sshd[4092]: Accepted publickey for core from 139.178.68.195 port 56704 ssh2: RSA SHA256:esrwHbjCvD8R4I7sQRiHa5Rpu9l1igA0BMtQzkIUH4o Jun 21 05:29:47.118421 sshd-session[4092]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 05:29:47.125576 systemd-logind[1504]: New session 14 of user core. Jun 21 05:29:47.132588 systemd[1]: Started session-14.scope - Session 14 of User core. Jun 21 05:29:47.301292 sshd[4094]: Connection closed by 139.178.68.195 port 56704 Jun 21 05:29:47.301983 sshd-session[4092]: pam_unix(sshd:session): session closed for user core Jun 21 05:29:47.307698 systemd[1]: sshd@13-64.23.242.202:22-139.178.68.195:56704.service: Deactivated successfully. Jun 21 05:29:47.311288 systemd[1]: session-14.scope: Deactivated successfully. Jun 21 05:29:47.313430 systemd-logind[1504]: Session 14 logged out. Waiting for processes to exit. Jun 21 05:29:47.316104 systemd-logind[1504]: Removed session 14. Jun 21 05:29:51.952941 kubelet[2678]: E0621 05:29:51.952868 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 21 05:29:52.320502 systemd[1]: Started sshd@14-64.23.242.202:22-139.178.68.195:56710.service - OpenSSH per-connection server daemon (139.178.68.195:56710). Jun 21 05:29:52.406430 sshd[4105]: Accepted publickey for core from 139.178.68.195 port 56710 ssh2: RSA SHA256:esrwHbjCvD8R4I7sQRiHa5Rpu9l1igA0BMtQzkIUH4o Jun 21 05:29:52.409759 sshd-session[4105]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 05:29:52.420503 systemd-logind[1504]: New session 15 of user core. Jun 21 05:29:52.422662 systemd[1]: Started session-15.scope - Session 15 of User core. Jun 21 05:29:52.598452 sshd[4107]: Connection closed by 139.178.68.195 port 56710 Jun 21 05:29:52.599280 sshd-session[4105]: pam_unix(sshd:session): session closed for user core Jun 21 05:29:52.606758 systemd[1]: sshd@14-64.23.242.202:22-139.178.68.195:56710.service: Deactivated successfully. Jun 21 05:29:52.611594 systemd[1]: session-15.scope: Deactivated successfully. Jun 21 05:29:52.615401 systemd-logind[1504]: Session 15 logged out. Waiting for processes to exit. Jun 21 05:29:52.618732 systemd-logind[1504]: Removed session 15. Jun 21 05:29:55.952346 kubelet[2678]: E0621 05:29:55.952219 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 21 05:29:57.621625 systemd[1]: Started sshd@15-64.23.242.202:22-139.178.68.195:47224.service - OpenSSH per-connection server daemon (139.178.68.195:47224). Jun 21 05:29:57.696698 sshd[4118]: Accepted publickey for core from 139.178.68.195 port 47224 ssh2: RSA SHA256:esrwHbjCvD8R4I7sQRiHa5Rpu9l1igA0BMtQzkIUH4o Jun 21 05:29:57.698820 sshd-session[4118]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 05:29:57.706732 systemd-logind[1504]: New session 16 of user core. Jun 21 05:29:57.711871 systemd[1]: Started session-16.scope - Session 16 of User core. Jun 21 05:29:57.879714 sshd[4120]: Connection closed by 139.178.68.195 port 47224 Jun 21 05:29:57.882405 sshd-session[4118]: pam_unix(sshd:session): session closed for user core Jun 21 05:29:57.893951 systemd[1]: sshd@15-64.23.242.202:22-139.178.68.195:47224.service: Deactivated successfully. 
Jun 21 05:29:57.898595 systemd[1]: session-16.scope: Deactivated successfully. Jun 21 05:29:57.900555 systemd-logind[1504]: Session 16 logged out. Waiting for processes to exit. Jun 21 05:29:57.907018 systemd[1]: Started sshd@16-64.23.242.202:22-139.178.68.195:47236.service - OpenSSH per-connection server daemon (139.178.68.195:47236). Jun 21 05:29:57.909751 systemd-logind[1504]: Removed session 16. Jun 21 05:29:57.986620 sshd[4131]: Accepted publickey for core from 139.178.68.195 port 47236 ssh2: RSA SHA256:esrwHbjCvD8R4I7sQRiHa5Rpu9l1igA0BMtQzkIUH4o Jun 21 05:29:57.988861 sshd-session[4131]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 05:29:57.996380 systemd-logind[1504]: New session 17 of user core. Jun 21 05:29:58.001875 systemd[1]: Started session-17.scope - Session 17 of User core. Jun 21 05:29:58.357451 sshd[4133]: Connection closed by 139.178.68.195 port 47236 Jun 21 05:29:58.358959 sshd-session[4131]: pam_unix(sshd:session): session closed for user core Jun 21 05:29:58.377610 systemd[1]: sshd@16-64.23.242.202:22-139.178.68.195:47236.service: Deactivated successfully. Jun 21 05:29:58.382112 systemd[1]: session-17.scope: Deactivated successfully. Jun 21 05:29:58.383997 systemd-logind[1504]: Session 17 logged out. Waiting for processes to exit. Jun 21 05:29:58.391863 systemd[1]: Started sshd@17-64.23.242.202:22-139.178.68.195:47252.service - OpenSSH per-connection server daemon (139.178.68.195:47252). Jun 21 05:29:58.398359 systemd-logind[1504]: Removed session 17. Jun 21 05:29:58.463563 sshd[4143]: Accepted publickey for core from 139.178.68.195 port 47252 ssh2: RSA SHA256:esrwHbjCvD8R4I7sQRiHa5Rpu9l1igA0BMtQzkIUH4o Jun 21 05:29:58.467095 sshd-session[4143]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 05:29:58.477262 systemd-logind[1504]: New session 18 of user core. Jun 21 05:29:58.491712 systemd[1]: Started session-18.scope - Session 18 of User core. Jun 21 05:30:00.573621 sshd[4145]: Connection closed by 139.178.68.195 port 47252 Jun 21 05:30:00.574784 sshd-session[4143]: pam_unix(sshd:session): session closed for user core Jun 21 05:30:00.596950 systemd[1]: sshd@17-64.23.242.202:22-139.178.68.195:47252.service: Deactivated successfully. Jun 21 05:30:00.608987 systemd[1]: session-18.scope: Deactivated successfully. Jun 21 05:30:00.615650 systemd-logind[1504]: Session 18 logged out. Waiting for processes to exit. Jun 21 05:30:00.630073 systemd[1]: Started sshd@18-64.23.242.202:22-139.178.68.195:47262.service - OpenSSH per-connection server daemon (139.178.68.195:47262). Jun 21 05:30:00.633197 systemd-logind[1504]: Removed session 18. Jun 21 05:30:00.771325 sshd[4164]: Accepted publickey for core from 139.178.68.195 port 47262 ssh2: RSA SHA256:esrwHbjCvD8R4I7sQRiHa5Rpu9l1igA0BMtQzkIUH4o Jun 21 05:30:00.774200 sshd-session[4164]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 05:30:00.787070 systemd-logind[1504]: New session 19 of user core. Jun 21 05:30:00.794695 systemd[1]: Started session-19.scope - Session 19 of User core. Jun 21 05:30:01.228332 sshd[4166]: Connection closed by 139.178.68.195 port 47262 Jun 21 05:30:01.227505 sshd-session[4164]: pam_unix(sshd:session): session closed for user core Jun 21 05:30:01.248650 systemd[1]: sshd@18-64.23.242.202:22-139.178.68.195:47262.service: Deactivated successfully. Jun 21 05:30:01.254526 systemd[1]: session-19.scope: Deactivated successfully. Jun 21 05:30:01.258754 systemd-logind[1504]: Session 19 logged out. 
Waiting for processes to exit. Jun 21 05:30:01.270177 systemd[1]: Started sshd@19-64.23.242.202:22-139.178.68.195:47278.service - OpenSSH per-connection server daemon (139.178.68.195:47278). Jun 21 05:30:01.275147 systemd-logind[1504]: Removed session 19. Jun 21 05:30:01.366679 sshd[4176]: Accepted publickey for core from 139.178.68.195 port 47278 ssh2: RSA SHA256:esrwHbjCvD8R4I7sQRiHa5Rpu9l1igA0BMtQzkIUH4o Jun 21 05:30:01.369860 sshd-session[4176]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 05:30:01.379895 systemd-logind[1504]: New session 20 of user core. Jun 21 05:30:01.389364 systemd[1]: Started session-20.scope - Session 20 of User core. Jun 21 05:30:01.716281 sshd[4178]: Connection closed by 139.178.68.195 port 47278 Jun 21 05:30:01.717205 sshd-session[4176]: pam_unix(sshd:session): session closed for user core Jun 21 05:30:01.747540 systemd[1]: sshd@19-64.23.242.202:22-139.178.68.195:47278.service: Deactivated successfully. Jun 21 05:30:01.756418 systemd[1]: session-20.scope: Deactivated successfully. Jun 21 05:30:01.775499 systemd-logind[1504]: Session 20 logged out. Waiting for processes to exit. Jun 21 05:30:01.779022 systemd-logind[1504]: Removed session 20. Jun 21 05:30:06.744034 systemd[1]: Started sshd@20-64.23.242.202:22-139.178.68.195:49398.service - OpenSSH per-connection server daemon (139.178.68.195:49398). Jun 21 05:30:06.812362 sshd[4190]: Accepted publickey for core from 139.178.68.195 port 49398 ssh2: RSA SHA256:esrwHbjCvD8R4I7sQRiHa5Rpu9l1igA0BMtQzkIUH4o Jun 21 05:30:06.814963 sshd-session[4190]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 05:30:06.824710 systemd-logind[1504]: New session 21 of user core. Jun 21 05:30:06.834642 systemd[1]: Started session-21.scope - Session 21 of User core. Jun 21 05:30:07.002746 sshd[4192]: Connection closed by 139.178.68.195 port 49398 Jun 21 05:30:07.003938 sshd-session[4190]: pam_unix(sshd:session): session closed for user core Jun 21 05:30:07.012008 systemd[1]: sshd@20-64.23.242.202:22-139.178.68.195:49398.service: Deactivated successfully. Jun 21 05:30:07.017227 systemd[1]: session-21.scope: Deactivated successfully. Jun 21 05:30:07.021831 systemd-logind[1504]: Session 21 logged out. Waiting for processes to exit. Jun 21 05:30:07.024202 systemd-logind[1504]: Removed session 21. Jun 21 05:30:10.954339 kubelet[2678]: E0621 05:30:10.952512 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 21 05:30:11.952886 kubelet[2678]: E0621 05:30:11.952841 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 21 05:30:12.024266 systemd[1]: Started sshd@21-64.23.242.202:22-139.178.68.195:49406.service - OpenSSH per-connection server daemon (139.178.68.195:49406). Jun 21 05:30:12.108401 sshd[4207]: Accepted publickey for core from 139.178.68.195 port 49406 ssh2: RSA SHA256:esrwHbjCvD8R4I7sQRiHa5Rpu9l1igA0BMtQzkIUH4o Jun 21 05:30:12.110756 sshd-session[4207]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 05:30:12.118698 systemd-logind[1504]: New session 22 of user core. Jun 21 05:30:12.124638 systemd[1]: Started session-22.scope - Session 22 of User core. 
Jun 21 05:30:12.274184 sshd[4209]: Connection closed by 139.178.68.195 port 49406 Jun 21 05:30:12.275829 sshd-session[4207]: pam_unix(sshd:session): session closed for user core Jun 21 05:30:12.282475 systemd[1]: sshd@21-64.23.242.202:22-139.178.68.195:49406.service: Deactivated successfully. Jun 21 05:30:12.285702 systemd[1]: session-22.scope: Deactivated successfully. Jun 21 05:30:12.287153 systemd-logind[1504]: Session 22 logged out. Waiting for processes to exit. Jun 21 05:30:12.290719 systemd-logind[1504]: Removed session 22. Jun 21 05:30:17.291016 systemd[1]: Started sshd@22-64.23.242.202:22-139.178.68.195:36380.service - OpenSSH per-connection server daemon (139.178.68.195:36380). Jun 21 05:30:17.370516 sshd[4223]: Accepted publickey for core from 139.178.68.195 port 36380 ssh2: RSA SHA256:esrwHbjCvD8R4I7sQRiHa5Rpu9l1igA0BMtQzkIUH4o Jun 21 05:30:17.375453 sshd-session[4223]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 05:30:17.388814 systemd-logind[1504]: New session 23 of user core. Jun 21 05:30:17.393789 systemd[1]: Started session-23.scope - Session 23 of User core. Jun 21 05:30:17.565112 sshd[4225]: Connection closed by 139.178.68.195 port 36380 Jun 21 05:30:17.566183 sshd-session[4223]: pam_unix(sshd:session): session closed for user core Jun 21 05:30:17.572114 systemd[1]: sshd@22-64.23.242.202:22-139.178.68.195:36380.service: Deactivated successfully. Jun 21 05:30:17.576921 systemd[1]: session-23.scope: Deactivated successfully. Jun 21 05:30:17.578789 systemd-logind[1504]: Session 23 logged out. Waiting for processes to exit. Jun 21 05:30:17.581357 systemd-logind[1504]: Removed session 23. Jun 21 05:30:18.954225 kubelet[2678]: E0621 05:30:18.954173 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 21 05:30:22.583118 systemd[1]: Started sshd@23-64.23.242.202:22-139.178.68.195:36386.service - OpenSSH per-connection server daemon (139.178.68.195:36386). Jun 21 05:30:22.655328 sshd[4237]: Accepted publickey for core from 139.178.68.195 port 36386 ssh2: RSA SHA256:esrwHbjCvD8R4I7sQRiHa5Rpu9l1igA0BMtQzkIUH4o Jun 21 05:30:22.657677 sshd-session[4237]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 05:30:22.664686 systemd-logind[1504]: New session 24 of user core. Jun 21 05:30:22.671555 systemd[1]: Started session-24.scope - Session 24 of User core. Jun 21 05:30:22.810877 sshd[4239]: Connection closed by 139.178.68.195 port 36386 Jun 21 05:30:22.811645 sshd-session[4237]: pam_unix(sshd:session): session closed for user core Jun 21 05:30:22.823583 systemd[1]: sshd@23-64.23.242.202:22-139.178.68.195:36386.service: Deactivated successfully. Jun 21 05:30:22.826751 systemd[1]: session-24.scope: Deactivated successfully. Jun 21 05:30:22.830117 systemd-logind[1504]: Session 24 logged out. Waiting for processes to exit. Jun 21 05:30:22.834468 systemd[1]: Started sshd@24-64.23.242.202:22-139.178.68.195:36392.service - OpenSSH per-connection server daemon (139.178.68.195:36392). Jun 21 05:30:22.839224 systemd-logind[1504]: Removed session 24. 
Jun 21 05:30:22.911846 sshd[4251]: Accepted publickey for core from 139.178.68.195 port 36392 ssh2: RSA SHA256:esrwHbjCvD8R4I7sQRiHa5Rpu9l1igA0BMtQzkIUH4o Jun 21 05:30:22.913969 sshd-session[4251]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 05:30:22.920795 systemd-logind[1504]: New session 25 of user core. Jun 21 05:30:22.928706 systemd[1]: Started session-25.scope - Session 25 of User core. Jun 21 05:30:24.450884 containerd[1528]: time="2025-06-21T05:30:24.450496494Z" level=info msg="StopContainer for \"25e15a28d16b7224aabc04e8f0c858776a61a6275b6dbc2230577dc753312bf6\" with timeout 30 (s)" Jun 21 05:30:24.469511 containerd[1528]: time="2025-06-21T05:30:24.469398066Z" level=info msg="Stop container \"25e15a28d16b7224aabc04e8f0c858776a61a6275b6dbc2230577dc753312bf6\" with signal terminated" Jun 21 05:30:24.484940 containerd[1528]: time="2025-06-21T05:30:24.484887794Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jun 21 05:30:24.490529 systemd[1]: cri-containerd-25e15a28d16b7224aabc04e8f0c858776a61a6275b6dbc2230577dc753312bf6.scope: Deactivated successfully. Jun 21 05:30:24.490845 systemd[1]: cri-containerd-25e15a28d16b7224aabc04e8f0c858776a61a6275b6dbc2230577dc753312bf6.scope: Consumed 506ms CPU time, 26M memory peak, 2.6M read from disk, 4K written to disk. Jun 21 05:30:24.494157 containerd[1528]: time="2025-06-21T05:30:24.494097999Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e5340475fde70979872be8b5a3d451f30f1219eb58cd347647f1b7888a61c7d2\" id:\"28b288036f3618f157ea5ba325bb4ca7aa0291ee2ce4e1696c1b9f8feec22007\" pid:4272 exited_at:{seconds:1750483824 nanos:493494698}" Jun 21 05:30:24.495622 containerd[1528]: time="2025-06-21T05:30:24.495268108Z" level=info msg="received exit event container_id:\"25e15a28d16b7224aabc04e8f0c858776a61a6275b6dbc2230577dc753312bf6\" id:\"25e15a28d16b7224aabc04e8f0c858776a61a6275b6dbc2230577dc753312bf6\" pid:3077 exited_at:{seconds:1750483824 nanos:494504004}" Jun 21 05:30:24.495775 containerd[1528]: time="2025-06-21T05:30:24.495482530Z" level=info msg="TaskExit event in podsandbox handler container_id:\"25e15a28d16b7224aabc04e8f0c858776a61a6275b6dbc2230577dc753312bf6\" id:\"25e15a28d16b7224aabc04e8f0c858776a61a6275b6dbc2230577dc753312bf6\" pid:3077 exited_at:{seconds:1750483824 nanos:494504004}" Jun 21 05:30:24.497580 containerd[1528]: time="2025-06-21T05:30:24.497532720Z" level=info msg="StopContainer for \"e5340475fde70979872be8b5a3d451f30f1219eb58cd347647f1b7888a61c7d2\" with timeout 2 (s)" Jun 21 05:30:24.498393 containerd[1528]: time="2025-06-21T05:30:24.498285485Z" level=info msg="Stop container \"e5340475fde70979872be8b5a3d451f30f1219eb58cd347647f1b7888a61c7d2\" with signal terminated" Jun 21 05:30:24.508975 systemd-networkd[1449]: lxc_health: Link DOWN Jun 21 05:30:24.508984 systemd-networkd[1449]: lxc_health: Lost carrier Jun 21 05:30:24.530770 systemd[1]: cri-containerd-e5340475fde70979872be8b5a3d451f30f1219eb58cd347647f1b7888a61c7d2.scope: Deactivated successfully. Jun 21 05:30:24.531084 systemd[1]: cri-containerd-e5340475fde70979872be8b5a3d451f30f1219eb58cd347647f1b7888a61c7d2.scope: Consumed 9.884s CPU time, 167.5M memory peak, 44.9M read from disk, 13.3M written to disk. 
Jun 21 05:30:24.535902 containerd[1528]: time="2025-06-21T05:30:24.534381736Z" level=info msg="received exit event container_id:\"e5340475fde70979872be8b5a3d451f30f1219eb58cd347647f1b7888a61c7d2\" id:\"e5340475fde70979872be8b5a3d451f30f1219eb58cd347647f1b7888a61c7d2\" pid:3323 exited_at:{seconds:1750483824 nanos:533665795}" Jun 21 05:30:24.536763 containerd[1528]: time="2025-06-21T05:30:24.536715601Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e5340475fde70979872be8b5a3d451f30f1219eb58cd347647f1b7888a61c7d2\" id:\"e5340475fde70979872be8b5a3d451f30f1219eb58cd347647f1b7888a61c7d2\" pid:3323 exited_at:{seconds:1750483824 nanos:533665795}" Jun 21 05:30:24.545189 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-25e15a28d16b7224aabc04e8f0c858776a61a6275b6dbc2230577dc753312bf6-rootfs.mount: Deactivated successfully. Jun 21 05:30:24.558812 containerd[1528]: time="2025-06-21T05:30:24.558757980Z" level=info msg="StopContainer for \"25e15a28d16b7224aabc04e8f0c858776a61a6275b6dbc2230577dc753312bf6\" returns successfully" Jun 21 05:30:24.559633 containerd[1528]: time="2025-06-21T05:30:24.559543646Z" level=info msg="StopPodSandbox for \"02102bbec33283547fe27f623daf3153246f8069571354eb64b433faad3345a1\"" Jun 21 05:30:24.559809 containerd[1528]: time="2025-06-21T05:30:24.559715850Z" level=info msg="Container to stop \"25e15a28d16b7224aabc04e8f0c858776a61a6275b6dbc2230577dc753312bf6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 21 05:30:24.580988 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e5340475fde70979872be8b5a3d451f30f1219eb58cd347647f1b7888a61c7d2-rootfs.mount: Deactivated successfully. Jun 21 05:30:24.584698 systemd[1]: cri-containerd-02102bbec33283547fe27f623daf3153246f8069571354eb64b433faad3345a1.scope: Deactivated successfully. 
Jun 21 05:30:24.589377 containerd[1528]: time="2025-06-21T05:30:24.589194468Z" level=info msg="TaskExit event in podsandbox handler container_id:\"02102bbec33283547fe27f623daf3153246f8069571354eb64b433faad3345a1\" id:\"02102bbec33283547fe27f623daf3153246f8069571354eb64b433faad3345a1\" pid:2780 exit_status:137 exited_at:{seconds:1750483824 nanos:588371826}" Jun 21 05:30:24.595904 containerd[1528]: time="2025-06-21T05:30:24.595610158Z" level=info msg="StopContainer for \"e5340475fde70979872be8b5a3d451f30f1219eb58cd347647f1b7888a61c7d2\" returns successfully" Jun 21 05:30:24.596557 containerd[1528]: time="2025-06-21T05:30:24.596532104Z" level=info msg="StopPodSandbox for \"f7115590d1ac76298bf0345bf8b52d4fe923c98708f6ba19f2646a8dccc5dd0b\"" Jun 21 05:30:24.596644 containerd[1528]: time="2025-06-21T05:30:24.596606917Z" level=info msg="Container to stop \"45be10e72f92db15bb911ea51db8f6518cdcb1f31983bc88523a27a151797f7e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 21 05:30:24.596644 containerd[1528]: time="2025-06-21T05:30:24.596619305Z" level=info msg="Container to stop \"ef7e5f640033dccb5e408953847b2b10266742339fb5e4d206bad072c3062772\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 21 05:30:24.596644 containerd[1528]: time="2025-06-21T05:30:24.596627213Z" level=info msg="Container to stop \"8f9e19426d1bcece875bc3898d36fef50ba9e4400b256c50da9b74b3c86fb3d3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 21 05:30:24.596644 containerd[1528]: time="2025-06-21T05:30:24.596637970Z" level=info msg="Container to stop \"e5340475fde70979872be8b5a3d451f30f1219eb58cd347647f1b7888a61c7d2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 21 05:30:24.596756 containerd[1528]: time="2025-06-21T05:30:24.596646352Z" level=info msg="Container to stop \"b24ccd6c9ec8d040e81d335690e0b51adfac2e1c9d7ef10fa2b33ad1a5dce9a3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 21 05:30:24.612596 systemd[1]: cri-containerd-f7115590d1ac76298bf0345bf8b52d4fe923c98708f6ba19f2646a8dccc5dd0b.scope: Deactivated successfully. Jun 21 05:30:24.634635 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-02102bbec33283547fe27f623daf3153246f8069571354eb64b433faad3345a1-rootfs.mount: Deactivated successfully. Jun 21 05:30:24.640347 containerd[1528]: time="2025-06-21T05:30:24.640273422Z" level=info msg="shim disconnected" id=02102bbec33283547fe27f623daf3153246f8069571354eb64b433faad3345a1 namespace=k8s.io Jun 21 05:30:24.640347 containerd[1528]: time="2025-06-21T05:30:24.640315515Z" level=warning msg="cleaning up after shim disconnected" id=02102bbec33283547fe27f623daf3153246f8069571354eb64b433faad3345a1 namespace=k8s.io Jun 21 05:30:24.640347 containerd[1528]: time="2025-06-21T05:30:24.640323955Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 21 05:30:24.664937 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f7115590d1ac76298bf0345bf8b52d4fe923c98708f6ba19f2646a8dccc5dd0b-rootfs.mount: Deactivated successfully. 
Jun 21 05:30:24.673588 containerd[1528]: time="2025-06-21T05:30:24.673525648Z" level=info msg="shim disconnected" id=f7115590d1ac76298bf0345bf8b52d4fe923c98708f6ba19f2646a8dccc5dd0b namespace=k8s.io Jun 21 05:30:24.673588 containerd[1528]: time="2025-06-21T05:30:24.673576261Z" level=warning msg="cleaning up after shim disconnected" id=f7115590d1ac76298bf0345bf8b52d4fe923c98708f6ba19f2646a8dccc5dd0b namespace=k8s.io Jun 21 05:30:24.673896 containerd[1528]: time="2025-06-21T05:30:24.673589087Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 21 05:30:24.683604 containerd[1528]: time="2025-06-21T05:30:24.682212463Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f7115590d1ac76298bf0345bf8b52d4fe923c98708f6ba19f2646a8dccc5dd0b\" id:\"f7115590d1ac76298bf0345bf8b52d4fe923c98708f6ba19f2646a8dccc5dd0b\" pid:2867 exit_status:137 exited_at:{seconds:1750483824 nanos:621576585}" Jun 21 05:30:24.684220 containerd[1528]: time="2025-06-21T05:30:24.684032746Z" level=info msg="received exit event sandbox_id:\"f7115590d1ac76298bf0345bf8b52d4fe923c98708f6ba19f2646a8dccc5dd0b\" exit_status:137 exited_at:{seconds:1750483824 nanos:621576585}" Jun 21 05:30:24.688026 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-02102bbec33283547fe27f623daf3153246f8069571354eb64b433faad3345a1-shm.mount: Deactivated successfully. Jun 21 05:30:24.691917 containerd[1528]: time="2025-06-21T05:30:24.691788226Z" level=info msg="received exit event sandbox_id:\"02102bbec33283547fe27f623daf3153246f8069571354eb64b433faad3345a1\" exit_status:137 exited_at:{seconds:1750483824 nanos:588371826}" Jun 21 05:30:24.693497 containerd[1528]: time="2025-06-21T05:30:24.693456128Z" level=info msg="TearDown network for sandbox \"f7115590d1ac76298bf0345bf8b52d4fe923c98708f6ba19f2646a8dccc5dd0b\" successfully" Jun 21 05:30:24.693792 containerd[1528]: time="2025-06-21T05:30:24.693628249Z" level=info msg="StopPodSandbox for \"f7115590d1ac76298bf0345bf8b52d4fe923c98708f6ba19f2646a8dccc5dd0b\" returns successfully" Jun 21 05:30:24.696203 containerd[1528]: time="2025-06-21T05:30:24.695446183Z" level=info msg="TearDown network for sandbox \"02102bbec33283547fe27f623daf3153246f8069571354eb64b433faad3345a1\" successfully" Jun 21 05:30:24.696203 containerd[1528]: time="2025-06-21T05:30:24.695506195Z" level=info msg="StopPodSandbox for \"02102bbec33283547fe27f623daf3153246f8069571354eb64b433faad3345a1\" returns successfully" Jun 21 05:30:24.757969 kubelet[2678]: I0621 05:30:24.756901 2678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3a892538-ae7a-4d13-bf49-8f7bc0eb7436-lib-modules\") pod \"3a892538-ae7a-4d13-bf49-8f7bc0eb7436\" (UID: \"3a892538-ae7a-4d13-bf49-8f7bc0eb7436\") " Jun 21 05:30:24.757969 kubelet[2678]: I0621 05:30:24.757106 2678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3a892538-ae7a-4d13-bf49-8f7bc0eb7436-bpf-maps\") pod \"3a892538-ae7a-4d13-bf49-8f7bc0eb7436\" (UID: \"3a892538-ae7a-4d13-bf49-8f7bc0eb7436\") " Jun 21 05:30:24.757969 kubelet[2678]: I0621 05:30:24.757019 2678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a892538-ae7a-4d13-bf49-8f7bc0eb7436-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "3a892538-ae7a-4d13-bf49-8f7bc0eb7436" (UID: "3a892538-ae7a-4d13-bf49-8f7bc0eb7436"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 21 05:30:24.757969 kubelet[2678]: I0621 05:30:24.757187 2678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a892538-ae7a-4d13-bf49-8f7bc0eb7436-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "3a892538-ae7a-4d13-bf49-8f7bc0eb7436" (UID: "3a892538-ae7a-4d13-bf49-8f7bc0eb7436"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 21 05:30:24.757969 kubelet[2678]: I0621 05:30:24.757212 2678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3a892538-ae7a-4d13-bf49-8f7bc0eb7436-cilium-run\") pod \"3a892538-ae7a-4d13-bf49-8f7bc0eb7436\" (UID: \"3a892538-ae7a-4d13-bf49-8f7bc0eb7436\") " Jun 21 05:30:24.757969 kubelet[2678]: I0621 05:30:24.757237 2678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3a892538-ae7a-4d13-bf49-8f7bc0eb7436-host-proc-sys-net\") pod \"3a892538-ae7a-4d13-bf49-8f7bc0eb7436\" (UID: \"3a892538-ae7a-4d13-bf49-8f7bc0eb7436\") " Jun 21 05:30:24.759799 kubelet[2678]: I0621 05:30:24.757277 2678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a892538-ae7a-4d13-bf49-8f7bc0eb7436-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "3a892538-ae7a-4d13-bf49-8f7bc0eb7436" (UID: "3a892538-ae7a-4d13-bf49-8f7bc0eb7436"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 21 05:30:24.759799 kubelet[2678]: I0621 05:30:24.757322 2678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a892538-ae7a-4d13-bf49-8f7bc0eb7436-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "3a892538-ae7a-4d13-bf49-8f7bc0eb7436" (UID: "3a892538-ae7a-4d13-bf49-8f7bc0eb7436"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 21 05:30:24.759799 kubelet[2678]: I0621 05:30:24.757350 2678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3a892538-ae7a-4d13-bf49-8f7bc0eb7436-hostproc\") pod \"3a892538-ae7a-4d13-bf49-8f7bc0eb7436\" (UID: \"3a892538-ae7a-4d13-bf49-8f7bc0eb7436\") " Jun 21 05:30:24.759799 kubelet[2678]: I0621 05:30:24.757376 2678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3a892538-ae7a-4d13-bf49-8f7bc0eb7436-host-proc-sys-kernel\") pod \"3a892538-ae7a-4d13-bf49-8f7bc0eb7436\" (UID: \"3a892538-ae7a-4d13-bf49-8f7bc0eb7436\") " Jun 21 05:30:24.759799 kubelet[2678]: I0621 05:30:24.757406 2678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a892538-ae7a-4d13-bf49-8f7bc0eb7436-hostproc" (OuterVolumeSpecName: "hostproc") pod "3a892538-ae7a-4d13-bf49-8f7bc0eb7436" (UID: "3a892538-ae7a-4d13-bf49-8f7bc0eb7436"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 21 05:30:24.759939 kubelet[2678]: I0621 05:30:24.757503 2678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a892538-ae7a-4d13-bf49-8f7bc0eb7436-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "3a892538-ae7a-4d13-bf49-8f7bc0eb7436" (UID: "3a892538-ae7a-4d13-bf49-8f7bc0eb7436"). 
InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 21 05:30:24.759939 kubelet[2678]: I0621 05:30:24.757532 2678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c2tcg\" (UniqueName: \"kubernetes.io/projected/4d5b46bc-8771-46d9-821a-5cb1cb3655d8-kube-api-access-c2tcg\") pod \"4d5b46bc-8771-46d9-821a-5cb1cb3655d8\" (UID: \"4d5b46bc-8771-46d9-821a-5cb1cb3655d8\") " Jun 21 05:30:24.759939 kubelet[2678]: I0621 05:30:24.757561 2678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4d5b46bc-8771-46d9-821a-5cb1cb3655d8-cilium-config-path\") pod \"4d5b46bc-8771-46d9-821a-5cb1cb3655d8\" (UID: \"4d5b46bc-8771-46d9-821a-5cb1cb3655d8\") " Jun 21 05:30:24.759939 kubelet[2678]: I0621 05:30:24.758490 2678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3a892538-ae7a-4d13-bf49-8f7bc0eb7436-cni-path\") pod \"3a892538-ae7a-4d13-bf49-8f7bc0eb7436\" (UID: \"3a892538-ae7a-4d13-bf49-8f7bc0eb7436\") " Jun 21 05:30:24.759939 kubelet[2678]: I0621 05:30:24.758531 2678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2q4qt\" (UniqueName: \"kubernetes.io/projected/3a892538-ae7a-4d13-bf49-8f7bc0eb7436-kube-api-access-2q4qt\") pod \"3a892538-ae7a-4d13-bf49-8f7bc0eb7436\" (UID: \"3a892538-ae7a-4d13-bf49-8f7bc0eb7436\") " Jun 21 05:30:24.759939 kubelet[2678]: I0621 05:30:24.758549 2678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3a892538-ae7a-4d13-bf49-8f7bc0eb7436-etc-cni-netd\") pod \"3a892538-ae7a-4d13-bf49-8f7bc0eb7436\" (UID: \"3a892538-ae7a-4d13-bf49-8f7bc0eb7436\") " Jun 21 05:30:24.760151 kubelet[2678]: I0621 05:30:24.758563 2678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3a892538-ae7a-4d13-bf49-8f7bc0eb7436-xtables-lock\") pod \"3a892538-ae7a-4d13-bf49-8f7bc0eb7436\" (UID: \"3a892538-ae7a-4d13-bf49-8f7bc0eb7436\") " Jun 21 05:30:24.760151 kubelet[2678]: I0621 05:30:24.758577 2678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3a892538-ae7a-4d13-bf49-8f7bc0eb7436-cilium-cgroup\") pod \"3a892538-ae7a-4d13-bf49-8f7bc0eb7436\" (UID: \"3a892538-ae7a-4d13-bf49-8f7bc0eb7436\") " Jun 21 05:30:24.760151 kubelet[2678]: I0621 05:30:24.758597 2678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3a892538-ae7a-4d13-bf49-8f7bc0eb7436-cilium-config-path\") pod \"3a892538-ae7a-4d13-bf49-8f7bc0eb7436\" (UID: \"3a892538-ae7a-4d13-bf49-8f7bc0eb7436\") " Jun 21 05:30:24.760151 kubelet[2678]: I0621 05:30:24.758613 2678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3a892538-ae7a-4d13-bf49-8f7bc0eb7436-hubble-tls\") pod \"3a892538-ae7a-4d13-bf49-8f7bc0eb7436\" (UID: \"3a892538-ae7a-4d13-bf49-8f7bc0eb7436\") " Jun 21 05:30:24.760151 kubelet[2678]: I0621 05:30:24.758633 2678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3a892538-ae7a-4d13-bf49-8f7bc0eb7436-clustermesh-secrets\") pod 
\"3a892538-ae7a-4d13-bf49-8f7bc0eb7436\" (UID: \"3a892538-ae7a-4d13-bf49-8f7bc0eb7436\") " Jun 21 05:30:24.760151 kubelet[2678]: I0621 05:30:24.758675 2678 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3a892538-ae7a-4d13-bf49-8f7bc0eb7436-lib-modules\") on node \"ci-4372.0.0-d-47135505f9\" DevicePath \"\"" Jun 21 05:30:24.760317 kubelet[2678]: I0621 05:30:24.758688 2678 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3a892538-ae7a-4d13-bf49-8f7bc0eb7436-cilium-run\") on node \"ci-4372.0.0-d-47135505f9\" DevicePath \"\"" Jun 21 05:30:24.760317 kubelet[2678]: I0621 05:30:24.758698 2678 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3a892538-ae7a-4d13-bf49-8f7bc0eb7436-bpf-maps\") on node \"ci-4372.0.0-d-47135505f9\" DevicePath \"\"" Jun 21 05:30:24.760317 kubelet[2678]: I0621 05:30:24.758712 2678 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3a892538-ae7a-4d13-bf49-8f7bc0eb7436-host-proc-sys-net\") on node \"ci-4372.0.0-d-47135505f9\" DevicePath \"\"" Jun 21 05:30:24.760317 kubelet[2678]: I0621 05:30:24.758720 2678 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3a892538-ae7a-4d13-bf49-8f7bc0eb7436-hostproc\") on node \"ci-4372.0.0-d-47135505f9\" DevicePath \"\"" Jun 21 05:30:24.760317 kubelet[2678]: I0621 05:30:24.758729 2678 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3a892538-ae7a-4d13-bf49-8f7bc0eb7436-host-proc-sys-kernel\") on node \"ci-4372.0.0-d-47135505f9\" DevicePath \"\"" Jun 21 05:30:24.761477 kubelet[2678]: I0621 05:30:24.761413 2678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a892538-ae7a-4d13-bf49-8f7bc0eb7436-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "3a892538-ae7a-4d13-bf49-8f7bc0eb7436" (UID: "3a892538-ae7a-4d13-bf49-8f7bc0eb7436"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 21 05:30:24.762595 kubelet[2678]: I0621 05:30:24.762551 2678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a892538-ae7a-4d13-bf49-8f7bc0eb7436-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "3a892538-ae7a-4d13-bf49-8f7bc0eb7436" (UID: "3a892538-ae7a-4d13-bf49-8f7bc0eb7436"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 21 05:30:24.762775 kubelet[2678]: I0621 05:30:24.762731 2678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a892538-ae7a-4d13-bf49-8f7bc0eb7436-cni-path" (OuterVolumeSpecName: "cni-path") pod "3a892538-ae7a-4d13-bf49-8f7bc0eb7436" (UID: "3a892538-ae7a-4d13-bf49-8f7bc0eb7436"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 21 05:30:24.763142 kubelet[2678]: I0621 05:30:24.763085 2678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a892538-ae7a-4d13-bf49-8f7bc0eb7436-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "3a892538-ae7a-4d13-bf49-8f7bc0eb7436" (UID: "3a892538-ae7a-4d13-bf49-8f7bc0eb7436"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 21 05:30:24.776787 kubelet[2678]: I0621 05:30:24.776708 2678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d5b46bc-8771-46d9-821a-5cb1cb3655d8-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "4d5b46bc-8771-46d9-821a-5cb1cb3655d8" (UID: "4d5b46bc-8771-46d9-821a-5cb1cb3655d8"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jun 21 05:30:24.777252 kubelet[2678]: I0621 05:30:24.777209 2678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a892538-ae7a-4d13-bf49-8f7bc0eb7436-kube-api-access-2q4qt" (OuterVolumeSpecName: "kube-api-access-2q4qt") pod "3a892538-ae7a-4d13-bf49-8f7bc0eb7436" (UID: "3a892538-ae7a-4d13-bf49-8f7bc0eb7436"). InnerVolumeSpecName "kube-api-access-2q4qt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jun 21 05:30:24.778494 kubelet[2678]: I0621 05:30:24.778447 2678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a892538-ae7a-4d13-bf49-8f7bc0eb7436-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "3a892538-ae7a-4d13-bf49-8f7bc0eb7436" (UID: "3a892538-ae7a-4d13-bf49-8f7bc0eb7436"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jun 21 05:30:24.780285 kubelet[2678]: I0621 05:30:24.780236 2678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a892538-ae7a-4d13-bf49-8f7bc0eb7436-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "3a892538-ae7a-4d13-bf49-8f7bc0eb7436" (UID: "3a892538-ae7a-4d13-bf49-8f7bc0eb7436"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jun 21 05:30:24.781211 kubelet[2678]: I0621 05:30:24.780889 2678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d5b46bc-8771-46d9-821a-5cb1cb3655d8-kube-api-access-c2tcg" (OuterVolumeSpecName: "kube-api-access-c2tcg") pod "4d5b46bc-8771-46d9-821a-5cb1cb3655d8" (UID: "4d5b46bc-8771-46d9-821a-5cb1cb3655d8"). InnerVolumeSpecName "kube-api-access-c2tcg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jun 21 05:30:24.782054 kubelet[2678]: I0621 05:30:24.781848 2678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a892538-ae7a-4d13-bf49-8f7bc0eb7436-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "3a892538-ae7a-4d13-bf49-8f7bc0eb7436" (UID: "3a892538-ae7a-4d13-bf49-8f7bc0eb7436"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jun 21 05:30:24.859441 kubelet[2678]: I0621 05:30:24.859383 2678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c2tcg\" (UniqueName: \"kubernetes.io/projected/4d5b46bc-8771-46d9-821a-5cb1cb3655d8-kube-api-access-c2tcg\") on node \"ci-4372.0.0-d-47135505f9\" DevicePath \"\"" Jun 21 05:30:24.859708 kubelet[2678]: I0621 05:30:24.859682 2678 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3a892538-ae7a-4d13-bf49-8f7bc0eb7436-cni-path\") on node \"ci-4372.0.0-d-47135505f9\" DevicePath \"\"" Jun 21 05:30:24.859811 kubelet[2678]: I0621 05:30:24.859796 2678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2q4qt\" (UniqueName: \"kubernetes.io/projected/3a892538-ae7a-4d13-bf49-8f7bc0eb7436-kube-api-access-2q4qt\") on node \"ci-4372.0.0-d-47135505f9\" DevicePath \"\"" Jun 21 05:30:24.860009 kubelet[2678]: I0621 05:30:24.859887 2678 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3a892538-ae7a-4d13-bf49-8f7bc0eb7436-etc-cni-netd\") on node \"ci-4372.0.0-d-47135505f9\" DevicePath \"\"" Jun 21 05:30:24.860009 kubelet[2678]: I0621 05:30:24.859906 2678 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3a892538-ae7a-4d13-bf49-8f7bc0eb7436-xtables-lock\") on node \"ci-4372.0.0-d-47135505f9\" DevicePath \"\"" Jun 21 05:30:24.860009 kubelet[2678]: I0621 05:30:24.859919 2678 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4d5b46bc-8771-46d9-821a-5cb1cb3655d8-cilium-config-path\") on node \"ci-4372.0.0-d-47135505f9\" DevicePath \"\"" Jun 21 05:30:24.860009 kubelet[2678]: I0621 05:30:24.859937 2678 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3a892538-ae7a-4d13-bf49-8f7bc0eb7436-cilium-config-path\") on node \"ci-4372.0.0-d-47135505f9\" DevicePath \"\"" Jun 21 05:30:24.860009 kubelet[2678]: I0621 05:30:24.859954 2678 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3a892538-ae7a-4d13-bf49-8f7bc0eb7436-cilium-cgroup\") on node \"ci-4372.0.0-d-47135505f9\" DevicePath \"\"" Jun 21 05:30:24.860009 kubelet[2678]: I0621 05:30:24.859969 2678 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3a892538-ae7a-4d13-bf49-8f7bc0eb7436-clustermesh-secrets\") on node \"ci-4372.0.0-d-47135505f9\" DevicePath \"\"" Jun 21 05:30:24.860009 kubelet[2678]: I0621 05:30:24.859982 2678 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3a892538-ae7a-4d13-bf49-8f7bc0eb7436-hubble-tls\") on node \"ci-4372.0.0-d-47135505f9\" DevicePath \"\"" Jun 21 05:30:24.971462 systemd[1]: Removed slice kubepods-besteffort-pod4d5b46bc_8771_46d9_821a_5cb1cb3655d8.slice - libcontainer container kubepods-besteffort-pod4d5b46bc_8771_46d9_821a_5cb1cb3655d8.slice. Jun 21 05:30:24.971819 systemd[1]: kubepods-besteffort-pod4d5b46bc_8771_46d9_821a_5cb1cb3655d8.slice: Consumed 549ms CPU time, 26.3M memory peak, 2.6M read from disk, 4K written to disk. Jun 21 05:30:24.973935 systemd[1]: Removed slice kubepods-burstable-pod3a892538_ae7a_4d13_bf49_8f7bc0eb7436.slice - libcontainer container kubepods-burstable-pod3a892538_ae7a_4d13_bf49_8f7bc0eb7436.slice. 
Jun 21 05:30:24.974099 systemd[1]: kubepods-burstable-pod3a892538_ae7a_4d13_bf49_8f7bc0eb7436.slice: Consumed 10.006s CPU time, 167.8M memory peak, 45M read from disk, 13.3M written to disk. Jun 21 05:30:25.392699 kubelet[2678]: I0621 05:30:25.392658 2678 scope.go:117] "RemoveContainer" containerID="e5340475fde70979872be8b5a3d451f30f1219eb58cd347647f1b7888a61c7d2" Jun 21 05:30:25.403580 containerd[1528]: time="2025-06-21T05:30:25.403491434Z" level=info msg="RemoveContainer for \"e5340475fde70979872be8b5a3d451f30f1219eb58cd347647f1b7888a61c7d2\"" Jun 21 05:30:25.425066 containerd[1528]: time="2025-06-21T05:30:25.424928922Z" level=info msg="RemoveContainer for \"e5340475fde70979872be8b5a3d451f30f1219eb58cd347647f1b7888a61c7d2\" returns successfully" Jun 21 05:30:25.448205 kubelet[2678]: I0621 05:30:25.448056 2678 scope.go:117] "RemoveContainer" containerID="8f9e19426d1bcece875bc3898d36fef50ba9e4400b256c50da9b74b3c86fb3d3" Jun 21 05:30:25.454444 containerd[1528]: time="2025-06-21T05:30:25.454394699Z" level=info msg="RemoveContainer for \"8f9e19426d1bcece875bc3898d36fef50ba9e4400b256c50da9b74b3c86fb3d3\"" Jun 21 05:30:25.466365 containerd[1528]: time="2025-06-21T05:30:25.466272900Z" level=info msg="RemoveContainer for \"8f9e19426d1bcece875bc3898d36fef50ba9e4400b256c50da9b74b3c86fb3d3\" returns successfully" Jun 21 05:30:25.467036 kubelet[2678]: I0621 05:30:25.466885 2678 scope.go:117] "RemoveContainer" containerID="ef7e5f640033dccb5e408953847b2b10266742339fb5e4d206bad072c3062772" Jun 21 05:30:25.480064 containerd[1528]: time="2025-06-21T05:30:25.479971351Z" level=info msg="RemoveContainer for \"ef7e5f640033dccb5e408953847b2b10266742339fb5e4d206bad072c3062772\"" Jun 21 05:30:25.484860 containerd[1528]: time="2025-06-21T05:30:25.484783838Z" level=info msg="RemoveContainer for \"ef7e5f640033dccb5e408953847b2b10266742339fb5e4d206bad072c3062772\" returns successfully" Jun 21 05:30:25.485228 kubelet[2678]: I0621 05:30:25.485188 2678 scope.go:117] "RemoveContainer" containerID="b24ccd6c9ec8d040e81d335690e0b51adfac2e1c9d7ef10fa2b33ad1a5dce9a3" Jun 21 05:30:25.488430 containerd[1528]: time="2025-06-21T05:30:25.487677310Z" level=info msg="RemoveContainer for \"b24ccd6c9ec8d040e81d335690e0b51adfac2e1c9d7ef10fa2b33ad1a5dce9a3\"" Jun 21 05:30:25.492999 containerd[1528]: time="2025-06-21T05:30:25.492942599Z" level=info msg="RemoveContainer for \"b24ccd6c9ec8d040e81d335690e0b51adfac2e1c9d7ef10fa2b33ad1a5dce9a3\" returns successfully" Jun 21 05:30:25.493593 kubelet[2678]: I0621 05:30:25.493525 2678 scope.go:117] "RemoveContainer" containerID="45be10e72f92db15bb911ea51db8f6518cdcb1f31983bc88523a27a151797f7e" Jun 21 05:30:25.496049 containerd[1528]: time="2025-06-21T05:30:25.495995402Z" level=info msg="RemoveContainer for \"45be10e72f92db15bb911ea51db8f6518cdcb1f31983bc88523a27a151797f7e\"" Jun 21 05:30:25.499880 containerd[1528]: time="2025-06-21T05:30:25.499812062Z" level=info msg="RemoveContainer for \"45be10e72f92db15bb911ea51db8f6518cdcb1f31983bc88523a27a151797f7e\" returns successfully" Jun 21 05:30:25.500388 kubelet[2678]: I0621 05:30:25.500163 2678 scope.go:117] "RemoveContainer" containerID="e5340475fde70979872be8b5a3d451f30f1219eb58cd347647f1b7888a61c7d2" Jun 21 05:30:25.500622 containerd[1528]: time="2025-06-21T05:30:25.500509119Z" level=error msg="ContainerStatus for \"e5340475fde70979872be8b5a3d451f30f1219eb58cd347647f1b7888a61c7d2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e5340475fde70979872be8b5a3d451f30f1219eb58cd347647f1b7888a61c7d2\": not 
found" Jun 21 05:30:25.501578 kubelet[2678]: E0621 05:30:25.501513 2678 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e5340475fde70979872be8b5a3d451f30f1219eb58cd347647f1b7888a61c7d2\": not found" containerID="e5340475fde70979872be8b5a3d451f30f1219eb58cd347647f1b7888a61c7d2" Jun 21 05:30:25.502687 kubelet[2678]: I0621 05:30:25.502564 2678 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e5340475fde70979872be8b5a3d451f30f1219eb58cd347647f1b7888a61c7d2"} err="failed to get container status \"e5340475fde70979872be8b5a3d451f30f1219eb58cd347647f1b7888a61c7d2\": rpc error: code = NotFound desc = an error occurred when try to find container \"e5340475fde70979872be8b5a3d451f30f1219eb58cd347647f1b7888a61c7d2\": not found" Jun 21 05:30:25.502873 kubelet[2678]: I0621 05:30:25.502718 2678 scope.go:117] "RemoveContainer" containerID="8f9e19426d1bcece875bc3898d36fef50ba9e4400b256c50da9b74b3c86fb3d3" Jun 21 05:30:25.503227 containerd[1528]: time="2025-06-21T05:30:25.503153882Z" level=error msg="ContainerStatus for \"8f9e19426d1bcece875bc3898d36fef50ba9e4400b256c50da9b74b3c86fb3d3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8f9e19426d1bcece875bc3898d36fef50ba9e4400b256c50da9b74b3c86fb3d3\": not found" Jun 21 05:30:25.503549 kubelet[2678]: E0621 05:30:25.503504 2678 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8f9e19426d1bcece875bc3898d36fef50ba9e4400b256c50da9b74b3c86fb3d3\": not found" containerID="8f9e19426d1bcece875bc3898d36fef50ba9e4400b256c50da9b74b3c86fb3d3" Jun 21 05:30:25.503642 kubelet[2678]: I0621 05:30:25.503546 2678 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8f9e19426d1bcece875bc3898d36fef50ba9e4400b256c50da9b74b3c86fb3d3"} err="failed to get container status \"8f9e19426d1bcece875bc3898d36fef50ba9e4400b256c50da9b74b3c86fb3d3\": rpc error: code = NotFound desc = an error occurred when try to find container \"8f9e19426d1bcece875bc3898d36fef50ba9e4400b256c50da9b74b3c86fb3d3\": not found" Jun 21 05:30:25.503642 kubelet[2678]: I0621 05:30:25.503570 2678 scope.go:117] "RemoveContainer" containerID="ef7e5f640033dccb5e408953847b2b10266742339fb5e4d206bad072c3062772" Jun 21 05:30:25.504026 containerd[1528]: time="2025-06-21T05:30:25.503881730Z" level=error msg="ContainerStatus for \"ef7e5f640033dccb5e408953847b2b10266742339fb5e4d206bad072c3062772\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ef7e5f640033dccb5e408953847b2b10266742339fb5e4d206bad072c3062772\": not found" Jun 21 05:30:25.504095 kubelet[2678]: E0621 05:30:25.504024 2678 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ef7e5f640033dccb5e408953847b2b10266742339fb5e4d206bad072c3062772\": not found" containerID="ef7e5f640033dccb5e408953847b2b10266742339fb5e4d206bad072c3062772" Jun 21 05:30:25.504164 kubelet[2678]: I0621 05:30:25.504100 2678 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ef7e5f640033dccb5e408953847b2b10266742339fb5e4d206bad072c3062772"} err="failed to get container status \"ef7e5f640033dccb5e408953847b2b10266742339fb5e4d206bad072c3062772\": rpc error: code = NotFound desc = an 
error occurred when try to find container \"ef7e5f640033dccb5e408953847b2b10266742339fb5e4d206bad072c3062772\": not found" Jun 21 05:30:25.504164 kubelet[2678]: I0621 05:30:25.504118 2678 scope.go:117] "RemoveContainer" containerID="b24ccd6c9ec8d040e81d335690e0b51adfac2e1c9d7ef10fa2b33ad1a5dce9a3" Jun 21 05:30:25.505066 kubelet[2678]: E0621 05:30:25.504409 2678 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b24ccd6c9ec8d040e81d335690e0b51adfac2e1c9d7ef10fa2b33ad1a5dce9a3\": not found" containerID="b24ccd6c9ec8d040e81d335690e0b51adfac2e1c9d7ef10fa2b33ad1a5dce9a3" Jun 21 05:30:25.505066 kubelet[2678]: I0621 05:30:25.504429 2678 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b24ccd6c9ec8d040e81d335690e0b51adfac2e1c9d7ef10fa2b33ad1a5dce9a3"} err="failed to get container status \"b24ccd6c9ec8d040e81d335690e0b51adfac2e1c9d7ef10fa2b33ad1a5dce9a3\": rpc error: code = NotFound desc = an error occurred when try to find container \"b24ccd6c9ec8d040e81d335690e0b51adfac2e1c9d7ef10fa2b33ad1a5dce9a3\": not found" Jun 21 05:30:25.505066 kubelet[2678]: I0621 05:30:25.504446 2678 scope.go:117] "RemoveContainer" containerID="45be10e72f92db15bb911ea51db8f6518cdcb1f31983bc88523a27a151797f7e" Jun 21 05:30:25.505066 kubelet[2678]: E0621 05:30:25.504807 2678 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"45be10e72f92db15bb911ea51db8f6518cdcb1f31983bc88523a27a151797f7e\": not found" containerID="45be10e72f92db15bb911ea51db8f6518cdcb1f31983bc88523a27a151797f7e" Jun 21 05:30:25.505066 kubelet[2678]: I0621 05:30:25.504859 2678 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"45be10e72f92db15bb911ea51db8f6518cdcb1f31983bc88523a27a151797f7e"} err="failed to get container status \"45be10e72f92db15bb911ea51db8f6518cdcb1f31983bc88523a27a151797f7e\": rpc error: code = NotFound desc = an error occurred when try to find container \"45be10e72f92db15bb911ea51db8f6518cdcb1f31983bc88523a27a151797f7e\": not found" Jun 21 05:30:25.505066 kubelet[2678]: I0621 05:30:25.504876 2678 scope.go:117] "RemoveContainer" containerID="25e15a28d16b7224aabc04e8f0c858776a61a6275b6dbc2230577dc753312bf6" Jun 21 05:30:25.505365 containerd[1528]: time="2025-06-21T05:30:25.504284106Z" level=error msg="ContainerStatus for \"b24ccd6c9ec8d040e81d335690e0b51adfac2e1c9d7ef10fa2b33ad1a5dce9a3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b24ccd6c9ec8d040e81d335690e0b51adfac2e1c9d7ef10fa2b33ad1a5dce9a3\": not found" Jun 21 05:30:25.505365 containerd[1528]: time="2025-06-21T05:30:25.504640579Z" level=error msg="ContainerStatus for \"45be10e72f92db15bb911ea51db8f6518cdcb1f31983bc88523a27a151797f7e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"45be10e72f92db15bb911ea51db8f6518cdcb1f31983bc88523a27a151797f7e\": not found" Jun 21 05:30:25.509697 containerd[1528]: time="2025-06-21T05:30:25.509643209Z" level=info msg="RemoveContainer for \"25e15a28d16b7224aabc04e8f0c858776a61a6275b6dbc2230577dc753312bf6\"" Jun 21 05:30:25.515192 containerd[1528]: time="2025-06-21T05:30:25.515089647Z" level=info msg="RemoveContainer for \"25e15a28d16b7224aabc04e8f0c858776a61a6275b6dbc2230577dc753312bf6\" returns successfully" Jun 21 05:30:25.515734 kubelet[2678]: I0621 05:30:25.515691 
2678 scope.go:117] "RemoveContainer" containerID="25e15a28d16b7224aabc04e8f0c858776a61a6275b6dbc2230577dc753312bf6" Jun 21 05:30:25.516102 containerd[1528]: time="2025-06-21T05:30:25.516067285Z" level=error msg="ContainerStatus for \"25e15a28d16b7224aabc04e8f0c858776a61a6275b6dbc2230577dc753312bf6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"25e15a28d16b7224aabc04e8f0c858776a61a6275b6dbc2230577dc753312bf6\": not found" Jun 21 05:30:25.516569 kubelet[2678]: E0621 05:30:25.516528 2678 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"25e15a28d16b7224aabc04e8f0c858776a61a6275b6dbc2230577dc753312bf6\": not found" containerID="25e15a28d16b7224aabc04e8f0c858776a61a6275b6dbc2230577dc753312bf6" Jun 21 05:30:25.516656 kubelet[2678]: I0621 05:30:25.516592 2678 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"25e15a28d16b7224aabc04e8f0c858776a61a6275b6dbc2230577dc753312bf6"} err="failed to get container status \"25e15a28d16b7224aabc04e8f0c858776a61a6275b6dbc2230577dc753312bf6\": rpc error: code = NotFound desc = an error occurred when try to find container \"25e15a28d16b7224aabc04e8f0c858776a61a6275b6dbc2230577dc753312bf6\": not found" Jun 21 05:30:25.545219 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f7115590d1ac76298bf0345bf8b52d4fe923c98708f6ba19f2646a8dccc5dd0b-shm.mount: Deactivated successfully. Jun 21 05:30:25.546026 systemd[1]: var-lib-kubelet-pods-3a892538\x2dae7a\x2d4d13\x2dbf49\x2d8f7bc0eb7436-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2q4qt.mount: Deactivated successfully. Jun 21 05:30:25.546352 systemd[1]: var-lib-kubelet-pods-4d5b46bc\x2d8771\x2d46d9\x2d821a\x2d5cb1cb3655d8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dc2tcg.mount: Deactivated successfully. Jun 21 05:30:25.546608 systemd[1]: var-lib-kubelet-pods-3a892538\x2dae7a\x2d4d13\x2dbf49\x2d8f7bc0eb7436-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jun 21 05:30:25.546822 systemd[1]: var-lib-kubelet-pods-3a892538\x2dae7a\x2d4d13\x2dbf49\x2d8f7bc0eb7436-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jun 21 05:30:26.376842 sshd[4253]: Connection closed by 139.178.68.195 port 36392 Jun 21 05:30:26.378578 sshd-session[4251]: pam_unix(sshd:session): session closed for user core Jun 21 05:30:26.389891 systemd[1]: sshd@24-64.23.242.202:22-139.178.68.195:36392.service: Deactivated successfully. Jun 21 05:30:26.395517 systemd[1]: session-25.scope: Deactivated successfully. Jun 21 05:30:26.396969 systemd-logind[1504]: Session 25 logged out. Waiting for processes to exit. Jun 21 05:30:26.400950 systemd-logind[1504]: Removed session 25. Jun 21 05:30:26.402653 systemd[1]: Started sshd@25-64.23.242.202:22-139.178.68.195:55314.service - OpenSSH per-connection server daemon (139.178.68.195:55314). Jun 21 05:30:26.485358 sshd[4408]: Accepted publickey for core from 139.178.68.195 port 55314 ssh2: RSA SHA256:esrwHbjCvD8R4I7sQRiHa5Rpu9l1igA0BMtQzkIUH4o Jun 21 05:30:26.487483 sshd-session[4408]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 05:30:26.495395 systemd-logind[1504]: New session 26 of user core. Jun 21 05:30:26.502681 systemd[1]: Started session-26.scope - Session 26 of User core. 
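The RemoveContainer / ContainerStatus exchange logged above is the kubelet driving containerd's CRI RuntimeService: once RemoveContainer for an ID has returned, a follow-up ContainerStatus for that same ID comes back as gRPC NotFound, which the kubelet records as the non-fatal "DeleteContainer returned error" entries. A minimal Go sketch of that status probe, assuming the default containerd CRI socket path and the k8s.io/cri-api v1 client (neither is stated in the log itself):

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/credentials/insecure"
	"google.golang.org/grpc/status"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Assumption: containerd's CRI plugin is listening on its default socket.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Container ID taken from the log above; it has already been removed.
	id := "e5340475fde70979872be8b5a3d451f30f1219eb58cd347647f1b7888a61c7d2"
	resp, err := rt.ContainerStatus(ctx, &runtimeapi.ContainerStatusRequest{ContainerId: id})
	if err != nil {
		if status.Code(err) == codes.NotFound {
			// The same NotFound the kubelet logs as "DeleteContainer returned error".
			fmt.Println("container already gone:", id)
			return
		}
		panic(err)
	}
	fmt.Println("container state:", resp.GetStatus().GetState())
}

From the command line, crictl inspect <container-id> exercises the same ContainerStatus RPC.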
Jun 21 05:30:26.958275 kubelet[2678]: I0621 05:30:26.958167 2678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a892538-ae7a-4d13-bf49-8f7bc0eb7436" path="/var/lib/kubelet/pods/3a892538-ae7a-4d13-bf49-8f7bc0eb7436/volumes" Jun 21 05:30:26.959021 kubelet[2678]: I0621 05:30:26.958976 2678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d5b46bc-8771-46d9-821a-5cb1cb3655d8" path="/var/lib/kubelet/pods/4d5b46bc-8771-46d9-821a-5cb1cb3655d8/volumes" Jun 21 05:30:27.031574 sshd[4410]: Connection closed by 139.178.68.195 port 55314 Jun 21 05:30:27.032861 sshd-session[4408]: pam_unix(sshd:session): session closed for user core Jun 21 05:30:27.053991 systemd[1]: sshd@25-64.23.242.202:22-139.178.68.195:55314.service: Deactivated successfully. Jun 21 05:30:27.059153 systemd[1]: session-26.scope: Deactivated successfully. Jun 21 05:30:27.062891 systemd-logind[1504]: Session 26 logged out. Waiting for processes to exit. Jun 21 05:30:27.064098 kubelet[2678]: E0621 05:30:27.063610 2678 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jun 21 05:30:27.069604 systemd[1]: Started sshd@26-64.23.242.202:22-139.178.68.195:55326.service - OpenSSH per-connection server daemon (139.178.68.195:55326). Jun 21 05:30:27.073687 systemd-logind[1504]: Removed session 26. Jun 21 05:30:27.106082 kubelet[2678]: E0621 05:30:27.105097 2678 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4d5b46bc-8771-46d9-821a-5cb1cb3655d8" containerName="cilium-operator" Jun 21 05:30:27.106082 kubelet[2678]: E0621 05:30:27.105151 2678 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3a892538-ae7a-4d13-bf49-8f7bc0eb7436" containerName="apply-sysctl-overwrites" Jun 21 05:30:27.106082 kubelet[2678]: E0621 05:30:27.105162 2678 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3a892538-ae7a-4d13-bf49-8f7bc0eb7436" containerName="mount-bpf-fs" Jun 21 05:30:27.106082 kubelet[2678]: E0621 05:30:27.105173 2678 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3a892538-ae7a-4d13-bf49-8f7bc0eb7436" containerName="clean-cilium-state" Jun 21 05:30:27.106082 kubelet[2678]: E0621 05:30:27.105182 2678 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3a892538-ae7a-4d13-bf49-8f7bc0eb7436" containerName="cilium-agent" Jun 21 05:30:27.106082 kubelet[2678]: E0621 05:30:27.105195 2678 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3a892538-ae7a-4d13-bf49-8f7bc0eb7436" containerName="mount-cgroup" Jun 21 05:30:27.106082 kubelet[2678]: I0621 05:30:27.105232 2678 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d5b46bc-8771-46d9-821a-5cb1cb3655d8" containerName="cilium-operator" Jun 21 05:30:27.106082 kubelet[2678]: I0621 05:30:27.105241 2678 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a892538-ae7a-4d13-bf49-8f7bc0eb7436" containerName="cilium-agent" Jun 21 05:30:27.128672 systemd[1]: Created slice kubepods-burstable-podad7575aa_e454_47ee_8669_90ad34f04f02.slice - libcontainer container kubepods-burstable-podad7575aa_e454_47ee_8669_90ad34f04f02.slice. 
Jun 21 05:30:27.173163 sshd[4420]: Accepted publickey for core from 139.178.68.195 port 55326 ssh2: RSA SHA256:esrwHbjCvD8R4I7sQRiHa5Rpu9l1igA0BMtQzkIUH4o Jun 21 05:30:27.176162 sshd-session[4420]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 05:30:27.184334 kubelet[2678]: I0621 05:30:27.183532 2678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/ad7575aa-e454-47ee-8669-90ad34f04f02-cilium-ipsec-secrets\") pod \"cilium-5ljqt\" (UID: \"ad7575aa-e454-47ee-8669-90ad34f04f02\") " pod="kube-system/cilium-5ljqt" Jun 21 05:30:27.184726 kubelet[2678]: I0621 05:30:27.184639 2678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ad7575aa-e454-47ee-8669-90ad34f04f02-bpf-maps\") pod \"cilium-5ljqt\" (UID: \"ad7575aa-e454-47ee-8669-90ad34f04f02\") " pod="kube-system/cilium-5ljqt" Jun 21 05:30:27.184899 kubelet[2678]: I0621 05:30:27.184770 2678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ad7575aa-e454-47ee-8669-90ad34f04f02-xtables-lock\") pod \"cilium-5ljqt\" (UID: \"ad7575aa-e454-47ee-8669-90ad34f04f02\") " pod="kube-system/cilium-5ljqt" Jun 21 05:30:27.184899 kubelet[2678]: I0621 05:30:27.184797 2678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ad7575aa-e454-47ee-8669-90ad34f04f02-lib-modules\") pod \"cilium-5ljqt\" (UID: \"ad7575aa-e454-47ee-8669-90ad34f04f02\") " pod="kube-system/cilium-5ljqt" Jun 21 05:30:27.185170 kubelet[2678]: I0621 05:30:27.185070 2678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ad7575aa-e454-47ee-8669-90ad34f04f02-host-proc-sys-net\") pod \"cilium-5ljqt\" (UID: \"ad7575aa-e454-47ee-8669-90ad34f04f02\") " pod="kube-system/cilium-5ljqt" Jun 21 05:30:27.185170 kubelet[2678]: I0621 05:30:27.185147 2678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ad7575aa-e454-47ee-8669-90ad34f04f02-hubble-tls\") pod \"cilium-5ljqt\" (UID: \"ad7575aa-e454-47ee-8669-90ad34f04f02\") " pod="kube-system/cilium-5ljqt" Jun 21 05:30:27.186257 kubelet[2678]: I0621 05:30:27.185992 2678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ad7575aa-e454-47ee-8669-90ad34f04f02-cilium-cgroup\") pod \"cilium-5ljqt\" (UID: \"ad7575aa-e454-47ee-8669-90ad34f04f02\") " pod="kube-system/cilium-5ljqt" Jun 21 05:30:27.186257 kubelet[2678]: I0621 05:30:27.186161 2678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncglc\" (UniqueName: \"kubernetes.io/projected/ad7575aa-e454-47ee-8669-90ad34f04f02-kube-api-access-ncglc\") pod \"cilium-5ljqt\" (UID: \"ad7575aa-e454-47ee-8669-90ad34f04f02\") " pod="kube-system/cilium-5ljqt" Jun 21 05:30:27.186257 kubelet[2678]: I0621 05:30:27.186218 2678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ad7575aa-e454-47ee-8669-90ad34f04f02-clustermesh-secrets\") 
pod \"cilium-5ljqt\" (UID: \"ad7575aa-e454-47ee-8669-90ad34f04f02\") " pod="kube-system/cilium-5ljqt" Jun 21 05:30:27.188003 kubelet[2678]: I0621 05:30:27.186241 2678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ad7575aa-e454-47ee-8669-90ad34f04f02-cilium-config-path\") pod \"cilium-5ljqt\" (UID: \"ad7575aa-e454-47ee-8669-90ad34f04f02\") " pod="kube-system/cilium-5ljqt" Jun 21 05:30:27.188003 kubelet[2678]: I0621 05:30:27.186600 2678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ad7575aa-e454-47ee-8669-90ad34f04f02-cilium-run\") pod \"cilium-5ljqt\" (UID: \"ad7575aa-e454-47ee-8669-90ad34f04f02\") " pod="kube-system/cilium-5ljqt" Jun 21 05:30:27.188003 kubelet[2678]: I0621 05:30:27.187669 2678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ad7575aa-e454-47ee-8669-90ad34f04f02-host-proc-sys-kernel\") pod \"cilium-5ljqt\" (UID: \"ad7575aa-e454-47ee-8669-90ad34f04f02\") " pod="kube-system/cilium-5ljqt" Jun 21 05:30:27.188003 kubelet[2678]: I0621 05:30:27.187688 2678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ad7575aa-e454-47ee-8669-90ad34f04f02-hostproc\") pod \"cilium-5ljqt\" (UID: \"ad7575aa-e454-47ee-8669-90ad34f04f02\") " pod="kube-system/cilium-5ljqt" Jun 21 05:30:27.188003 kubelet[2678]: I0621 05:30:27.187702 2678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ad7575aa-e454-47ee-8669-90ad34f04f02-etc-cni-netd\") pod \"cilium-5ljqt\" (UID: \"ad7575aa-e454-47ee-8669-90ad34f04f02\") " pod="kube-system/cilium-5ljqt" Jun 21 05:30:27.188003 kubelet[2678]: I0621 05:30:27.187717 2678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ad7575aa-e454-47ee-8669-90ad34f04f02-cni-path\") pod \"cilium-5ljqt\" (UID: \"ad7575aa-e454-47ee-8669-90ad34f04f02\") " pod="kube-system/cilium-5ljqt" Jun 21 05:30:27.187740 systemd-logind[1504]: New session 27 of user core. Jun 21 05:30:27.190662 systemd[1]: Started session-27.scope - Session 27 of User core. Jun 21 05:30:27.254005 sshd[4422]: Connection closed by 139.178.68.195 port 55326 Jun 21 05:30:27.255538 sshd-session[4420]: pam_unix(sshd:session): session closed for user core Jun 21 05:30:27.266514 systemd[1]: sshd@26-64.23.242.202:22-139.178.68.195:55326.service: Deactivated successfully. Jun 21 05:30:27.269151 systemd[1]: session-27.scope: Deactivated successfully. Jun 21 05:30:27.270268 systemd-logind[1504]: Session 27 logged out. Waiting for processes to exit. Jun 21 05:30:27.276691 systemd[1]: Started sshd@27-64.23.242.202:22-139.178.68.195:55334.service - OpenSSH per-connection server daemon (139.178.68.195:55334). Jun 21 05:30:27.278539 systemd-logind[1504]: Removed session 27. Jun 21 05:30:27.373642 sshd[4429]: Accepted publickey for core from 139.178.68.195 port 55334 ssh2: RSA SHA256:esrwHbjCvD8R4I7sQRiHa5Rpu9l1igA0BMtQzkIUH4o Jun 21 05:30:27.375487 sshd-session[4429]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 05:30:27.381193 systemd-logind[1504]: New session 28 of user core. 
Jun 21 05:30:27.388581 systemd[1]: Started session-28.scope - Session 28 of User core. Jun 21 05:30:27.438824 kubelet[2678]: E0621 05:30:27.438437 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 21 05:30:27.439538 containerd[1528]: time="2025-06-21T05:30:27.439195143Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5ljqt,Uid:ad7575aa-e454-47ee-8669-90ad34f04f02,Namespace:kube-system,Attempt:0,}" Jun 21 05:30:27.470574 containerd[1528]: time="2025-06-21T05:30:27.470499474Z" level=info msg="connecting to shim ecad9de5c20ced21843c1c597299e50961f8077549a5127a8f49e1ed132fd4e5" address="unix:///run/containerd/s/8f4ccf20e26dac9b2d16e74d4bd4dc33b3da58277cbb8e24ae96808d8eeeea06" namespace=k8s.io protocol=ttrpc version=3 Jun 21 05:30:27.498579 systemd[1]: Started cri-containerd-ecad9de5c20ced21843c1c597299e50961f8077549a5127a8f49e1ed132fd4e5.scope - libcontainer container ecad9de5c20ced21843c1c597299e50961f8077549a5127a8f49e1ed132fd4e5. Jun 21 05:30:27.554347 containerd[1528]: time="2025-06-21T05:30:27.554257862Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5ljqt,Uid:ad7575aa-e454-47ee-8669-90ad34f04f02,Namespace:kube-system,Attempt:0,} returns sandbox id \"ecad9de5c20ced21843c1c597299e50961f8077549a5127a8f49e1ed132fd4e5\"" Jun 21 05:30:27.555417 kubelet[2678]: E0621 05:30:27.555291 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 21 05:30:27.558062 containerd[1528]: time="2025-06-21T05:30:27.558011745Z" level=info msg="CreateContainer within sandbox \"ecad9de5c20ced21843c1c597299e50961f8077549a5127a8f49e1ed132fd4e5\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jun 21 05:30:27.564713 containerd[1528]: time="2025-06-21T05:30:27.564600991Z" level=info msg="Container ba539fb50ed2718333c792fbc3a9ec2564dbd1a570ed124e3671a53359319f5c: CDI devices from CRI Config.CDIDevices: []" Jun 21 05:30:27.574989 containerd[1528]: time="2025-06-21T05:30:27.574939445Z" level=info msg="CreateContainer within sandbox \"ecad9de5c20ced21843c1c597299e50961f8077549a5127a8f49e1ed132fd4e5\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"ba539fb50ed2718333c792fbc3a9ec2564dbd1a570ed124e3671a53359319f5c\"" Jun 21 05:30:27.576003 containerd[1528]: time="2025-06-21T05:30:27.575970804Z" level=info msg="StartContainer for \"ba539fb50ed2718333c792fbc3a9ec2564dbd1a570ed124e3671a53359319f5c\"" Jun 21 05:30:27.578168 containerd[1528]: time="2025-06-21T05:30:27.578106551Z" level=info msg="connecting to shim ba539fb50ed2718333c792fbc3a9ec2564dbd1a570ed124e3671a53359319f5c" address="unix:///run/containerd/s/8f4ccf20e26dac9b2d16e74d4bd4dc33b3da58277cbb8e24ae96808d8eeeea06" protocol=ttrpc version=3 Jun 21 05:30:27.603529 systemd[1]: Started cri-containerd-ba539fb50ed2718333c792fbc3a9ec2564dbd1a570ed124e3671a53359319f5c.scope - libcontainer container ba539fb50ed2718333c792fbc3a9ec2564dbd1a570ed124e3671a53359319f5c. Jun 21 05:30:27.645271 containerd[1528]: time="2025-06-21T05:30:27.645229134Z" level=info msg="StartContainer for \"ba539fb50ed2718333c792fbc3a9ec2564dbd1a570ed124e3671a53359319f5c\" returns successfully" Jun 21 05:30:27.664859 systemd[1]: cri-containerd-ba539fb50ed2718333c792fbc3a9ec2564dbd1a570ed124e3671a53359319f5c.scope: Deactivated successfully. 
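The RunPodSandbox → CreateContainer → StartContainer sequence above, followed by the short-lived mount-cgroup task exiting, is the usual CRI flow for an init container. A condensed Go sketch of the same three calls against the CRI RuntimeService; it assumes the image has already been pulled, and the image reference and command are placeholders, since the log does not record them:

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()

	// Sandbox metadata mirrors the log; the UID is the pod UID the kubelet assigned.
	sandboxCfg := &runtimeapi.PodSandboxConfig{
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name:      "cilium-5ljqt",
			Namespace: "kube-system",
			Uid:       "ad7575aa-e454-47ee-8669-90ad34f04f02",
			Attempt:   0,
		},
	}
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
	if err != nil {
		panic(err)
	}

	// Image and command are placeholders; the log does not record either.
	ctr, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "mount-cgroup", Attempt: 0},
			Image:    &runtimeapi.ImageSpec{Image: "quay.io/cilium/cilium:v1.15"},
			Command:  []string{"sh", "-ec", "echo mount-cgroup placeholder"},
		},
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		panic(err)
	}

	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: ctr.ContainerId}); err != nil {
		panic(err)
	}
	fmt.Println("sandbox:", sb.PodSandboxId, "container:", ctr.ContainerId)
}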
Jun 21 05:30:27.670609 containerd[1528]: time="2025-06-21T05:30:27.670533609Z" level=info msg="received exit event container_id:\"ba539fb50ed2718333c792fbc3a9ec2564dbd1a570ed124e3671a53359319f5c\" id:\"ba539fb50ed2718333c792fbc3a9ec2564dbd1a570ed124e3671a53359319f5c\" pid:4497 exited_at:{seconds:1750483827 nanos:670210157}" Jun 21 05:30:27.670758 containerd[1528]: time="2025-06-21T05:30:27.670720734Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ba539fb50ed2718333c792fbc3a9ec2564dbd1a570ed124e3671a53359319f5c\" id:\"ba539fb50ed2718333c792fbc3a9ec2564dbd1a570ed124e3671a53359319f5c\" pid:4497 exited_at:{seconds:1750483827 nanos:670210157}" Jun 21 05:30:27.952904 kubelet[2678]: E0621 05:30:27.952714 2678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7c65d6cfc9-95qfq" podUID="9ad298b5-9803-4ed9-b093-b5662165969f" Jun 21 05:30:28.420251 kubelet[2678]: E0621 05:30:28.418527 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 21 05:30:28.424909 containerd[1528]: time="2025-06-21T05:30:28.424334097Z" level=info msg="CreateContainer within sandbox \"ecad9de5c20ced21843c1c597299e50961f8077549a5127a8f49e1ed132fd4e5\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jun 21 05:30:28.438261 containerd[1528]: time="2025-06-21T05:30:28.437478209Z" level=info msg="Container 8d3e8eb609b0eface9c8ff0611da4d1e04e8985817d41756f8615ab60165feb0: CDI devices from CRI Config.CDIDevices: []" Jun 21 05:30:28.454368 containerd[1528]: time="2025-06-21T05:30:28.454231491Z" level=info msg="CreateContainer within sandbox \"ecad9de5c20ced21843c1c597299e50961f8077549a5127a8f49e1ed132fd4e5\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"8d3e8eb609b0eface9c8ff0611da4d1e04e8985817d41756f8615ab60165feb0\"" Jun 21 05:30:28.457170 containerd[1528]: time="2025-06-21T05:30:28.457100993Z" level=info msg="StartContainer for \"8d3e8eb609b0eface9c8ff0611da4d1e04e8985817d41756f8615ab60165feb0\"" Jun 21 05:30:28.460557 containerd[1528]: time="2025-06-21T05:30:28.460503096Z" level=info msg="connecting to shim 8d3e8eb609b0eface9c8ff0611da4d1e04e8985817d41756f8615ab60165feb0" address="unix:///run/containerd/s/8f4ccf20e26dac9b2d16e74d4bd4dc33b3da58277cbb8e24ae96808d8eeeea06" protocol=ttrpc version=3 Jun 21 05:30:28.502574 systemd[1]: Started cri-containerd-8d3e8eb609b0eface9c8ff0611da4d1e04e8985817d41756f8615ab60165feb0.scope - libcontainer container 8d3e8eb609b0eface9c8ff0611da4d1e04e8985817d41756f8615ab60165feb0. Jun 21 05:30:28.550195 containerd[1528]: time="2025-06-21T05:30:28.550146121Z" level=info msg="StartContainer for \"8d3e8eb609b0eface9c8ff0611da4d1e04e8985817d41756f8615ab60165feb0\" returns successfully" Jun 21 05:30:28.561393 systemd[1]: cri-containerd-8d3e8eb609b0eface9c8ff0611da4d1e04e8985817d41756f8615ab60165feb0.scope: Deactivated successfully. 
Jun 21 05:30:28.563974 containerd[1528]: time="2025-06-21T05:30:28.563898333Z" level=info msg="received exit event container_id:\"8d3e8eb609b0eface9c8ff0611da4d1e04e8985817d41756f8615ab60165feb0\" id:\"8d3e8eb609b0eface9c8ff0611da4d1e04e8985817d41756f8615ab60165feb0\" pid:4543 exited_at:{seconds:1750483828 nanos:563291158}" Jun 21 05:30:28.566663 containerd[1528]: time="2025-06-21T05:30:28.566612759Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8d3e8eb609b0eface9c8ff0611da4d1e04e8985817d41756f8615ab60165feb0\" id:\"8d3e8eb609b0eface9c8ff0611da4d1e04e8985817d41756f8615ab60165feb0\" pid:4543 exited_at:{seconds:1750483828 nanos:563291158}" Jun 21 05:30:28.595289 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8d3e8eb609b0eface9c8ff0611da4d1e04e8985817d41756f8615ab60165feb0-rootfs.mount: Deactivated successfully. Jun 21 05:30:29.014770 kubelet[2678]: I0621 05:30:29.014549 2678 setters.go:600] "Node became not ready" node="ci-4372.0.0-d-47135505f9" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-06-21T05:30:29Z","lastTransitionTime":"2025-06-21T05:30:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jun 21 05:30:29.426092 kubelet[2678]: E0621 05:30:29.426050 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 21 05:30:29.430009 containerd[1528]: time="2025-06-21T05:30:29.429935086Z" level=info msg="CreateContainer within sandbox \"ecad9de5c20ced21843c1c597299e50961f8077549a5127a8f49e1ed132fd4e5\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jun 21 05:30:29.445414 containerd[1528]: time="2025-06-21T05:30:29.445280135Z" level=info msg="Container 4294a0f88435ded4e9e5212345aa54e97c7fa7ddb76492000dc7c3b89a8622e7: CDI devices from CRI Config.CDIDevices: []" Jun 21 05:30:29.453719 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1556558252.mount: Deactivated successfully. Jun 21 05:30:29.463169 containerd[1528]: time="2025-06-21T05:30:29.463052887Z" level=info msg="CreateContainer within sandbox \"ecad9de5c20ced21843c1c597299e50961f8077549a5127a8f49e1ed132fd4e5\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"4294a0f88435ded4e9e5212345aa54e97c7fa7ddb76492000dc7c3b89a8622e7\"" Jun 21 05:30:29.466338 containerd[1528]: time="2025-06-21T05:30:29.464540402Z" level=info msg="StartContainer for \"4294a0f88435ded4e9e5212345aa54e97c7fa7ddb76492000dc7c3b89a8622e7\"" Jun 21 05:30:29.469753 containerd[1528]: time="2025-06-21T05:30:29.469688908Z" level=info msg="connecting to shim 4294a0f88435ded4e9e5212345aa54e97c7fa7ddb76492000dc7c3b89a8622e7" address="unix:///run/containerd/s/8f4ccf20e26dac9b2d16e74d4bd4dc33b3da58277cbb8e24ae96808d8eeeea06" protocol=ttrpc version=3 Jun 21 05:30:29.498578 systemd[1]: Started cri-containerd-4294a0f88435ded4e9e5212345aa54e97c7fa7ddb76492000dc7c3b89a8622e7.scope - libcontainer container 4294a0f88435ded4e9e5212345aa54e97c7fa7ddb76492000dc7c3b89a8622e7. 
Jun 21 05:30:29.548606 containerd[1528]: time="2025-06-21T05:30:29.548550188Z" level=info msg="StartContainer for \"4294a0f88435ded4e9e5212345aa54e97c7fa7ddb76492000dc7c3b89a8622e7\" returns successfully" Jun 21 05:30:29.553503 systemd[1]: cri-containerd-4294a0f88435ded4e9e5212345aa54e97c7fa7ddb76492000dc7c3b89a8622e7.scope: Deactivated successfully. Jun 21 05:30:29.558212 containerd[1528]: time="2025-06-21T05:30:29.558066975Z" level=info msg="received exit event container_id:\"4294a0f88435ded4e9e5212345aa54e97c7fa7ddb76492000dc7c3b89a8622e7\" id:\"4294a0f88435ded4e9e5212345aa54e97c7fa7ddb76492000dc7c3b89a8622e7\" pid:4587 exited_at:{seconds:1750483829 nanos:556851717}" Jun 21 05:30:29.558212 containerd[1528]: time="2025-06-21T05:30:29.558167046Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4294a0f88435ded4e9e5212345aa54e97c7fa7ddb76492000dc7c3b89a8622e7\" id:\"4294a0f88435ded4e9e5212345aa54e97c7fa7ddb76492000dc7c3b89a8622e7\" pid:4587 exited_at:{seconds:1750483829 nanos:556851717}" Jun 21 05:30:29.584868 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4294a0f88435ded4e9e5212345aa54e97c7fa7ddb76492000dc7c3b89a8622e7-rootfs.mount: Deactivated successfully. Jun 21 05:30:29.952317 kubelet[2678]: E0621 05:30:29.952244 2678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7c65d6cfc9-95qfq" podUID="9ad298b5-9803-4ed9-b093-b5662165969f" Jun 21 05:30:30.434336 kubelet[2678]: E0621 05:30:30.434198 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 21 05:30:30.442152 containerd[1528]: time="2025-06-21T05:30:30.440788392Z" level=info msg="CreateContainer within sandbox \"ecad9de5c20ced21843c1c597299e50961f8077549a5127a8f49e1ed132fd4e5\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jun 21 05:30:30.455629 containerd[1528]: time="2025-06-21T05:30:30.455574914Z" level=info msg="Container 4775ab1836d41a35c7557488abd3c6351b708dfcf314f217dcc81817438805fc: CDI devices from CRI Config.CDIDevices: []" Jun 21 05:30:30.466597 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount493209156.mount: Deactivated successfully. Jun 21 05:30:30.484968 containerd[1528]: time="2025-06-21T05:30:30.484889688Z" level=info msg="CreateContainer within sandbox \"ecad9de5c20ced21843c1c597299e50961f8077549a5127a8f49e1ed132fd4e5\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"4775ab1836d41a35c7557488abd3c6351b708dfcf314f217dcc81817438805fc\"" Jun 21 05:30:30.487889 containerd[1528]: time="2025-06-21T05:30:30.487824522Z" level=info msg="StartContainer for \"4775ab1836d41a35c7557488abd3c6351b708dfcf314f217dcc81817438805fc\"" Jun 21 05:30:30.489706 containerd[1528]: time="2025-06-21T05:30:30.489665870Z" level=info msg="connecting to shim 4775ab1836d41a35c7557488abd3c6351b708dfcf314f217dcc81817438805fc" address="unix:///run/containerd/s/8f4ccf20e26dac9b2d16e74d4bd4dc33b3da58277cbb8e24ae96808d8eeeea06" protocol=ttrpc version=3 Jun 21 05:30:30.520694 systemd[1]: Started cri-containerd-4775ab1836d41a35c7557488abd3c6351b708dfcf314f217dcc81817438805fc.scope - libcontainer container 4775ab1836d41a35c7557488abd3c6351b708dfcf314f217dcc81817438805fc. 
Jun 21 05:30:30.566351 systemd[1]: cri-containerd-4775ab1836d41a35c7557488abd3c6351b708dfcf314f217dcc81817438805fc.scope: Deactivated successfully. Jun 21 05:30:30.568661 containerd[1528]: time="2025-06-21T05:30:30.568618321Z" level=info msg="StartContainer for \"4775ab1836d41a35c7557488abd3c6351b708dfcf314f217dcc81817438805fc\" returns successfully" Jun 21 05:30:30.570914 containerd[1528]: time="2025-06-21T05:30:30.570874384Z" level=info msg="received exit event container_id:\"4775ab1836d41a35c7557488abd3c6351b708dfcf314f217dcc81817438805fc\" id:\"4775ab1836d41a35c7557488abd3c6351b708dfcf314f217dcc81817438805fc\" pid:4625 exited_at:{seconds:1750483830 nanos:570636487}" Jun 21 05:30:30.571168 containerd[1528]: time="2025-06-21T05:30:30.571142487Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4775ab1836d41a35c7557488abd3c6351b708dfcf314f217dcc81817438805fc\" id:\"4775ab1836d41a35c7557488abd3c6351b708dfcf314f217dcc81817438805fc\" pid:4625 exited_at:{seconds:1750483830 nanos:570636487}" Jun 21 05:30:30.607654 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4775ab1836d41a35c7557488abd3c6351b708dfcf314f217dcc81817438805fc-rootfs.mount: Deactivated successfully. Jun 21 05:30:31.439611 kubelet[2678]: E0621 05:30:31.439240 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 21 05:30:31.446212 containerd[1528]: time="2025-06-21T05:30:31.446159115Z" level=info msg="CreateContainer within sandbox \"ecad9de5c20ced21843c1c597299e50961f8077549a5127a8f49e1ed132fd4e5\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jun 21 05:30:31.455046 containerd[1528]: time="2025-06-21T05:30:31.454950962Z" level=info msg="Container a8af0a6fc4971e732845647775f84fce8d7fa1f36e4cbdea7aee3c4c0e1cc6c8: CDI devices from CRI Config.CDIDevices: []" Jun 21 05:30:31.468930 containerd[1528]: time="2025-06-21T05:30:31.468724807Z" level=info msg="CreateContainer within sandbox \"ecad9de5c20ced21843c1c597299e50961f8077549a5127a8f49e1ed132fd4e5\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"a8af0a6fc4971e732845647775f84fce8d7fa1f36e4cbdea7aee3c4c0e1cc6c8\"" Jun 21 05:30:31.470481 containerd[1528]: time="2025-06-21T05:30:31.470276807Z" level=info msg="StartContainer for \"a8af0a6fc4971e732845647775f84fce8d7fa1f36e4cbdea7aee3c4c0e1cc6c8\"" Jun 21 05:30:31.473495 containerd[1528]: time="2025-06-21T05:30:31.473213940Z" level=info msg="connecting to shim a8af0a6fc4971e732845647775f84fce8d7fa1f36e4cbdea7aee3c4c0e1cc6c8" address="unix:///run/containerd/s/8f4ccf20e26dac9b2d16e74d4bd4dc33b3da58277cbb8e24ae96808d8eeeea06" protocol=ttrpc version=3 Jun 21 05:30:31.507608 systemd[1]: Started cri-containerd-a8af0a6fc4971e732845647775f84fce8d7fa1f36e4cbdea7aee3c4c0e1cc6c8.scope - libcontainer container a8af0a6fc4971e732845647775f84fce8d7fa1f36e4cbdea7aee3c4c0e1cc6c8. 
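The repeated "received exit event" / "TaskExit event in podsandbox handler" entries come from containerd's event stream, which the CRI plugin watches to notice each init container finishing. A sketch of tailing the same /tasks/exit topic with the containerd 1.x Go client; the socket path, the k8s.io namespace, and the filter string are assumptions based on containerd defaults:

package main

import (
	"context"
	"fmt"
	"log"

	containerd "github.com/containerd/containerd"
)

func main() {
	// Assumption: default containerd socket; the CRI plugin keeps its state in the "k8s.io" namespace.
	client, err := containerd.New("/run/containerd/containerd.sock",
		containerd.WithDefaultNamespace("k8s.io"))
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	ctx, cancel := context.WithCancel(context.Background())
	defer cancel()

	// Filter to the topic that produces the "TaskExit" entries seen in the log above.
	ch, errs := client.Subscribe(ctx, `topic=="/tasks/exit"`)
	for {
		select {
		case env := <-ch:
			fmt.Println(env.Timestamp, env.Namespace, env.Topic)
		case err := <-errs:
			if err != nil {
				log.Fatal(err)
			}
			return
		}
	}
}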
Jun 21 05:30:31.559161 containerd[1528]: time="2025-06-21T05:30:31.559111011Z" level=info msg="StartContainer for \"a8af0a6fc4971e732845647775f84fce8d7fa1f36e4cbdea7aee3c4c0e1cc6c8\" returns successfully" Jun 21 05:30:31.648090 containerd[1528]: time="2025-06-21T05:30:31.648040601Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a8af0a6fc4971e732845647775f84fce8d7fa1f36e4cbdea7aee3c4c0e1cc6c8\" id:\"79c8f93e13b75a381f1f9b53f588ee644db57f384346f692912a752eabfb5cff\" pid:4693 exited_at:{seconds:1750483831 nanos:646399480}" Jun 21 05:30:31.952444 kubelet[2678]: E0621 05:30:31.952285 2678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7c65d6cfc9-95qfq" podUID="9ad298b5-9803-4ed9-b093-b5662165969f" Jun 21 05:30:32.059608 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx)) Jun 21 05:30:32.449351 kubelet[2678]: E0621 05:30:32.447799 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 21 05:30:32.953973 kubelet[2678]: E0621 05:30:32.953886 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 21 05:30:33.450763 kubelet[2678]: E0621 05:30:33.450721 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 21 05:30:33.953866 kubelet[2678]: E0621 05:30:33.953736 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 21 05:30:34.149068 containerd[1528]: time="2025-06-21T05:30:34.149021087Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a8af0a6fc4971e732845647775f84fce8d7fa1f36e4cbdea7aee3c4c0e1cc6c8\" id:\"66b76a83851001a6377e69dc0a8e742fe11a8097fc99e681ec67ae9287edd794\" pid:4853 exit_status:1 exited_at:{seconds:1750483834 nanos:148706187}" Jun 21 05:30:35.350289 systemd-networkd[1449]: lxc_health: Link UP Jun 21 05:30:35.352733 systemd-networkd[1449]: lxc_health: Gained carrier Jun 21 05:30:35.445291 kubelet[2678]: E0621 05:30:35.445241 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 21 05:30:35.471558 kubelet[2678]: E0621 05:30:35.471528 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 21 05:30:35.481114 kubelet[2678]: I0621 05:30:35.481044 2678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-5ljqt" podStartSLOduration=8.481022183 podStartE2EDuration="8.481022183s" podCreationTimestamp="2025-06-21 05:30:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-21 05:30:32.472677935 +0000 UTC m=+115.685336178" watchObservedRunningTime="2025-06-21 05:30:35.481022183 
+0000 UTC m=+118.693680406" Jun 21 05:30:36.475184 kubelet[2678]: E0621 05:30:36.475148 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 21 05:30:36.619070 containerd[1528]: time="2025-06-21T05:30:36.617662003Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a8af0a6fc4971e732845647775f84fce8d7fa1f36e4cbdea7aee3c4c0e1cc6c8\" id:\"8e1e51995fdbda4c4e99c80b3404525839d4536528e4e32194f2d5f77818ab99\" pid:5215 exited_at:{seconds:1750483836 nanos:616961573}" Jun 21 05:30:36.960019 containerd[1528]: time="2025-06-21T05:30:36.959672734Z" level=info msg="StopPodSandbox for \"f7115590d1ac76298bf0345bf8b52d4fe923c98708f6ba19f2646a8dccc5dd0b\"" Jun 21 05:30:36.960019 containerd[1528]: time="2025-06-21T05:30:36.959884012Z" level=info msg="TearDown network for sandbox \"f7115590d1ac76298bf0345bf8b52d4fe923c98708f6ba19f2646a8dccc5dd0b\" successfully" Jun 21 05:30:36.960019 containerd[1528]: time="2025-06-21T05:30:36.959902361Z" level=info msg="StopPodSandbox for \"f7115590d1ac76298bf0345bf8b52d4fe923c98708f6ba19f2646a8dccc5dd0b\" returns successfully" Jun 21 05:30:36.961651 containerd[1528]: time="2025-06-21T05:30:36.961229792Z" level=info msg="RemovePodSandbox for \"f7115590d1ac76298bf0345bf8b52d4fe923c98708f6ba19f2646a8dccc5dd0b\"" Jun 21 05:30:36.961651 containerd[1528]: time="2025-06-21T05:30:36.961287846Z" level=info msg="Forcibly stopping sandbox \"f7115590d1ac76298bf0345bf8b52d4fe923c98708f6ba19f2646a8dccc5dd0b\"" Jun 21 05:30:36.961651 containerd[1528]: time="2025-06-21T05:30:36.961458723Z" level=info msg="TearDown network for sandbox \"f7115590d1ac76298bf0345bf8b52d4fe923c98708f6ba19f2646a8dccc5dd0b\" successfully" Jun 21 05:30:36.970068 containerd[1528]: time="2025-06-21T05:30:36.969686774Z" level=info msg="Ensure that sandbox f7115590d1ac76298bf0345bf8b52d4fe923c98708f6ba19f2646a8dccc5dd0b in task-service has been cleanup successfully" Jun 21 05:30:36.974629 containerd[1528]: time="2025-06-21T05:30:36.974410461Z" level=info msg="RemovePodSandbox \"f7115590d1ac76298bf0345bf8b52d4fe923c98708f6ba19f2646a8dccc5dd0b\" returns successfully" Jun 21 05:30:36.976412 containerd[1528]: time="2025-06-21T05:30:36.975378198Z" level=info msg="StopPodSandbox for \"02102bbec33283547fe27f623daf3153246f8069571354eb64b433faad3345a1\"" Jun 21 05:30:36.976412 containerd[1528]: time="2025-06-21T05:30:36.975552698Z" level=info msg="TearDown network for sandbox \"02102bbec33283547fe27f623daf3153246f8069571354eb64b433faad3345a1\" successfully" Jun 21 05:30:36.976412 containerd[1528]: time="2025-06-21T05:30:36.975570878Z" level=info msg="StopPodSandbox for \"02102bbec33283547fe27f623daf3153246f8069571354eb64b433faad3345a1\" returns successfully" Jun 21 05:30:36.979038 containerd[1528]: time="2025-06-21T05:30:36.978993567Z" level=info msg="RemovePodSandbox for \"02102bbec33283547fe27f623daf3153246f8069571354eb64b433faad3345a1\"" Jun 21 05:30:36.979417 containerd[1528]: time="2025-06-21T05:30:36.979381193Z" level=info msg="Forcibly stopping sandbox \"02102bbec33283547fe27f623daf3153246f8069571354eb64b433faad3345a1\"" Jun 21 05:30:36.979660 containerd[1528]: time="2025-06-21T05:30:36.979645148Z" level=info msg="TearDown network for sandbox \"02102bbec33283547fe27f623daf3153246f8069571354eb64b433faad3345a1\" successfully" Jun 21 05:30:36.981360 containerd[1528]: time="2025-06-21T05:30:36.981323715Z" level=info msg="Ensure that sandbox 
02102bbec33283547fe27f623daf3153246f8069571354eb64b433faad3345a1 in task-service has been cleanup successfully" Jun 21 05:30:36.984720 containerd[1528]: time="2025-06-21T05:30:36.984611762Z" level=info msg="RemovePodSandbox \"02102bbec33283547fe27f623daf3153246f8069571354eb64b433faad3345a1\" returns successfully" Jun 21 05:30:37.056622 systemd-networkd[1449]: lxc_health: Gained IPv6LL Jun 21 05:30:38.816185 containerd[1528]: time="2025-06-21T05:30:38.816126740Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a8af0a6fc4971e732845647775f84fce8d7fa1f36e4cbdea7aee3c4c0e1cc6c8\" id:\"1488ef3c6de8c42afbc0408d0456101723ffb5d7bd1da46153d56005564e6e9a\" pid:5242 exited_at:{seconds:1750483838 nanos:815519422}" Jun 21 05:30:41.082538 containerd[1528]: time="2025-06-21T05:30:41.082469388Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a8af0a6fc4971e732845647775f84fce8d7fa1f36e4cbdea7aee3c4c0e1cc6c8\" id:\"53c73df1a10769822fe419c57fe8820a8e7ff457fec96db605ec7c878b94a213\" pid:5271 exited_at:{seconds:1750483841 nanos:79784551}" Jun 21 05:30:41.088363 kubelet[2678]: E0621 05:30:41.088052 2678 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:37062->127.0.0.1:37373: write tcp 127.0.0.1:37062->127.0.0.1:37373: write: connection reset by peer Jun 21 05:30:43.222980 containerd[1528]: time="2025-06-21T05:30:43.222910999Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a8af0a6fc4971e732845647775f84fce8d7fa1f36e4cbdea7aee3c4c0e1cc6c8\" id:\"cd4944c4a539c1124751cb04a2aa22105e8811ea44caad8e89d1aa34f3dad21e\" pid:5309 exited_at:{seconds:1750483843 nanos:222543481}" Jun 21 05:30:43.264573 sshd[4435]: Connection closed by 139.178.68.195 port 55334 Jun 21 05:30:43.265839 sshd-session[4429]: pam_unix(sshd:session): session closed for user core Jun 21 05:30:43.272192 systemd[1]: sshd@27-64.23.242.202:22-139.178.68.195:55334.service: Deactivated successfully. Jun 21 05:30:43.276083 systemd[1]: session-28.scope: Deactivated successfully. Jun 21 05:30:43.278617 systemd-logind[1504]: Session 28 logged out. Waiting for processes to exit. Jun 21 05:30:43.280966 systemd-logind[1504]: Removed session 28.
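The StopPodSandbox / RemovePodSandbox pair near the end is the kubelet garbage-collecting the old cilium sandbox f7115590…, with the "Forcibly stopping" and "Ensure that sandbox … has been cleanup successfully" lines coming from the removal path. The same two CRI calls in Go, reusing the client setup from the earlier sketches; the sandbox ID is taken from the log, and NotFound on removal is treated as already gone:

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/credentials/insecure"
	"google.golang.org/grpc/status"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	// Sandbox ID of the old cilium pod, taken from the log above.
	id := "f7115590d1ac76298bf0345bf8b52d4fe923c98708f6ba19f2646a8dccc5dd0b"

	// StopPodSandbox tears down the sandbox network and stops its containers.
	if _, err := rt.StopPodSandbox(ctx, &runtimeapi.StopPodSandboxRequest{PodSandboxId: id}); err != nil {
		panic(err)
	}
	// RemovePodSandbox deletes the sandbox record; NotFound means it is already gone.
	if _, err := rt.RemovePodSandbox(ctx, &runtimeapi.RemovePodSandboxRequest{PodSandboxId: id}); err != nil {
		if status.Code(err) != codes.NotFound {
			panic(err)
		}
	}
	fmt.Println("sandbox removed:", id)
}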