Jun 20 19:28:07.827989 kernel: Linux version 6.12.34-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Fri Jun 20 17:06:39 -00 2025
Jun 20 19:28:07.828015 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=b7bb3b1ced9c5d47870a8b74c6c30075189c27e25d75251cfa7215e4bbff75ea
Jun 20 19:28:07.828025 kernel: BIOS-provided physical RAM map:
Jun 20 19:28:07.828033 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jun 20 19:28:07.828040 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jun 20 19:28:07.828047 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jun 20 19:28:07.828056 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Jun 20 19:28:07.828066 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Jun 20 19:28:07.828078 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jun 20 19:28:07.828086 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jun 20 19:28:07.828094 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jun 20 19:28:07.828101 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jun 20 19:28:07.828109 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jun 20 19:28:07.828116 kernel: NX (Execute Disable) protection: active
Jun 20 19:28:07.828128 kernel: APIC: Static calls initialized
Jun 20 19:28:07.828136 kernel: SMBIOS 2.8 present.
Jun 20 19:28:07.828147 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Jun 20 19:28:07.828155 kernel: DMI: Memory slots populated: 1/1
Jun 20 19:28:07.828163 kernel: Hypervisor detected: KVM
Jun 20 19:28:07.828171 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jun 20 19:28:07.828179 kernel: kvm-clock: using sched offset of 6459788066 cycles
Jun 20 19:28:07.828188 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jun 20 19:28:07.828196 kernel: tsc: Detected 2794.748 MHz processor
Jun 20 19:28:07.828207 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jun 20 19:28:07.828216 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jun 20 19:28:07.828224 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Jun 20 19:28:07.828233 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jun 20 19:28:07.828241 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jun 20 19:28:07.828249 kernel: Using GB pages for direct mapping
Jun 20 19:28:07.828258 kernel: ACPI: Early table checksum verification disabled
Jun 20 19:28:07.828266 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Jun 20 19:28:07.828274 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jun 20 19:28:07.828285 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jun 20 19:28:07.828293 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jun 20 19:28:07.828302 kernel: ACPI: FACS 0x000000009CFE0000 000040
Jun 20 19:28:07.828310 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jun 20 19:28:07.828330 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jun 20 19:28:07.828346 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jun 20 19:28:07.828364 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jun 20 19:28:07.828373 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Jun 20 19:28:07.828388 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Jun 20 19:28:07.828396 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Jun 20 19:28:07.828405 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Jun 20 19:28:07.828414 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Jun 20 19:28:07.828422 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Jun 20 19:28:07.828437 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Jun 20 19:28:07.828449 kernel: No NUMA configuration found
Jun 20 19:28:07.828457 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Jun 20 19:28:07.828466 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff]
Jun 20 19:28:07.828475 kernel: Zone ranges:
Jun 20 19:28:07.828484 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jun 20 19:28:07.828492 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Jun 20 19:28:07.828501 kernel: Normal empty
Jun 20 19:28:07.828509 kernel: Device empty
Jun 20 19:28:07.828518 kernel: Movable zone start for each node
Jun 20 19:28:07.828526 kernel: Early memory node ranges
Jun 20 19:28:07.828537 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jun 20 19:28:07.828546 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Jun 20 19:28:07.828554 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Jun 20 19:28:07.828563 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jun 20 19:28:07.828572 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jun 20 19:28:07.828580 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Jun 20 19:28:07.828589 kernel: ACPI: PM-Timer IO Port: 0x608
Jun 20 19:28:07.828609 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jun 20 19:28:07.828618 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jun 20 19:28:07.828629 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jun 20 19:28:07.828638 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jun 20 19:28:07.828649 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jun 20 19:28:07.828658 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jun 20 19:28:07.828667 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jun 20 19:28:07.828675 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jun 20 19:28:07.828684 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jun 20 19:28:07.828692 kernel: TSC deadline timer available
Jun 20 19:28:07.828701 kernel: CPU topo: Max. logical packages: 1
Jun 20 19:28:07.828712 kernel: CPU topo: Max. logical dies: 1
Jun 20 19:28:07.828720 kernel: CPU topo: Max. dies per package: 1
Jun 20 19:28:07.828729 kernel: CPU topo: Max. threads per core: 1
Jun 20 19:28:07.828737 kernel: CPU topo: Num. cores per package: 4
Jun 20 19:28:07.828746 kernel: CPU topo: Num. threads per package: 4
Jun 20 19:28:07.828755 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Jun 20 19:28:07.828763 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jun 20 19:28:07.828772 kernel: kvm-guest: KVM setup pv remote TLB flush
Jun 20 19:28:07.828780 kernel: kvm-guest: setup PV sched yield
Jun 20 19:28:07.828791 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jun 20 19:28:07.828800 kernel: Booting paravirtualized kernel on KVM
Jun 20 19:28:07.828809 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jun 20 19:28:07.828841 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jun 20 19:28:07.828850 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Jun 20 19:28:07.828859 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Jun 20 19:28:07.828867 kernel: pcpu-alloc: [0] 0 1 2 3
Jun 20 19:28:07.828876 kernel: kvm-guest: PV spinlocks enabled
Jun 20 19:28:07.828885 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jun 20 19:28:07.828898 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=b7bb3b1ced9c5d47870a8b74c6c30075189c27e25d75251cfa7215e4bbff75ea
Jun 20 19:28:07.828907 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jun 20 19:28:07.828915 kernel: random: crng init done
Jun 20 19:28:07.828924 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jun 20 19:28:07.828933 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jun 20 19:28:07.828941 kernel: Fallback order for Node 0: 0
Jun 20 19:28:07.828950 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938
Jun 20 19:28:07.828959 kernel: Policy zone: DMA32
Jun 20 19:28:07.828967 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jun 20 19:28:07.828978 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jun 20 19:28:07.828987 kernel: ftrace: allocating 40093 entries in 157 pages
Jun 20 19:28:07.828996 kernel: ftrace: allocated 157 pages with 5 groups
Jun 20 19:28:07.829004 kernel: Dynamic Preempt: voluntary
Jun 20 19:28:07.829013 kernel: rcu: Preemptible hierarchical RCU implementation.
Jun 20 19:28:07.829022 kernel: rcu: RCU event tracing is enabled.
Jun 20 19:28:07.829031 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jun 20 19:28:07.829040 kernel: Trampoline variant of Tasks RCU enabled.
Jun 20 19:28:07.829051 kernel: Rude variant of Tasks RCU enabled.
Jun 20 19:28:07.829062 kernel: Tracing variant of Tasks RCU enabled.
Jun 20 19:28:07.829071 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jun 20 19:28:07.829079 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jun 20 19:28:07.829088 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jun 20 19:28:07.829097 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jun 20 19:28:07.829106 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jun 20 19:28:07.829115 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jun 20 19:28:07.829124 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jun 20 19:28:07.829141 kernel: Console: colour VGA+ 80x25
Jun 20 19:28:07.829150 kernel: printk: legacy console [ttyS0] enabled
Jun 20 19:28:07.829159 kernel: ACPI: Core revision 20240827
Jun 20 19:28:07.829169 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jun 20 19:28:07.829180 kernel: APIC: Switch to symmetric I/O mode setup
Jun 20 19:28:07.829189 kernel: x2apic enabled
Jun 20 19:28:07.829200 kernel: APIC: Switched APIC routing to: physical x2apic
Jun 20 19:28:07.829209 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jun 20 19:28:07.829219 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jun 20 19:28:07.829230 kernel: kvm-guest: setup PV IPIs
Jun 20 19:28:07.829241 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jun 20 19:28:07.829251 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Jun 20 19:28:07.829262 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Jun 20 19:28:07.829271 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jun 20 19:28:07.829281 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jun 20 19:28:07.829290 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jun 20 19:28:07.829299 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jun 20 19:28:07.829310 kernel: Spectre V2 : Mitigation: Retpolines
Jun 20 19:28:07.829319 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jun 20 19:28:07.829328 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jun 20 19:28:07.829337 kernel: RETBleed: Mitigation: untrained return thunk
Jun 20 19:28:07.829346 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jun 20 19:28:07.829355 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jun 20 19:28:07.829365 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jun 20 19:28:07.829374 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jun 20 19:28:07.829383 kernel: x86/bugs: return thunk changed
Jun 20 19:28:07.829394 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jun 20 19:28:07.829403 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jun 20 19:28:07.829413 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jun 20 19:28:07.829422 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jun 20 19:28:07.829431 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jun 20 19:28:07.829440 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jun 20 19:28:07.829449 kernel: Freeing SMP alternatives memory: 32K
Jun 20 19:28:07.829458 kernel: pid_max: default: 32768 minimum: 301
Jun 20 19:28:07.829467 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jun 20 19:28:07.829478 kernel: landlock: Up and running.
Jun 20 19:28:07.829498 kernel: SELinux: Initializing.
Jun 20 19:28:07.829516 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jun 20 19:28:07.829528 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jun 20 19:28:07.829538 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jun 20 19:28:07.829547 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jun 20 19:28:07.829556 kernel: ... version:                0
Jun 20 19:28:07.829565 kernel: ... bit width:              48
Jun 20 19:28:07.829574 kernel: ... generic registers:      6
Jun 20 19:28:07.829599 kernel: ... value mask:             0000ffffffffffff
Jun 20 19:28:07.829609 kernel: ... max period:             00007fffffffffff
Jun 20 19:28:07.829618 kernel: ... fixed-purpose events:   0
Jun 20 19:28:07.829627 kernel: ... event mask:             000000000000003f
Jun 20 19:28:07.829636 kernel: signal: max sigframe size: 1776
Jun 20 19:28:07.829645 kernel: rcu: Hierarchical SRCU implementation.
Jun 20 19:28:07.829654 kernel: rcu: Max phase no-delay instances is 400.
Jun 20 19:28:07.829663 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jun 20 19:28:07.829672 kernel: smp: Bringing up secondary CPUs ...
Jun 20 19:28:07.829685 kernel: smpboot: x86: Booting SMP configuration:
Jun 20 19:28:07.829694 kernel: .... node #0, CPUs: #1 #2 #3
Jun 20 19:28:07.829703 kernel: smp: Brought up 1 node, 4 CPUs
Jun 20 19:28:07.829712 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Jun 20 19:28:07.829722 kernel: Memory: 2428912K/2571752K available (14336K kernel code, 2430K rwdata, 9956K rodata, 54424K init, 2544K bss, 136904K reserved, 0K cma-reserved)
Jun 20 19:28:07.829731 kernel: devtmpfs: initialized
Jun 20 19:28:07.829740 kernel: x86/mm: Memory block size: 128MB
Jun 20 19:28:07.829749 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jun 20 19:28:07.829758 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jun 20 19:28:07.829770 kernel: pinctrl core: initialized pinctrl subsystem
Jun 20 19:28:07.829779 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jun 20 19:28:07.829788 kernel: audit: initializing netlink subsys (disabled)
Jun 20 19:28:07.829797 kernel: audit: type=2000 audit(1750447684.872:1): state=initialized audit_enabled=0 res=1
Jun 20 19:28:07.829806 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jun 20 19:28:07.829831 kernel: thermal_sys: Registered thermal governor 'user_space'
Jun 20 19:28:07.829840 kernel: cpuidle: using governor menu
Jun 20 19:28:07.829849 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jun 20 19:28:07.829858 kernel: dca service started, version 1.12.1
Jun 20 19:28:07.829870 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Jun 20 19:28:07.829879 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jun 20 19:28:07.829888 kernel: PCI: Using configuration type 1 for base access
Jun 20 19:28:07.829897 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jun 20 19:28:07.829906 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jun 20 19:28:07.829916 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jun 20 19:28:07.829925 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jun 20 19:28:07.829934 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jun 20 19:28:07.829945 kernel: ACPI: Added _OSI(Module Device)
Jun 20 19:28:07.829954 kernel: ACPI: Added _OSI(Processor Device)
Jun 20 19:28:07.829963 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jun 20 19:28:07.829972 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jun 20 19:28:07.829981 kernel: ACPI: Interpreter enabled
Jun 20 19:28:07.829990 kernel: ACPI: PM: (supports S0 S3 S5)
Jun 20 19:28:07.829999 kernel: ACPI: Using IOAPIC for interrupt routing
Jun 20 19:28:07.830008 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jun 20 19:28:07.830017 kernel: PCI: Using E820 reservations for host bridge windows
Jun 20 19:28:07.830026 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jun 20 19:28:07.830038 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jun 20 19:28:07.830259 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jun 20 19:28:07.830390 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jun 20 19:28:07.830516 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jun 20 19:28:07.830527 kernel: PCI host bridge to bus 0000:00
Jun 20 19:28:07.830678 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jun 20 19:28:07.830800 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jun 20 19:28:07.830943 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jun 20 19:28:07.831059 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Jun 20 19:28:07.831175 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jun 20 19:28:07.831294 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Jun 20 19:28:07.831408 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jun 20 19:28:07.831561 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Jun 20 19:28:07.831733 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Jun 20 19:28:07.831888 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Jun 20 19:28:07.832015 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Jun 20 19:28:07.832139 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Jun 20 19:28:07.832263 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jun 20 19:28:07.832414 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jun 20 19:28:07.832546 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df]
Jun 20 19:28:07.832683 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Jun 20 19:28:07.832809 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Jun 20 19:28:07.832977 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jun 20 19:28:07.833104 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f]
Jun 20 19:28:07.833244 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Jun 20 19:28:07.833372 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Jun 20 19:28:07.833526 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jun 20 19:28:07.833672 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff]
Jun 20 19:28:07.833799 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff]
Jun 20 19:28:07.833962 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref]
Jun 20 19:28:07.834090 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Jun 20 19:28:07.834233 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Jun 20 19:28:07.834359 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jun 20 19:28:07.834506 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Jun 20 19:28:07.834642 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f]
Jun 20 19:28:07.834768 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff]
Jun 20 19:28:07.834945 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Jun 20 19:28:07.835074 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Jun 20 19:28:07.835086 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jun 20 19:28:07.835096 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jun 20 19:28:07.835109 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jun 20 19:28:07.835118 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jun 20 19:28:07.835127 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jun 20 19:28:07.835137 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jun 20 19:28:07.835146 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jun 20 19:28:07.835155 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jun 20 19:28:07.835164 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jun 20 19:28:07.835173 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jun 20 19:28:07.835182 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jun 20 19:28:07.835193 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jun 20 19:28:07.835202 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jun 20 19:28:07.835211 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jun 20 19:28:07.835220 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jun 20 19:28:07.835230 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jun 20 19:28:07.835239 kernel: iommu: Default domain type: Translated
Jun 20 19:28:07.835250 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jun 20 19:28:07.835260 kernel: PCI: Using ACPI for IRQ routing
Jun 20 19:28:07.835271 kernel: PCI: pci_cache_line_size set to 64 bytes
Jun 20 19:28:07.835283 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jun 20 19:28:07.835292 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Jun 20 19:28:07.835417 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jun 20 19:28:07.835542 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jun 20 19:28:07.835676 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jun 20 19:28:07.835689 kernel: vgaarb: loaded
Jun 20 19:28:07.835698 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jun 20 19:28:07.835707 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jun 20 19:28:07.835719 kernel: clocksource: Switched to clocksource kvm-clock
Jun 20 19:28:07.835728 kernel: VFS: Disk quotas dquot_6.6.0
Jun 20 19:28:07.835738 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jun 20 19:28:07.835747 kernel: pnp: PnP ACPI init
Jun 20 19:28:07.835956 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Jun 20 19:28:07.835971 kernel: pnp: PnP ACPI: found 6 devices
Jun 20 19:28:07.835980 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jun 20 19:28:07.835990 kernel: NET: Registered PF_INET protocol family
Jun 20 19:28:07.836003 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jun 20 19:28:07.836012 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jun 20 19:28:07.836022 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jun 20 19:28:07.836031 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jun 20 19:28:07.836040 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jun 20 19:28:07.836050 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jun 20 19:28:07.836059 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jun 20 19:28:07.836068 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jun 20 19:28:07.836077 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jun 20 19:28:07.836089 kernel: NET: Registered PF_XDP protocol family
Jun 20 19:28:07.836206 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jun 20 19:28:07.836321 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jun 20 19:28:07.836435 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jun 20 19:28:07.836548 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Jun 20 19:28:07.836672 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jun 20 19:28:07.836787 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Jun 20 19:28:07.836799 kernel: PCI: CLS 0 bytes, default 64
Jun 20 19:28:07.836808 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Jun 20 19:28:07.836851 kernel: Initialise system trusted keyrings
Jun 20 19:28:07.836861 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jun 20 19:28:07.836870 kernel: Key type asymmetric registered
Jun 20 19:28:07.836879 kernel: Asymmetric key parser 'x509' registered
Jun 20 19:28:07.836888 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jun 20 19:28:07.836897 kernel: io scheduler mq-deadline registered
Jun 20 19:28:07.836906 kernel: io scheduler kyber registered
Jun 20 19:28:07.836915 kernel: io scheduler bfq registered
Jun 20 19:28:07.836924 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jun 20 19:28:07.836937 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jun 20 19:28:07.836946 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jun 20 19:28:07.836955 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jun 20 19:28:07.836964 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jun 20 19:28:07.836973 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jun 20 19:28:07.836982 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jun 20 19:28:07.836992 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jun 20 19:28:07.837001 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jun 20 19:28:07.837152 kernel: rtc_cmos 00:04: RTC can wake from S4
Jun 20 19:28:07.837287 kernel: rtc_cmos 00:04: registered as rtc0
Jun 20 19:28:07.837409 kernel: rtc_cmos 00:04: setting system clock to 2025-06-20T19:28:07 UTC (1750447687)
Jun 20 19:28:07.837530 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jun 20 19:28:07.837541 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jun 20 19:28:07.837551 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Jun 20 19:28:07.837560 kernel: NET: Registered PF_INET6 protocol family
Jun 20 19:28:07.837569 kernel: Segment Routing with IPv6
Jun 20 19:28:07.837581 kernel: In-situ OAM (IOAM) with IPv6
Jun 20 19:28:07.837590 kernel: NET: Registered PF_PACKET protocol family
Jun 20 19:28:07.837609 kernel: Key type dns_resolver registered
Jun 20 19:28:07.837618 kernel: IPI shorthand broadcast: enabled
Jun 20 19:28:07.837627 kernel: sched_clock: Marking stable (2909156338, 114746391)->(3047860315, -23957586)
Jun 20 19:28:07.837636 kernel: registered taskstats version 1
Jun 20 19:28:07.837645 kernel: Loading compiled-in X.509 certificates
Jun 20 19:28:07.837654 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.34-flatcar: 9a085d119111c823c157514215d0379e3a2f1b94'
Jun 20 19:28:07.837663 kernel: Demotion targets for Node 0: null
Jun 20 19:28:07.837675 kernel: Key type .fscrypt registered
Jun 20 19:28:07.837684 kernel: Key type fscrypt-provisioning registered
Jun 20 19:28:07.837694 kernel: ima: No TPM chip found, activating TPM-bypass!
Jun 20 19:28:07.837703 kernel: ima: Allocated hash algorithm: sha1
Jun 20 19:28:07.837712 kernel: ima: No architecture policies found
Jun 20 19:28:07.837720 kernel: clk: Disabling unused clocks
Jun 20 19:28:07.837729 kernel: Warning: unable to open an initial console.
Jun 20 19:28:07.837739 kernel: Freeing unused kernel image (initmem) memory: 54424K
Jun 20 19:28:07.837748 kernel: Write protecting the kernel read-only data: 24576k
Jun 20 19:28:07.837760 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K
Jun 20 19:28:07.837768 kernel: Run /init as init process
Jun 20 19:28:07.837778 kernel: with arguments:
Jun 20 19:28:07.837787 kernel: /init
Jun 20 19:28:07.837795 kernel: with environment:
Jun 20 19:28:07.837804 kernel: HOME=/
Jun 20 19:28:07.837833 kernel: TERM=linux
Jun 20 19:28:07.837842 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jun 20 19:28:07.837874 systemd[1]: Successfully made /usr/ read-only.
Jun 20 19:28:07.837890 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jun 20 19:28:07.837915 systemd[1]: Detected virtualization kvm.
Jun 20 19:28:07.837925 systemd[1]: Detected architecture x86-64.
Jun 20 19:28:07.837934 systemd[1]: Running in initrd.
Jun 20 19:28:07.837944 systemd[1]: No hostname configured, using default hostname.
Jun 20 19:28:07.837957 systemd[1]: Hostname set to .
Jun 20 19:28:07.837966 systemd[1]: Initializing machine ID from VM UUID.
Jun 20 19:28:07.837976 systemd[1]: Queued start job for default target initrd.target.
Jun 20 19:28:07.837986 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jun 20 19:28:07.837996 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jun 20 19:28:07.838007 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jun 20 19:28:07.838017 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jun 20 19:28:07.838028 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jun 20 19:28:07.838042 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jun 20 19:28:07.838053 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jun 20 19:28:07.838063 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jun 20 19:28:07.838074 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jun 20 19:28:07.838084 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jun 20 19:28:07.838093 systemd[1]: Reached target paths.target - Path Units.
Jun 20 19:28:07.838103 systemd[1]: Reached target slices.target - Slice Units.
Jun 20 19:28:07.838116 systemd[1]: Reached target swap.target - Swaps.
Jun 20 19:28:07.838126 systemd[1]: Reached target timers.target - Timer Units.
Jun 20 19:28:07.838136 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jun 20 19:28:07.838146 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jun 20 19:28:07.838156 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jun 20 19:28:07.838166 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jun 20 19:28:07.838176 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jun 20 19:28:07.838186 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jun 20 19:28:07.838196 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jun 20 19:28:07.838209 systemd[1]: Reached target sockets.target - Socket Units.
Jun 20 19:28:07.838219 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jun 20 19:28:07.838229 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jun 20 19:28:07.838239 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jun 20 19:28:07.838250 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jun 20 19:28:07.838268 systemd[1]: Starting systemd-fsck-usr.service...
Jun 20 19:28:07.838280 systemd[1]: Starting systemd-journald.service - Journal Service...
Jun 20 19:28:07.838290 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jun 20 19:28:07.838300 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jun 20 19:28:07.838310 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jun 20 19:28:07.838321 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jun 20 19:28:07.838333 systemd[1]: Finished systemd-fsck-usr.service.
Jun 20 19:28:07.838343 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jun 20 19:28:07.838374 systemd-journald[220]: Collecting audit messages is disabled.
Jun 20 19:28:07.838401 systemd-journald[220]: Journal started
Jun 20 19:28:07.838422 systemd-journald[220]: Runtime Journal (/run/log/journal/a0f70811bacf4e36b6c43b04c06fca6a) is 6M, max 48.6M, 42.5M free.
Jun 20 19:28:07.827177 systemd-modules-load[222]: Inserted module 'overlay'
Jun 20 19:28:07.870420 systemd[1]: Started systemd-journald.service - Journal Service.
Jun 20 19:28:07.870448 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jun 20 19:28:07.870474 kernel: Bridge firewalling registered
Jun 20 19:28:07.856018 systemd-modules-load[222]: Inserted module 'br_netfilter'
Jun 20 19:28:07.871997 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jun 20 19:28:07.873657 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jun 20 19:28:07.875780 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jun 20 19:28:07.882928 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jun 20 19:28:07.885001 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jun 20 19:28:07.888293 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jun 20 19:28:07.898409 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jun 20 19:28:07.909476 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jun 20 19:28:07.910665 systemd-tmpfiles[243]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jun 20 19:28:07.911705 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jun 20 19:28:07.912549 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jun 20 19:28:07.916381 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jun 20 19:28:07.920914 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jun 20 19:28:07.924460 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jun 20 19:28:07.953230 dracut-cmdline[260]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=b7bb3b1ced9c5d47870a8b74c6c30075189c27e25d75251cfa7215e4bbff75ea
Jun 20 19:28:07.970372 systemd-resolved[261]: Positive Trust Anchors:
Jun 20 19:28:07.970398 systemd-resolved[261]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jun 20 19:28:07.970436 systemd-resolved[261]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jun 20 19:28:07.973226 systemd-resolved[261]: Defaulting to hostname 'linux'.
Jun 20 19:28:07.974941 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jun 20 19:28:07.979786 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jun 20 19:28:08.321911 kernel: SCSI subsystem initialized
Jun 20 19:28:08.330853 kernel: Loading iSCSI transport class v2.0-870.
Jun 20 19:28:08.341846 kernel: iscsi: registered transport (tcp)
Jun 20 19:28:08.365166 kernel: iscsi: registered transport (qla4xxx)
Jun 20 19:28:08.365268 kernel: QLogic iSCSI HBA Driver
Jun 20 19:28:08.389771 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jun 20 19:28:08.411634 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jun 20 19:28:08.414358 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jun 20 19:28:08.479480 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jun 20 19:28:08.482476 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jun 20 19:28:08.538846 kernel: raid6: avx2x4 gen() 29897 MB/s
Jun 20 19:28:08.555837 kernel: raid6: avx2x2 gen() 30537 MB/s
Jun 20 19:28:08.572932 kernel: raid6: avx2x1 gen() 25973 MB/s
Jun 20 19:28:08.572948 kernel: raid6: using algorithm avx2x2 gen() 30537 MB/s
Jun 20 19:28:08.590919 kernel: raid6: .... xor() 19978 MB/s, rmw enabled
Jun 20 19:28:08.590938 kernel: raid6: using avx2x2 recovery algorithm
Jun 20 19:28:08.612842 kernel: xor: automatically using best checksumming function avx
Jun 20 19:28:08.953857 kernel: Btrfs loaded, zoned=no, fsverity=no
Jun 20 19:28:08.962940 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jun 20 19:28:08.966020 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jun 20 19:28:09.007196 systemd-udevd[473]: Using default interface naming scheme 'v255'.
Jun 20 19:28:09.013613 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jun 20 19:28:09.015255 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jun 20 19:28:09.040764 dracut-pre-trigger[475]: rd.md=0: removing MD RAID activation
Jun 20 19:28:09.071983 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jun 20 19:28:09.073941 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jun 20 19:28:09.159856 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jun 20 19:28:09.164412 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jun 20 19:28:09.205859 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Jun 20 19:28:09.227857 kernel: cryptd: max_cpu_qlen set to 1000
Jun 20 19:28:09.230847 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Jun 20 19:28:09.233004 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jun 20 19:28:09.235851 kernel: AES CTR mode by8 optimization enabled
Jun 20 19:28:09.236848 kernel: libata version 3.00 loaded.
Jun 20 19:28:09.241738 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jun 20 19:28:09.241796 kernel: GPT:9289727 != 19775487
Jun 20 19:28:09.241811 kernel: GPT:Alternate GPT header not at the end of the disk.
Jun 20 19:28:09.241844 kernel: GPT:9289727 != 19775487
Jun 20 19:28:09.242160 kernel: GPT: Use GNU Parted to correct GPT errors.
Jun 20 19:28:09.243289 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jun 20 19:28:09.250137 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jun 20 19:28:09.259865 kernel: ahci 0000:00:1f.2: version 3.0
Jun 20 19:28:09.260140 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jun 20 19:28:09.260155 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Jun 20 19:28:09.260299 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Jun 20 19:28:09.260436 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jun 20 19:28:09.250261 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jun 20 19:28:09.254295 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jun 20 19:28:09.255647 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jun 20 19:28:09.256411 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jun 20 19:28:09.266856 kernel: scsi host0: ahci
Jun 20 19:28:09.267075 kernel: scsi host1: ahci
Jun 20 19:28:09.267847 kernel: scsi host2: ahci
Jun 20 19:28:09.269699 kernel: scsi host3: ahci
Jun 20 19:28:09.270999 kernel: scsi host4: ahci
Jun 20 19:28:09.271260 kernel: scsi host5: ahci
Jun 20 19:28:09.273868 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 lpm-pol 0
Jun 20 19:28:09.273890 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 lpm-pol 0
Jun 20 19:28:09.273901 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 lpm-pol 0
Jun 20 19:28:09.276436 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 lpm-pol 0
Jun 20 19:28:09.278335 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 lpm-pol 0
Jun 20 19:28:09.279691 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 lpm-pol 0
Jun 20 19:28:09.297532 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jun 20 19:28:09.319696 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jun 20 19:28:09.336450 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jun 20 19:28:09.347602 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jun 20 19:28:09.356671 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jun 20 19:28:09.359215 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jun 20 19:28:09.363419 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jun 20 19:28:09.394510 disk-uuid[630]: Primary Header is updated.
Jun 20 19:28:09.394510 disk-uuid[630]: Secondary Entries is updated.
Jun 20 19:28:09.394510 disk-uuid[630]: Secondary Header is updated.
Jun 20 19:28:09.398845 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jun 20 19:28:09.403837 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jun 20 19:28:09.592975 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jun 20 19:28:09.593037 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Jun 20 19:28:09.593049 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jun 20 19:28:09.593059 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jun 20 19:28:09.593860 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jun 20 19:28:09.594848 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jun 20 19:28:09.595847 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jun 20 19:28:09.596974 kernel: ata3.00: applying bridge limits
Jun 20 19:28:09.596987 kernel: ata3.00: configured for UDMA/100
Jun 20 19:28:09.597853 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Jun 20 19:28:09.656856 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jun 20 19:28:09.657124 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jun 20 19:28:09.682911 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Jun 20 19:28:10.082795 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jun 20 19:28:10.084251 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jun 20 19:28:10.085706 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jun 20 19:28:10.086132 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jun 20 19:28:10.091790 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jun 20 19:28:10.125782 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jun 20 19:28:10.405706 disk-uuid[631]: The operation has completed successfully.
Jun 20 19:28:10.407216 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jun 20 19:28:10.438180 systemd[1]: disk-uuid.service: Deactivated successfully.
Jun 20 19:28:10.438312 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jun 20 19:28:10.477013 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jun 20 19:28:10.501082 sh[659]: Success
Jun 20 19:28:10.519961 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jun 20 19:28:10.520024 kernel: device-mapper: uevent: version 1.0.3
Jun 20 19:28:10.521046 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Jun 20 19:28:10.530856 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Jun 20 19:28:10.563745 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jun 20 19:28:10.566269 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jun 20 19:28:10.589778 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jun 20 19:28:10.597442 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay'
Jun 20 19:28:10.597484 kernel: BTRFS: device fsid 048b924a-9f97-43f5-98d6-0fff18874966 devid 1 transid 41 /dev/mapper/usr (253:0) scanned by mount (671)
Jun 20 19:28:10.598794 kernel: BTRFS info (device dm-0): first mount of filesystem 048b924a-9f97-43f5-98d6-0fff18874966
Jun 20 19:28:10.598838 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jun 20 19:28:10.599673 kernel: BTRFS info (device dm-0): using free-space-tree
Jun 20 19:28:10.605223 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jun 20 19:28:10.606696 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Jun 20 19:28:10.608113 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jun 20 19:28:10.609017 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jun 20 19:28:10.610720 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jun 20 19:28:10.639849 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 (254:6) scanned by mount (704)
Jun 20 19:28:10.642018 kernel: BTRFS info (device vda6): first mount of filesystem 40288228-7b4b-4005-945b-574c4c10ab32
Jun 20 19:28:10.642049 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jun 20 19:28:10.642065 kernel: BTRFS info (device vda6): using free-space-tree
Jun 20 19:28:10.649837 kernel: BTRFS info (device vda6): last unmount of filesystem 40288228-7b4b-4005-945b-574c4c10ab32
Jun 20 19:28:10.650226 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jun 20 19:28:10.653928 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jun 20 19:28:10.844810 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jun 20 19:28:10.848329 ignition[745]: Ignition 2.21.0
Jun 20 19:28:10.848348 ignition[745]: Stage: fetch-offline
Jun 20 19:28:10.848509 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jun 20 19:28:10.848443 ignition[745]: no configs at "/usr/lib/ignition/base.d"
Jun 20 19:28:10.848454 ignition[745]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jun 20 19:28:10.848553 ignition[745]: parsed url from cmdline: ""
Jun 20 19:28:10.848557 ignition[745]: no config URL provided
Jun 20 19:28:10.848563 ignition[745]: reading system config file "/usr/lib/ignition/user.ign"
Jun 20 19:28:10.848571 ignition[745]: no config at "/usr/lib/ignition/user.ign"
Jun 20 19:28:10.848597 ignition[745]: op(1): [started] loading QEMU firmware config module
Jun 20 19:28:10.848606 ignition[745]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jun 20 19:28:10.858293 ignition[745]: op(1): [finished] loading QEMU firmware config module
Jun 20 19:28:10.886711 systemd-networkd[847]: lo: Link UP
Jun 20 19:28:10.886722 systemd-networkd[847]: lo: Gained carrier
Jun 20 19:28:10.888358 systemd-networkd[847]: Enumeration completed
Jun 20 19:28:10.888467 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jun 20 19:28:10.889751 systemd-networkd[847]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jun 20 19:28:10.889756 systemd-networkd[847]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jun 20 19:28:10.889930 systemd[1]: Reached target network.target - Network.
Jun 20 19:28:10.891165 systemd-networkd[847]: eth0: Link UP
Jun 20 19:28:10.891170 systemd-networkd[847]: eth0: Gained carrier
Jun 20 19:28:10.891183 systemd-networkd[847]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jun 20 19:28:10.900859 systemd-networkd[847]: eth0: DHCPv4 address 10.0.0.126/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jun 20 19:28:10.911213 ignition[745]: parsing config with SHA512: 1e4a3f1c4a9337b4caa475e45c94bdca9fb7aca65c3072f39d8998c831c587117e39256a1aa7b1f6c0613ba2cab562f184eadd41c259dd64844029f351657ce7
Jun 20 19:28:10.915000 unknown[745]: fetched base config from "system"
Jun 20 19:28:10.915013 unknown[745]: fetched user config from "qemu"
Jun 20 19:28:10.915361 ignition[745]: fetch-offline: fetch-offline passed
Jun 20 19:28:10.915421 ignition[745]: Ignition finished successfully
Jun 20 19:28:10.918809 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jun 20 19:28:10.919749 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jun 20 19:28:10.921142 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jun 20 19:28:10.963673 ignition[854]: Ignition 2.21.0
Jun 20 19:28:10.963690 ignition[854]: Stage: kargs
Jun 20 19:28:10.963912 ignition[854]: no configs at "/usr/lib/ignition/base.d"
Jun 20 19:28:10.964724 ignition[854]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jun 20 19:28:10.966626 ignition[854]: kargs: kargs passed
Jun 20 19:28:10.966682 ignition[854]: Ignition finished successfully
Jun 20 19:28:10.972163 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jun 20 19:28:10.973534 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jun 20 19:28:11.025372 ignition[862]: Ignition 2.21.0
Jun 20 19:28:11.025385 ignition[862]: Stage: disks
Jun 20 19:28:11.025596 ignition[862]: no configs at "/usr/lib/ignition/base.d"
Jun 20 19:28:11.025607 ignition[862]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jun 20 19:28:11.030468 ignition[862]: disks: disks passed
Jun 20 19:28:11.031160 ignition[862]: Ignition finished successfully
Jun 20 19:28:11.034010 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jun 20 19:28:11.034763 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jun 20 19:28:11.035029 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jun 20 19:28:11.035354 systemd[1]: Reached target local-fs.target - Local File Systems.
Jun 20 19:28:11.035696 systemd[1]: Reached target sysinit.target - System Initialization.
Jun 20 19:28:11.036031 systemd[1]: Reached target basic.target - Basic System.
Jun 20 19:28:11.037395 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jun 20 19:28:11.066847 systemd-fsck[872]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Jun 20 19:28:11.075063 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jun 20 19:28:11.078252 systemd[1]: Mounting sysroot.mount - /sysroot...
Jun 20 19:28:11.189886 kernel: EXT4-fs (vda9): mounted filesystem 6290a154-3512-46a6-a5f5-a7fb62c65caa r/w with ordered data mode. Quota mode: none.
Jun 20 19:28:11.190785 systemd[1]: Mounted sysroot.mount - /sysroot.
Jun 20 19:28:11.192243 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jun 20 19:28:11.194839 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jun 20 19:28:11.197511 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jun 20 19:28:11.198648 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jun 20 19:28:11.198695 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jun 20 19:28:11.198723 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jun 20 19:28:11.227479 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jun 20 19:28:11.231439 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jun 20 19:28:11.234845 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 (254:6) scanned by mount (880)
Jun 20 19:28:11.237008 kernel: BTRFS info (device vda6): first mount of filesystem 40288228-7b4b-4005-945b-574c4c10ab32
Jun 20 19:28:11.237025 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jun 20 19:28:11.237036 kernel: BTRFS info (device vda6): using free-space-tree
Jun 20 19:28:11.241789 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jun 20 19:28:11.275722 initrd-setup-root[904]: cut: /sysroot/etc/passwd: No such file or directory
Jun 20 19:28:11.280477 initrd-setup-root[911]: cut: /sysroot/etc/group: No such file or directory
Jun 20 19:28:11.287357 initrd-setup-root[918]: cut: /sysroot/etc/shadow: No such file or directory
Jun 20 19:28:11.291926 initrd-setup-root[925]: cut: /sysroot/etc/gshadow: No such file or directory
Jun 20 19:28:11.631329 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jun 20 19:28:11.634137 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jun 20 19:28:11.635185 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jun 20 19:28:11.656907 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jun 20 19:28:11.658704 kernel: BTRFS info (device vda6): last unmount of filesystem 40288228-7b4b-4005-945b-574c4c10ab32
Jun 20 19:28:11.675978 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jun 20 19:28:11.704556 ignition[994]: INFO : Ignition 2.21.0
Jun 20 19:28:11.704556 ignition[994]: INFO : Stage: mount
Jun 20 19:28:11.707168 ignition[994]: INFO : no configs at "/usr/lib/ignition/base.d"
Jun 20 19:28:11.707168 ignition[994]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jun 20 19:28:11.709365 ignition[994]: INFO : mount: mount passed
Jun 20 19:28:11.709365 ignition[994]: INFO : Ignition finished successfully
Jun 20 19:28:11.711117 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jun 20 19:28:11.712619 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jun 20 19:28:11.740624 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jun 20 19:28:11.772507 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 (254:6) scanned by mount (1006)
Jun 20 19:28:11.774560 kernel: BTRFS info (device vda6): first mount of filesystem 40288228-7b4b-4005-945b-574c4c10ab32
Jun 20 19:28:11.774574 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jun 20 19:28:11.774585 kernel: BTRFS info (device vda6): using free-space-tree
Jun 20 19:28:11.778592 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jun 20 19:28:11.819155 ignition[1023]: INFO : Ignition 2.21.0
Jun 20 19:28:11.819155 ignition[1023]: INFO : Stage: files
Jun 20 19:28:11.820940 ignition[1023]: INFO : no configs at "/usr/lib/ignition/base.d"
Jun 20 19:28:11.820940 ignition[1023]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jun 20 19:28:11.823601 ignition[1023]: DEBUG : files: compiled without relabeling support, skipping
Jun 20 19:28:11.824877 ignition[1023]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jun 20 19:28:11.824877 ignition[1023]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jun 20 19:28:11.827765 ignition[1023]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jun 20 19:28:11.827765 ignition[1023]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jun 20 19:28:11.830745 ignition[1023]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jun 20 19:28:11.830745 ignition[1023]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Jun 20 19:28:11.830745 ignition[1023]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Jun 20 19:28:11.828138 unknown[1023]: wrote ssh authorized keys file for user: core
Jun 20 19:28:11.871056 ignition[1023]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jun 20 19:28:11.982247 ignition[1023]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Jun 20 19:28:11.982247 ignition[1023]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jun 20 19:28:11.986216 ignition[1023]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Jun 20 19:28:12.474275 ignition[1023]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jun 20 19:28:12.596475 ignition[1023]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jun 20 19:28:12.596475 ignition[1023]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jun 20 19:28:12.600318 ignition[1023]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jun 20 19:28:12.600318 ignition[1023]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jun 20 19:28:12.600318 ignition[1023]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jun 20 19:28:12.600318 ignition[1023]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jun 20 19:28:12.600318 ignition[1023]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jun 20 19:28:12.600318 ignition[1023]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jun 20 19:28:12.600318 ignition[1023]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jun 20 19:28:12.619346 systemd-networkd[847]: eth0: Gained IPv6LL
Jun 20 19:28:12.663244 ignition[1023]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jun 20 19:28:12.665475 ignition[1023]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jun 20 19:28:12.665475 ignition[1023]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jun 20 19:28:12.798034 ignition[1023]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jun 20 19:28:12.798034 ignition[1023]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jun 20 19:28:12.802841 ignition[1023]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Jun 20 19:28:13.313726 ignition[1023]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jun 20 19:28:13.909713 ignition[1023]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jun 20 19:28:13.909713 ignition[1023]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jun 20 19:28:13.913643 ignition[1023]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jun 20 19:28:13.918014 ignition[1023]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jun 20 19:28:13.918014 ignition[1023]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jun 20 19:28:13.921552 ignition[1023]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Jun 20 19:28:13.921552 ignition[1023]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jun 20 19:28:13.925004 ignition[1023]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jun 20 19:28:13.925004 ignition[1023]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Jun 20 19:28:13.925004 ignition[1023]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Jun 20 19:28:13.946996 ignition[1023]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jun 20 19:28:13.953273 ignition[1023]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jun 20 19:28:13.955099 ignition[1023]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Jun 20 19:28:13.955099 ignition[1023]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Jun 20 19:28:13.957990 ignition[1023]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Jun 20 19:28:13.957990 ignition[1023]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Jun 20 19:28:13.957990 ignition[1023]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jun 20 19:28:13.957990 ignition[1023]: INFO : files: files passed
Jun 20 19:28:13.957990 ignition[1023]: INFO : Ignition finished successfully
Jun 20 19:28:13.967527 systemd[1]: Finished ignition-files.service - Ignition (files).
Jun 20 19:28:13.970338 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jun 20 19:28:13.972896 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jun 20 19:28:13.987330 systemd[1]: ignition-quench.service: Deactivated successfully.
Jun 20 19:28:13.987486 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jun 20 19:28:13.992073 initrd-setup-root-after-ignition[1052]: grep: /sysroot/oem/oem-release: No such file or directory
Jun 20 19:28:13.996083 initrd-setup-root-after-ignition[1054]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jun 20 19:28:13.996083 initrd-setup-root-after-ignition[1054]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jun 20 19:28:13.999610 initrd-setup-root-after-ignition[1058]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jun 20 19:28:14.002576 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jun 20 19:28:14.003241 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jun 20 19:28:14.007343 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jun 20 19:28:14.071762 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jun 20 19:28:14.071917 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jun 20 19:28:14.074220 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jun 20 19:28:14.076224 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jun 20 19:28:14.078284 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jun 20 19:28:14.079252 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jun 20 19:28:14.102515 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jun 20 19:28:14.105180 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jun 20 19:28:14.128878 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jun 20 19:28:14.129241 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jun 20 19:28:14.131484 systemd[1]: Stopped target timers.target - Timer Units.
Jun 20 19:28:14.133689 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jun 20 19:28:14.133828 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jun 20 19:28:14.137273 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jun 20 19:28:14.137665 systemd[1]: Stopped target basic.target - Basic System.
Jun 20 19:28:14.138170 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jun 20 19:28:14.138516 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jun 20 19:28:14.138865 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jun 20 19:28:14.139343 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Jun 20 19:28:14.139686 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jun 20 19:28:14.140191 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jun 20 19:28:14.140540 systemd[1]: Stopped target sysinit.target - System Initialization.
Jun 20 19:28:14.140879 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jun 20 19:28:14.141359 systemd[1]: Stopped target swap.target - Swaps.
Jun 20 19:28:14.141676 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jun 20 19:28:14.141784 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jun 20 19:28:14.159686 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jun 20 19:28:14.160204 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jun 20 19:28:14.160505 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jun 20 19:28:14.165803 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jun 20 19:28:14.166375 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jun 20 19:28:14.166493 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jun 20 19:28:14.170984 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jun 20 19:28:14.171100 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jun 20 19:28:14.173218 systemd[1]: Stopped target paths.target - Path Units.
Jun 20 19:28:14.173705 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jun 20 19:28:14.179886 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jun 20 19:28:14.180240 systemd[1]: Stopped target slices.target - Slice Units.
Jun 20 19:28:14.182898 systemd[1]: Stopped target sockets.target - Socket Units.
Jun 20 19:28:14.184623 systemd[1]: iscsid.socket: Deactivated successfully.
Jun 20 19:28:14.184721 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jun 20 19:28:14.187978 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jun 20 19:28:14.188066 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jun 20 19:28:14.188557 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jun 20 19:28:14.188665 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jun 20 19:28:14.191142 systemd[1]: ignition-files.service: Deactivated successfully.
Jun 20 19:28:14.191251 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jun 20 19:28:14.196108 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jun 20 19:28:14.197719 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jun 20 19:28:14.200437 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jun 20 19:28:14.200611 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jun 20 19:28:14.201882 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jun 20 19:28:14.201986 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jun 20 19:28:14.210712 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jun 20 19:28:14.211003 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jun 20 19:28:14.239785 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jun 20 19:28:14.284205 ignition[1079]: INFO : Ignition 2.21.0
Jun 20 19:28:14.284205 ignition[1079]: INFO : Stage: umount
Jun 20 19:28:14.286147 ignition[1079]: INFO : no configs at "/usr/lib/ignition/base.d"
Jun 20 19:28:14.286147 ignition[1079]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jun 20 19:28:14.286147 ignition[1079]: INFO : umount: umount passed
Jun 20 19:28:14.286147 ignition[1079]: INFO : Ignition finished successfully
Jun 20 19:28:14.289188 systemd[1]: ignition-mount.service: Deactivated successfully.
Jun 20 19:28:14.289324 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jun 20 19:28:14.290477 systemd[1]: Stopped target network.target - Network.
Jun 20 19:28:14.292591 systemd[1]: ignition-disks.service: Deactivated successfully.
Jun 20 19:28:14.292649 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jun 20 19:28:14.294532 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jun 20 19:28:14.294579 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jun 20 19:28:14.295049 systemd[1]: ignition-setup.service: Deactivated successfully.
Jun 20 19:28:14.295099 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jun 20 19:28:14.295382 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jun 20 19:28:14.295435 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jun 20 19:28:14.295839 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jun 20 19:28:14.296316 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jun 20 19:28:14.311408 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jun 20 19:28:14.311565 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jun 20 19:28:14.316045 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Jun 20 19:28:14.316477 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jun 20 19:28:14.316533 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jun 20 19:28:14.321149 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jun 20 19:28:14.321486 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jun 20 19:28:14.321662 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jun 20 19:28:14.326562 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Jun 20 19:28:14.327278 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Jun 20 19:28:14.330000 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jun 20 19:28:14.330068 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jun 20 19:28:14.333210 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jun 20 19:28:14.333561 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jun 20 19:28:14.333633 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jun 20 19:28:14.334133 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jun 20 19:28:14.334200 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jun 20 19:28:14.339364 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jun 20 19:28:14.339441 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jun 20 19:28:14.339901 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jun 20 19:28:14.347068 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jun 20 19:28:14.364875 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jun 20 19:28:14.365082 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jun 20 19:28:14.365849 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jun 20 19:28:14.365896 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jun 20 19:28:14.368688 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jun 20 19:28:14.368729 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jun 20 19:28:14.369335 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jun 20 19:28:14.369385 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jun 20 19:28:14.370038 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jun 20 19:28:14.370087 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jun 20 19:28:14.370687 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jun 20 19:28:14.370735 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jun 20 19:28:14.372193 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jun 20 19:28:14.381279 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Jun 20 19:28:14.381338 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Jun 20 19:28:14.386436 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jun 20 19:28:14.386490 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jun 20 19:28:14.391325 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jun 20 19:28:14.392517 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jun 20 19:28:14.395408 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jun 20 19:28:14.395485 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jun 20 19:28:14.396200 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jun 20 19:28:14.396246 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jun 20 19:28:14.401396 systemd[1]: network-cleanup.service: Deactivated successfully.
Jun 20 19:28:14.401542 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jun 20 19:28:14.402493 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jun 20 19:28:14.402597 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jun 20 19:28:14.421756 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jun 20 19:28:14.421906 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jun 20 19:28:14.422739 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jun 20 19:28:14.424807 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jun 20 19:28:14.424876 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jun 20 19:28:14.428799 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jun 20 19:28:14.462141 systemd[1]: Switching root.
Jun 20 19:28:14.496848 systemd-journald[220]: Received SIGTERM from PID 1 (systemd).
Jun 20 19:28:14.496920 systemd-journald[220]: Journal stopped
Jun 20 19:28:15.874127 kernel: SELinux: policy capability network_peer_controls=1
Jun 20 19:28:15.874197 kernel: SELinux: policy capability open_perms=1
Jun 20 19:28:15.874210 kernel: SELinux: policy capability extended_socket_class=1
Jun 20 19:28:15.874222 kernel: SELinux: policy capability always_check_network=0
Jun 20 19:28:15.874233 kernel: SELinux: policy capability cgroup_seclabel=1
Jun 20 19:28:15.874245 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jun 20 19:28:15.874256 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jun 20 19:28:15.874267 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jun 20 19:28:15.874284 kernel: SELinux: policy capability userspace_initial_context=0
Jun 20 19:28:15.874306 kernel: audit: type=1403 audit(1750447694.957:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jun 20 19:28:15.874325 systemd[1]: Successfully loaded SELinux policy in 50.820ms.
Jun 20 19:28:15.874345 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 12.904ms.
Jun 20 19:28:15.874358 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jun 20 19:28:15.874371 systemd[1]: Detected virtualization kvm.
Jun 20 19:28:15.874389 systemd[1]: Detected architecture x86-64.
Jun 20 19:28:15.874406 systemd[1]: Detected first boot.
Jun 20 19:28:15.874418 systemd[1]: Initializing machine ID from VM UUID.
Jun 20 19:28:15.874430 zram_generator::config[1125]: No configuration found.
Jun 20 19:28:15.874448 kernel: Guest personality initialized and is inactive
Jun 20 19:28:15.874459 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Jun 20 19:28:15.874471 kernel: Initialized host personality
Jun 20 19:28:15.874482 kernel: NET: Registered PF_VSOCK protocol family
Jun 20 19:28:15.874494 systemd[1]: Populated /etc with preset unit settings.
Jun 20 19:28:15.874514 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jun 20 19:28:15.874526 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jun 20 19:28:15.874538 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jun 20 19:28:15.874556 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jun 20 19:28:15.874568 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jun 20 19:28:15.874580 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jun 20 19:28:15.874592 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jun 20 19:28:15.874604 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jun 20 19:28:15.874616 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jun 20 19:28:15.874628 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jun 20 19:28:15.874641 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jun 20 19:28:15.874654 systemd[1]: Created slice user.slice - User and Session Slice.
Jun 20 19:28:15.874671 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jun 20 19:28:15.874687 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jun 20 19:28:15.874699 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jun 20 19:28:15.874711 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jun 20 19:28:15.874724 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jun 20 19:28:15.874736 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jun 20 19:28:15.874748 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jun 20 19:28:15.874766 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jun 20 19:28:15.874778 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jun 20 19:28:15.874790 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jun 20 19:28:15.874802 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jun 20 19:28:15.874828 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jun 20 19:28:15.874840 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jun 20 19:28:15.874853 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jun 20 19:28:15.874865 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jun 20 19:28:15.874877 systemd[1]: Reached target slices.target - Slice Units.
Jun 20 19:28:15.874892 systemd[1]: Reached target swap.target - Swaps.
Jun 20 19:28:15.874909 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jun 20 19:28:15.874922 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jun 20 19:28:15.874934 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jun 20 19:28:15.874946 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jun 20 19:28:15.874958 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jun 20 19:28:15.874974 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jun 20 19:28:15.874986 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jun 20 19:28:15.874998 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jun 20 19:28:15.875010 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jun 20 19:28:15.875027 systemd[1]: Mounting media.mount - External Media Directory...
Jun 20 19:28:15.875039 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 20 19:28:15.875051 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jun 20 19:28:15.875064 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jun 20 19:28:15.875076 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jun 20 19:28:15.875089 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jun 20 19:28:15.875101 systemd[1]: Reached target machines.target - Containers.
Jun 20 19:28:15.875113 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jun 20 19:28:15.875130 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jun 20 19:28:15.875143 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jun 20 19:28:15.875155 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jun 20 19:28:15.875167 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jun 20 19:28:15.875179 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jun 20 19:28:15.875191 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jun 20 19:28:15.875204 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jun 20 19:28:15.875218 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jun 20 19:28:15.875231 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jun 20 19:28:15.875253 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jun 20 19:28:15.875267 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jun 20 19:28:15.875279 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jun 20 19:28:15.875291 systemd[1]: Stopped systemd-fsck-usr.service.
Jun 20 19:28:15.875304 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jun 20 19:28:15.875316 kernel: fuse: init (API version 7.41)
Jun 20 19:28:15.875328 systemd[1]: Starting systemd-journald.service - Journal Service...
Jun 20 19:28:15.875340 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jun 20 19:28:15.875357 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jun 20 19:28:15.875369 kernel: loop: module loaded
Jun 20 19:28:15.875399 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jun 20 19:28:15.875410 kernel: ACPI: bus type drm_connector registered
Jun 20 19:28:15.875422 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jun 20 19:28:15.875434 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jun 20 19:28:15.875452 systemd[1]: verity-setup.service: Deactivated successfully.
Jun 20 19:28:15.875464 systemd[1]: Stopped verity-setup.service.
Jun 20 19:28:15.875476 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 20 19:28:15.875489 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jun 20 19:28:15.875501 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jun 20 19:28:15.875513 systemd[1]: Mounted media.mount - External Media Directory.
Jun 20 19:28:15.875525 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jun 20 19:28:15.875567 systemd-journald[1200]: Collecting audit messages is disabled.
Jun 20 19:28:15.875969 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jun 20 19:28:15.875995 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jun 20 19:28:15.876009 systemd-journald[1200]: Journal started
Jun 20 19:28:15.876037 systemd-journald[1200]: Runtime Journal (/run/log/journal/a0f70811bacf4e36b6c43b04c06fca6a) is 6M, max 48.6M, 42.5M free.
Jun 20 19:28:15.534443 systemd[1]: Queued start job for default target multi-user.target.
Jun 20 19:28:15.557120 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jun 20 19:28:15.557638 systemd[1]: systemd-journald.service: Deactivated successfully.
Jun 20 19:28:15.878853 systemd[1]: Started systemd-journald.service - Journal Service.
Jun 20 19:28:15.880803 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jun 20 19:28:15.882488 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jun 20 19:28:15.884235 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jun 20 19:28:15.884469 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jun 20 19:28:15.886162 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jun 20 19:28:15.886388 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jun 20 19:28:15.887905 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jun 20 19:28:15.888126 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jun 20 19:28:15.889611 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jun 20 19:28:15.889951 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jun 20 19:28:15.891472 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jun 20 19:28:15.891690 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jun 20 19:28:15.893235 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jun 20 19:28:15.893465 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jun 20 19:28:15.894986 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jun 20 19:28:15.896597 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jun 20 19:28:15.898236 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jun 20 19:28:15.899936 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jun 20 19:28:15.915401 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jun 20 19:28:15.918212 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jun 20 19:28:15.920630 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jun 20 19:28:15.921911 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jun 20 19:28:15.922015 systemd[1]: Reached target local-fs.target - Local File Systems.
Jun 20 19:28:15.924205 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jun 20 19:28:15.934291 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jun 20 19:28:15.935520 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jun 20 19:28:15.937207 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jun 20 19:28:15.939522 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jun 20 19:28:15.941883 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jun 20 19:28:15.944049 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jun 20 19:28:15.945198 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jun 20 19:28:15.952085 systemd-journald[1200]: Time spent on flushing to /var/log/journal/a0f70811bacf4e36b6c43b04c06fca6a is 13.387ms for 977 entries.
Jun 20 19:28:15.952085 systemd-journald[1200]: System Journal (/var/log/journal/a0f70811bacf4e36b6c43b04c06fca6a) is 8M, max 195.6M, 187.6M free.
Jun 20 19:28:15.979885 systemd-journald[1200]: Received client request to flush runtime journal.
Jun 20 19:28:15.947040 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jun 20 19:28:15.950019 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jun 20 19:28:15.955015 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jun 20 19:28:15.959005 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jun 20 19:28:15.961069 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jun 20 19:28:15.962785 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jun 20 19:28:15.981527 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jun 20 19:28:15.982106 kernel: loop0: detected capacity change from 0 to 224512
Jun 20 19:28:15.983522 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jun 20 19:28:15.986476 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jun 20 19:28:15.989922 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jun 20 19:28:15.993209 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jun 20 19:28:16.080841 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jun 20 19:28:16.087517 systemd-tmpfiles[1245]: ACLs are not supported, ignoring.
Jun 20 19:28:16.087535 systemd-tmpfiles[1245]: ACLs are not supported, ignoring.
Jun 20 19:28:16.095589 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jun 20 19:28:16.098907 kernel: loop1: detected capacity change from 0 to 113872
Jun 20 19:28:16.099495 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jun 20 19:28:16.112790 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jun 20 19:28:16.131844 kernel: loop2: detected capacity change from 0 to 146240
Jun 20 19:28:16.138791 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jun 20 19:28:16.142450 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jun 20 19:28:16.162115 kernel: loop3: detected capacity change from 0 to 224512
Jun 20 19:28:16.172512 systemd-tmpfiles[1266]: ACLs are not supported, ignoring.
Jun 20 19:28:16.172531 systemd-tmpfiles[1266]: ACLs are not supported, ignoring.
Jun 20 19:28:16.176857 kernel: loop4: detected capacity change from 0 to 113872
Jun 20 19:28:16.178448 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jun 20 19:28:16.189838 kernel: loop5: detected capacity change from 0 to 146240
Jun 20 19:28:16.199124 (sd-merge)[1269]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jun 20 19:28:16.199726 (sd-merge)[1269]: Merged extensions into '/usr'.
Jun 20 19:28:16.204425 systemd[1]: Reload requested from client PID 1244 ('systemd-sysext') (unit systemd-sysext.service)...
Jun 20 19:28:16.204443 systemd[1]: Reloading...
Jun 20 19:28:16.407855 zram_generator::config[1296]: No configuration found.
Jun 20 19:28:16.513178 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jun 20 19:28:16.608212 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jun 20 19:28:16.608661 systemd[1]: Reloading finished in 403 ms.
Jun 20 19:28:16.610152 ldconfig[1239]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jun 20 19:28:16.626980 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jun 20 19:28:16.628579 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jun 20 19:28:16.647385 systemd[1]: Starting ensure-sysext.service...
Jun 20 19:28:16.649343 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jun 20 19:28:16.659582 systemd[1]: Reload requested from client PID 1333 ('systemctl') (unit ensure-sysext.service)...
Jun 20 19:28:16.659695 systemd[1]: Reloading...
Jun 20 19:28:16.676454 systemd-tmpfiles[1335]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Jun 20 19:28:16.677524 systemd-tmpfiles[1335]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Jun 20 19:28:16.677978 systemd-tmpfiles[1335]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jun 20 19:28:16.678376 systemd-tmpfiles[1335]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jun 20 19:28:16.679491 systemd-tmpfiles[1335]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jun 20 19:28:16.679940 systemd-tmpfiles[1335]: ACLs are not supported, ignoring.
Jun 20 19:28:16.680086 systemd-tmpfiles[1335]: ACLs are not supported, ignoring.
Jun 20 19:28:16.684749 systemd-tmpfiles[1335]: Detected autofs mount point /boot during canonicalization of boot.
Jun 20 19:28:16.684841 systemd-tmpfiles[1335]: Skipping /boot
Jun 20 19:28:16.754849 zram_generator::config[1361]: No configuration found.
Jun 20 19:28:16.753187 systemd-tmpfiles[1335]: Detected autofs mount point /boot during canonicalization of boot.
Jun 20 19:28:16.753198 systemd-tmpfiles[1335]: Skipping /boot
Jun 20 19:28:16.843389 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jun 20 19:28:16.923979 systemd[1]: Reloading finished in 263 ms.
Jun 20 19:28:16.938611 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jun 20 19:28:16.966186 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jun 20 19:28:16.983706 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jun 20 19:28:16.987439 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jun 20 19:28:16.990315 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jun 20 19:28:16.998274 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jun 20 19:28:17.006051 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jun 20 19:28:17.008832 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jun 20 19:28:17.012404 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 20 19:28:17.012580 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jun 20 19:28:17.020141 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jun 20 19:28:17.022579 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jun 20 19:28:17.026556 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jun 20 19:28:17.027848 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jun 20 19:28:17.027979 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jun 20 19:28:17.028088 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 20 19:28:17.029625 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jun 20 19:28:17.033603 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jun 20 19:28:17.034955 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jun 20 19:28:17.037322 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jun 20 19:28:17.037565 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jun 20 19:28:17.039487 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jun 20 19:28:17.039985 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jun 20 19:28:17.054249 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jun 20 19:28:17.055203 systemd-udevd[1406]: Using default interface naming scheme 'v255'.
Jun 20 19:28:17.062686 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 20 19:28:17.063306 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jun 20 19:28:17.065536 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jun 20 19:28:17.081961 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jun 20 19:28:17.091787 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jun 20 19:28:17.098460 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jun 20 19:28:17.100048 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jun 20 19:28:17.100170 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jun 20 19:28:17.103093 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jun 20 19:28:17.107838 augenrules[1458]: No rules
Jun 20 19:28:17.107051 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jun 20 19:28:17.108181 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 20 19:28:17.109356 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jun 20 19:28:17.113064 systemd[1]: audit-rules.service: Deactivated successfully.
Jun 20 19:28:17.113703 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jun 20 19:28:17.117297 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jun 20 19:28:17.119908 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jun 20 19:28:17.120483 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jun 20 19:28:17.124505 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jun 20 19:28:17.124713 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jun 20 19:28:17.136734 systemd[1]: Finished ensure-sysext.service.
Jun 20 19:28:17.138477 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jun 20 19:28:17.144163 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jun 20 19:28:17.149238 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jun 20 19:28:17.150049 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jun 20 19:28:17.151709 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jun 20 19:28:17.164682 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jun 20 19:28:17.172006 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jun 20 19:28:17.173307 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jun 20 19:28:17.173388 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jun 20 19:28:17.178001 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jun 20 19:28:17.179401 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jun 20 19:28:17.262843 kernel: mousedev: PS/2 mouse device common for all mice
Jun 20 19:28:17.279430 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jun 20 19:28:17.283836 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4
Jun 20 19:28:17.292215 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jun 20 19:28:17.299856 kernel: ACPI: button: Power Button [PWRF]
Jun 20 19:28:17.303619 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jun 20 19:28:17.326242 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jun 20 19:28:17.331125 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Jun 20 19:28:17.331400 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jun 20 19:28:17.391894 systemd-networkd[1483]: lo: Link UP
Jun 20 19:28:17.391904 systemd-networkd[1483]: lo: Gained carrier
Jun 20 19:28:17.393590 systemd-networkd[1483]: Enumeration completed
Jun 20 19:28:17.393697 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jun 20 19:28:17.395095 systemd-networkd[1483]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jun 20 19:28:17.395104 systemd-networkd[1483]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jun 20 19:28:17.395705 systemd-networkd[1483]: eth0: Link UP
Jun 20 19:28:17.395867 systemd-networkd[1483]: eth0: Gained carrier
Jun 20 19:28:17.395880 systemd-networkd[1483]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jun 20 19:28:17.398935 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Jun 20 19:28:17.404136 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jun 20 19:28:17.415018 systemd-resolved[1404]: Positive Trust Anchors:
Jun 20 19:28:17.415033 systemd-resolved[1404]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jun 20 19:28:17.415066 systemd-resolved[1404]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jun 20 19:28:17.415179 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jun 20 19:28:17.416666 systemd-networkd[1483]: eth0: DHCPv4 address 10.0.0.126/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jun 20 19:28:17.429609 systemd-resolved[1404]: Defaulting to hostname 'linux'.
Jun 20 19:28:17.434601 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jun 20 19:28:17.436020 systemd[1]: Reached target network.target - Network.
Jun 20 19:28:17.436976 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jun 20 19:28:17.452891 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Jun 20 19:28:17.454431 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jun 20 19:28:17.457106 systemd[1]: Reached target time-set.target - System Time Set.
Jun 20 19:28:17.539954 systemd-timesyncd[1484]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jun 20 19:28:17.540107 systemd-timesyncd[1484]: Initial clock synchronization to Fri 2025-06-20 19:28:17.853650 UTC.
Jun 20 19:28:17.597848 kernel: kvm_amd: TSC scaling supported
Jun 20 19:28:17.597965 kernel: kvm_amd: Nested Virtualization enabled
Jun 20 19:28:17.597994 kernel: kvm_amd: Nested Paging enabled
Jun 20 19:28:17.598015 kernel: kvm_amd: LBR virtualization supported
Jun 20 19:28:17.598038 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Jun 20 19:28:17.598091 kernel: kvm_amd: Virtual GIF supported
Jun 20 19:28:17.619851 kernel: EDAC MC: Ver: 3.0.0
Jun 20 19:28:17.633975 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jun 20 19:28:17.635498 systemd[1]: Reached target sysinit.target - System Initialization.
Jun 20 19:28:17.636698 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jun 20 19:28:17.637971 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jun 20 19:28:17.639268 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Jun 20 19:28:17.640675 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jun 20 19:28:17.641897 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jun 20 19:28:17.643208 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jun 20 19:28:17.644517 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jun 20 19:28:17.644561 systemd[1]: Reached target paths.target - Path Units.
Jun 20 19:28:17.645542 systemd[1]: Reached target timers.target - Timer Units.
Jun 20 19:28:17.647715 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jun 20 19:28:17.650865 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jun 20 19:28:17.654745 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Jun 20 19:28:17.656178 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Jun 20 19:28:17.657457 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Jun 20 19:28:17.667637 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jun 20 19:28:17.669125 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Jun 20 19:28:17.670929 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jun 20 19:28:17.672688 systemd[1]: Reached target sockets.target - Socket Units.
Jun 20 19:28:17.673679 systemd[1]: Reached target basic.target - Basic System.
Jun 20 19:28:17.674670 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jun 20 19:28:17.674700 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jun 20 19:28:17.675710 systemd[1]: Starting containerd.service - containerd container runtime...
Jun 20 19:28:17.677796 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jun 20 19:28:17.679743 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jun 20 19:28:17.682893 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jun 20 19:28:17.691961 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jun 20 19:28:17.693020 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jun 20 19:28:17.694119 jq[1534]: false
Jun 20 19:28:17.694143 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Jun 20 19:28:17.696216 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jun 20 19:28:17.699895 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jun 20 19:28:17.703616 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jun 20 19:28:17.706275 extend-filesystems[1535]: Found /dev/vda6
Jun 20 19:28:17.706539 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jun 20 19:28:17.711976 google_oslogin_nss_cache[1536]: oslogin_cache_refresh[1536]: Refreshing passwd entry cache
Jun 20 19:28:17.711985 oslogin_cache_refresh[1536]: Refreshing passwd entry cache
Jun 20 19:28:17.713036 systemd[1]: Starting systemd-logind.service - User Login Management...
Jun 20 19:28:17.715132 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jun 20 19:28:17.715622 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jun 20 19:28:17.716778 extend-filesystems[1535]: Found /dev/vda9
Jun 20 19:28:17.718107 systemd[1]: Starting update-engine.service - Update Engine...
Jun 20 19:28:17.719943 google_oslogin_nss_cache[1536]: oslogin_cache_refresh[1536]: Failure getting users, quitting
Jun 20 19:28:17.719943 google_oslogin_nss_cache[1536]: oslogin_cache_refresh[1536]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jun 20 19:28:17.719943 google_oslogin_nss_cache[1536]: oslogin_cache_refresh[1536]: Refreshing group entry cache
Jun 20 19:28:17.719847 oslogin_cache_refresh[1536]: Failure getting users, quitting
Jun 20 19:28:17.719873 oslogin_cache_refresh[1536]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jun 20 19:28:17.719922 oslogin_cache_refresh[1536]: Refreshing group entry cache
Jun 20 19:28:17.721607 extend-filesystems[1535]: Checking size of /dev/vda9
Jun 20 19:28:17.722729 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jun 20 19:28:17.726902 google_oslogin_nss_cache[1536]: oslogin_cache_refresh[1536]: Failure getting groups, quitting
Jun 20 19:28:17.726899 oslogin_cache_refresh[1536]: Failure getting groups, quitting
Jun 20 19:28:17.726990 google_oslogin_nss_cache[1536]: oslogin_cache_refresh[1536]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jun 20 19:28:17.726914 oslogin_cache_refresh[1536]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jun 20 19:28:17.729319 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jun 20 19:28:17.730166 jq[1554]: true
Jun 20 19:28:17.731165 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jun 20 19:28:17.731473 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jun 20 19:28:17.731807 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Jun 20 19:28:17.732093 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Jun 20 19:28:17.733698 systemd[1]: motdgen.service: Deactivated successfully.
Jun 20 19:28:17.734026 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jun 20 19:28:17.738147 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jun 20 19:28:17.738410 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jun 20 19:28:17.741649 update_engine[1551]: I20250620 19:28:17.741560 1551 main.cc:92] Flatcar Update Engine starting
Jun 20 19:28:17.749371 extend-filesystems[1535]: Resized partition /dev/vda9
Jun 20 19:28:17.757289 jq[1562]: true
Jun 20 19:28:17.759942 extend-filesystems[1572]: resize2fs 1.47.2 (1-Jan-2025)
Jun 20 19:28:17.764829 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Jun 20 19:28:17.767600 tar[1560]: linux-amd64/LICENSE
Jun 20 19:28:17.767600 tar[1560]: linux-amd64/helm
Jun 20 19:28:17.775693 (ntainerd)[1577]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jun 20 19:28:17.791875 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Jun 20 19:28:17.798317 dbus-daemon[1532]: [system] SELinux support is enabled
Jun 20 19:28:17.802566 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jun 20 19:28:17.817965 update_engine[1551]: I20250620 19:28:17.814766 1551 update_check_scheduler.cc:74] Next update check in 4m41s
Jun 20 19:28:17.812682 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jun 20 19:28:17.812703 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jun 20 19:28:17.814101 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jun 20 19:28:17.814118 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jun 20 19:28:17.815506 systemd[1]: Started update-engine.service - Update Engine.
Jun 20 19:28:17.818169 extend-filesystems[1572]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jun 20 19:28:17.818169 extend-filesystems[1572]: old_desc_blocks = 1, new_desc_blocks = 1
Jun 20 19:28:17.818169 extend-filesystems[1572]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Jun 20 19:28:17.825131 extend-filesystems[1535]: Resized filesystem in /dev/vda9
Jun 20 19:28:17.820003 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jun 20 19:28:17.825695 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jun 20 19:28:17.826048 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jun 20 19:28:17.828014 systemd-logind[1547]: Watching system buttons on /dev/input/event2 (Power Button)
Jun 20 19:28:17.828039 systemd-logind[1547]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jun 20 19:28:17.832365 systemd-logind[1547]: New seat seat0.
Jun 20 19:28:17.834572 systemd[1]: Started systemd-logind.service - User Login Management.
Jun 20 19:28:17.848304 sshd_keygen[1558]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jun 20 19:28:17.878695 bash[1597]: Updated "/home/core/.ssh/authorized_keys"
Jun 20 19:28:17.882038 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jun 20 19:28:17.888646 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jun 20 19:28:17.890637 locksmithd[1592]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jun 20 19:28:17.894955 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jun 20 19:28:17.899919 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jun 20 19:28:17.985230 systemd[1]: issuegen.service: Deactivated successfully.
Jun 20 19:28:17.985582 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jun 20 19:28:17.990139 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jun 20 19:28:18.022404 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jun 20 19:28:18.025952 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jun 20 19:28:18.029114 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jun 20 19:28:18.119946 systemd[1]: Reached target getty.target - Login Prompts.
Jun 20 19:28:18.380727 containerd[1577]: time="2025-06-20T19:28:18Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Jun 20 19:28:18.381442 containerd[1577]: time="2025-06-20T19:28:18.381352143Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4
Jun 20 19:28:18.391944 containerd[1577]: time="2025-06-20T19:28:18.391901048Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="17.255µs"
Jun 20 19:28:18.391944 containerd[1577]: time="2025-06-20T19:28:18.391927853Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Jun 20 19:28:18.392025 containerd[1577]: time="2025-06-20T19:28:18.391947641Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Jun 20 19:28:18.392163 containerd[1577]: time="2025-06-20T19:28:18.392131512Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Jun 20 19:28:18.392163 containerd[1577]: time="2025-06-20T19:28:18.392151767Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Jun 20 19:28:18.392213 containerd[1577]: time="2025-06-20T19:28:18.392202348Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jun 20 19:28:18.392341 containerd[1577]: time="2025-06-20T19:28:18.392298284Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jun 20 19:28:18.392341 containerd[1577]: time="2025-06-20T19:28:18.392325986Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jun 20 19:28:18.392638 containerd[1577]: time="2025-06-20T19:28:18.392594440Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jun 20 19:28:18.392638 containerd[1577]: time="2025-06-20T19:28:18.392624485Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jun 20 19:28:18.392638 containerd[1577]: time="2025-06-20T19:28:18.392635774Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jun 20 19:28:18.392706 containerd[1577]: time="2025-06-20T19:28:18.392644147Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Jun 20 19:28:18.392789 containerd[1577]: time="2025-06-20T19:28:18.392760274Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Jun 20 19:28:18.393115 containerd[1577]: time="2025-06-20T19:28:18.393081248Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jun 20 19:28:18.393141 containerd[1577]: time="2025-06-20T19:28:18.393125539Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jun 20 19:28:18.393141 containerd[1577]: time="2025-06-20T19:28:18.393135745Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Jun 20 19:28:18.393260 containerd[1577]: time="2025-06-20T19:28:18.393232950Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Jun 20 19:28:18.396237 containerd[1577]: time="2025-06-20T19:28:18.396200786Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Jun 20 19:28:18.396321 containerd[1577]: time="2025-06-20T19:28:18.396293419Z" level=info msg="metadata content store policy set" policy=shared
Jun 20 19:28:18.401771 containerd[1577]: time="2025-06-20T19:28:18.401731899Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Jun 20 19:28:18.401812 containerd[1577]: time="2025-06-20T19:28:18.401790218Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Jun 20 19:28:18.401812 containerd[1577]: time="2025-06-20T19:28:18.401806131Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Jun 20 19:28:18.401870 containerd[1577]: time="2025-06-20T19:28:18.401831499Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Jun 20 19:28:18.401870 containerd[1577]: time="2025-06-20T19:28:18.401847047Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Jun 20 19:28:18.401910 containerd[1577]: time="2025-06-20T19:28:18.401874051Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Jun 20 19:28:18.401910 containerd[1577]: time="2025-06-20T19:28:18.401899264Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Jun 20 19:28:18.401963 containerd[1577]: time="2025-06-20T19:28:18.401912677Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Jun 20 19:28:18.401963 containerd[1577]: time="2025-06-20T19:28:18.401922789Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Jun 20 19:28:18.401963 containerd[1577]: time="2025-06-20T19:28:18.401934318Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Jun 20 19:28:18.401963 containerd[1577]: time="2025-06-20T19:28:18.401943556Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Jun 20 19:28:18.402041 containerd[1577]: time="2025-06-20T19:28:18.401967788Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Jun 20 19:28:18.402138 containerd[1577]: time="2025-06-20T19:28:18.402101610Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Jun 20 19:28:18.402138 containerd[1577]: time="2025-06-20T19:28:18.402129999Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Jun 20 19:28:18.402204 containerd[1577]: time="2025-06-20T19:28:18.402145329Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Jun 20 19:28:18.402204 containerd[1577]: time="2025-06-20T19:28:18.402156055Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Jun 20 19:28:18.402204 containerd[1577]: time="2025-06-20T19:28:18.402167480Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Jun 20 19:28:18.402204 containerd[1577]: time="2025-06-20T19:28:18.402178498Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Jun 20 19:28:18.402204 containerd[1577]: time="2025-06-20T19:28:18.402189442Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Jun 20 19:28:18.402204 containerd[1577]: time="2025-06-20T19:28:18.402202179Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Jun 20 19:28:18.402327 containerd[1577]: time="2025-06-20T19:28:18.402213572Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Jun 20 19:28:18.402327 containerd[1577]: time="2025-06-20T19:28:18.402224757Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Jun 20 19:28:18.402327 containerd[1577]: time="2025-06-20T19:28:18.402236066Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Jun 20 19:28:18.402327 containerd[1577]: time="2025-06-20T19:28:18.402322598Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Jun 20 19:28:18.402512 containerd[1577]: time="2025-06-20T19:28:18.402340135Z" level=info msg="Start snapshots syncer"
Jun 20 19:28:18.402512 containerd[1577]: time="2025-06-20T19:28:18.402372117Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Jun 20 19:28:18.403346 containerd[1577]: time="2025-06-20T19:28:18.403256921Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Jun 20 19:28:18.403508 containerd[1577]: time="2025-06-20T19:28:18.403370944Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Jun 20 19:28:18.407778 containerd[1577]: time="2025-06-20T19:28:18.405757443Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Jun 20 19:28:18.407778 containerd[1577]: time="2025-06-20T19:28:18.405991594Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Jun 20 19:28:18.407778 containerd[1577]: time="2025-06-20T19:28:18.406022754Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Jun 20 19:28:18.407778 containerd[1577]: time="2025-06-20T19:28:18.406037302Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Jun 20 19:28:18.407778 containerd[1577]: time="2025-06-20T19:28:18.406047601Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Jun 20 19:28:18.407778 containerd[1577]: time="2025-06-20T19:28:18.406059546Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Jun 20 19:28:18.407778 containerd[1577]: time="2025-06-20T19:28:18.406071627Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Jun 20 19:28:18.407778 containerd[1577]: time="2025-06-20T19:28:18.406083040Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Jun 20 19:28:18.407778 containerd[1577]: time="2025-06-20T19:28:18.406116439Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Jun 20 19:28:18.407778 containerd[1577]: time="2025-06-20T19:28:18.406126968Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Jun 20 19:28:18.407778 containerd[1577]: time="2025-06-20T19:28:18.406138121Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Jun 20 19:28:18.407778 containerd[1577]: time="2025-06-20T19:28:18.406190420Z" level=info msg="loading plugin"
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jun 20 19:28:18.407778 containerd[1577]: time="2025-06-20T19:28:18.406204646Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jun 20 19:28:18.407778 containerd[1577]: time="2025-06-20T19:28:18.406213696Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jun 20 19:28:18.408087 containerd[1577]: time="2025-06-20T19:28:18.406237243Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jun 20 19:28:18.408087 containerd[1577]: time="2025-06-20T19:28:18.406256665Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jun 20 19:28:18.408087 containerd[1577]: time="2025-06-20T19:28:18.406283543Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jun 20 19:28:18.408087 containerd[1577]: time="2025-06-20T19:28:18.406298759Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jun 20 19:28:18.408087 containerd[1577]: time="2025-06-20T19:28:18.406323284Z" level=info msg="runtime interface created" Jun 20 19:28:18.408087 containerd[1577]: time="2025-06-20T19:28:18.406329094Z" level=info msg="created NRI interface" Jun 20 19:28:18.408087 containerd[1577]: time="2025-06-20T19:28:18.406337676Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jun 20 19:28:18.408087 containerd[1577]: time="2025-06-20T19:28:18.406354995Z" level=info msg="Connect containerd service" Jun 20 19:28:18.408087 containerd[1577]: time="2025-06-20T19:28:18.406386425Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jun 20 19:28:18.408087 
containerd[1577]: time="2025-06-20T19:28:18.407359488Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jun 20 19:28:18.619653 tar[1560]: linux-amd64/README.md Jun 20 19:28:18.628346 containerd[1577]: time="2025-06-20T19:28:18.628253807Z" level=info msg="Start subscribing containerd event" Jun 20 19:28:18.628531 containerd[1577]: time="2025-06-20T19:28:18.628356449Z" level=info msg="Start recovering state" Jun 20 19:28:18.628607 containerd[1577]: time="2025-06-20T19:28:18.628558701Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jun 20 19:28:18.628607 containerd[1577]: time="2025-06-20T19:28:18.628598108Z" level=info msg="Start event monitor" Jun 20 19:28:18.628704 containerd[1577]: time="2025-06-20T19:28:18.628680745Z" level=info msg=serving... address=/run/containerd/containerd.sock Jun 20 19:28:18.628737 containerd[1577]: time="2025-06-20T19:28:18.628711664Z" level=info msg="Start cni network conf syncer for default" Jun 20 19:28:18.628737 containerd[1577]: time="2025-06-20T19:28:18.628727805Z" level=info msg="Start streaming server" Jun 20 19:28:18.628807 containerd[1577]: time="2025-06-20T19:28:18.628744364Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jun 20 19:28:18.628807 containerd[1577]: time="2025-06-20T19:28:18.628752570Z" level=info msg="runtime interface starting up..." Jun 20 19:28:18.628807 containerd[1577]: time="2025-06-20T19:28:18.628798351Z" level=info msg="starting plugins..." 
Jun 20 19:28:18.629061 containerd[1577]: time="2025-06-20T19:28:18.629040479Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jun 20 19:28:18.629317 containerd[1577]: time="2025-06-20T19:28:18.629272141Z" level=info msg="containerd successfully booted in 0.249365s" Jun 20 19:28:18.629329 systemd[1]: Started containerd.service - containerd container runtime. Jun 20 19:28:18.640939 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jun 20 19:28:19.146175 systemd-networkd[1483]: eth0: Gained IPv6LL Jun 20 19:28:19.150646 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jun 20 19:28:19.152627 systemd[1]: Reached target network-online.target - Network is Online. Jun 20 19:28:19.155638 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jun 20 19:28:19.158368 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:28:19.160712 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jun 20 19:28:19.189196 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jun 20 19:28:19.190916 systemd[1]: coreos-metadata.service: Deactivated successfully. Jun 20 19:28:19.191201 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jun 20 19:28:19.194534 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jun 20 19:28:20.797622 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:28:20.799669 systemd[1]: Reached target multi-user.target - Multi-User System. Jun 20 19:28:20.801123 systemd[1]: Startup finished in 2.970s (kernel) + 7.334s (initrd) + 5.892s (userspace) = 16.197s. 
Jun 20 19:28:20.834507 (kubelet)[1667]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 20 19:28:21.426670 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jun 20 19:28:21.428105 systemd[1]: Started sshd@0-10.0.0.126:22-10.0.0.1:54588.service - OpenSSH per-connection server daemon (10.0.0.1:54588). Jun 20 19:28:21.496894 sshd[1679]: Accepted publickey for core from 10.0.0.1 port 54588 ssh2: RSA SHA256:YoQ4GiRtY5Hu7FaS/OnNYeCnsR+r8YS2g6Qh7XD/NPE Jun 20 19:28:21.499129 sshd-session[1679]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:28:21.506623 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jun 20 19:28:21.507877 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jun 20 19:28:21.515612 systemd-logind[1547]: New session 1 of user core. Jun 20 19:28:21.540235 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jun 20 19:28:21.543878 systemd[1]: Starting user@500.service - User Manager for UID 500... Jun 20 19:28:21.544515 kubelet[1667]: E0620 19:28:21.544458 1667 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 20 19:28:21.548496 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 20 19:28:21.548707 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 20 19:28:21.549119 systemd[1]: kubelet.service: Consumed 2.090s CPU time, 265M memory peak. 
Jun 20 19:28:21.560573 (systemd)[1683]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jun 20 19:28:21.563546 systemd-logind[1547]: New session c1 of user core. Jun 20 19:28:21.718370 systemd[1683]: Queued start job for default target default.target. Jun 20 19:28:21.738254 systemd[1683]: Created slice app.slice - User Application Slice. Jun 20 19:28:21.738282 systemd[1683]: Reached target paths.target - Paths. Jun 20 19:28:21.738325 systemd[1683]: Reached target timers.target - Timers. Jun 20 19:28:21.740093 systemd[1683]: Starting dbus.socket - D-Bus User Message Bus Socket... Jun 20 19:28:21.751323 systemd[1683]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jun 20 19:28:21.751453 systemd[1683]: Reached target sockets.target - Sockets. Jun 20 19:28:21.751496 systemd[1683]: Reached target basic.target - Basic System. Jun 20 19:28:21.751538 systemd[1683]: Reached target default.target - Main User Target. Jun 20 19:28:21.751570 systemd[1683]: Startup finished in 180ms. Jun 20 19:28:21.751992 systemd[1]: Started user@500.service - User Manager for UID 500. Jun 20 19:28:21.801985 systemd[1]: Started session-1.scope - Session 1 of User core. Jun 20 19:28:21.865070 systemd[1]: Started sshd@1-10.0.0.126:22-10.0.0.1:54598.service - OpenSSH per-connection server daemon (10.0.0.1:54598). Jun 20 19:28:21.924195 sshd[1695]: Accepted publickey for core from 10.0.0.1 port 54598 ssh2: RSA SHA256:YoQ4GiRtY5Hu7FaS/OnNYeCnsR+r8YS2g6Qh7XD/NPE Jun 20 19:28:21.925757 sshd-session[1695]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:28:21.930378 systemd-logind[1547]: New session 2 of user core. Jun 20 19:28:21.943970 systemd[1]: Started session-2.scope - Session 2 of User core. 
Jun 20 19:28:21.999945 sshd[1697]: Connection closed by 10.0.0.1 port 54598 Jun 20 19:28:22.000159 sshd-session[1695]: pam_unix(sshd:session): session closed for user core Jun 20 19:28:22.016388 systemd[1]: sshd@1-10.0.0.126:22-10.0.0.1:54598.service: Deactivated successfully. Jun 20 19:28:22.018264 systemd[1]: session-2.scope: Deactivated successfully. Jun 20 19:28:22.018965 systemd-logind[1547]: Session 2 logged out. Waiting for processes to exit. Jun 20 19:28:22.022276 systemd[1]: Started sshd@2-10.0.0.126:22-10.0.0.1:54604.service - OpenSSH per-connection server daemon (10.0.0.1:54604). Jun 20 19:28:22.022873 systemd-logind[1547]: Removed session 2. Jun 20 19:28:22.082491 sshd[1703]: Accepted publickey for core from 10.0.0.1 port 54604 ssh2: RSA SHA256:YoQ4GiRtY5Hu7FaS/OnNYeCnsR+r8YS2g6Qh7XD/NPE Jun 20 19:28:22.084081 sshd-session[1703]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:28:22.088346 systemd-logind[1547]: New session 3 of user core. Jun 20 19:28:22.098990 systemd[1]: Started session-3.scope - Session 3 of User core. Jun 20 19:28:22.148621 sshd[1705]: Connection closed by 10.0.0.1 port 54604 Jun 20 19:28:22.149083 sshd-session[1703]: pam_unix(sshd:session): session closed for user core Jun 20 19:28:22.165613 systemd[1]: sshd@2-10.0.0.126:22-10.0.0.1:54604.service: Deactivated successfully. Jun 20 19:28:22.167532 systemd[1]: session-3.scope: Deactivated successfully. Jun 20 19:28:22.168258 systemd-logind[1547]: Session 3 logged out. Waiting for processes to exit. Jun 20 19:28:22.171478 systemd[1]: Started sshd@3-10.0.0.126:22-10.0.0.1:54610.service - OpenSSH per-connection server daemon (10.0.0.1:54610). Jun 20 19:28:22.172139 systemd-logind[1547]: Removed session 3. 
Jun 20 19:28:22.230323 sshd[1711]: Accepted publickey for core from 10.0.0.1 port 54610 ssh2: RSA SHA256:YoQ4GiRtY5Hu7FaS/OnNYeCnsR+r8YS2g6Qh7XD/NPE Jun 20 19:28:22.231629 sshd-session[1711]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:28:22.236149 systemd-logind[1547]: New session 4 of user core. Jun 20 19:28:22.246983 systemd[1]: Started session-4.scope - Session 4 of User core. Jun 20 19:28:22.302468 sshd[1713]: Connection closed by 10.0.0.1 port 54610 Jun 20 19:28:22.302704 sshd-session[1711]: pam_unix(sshd:session): session closed for user core Jun 20 19:28:22.315578 systemd[1]: sshd@3-10.0.0.126:22-10.0.0.1:54610.service: Deactivated successfully. Jun 20 19:28:22.317531 systemd[1]: session-4.scope: Deactivated successfully. Jun 20 19:28:22.318284 systemd-logind[1547]: Session 4 logged out. Waiting for processes to exit. Jun 20 19:28:22.321549 systemd[1]: Started sshd@4-10.0.0.126:22-10.0.0.1:54626.service - OpenSSH per-connection server daemon (10.0.0.1:54626). Jun 20 19:28:22.322105 systemd-logind[1547]: Removed session 4. Jun 20 19:28:22.379160 sshd[1719]: Accepted publickey for core from 10.0.0.1 port 54626 ssh2: RSA SHA256:YoQ4GiRtY5Hu7FaS/OnNYeCnsR+r8YS2g6Qh7XD/NPE Jun 20 19:28:22.380595 sshd-session[1719]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:28:22.384585 systemd-logind[1547]: New session 5 of user core. Jun 20 19:28:22.397979 systemd[1]: Started session-5.scope - Session 5 of User core. 
Jun 20 19:28:22.457578 sudo[1722]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jun 20 19:28:22.457926 sudo[1722]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jun 20 19:28:22.474351 sudo[1722]: pam_unix(sudo:session): session closed for user root Jun 20 19:28:22.476021 sshd[1721]: Connection closed by 10.0.0.1 port 54626 Jun 20 19:28:22.476341 sshd-session[1719]: pam_unix(sshd:session): session closed for user core Jun 20 19:28:22.485618 systemd[1]: sshd@4-10.0.0.126:22-10.0.0.1:54626.service: Deactivated successfully. Jun 20 19:28:22.487592 systemd[1]: session-5.scope: Deactivated successfully. Jun 20 19:28:22.488336 systemd-logind[1547]: Session 5 logged out. Waiting for processes to exit. Jun 20 19:28:22.491442 systemd[1]: Started sshd@5-10.0.0.126:22-10.0.0.1:54636.service - OpenSSH per-connection server daemon (10.0.0.1:54636). Jun 20 19:28:22.492048 systemd-logind[1547]: Removed session 5. Jun 20 19:28:22.541989 sshd[1728]: Accepted publickey for core from 10.0.0.1 port 54636 ssh2: RSA SHA256:YoQ4GiRtY5Hu7FaS/OnNYeCnsR+r8YS2g6Qh7XD/NPE Jun 20 19:28:22.543399 sshd-session[1728]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:28:22.547576 systemd-logind[1547]: New session 6 of user core. Jun 20 19:28:22.556979 systemd[1]: Started session-6.scope - Session 6 of User core. 
Jun 20 19:28:22.612730 sudo[1732]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jun 20 19:28:22.613087 sudo[1732]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jun 20 19:28:22.621212 sudo[1732]: pam_unix(sudo:session): session closed for user root Jun 20 19:28:22.627857 sudo[1731]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jun 20 19:28:22.628174 sudo[1731]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jun 20 19:28:22.638400 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jun 20 19:28:22.681976 augenrules[1754]: No rules Jun 20 19:28:22.683907 systemd[1]: audit-rules.service: Deactivated successfully. Jun 20 19:28:22.684219 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jun 20 19:28:22.685439 sudo[1731]: pam_unix(sudo:session): session closed for user root Jun 20 19:28:22.687069 sshd[1730]: Connection closed by 10.0.0.1 port 54636 Jun 20 19:28:22.687381 sshd-session[1728]: pam_unix(sshd:session): session closed for user core Jun 20 19:28:22.699662 systemd[1]: sshd@5-10.0.0.126:22-10.0.0.1:54636.service: Deactivated successfully. Jun 20 19:28:22.701576 systemd[1]: session-6.scope: Deactivated successfully. Jun 20 19:28:22.702433 systemd-logind[1547]: Session 6 logged out. Waiting for processes to exit. Jun 20 19:28:22.706002 systemd[1]: Started sshd@6-10.0.0.126:22-10.0.0.1:54640.service - OpenSSH per-connection server daemon (10.0.0.1:54640). Jun 20 19:28:22.706596 systemd-logind[1547]: Removed session 6. Jun 20 19:28:22.765441 sshd[1763]: Accepted publickey for core from 10.0.0.1 port 54640 ssh2: RSA SHA256:YoQ4GiRtY5Hu7FaS/OnNYeCnsR+r8YS2g6Qh7XD/NPE Jun 20 19:28:22.766853 sshd-session[1763]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:28:22.771246 systemd-logind[1547]: New session 7 of user core. 
Jun 20 19:28:22.782979 systemd[1]: Started session-7.scope - Session 7 of User core. Jun 20 19:28:22.836922 sudo[1766]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jun 20 19:28:22.837233 sudo[1766]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jun 20 19:28:23.538290 systemd[1]: Starting docker.service - Docker Application Container Engine... Jun 20 19:28:23.558167 (dockerd)[1786]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jun 20 19:28:24.030910 dockerd[1786]: time="2025-06-20T19:28:24.030749962Z" level=info msg="Starting up" Jun 20 19:28:24.031668 dockerd[1786]: time="2025-06-20T19:28:24.031620729Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jun 20 19:28:24.548218 dockerd[1786]: time="2025-06-20T19:28:24.548162492Z" level=info msg="Loading containers: start." Jun 20 19:28:24.559852 kernel: Initializing XFRM netlink socket Jun 20 19:28:24.817705 systemd-networkd[1483]: docker0: Link UP Jun 20 19:28:24.823480 dockerd[1786]: time="2025-06-20T19:28:24.823441742Z" level=info msg="Loading containers: done." Jun 20 19:28:24.846578 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1425393040-merged.mount: Deactivated successfully. 
Jun 20 19:28:24.848111 dockerd[1786]: time="2025-06-20T19:28:24.848066204Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jun 20 19:28:24.848181 dockerd[1786]: time="2025-06-20T19:28:24.848165035Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Jun 20 19:28:24.848312 dockerd[1786]: time="2025-06-20T19:28:24.848292947Z" level=info msg="Initializing buildkit" Jun 20 19:28:24.878826 dockerd[1786]: time="2025-06-20T19:28:24.878775132Z" level=info msg="Completed buildkit initialization" Jun 20 19:28:24.883068 dockerd[1786]: time="2025-06-20T19:28:24.883038769Z" level=info msg="Daemon has completed initialization" Jun 20 19:28:24.883144 dockerd[1786]: time="2025-06-20T19:28:24.883100904Z" level=info msg="API listen on /run/docker.sock" Jun 20 19:28:24.883255 systemd[1]: Started docker.service - Docker Application Container Engine. Jun 20 19:28:25.796202 containerd[1577]: time="2025-06-20T19:28:25.796154972Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\"" Jun 20 19:28:26.615861 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2571509016.mount: Deactivated successfully. 
Jun 20 19:28:27.952331 containerd[1577]: time="2025-06-20T19:28:27.952257467Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:28:27.953068 containerd[1577]: time="2025-06-20T19:28:27.952998803Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.6: active requests=0, bytes read=28799045" Jun 20 19:28:27.954560 containerd[1577]: time="2025-06-20T19:28:27.954514999Z" level=info msg="ImageCreate event name:\"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:28:27.958695 containerd[1577]: time="2025-06-20T19:28:27.958643982Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:28:27.959762 containerd[1577]: time="2025-06-20T19:28:27.959724970Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.6\" with image id \"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\", size \"28795845\" in 2.163521438s" Jun 20 19:28:27.959808 containerd[1577]: time="2025-06-20T19:28:27.959762084Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\" returns image reference \"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\"" Jun 20 19:28:27.960621 containerd[1577]: time="2025-06-20T19:28:27.960583609Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\"" Jun 20 19:28:29.351502 containerd[1577]: time="2025-06-20T19:28:29.351418820Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.6\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:28:29.352147 containerd[1577]: time="2025-06-20T19:28:29.352098592Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.6: active requests=0, bytes read=24783912" Jun 20 19:28:29.353240 containerd[1577]: time="2025-06-20T19:28:29.353205745Z" level=info msg="ImageCreate event name:\"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:28:29.355847 containerd[1577]: time="2025-06-20T19:28:29.355776383Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:28:29.356831 containerd[1577]: time="2025-06-20T19:28:29.356777292Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.6\" with image id \"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\", size \"26385746\" in 1.396148956s" Jun 20 19:28:29.356875 containerd[1577]: time="2025-06-20T19:28:29.356814971Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\" returns image reference \"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\"" Jun 20 19:28:29.357439 containerd[1577]: time="2025-06-20T19:28:29.357402330Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\"" Jun 20 19:28:30.631699 containerd[1577]: time="2025-06-20T19:28:30.631617850Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:28:30.632658 containerd[1577]: time="2025-06-20T19:28:30.632595346Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.6: active requests=0, bytes read=19176916" Jun 20 19:28:30.634657 containerd[1577]: time="2025-06-20T19:28:30.634625845Z" level=info msg="ImageCreate event name:\"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:28:30.637517 containerd[1577]: time="2025-06-20T19:28:30.637454805Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:28:30.638417 containerd[1577]: time="2025-06-20T19:28:30.638366612Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.6\" with image id \"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\", size \"20778768\" in 1.280928284s" Jun 20 19:28:30.638417 containerd[1577]: time="2025-06-20T19:28:30.638415609Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\" returns image reference \"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\"" Jun 20 19:28:30.638972 containerd[1577]: time="2025-06-20T19:28:30.638939182Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\"" Jun 20 19:28:31.755995 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3347701589.mount: Deactivated successfully. Jun 20 19:28:31.757197 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jun 20 19:28:31.758632 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:28:32.983779 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jun 20 19:28:32.996270 (kubelet)[2075]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 20 19:28:33.077473 kubelet[2075]: E0620 19:28:33.077402 2075 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 20 19:28:33.084260 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 20 19:28:33.084471 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 20 19:28:33.084920 systemd[1]: kubelet.service: Consumed 399ms CPU time, 110.3M memory peak. Jun 20 19:28:33.740748 containerd[1577]: time="2025-06-20T19:28:33.740611456Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:28:33.742850 containerd[1577]: time="2025-06-20T19:28:33.742749539Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.6: active requests=0, bytes read=30895363" Jun 20 19:28:33.744734 containerd[1577]: time="2025-06-20T19:28:33.744655126Z" level=info msg="ImageCreate event name:\"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:28:33.747061 containerd[1577]: time="2025-06-20T19:28:33.747010536Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:28:33.750298 containerd[1577]: time="2025-06-20T19:28:33.749206140Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.6\" with image id 
\"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\", repo tag \"registry.k8s.io/kube-proxy:v1.32.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\", size \"30894382\" in 3.110227864s"
Jun 20 19:28:33.750298 containerd[1577]: time="2025-06-20T19:28:33.749282577Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\" returns image reference \"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\""
Jun 20 19:28:33.751366 containerd[1577]: time="2025-06-20T19:28:33.751313372Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Jun 20 19:28:34.285021 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1723573273.mount: Deactivated successfully.
Jun 20 19:28:35.164247 containerd[1577]: time="2025-06-20T19:28:35.164170786Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:28:35.165003 containerd[1577]: time="2025-06-20T19:28:35.164934262Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241"
Jun 20 19:28:35.166363 containerd[1577]: time="2025-06-20T19:28:35.166300582Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:28:35.169069 containerd[1577]: time="2025-06-20T19:28:35.169009630Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:28:35.169802 containerd[1577]: time="2025-06-20T19:28:35.169766034Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.418401023s"
Jun 20 19:28:35.169802 containerd[1577]: time="2025-06-20T19:28:35.169803024Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Jun 20 19:28:35.170446 containerd[1577]: time="2025-06-20T19:28:35.170421197Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jun 20 19:28:35.643249 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2824452609.mount: Deactivated successfully.
Jun 20 19:28:35.648489 containerd[1577]: time="2025-06-20T19:28:35.648444586Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jun 20 19:28:35.649150 containerd[1577]: time="2025-06-20T19:28:35.649079892Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Jun 20 19:28:35.650344 containerd[1577]: time="2025-06-20T19:28:35.650286958Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jun 20 19:28:35.652160 containerd[1577]: time="2025-06-20T19:28:35.652111805Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jun 20 19:28:35.652756 containerd[1577]: time="2025-06-20T19:28:35.652712637Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 482.261461ms"
Jun 20 19:28:35.652756 containerd[1577]: time="2025-06-20T19:28:35.652750914Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Jun 20 19:28:35.653347 containerd[1577]: time="2025-06-20T19:28:35.653300955Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Jun 20 19:28:36.207049 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3922308663.mount: Deactivated successfully.
Jun 20 19:28:38.981664 containerd[1577]: time="2025-06-20T19:28:38.981584076Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:28:38.982411 containerd[1577]: time="2025-06-20T19:28:38.982357569Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551360"
Jun 20 19:28:38.983664 containerd[1577]: time="2025-06-20T19:28:38.983628668Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:28:38.986481 containerd[1577]: time="2025-06-20T19:28:38.986416817Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:28:38.987590 containerd[1577]: time="2025-06-20T19:28:38.987541323Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 3.334208382s"
Jun 20 19:28:38.987660 containerd[1577]: time="2025-06-20T19:28:38.987589846Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\""
Jun 20 19:28:40.693154 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 20 19:28:40.693363 systemd[1]: kubelet.service: Consumed 399ms CPU time, 110.3M memory peak.
Jun 20 19:28:40.696465 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 20 19:28:40.724566 systemd[1]: Reload requested from client PID 2226 ('systemctl') (unit session-7.scope)...
Jun 20 19:28:40.724589 systemd[1]: Reloading...
Jun 20 19:28:40.809850 zram_generator::config[2269]: No configuration found.
Jun 20 19:28:41.035186 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jun 20 19:28:41.170602 systemd[1]: Reloading finished in 445 ms.
Jun 20 19:28:41.258120 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jun 20 19:28:41.258280 systemd[1]: kubelet.service: Failed with result 'signal'.
Jun 20 19:28:41.258739 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 20 19:28:41.258815 systemd[1]: kubelet.service: Consumed 218ms CPU time, 98.3M memory peak.
Jun 20 19:28:41.261372 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 20 19:28:41.448215 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 20 19:28:41.465121 (kubelet)[2317]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jun 20 19:28:41.548858 kubelet[2317]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jun 20 19:28:41.548858 kubelet[2317]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jun 20 19:28:41.548858 kubelet[2317]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jun 20 19:28:41.549343 kubelet[2317]: I0620 19:28:41.548938 2317 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jun 20 19:28:41.727731 kubelet[2317]: I0620 19:28:41.727582 2317 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Jun 20 19:28:41.727731 kubelet[2317]: I0620 19:28:41.727656 2317 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jun 20 19:28:41.728507 kubelet[2317]: I0620 19:28:41.728469 2317 server.go:954] "Client rotation is on, will bootstrap in background"
Jun 20 19:28:41.752165 kubelet[2317]: E0620 19:28:41.752112 2317 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.126:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.126:6443: connect: connection refused" logger="UnhandledError"
Jun 20 19:28:41.753231 kubelet[2317]: I0620 19:28:41.753198 2317 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jun 20 19:28:41.882838 kubelet[2317]: I0620 19:28:41.882775 2317 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jun 20 19:28:41.888434 kubelet[2317]: I0620 19:28:41.888394 2317 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jun 20 19:28:41.888698 kubelet[2317]: I0620 19:28:41.888642 2317 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jun 20 19:28:41.888906 kubelet[2317]: I0620 19:28:41.888684 2317 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jun 20 19:28:41.889079 kubelet[2317]: I0620 19:28:41.888916 2317 topology_manager.go:138] "Creating topology manager with none policy"
Jun 20 19:28:41.889079 kubelet[2317]: I0620 19:28:41.888926 2317 container_manager_linux.go:304] "Creating device plugin manager"
Jun 20 19:28:41.889129 kubelet[2317]: I0620 19:28:41.889087 2317 state_mem.go:36] "Initialized new in-memory state store"
Jun 20 19:28:41.892141 kubelet[2317]: I0620 19:28:41.892102 2317 kubelet.go:446] "Attempting to sync node with API server"
Jun 20 19:28:41.892141 kubelet[2317]: I0620 19:28:41.892143 2317 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Jun 20 19:28:41.892237 kubelet[2317]: I0620 19:28:41.892176 2317 kubelet.go:352] "Adding apiserver pod source"
Jun 20 19:28:41.892237 kubelet[2317]: I0620 19:28:41.892191 2317 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jun 20 19:28:41.894725 kubelet[2317]: W0620 19:28:41.894609 2317 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.126:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.126:6443: connect: connection refused
Jun 20 19:28:41.894725 kubelet[2317]: W0620 19:28:41.894660 2317 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.126:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.126:6443: connect: connection refused
Jun 20 19:28:41.894725 kubelet[2317]: E0620 19:28:41.894688 2317 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.126:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.126:6443: connect: connection refused" logger="UnhandledError"
Jun 20 19:28:41.894725 kubelet[2317]: E0620 19:28:41.894701 2317 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.126:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.126:6443: connect: connection refused" logger="UnhandledError"
Jun 20 19:28:41.897912 kubelet[2317]: I0620 19:28:41.897886 2317 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
Jun 20 19:28:41.898659 kubelet[2317]: I0620 19:28:41.898639 2317 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jun 20 19:28:41.898768 kubelet[2317]: W0620 19:28:41.898749 2317 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jun 20 19:28:41.902121 kubelet[2317]: I0620 19:28:41.902096 2317 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jun 20 19:28:41.902176 kubelet[2317]: I0620 19:28:41.902142 2317 server.go:1287] "Started kubelet"
Jun 20 19:28:41.904207 kubelet[2317]: I0620 19:28:41.903462 2317 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jun 20 19:28:41.904207 kubelet[2317]: I0620 19:28:41.903771 2317 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jun 20 19:28:41.904207 kubelet[2317]: I0620 19:28:41.903867 2317 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jun 20 19:28:41.904207 kubelet[2317]: I0620 19:28:41.903929 2317 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Jun 20 19:28:41.904950 kubelet[2317]: I0620 19:28:41.904925 2317 server.go:479] "Adding debug handlers to kubelet server"
Jun 20 19:28:41.906767 kubelet[2317]: I0620 19:28:41.906066 2317 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jun 20 19:28:41.906767 kubelet[2317]: E0620 19:28:41.906403 2317 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jun 20 19:28:41.906767 kubelet[2317]: I0620 19:28:41.906433 2317 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jun 20 19:28:41.906767 kubelet[2317]: I0620 19:28:41.906587 2317 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jun 20 19:28:41.906767 kubelet[2317]: I0620 19:28:41.906651 2317 reconciler.go:26] "Reconciler: start to sync state"
Jun 20 19:28:41.907923 kubelet[2317]: W0620 19:28:41.906995 2317 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.126:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.126:6443: connect: connection refused
Jun 20 19:28:41.907923 kubelet[2317]: E0620 19:28:41.907041 2317 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.126:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.126:6443: connect: connection refused" logger="UnhandledError"
Jun 20 19:28:41.907923 kubelet[2317]: E0620 19:28:41.907302 2317 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.126:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.126:6443: connect: connection refused" interval="200ms"
Jun 20 19:28:41.908424 kubelet[2317]: I0620 19:28:41.908395 2317 factory.go:221] Registration of the systemd container factory successfully
Jun 20 19:28:41.908515 kubelet[2317]: I0620 19:28:41.908478 2317 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jun 20 19:28:41.908777 kubelet[2317]: E0620 19:28:41.907777 2317 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.126:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.126:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.184ad6edc5de0b44 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-06-20 19:28:41.902115652 +0000 UTC m=+0.430690908,LastTimestamp:2025-06-20 19:28:41.902115652 +0000 UTC m=+0.430690908,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Jun 20 19:28:41.909951 kubelet[2317]: I0620 19:28:41.909911 2317 factory.go:221] Registration of the containerd container factory successfully
Jun 20 19:28:41.910297 kubelet[2317]: E0620 19:28:41.910030 2317 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jun 20 19:28:41.922769 kubelet[2317]: I0620 19:28:41.922704 2317 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jun 20 19:28:41.924175 kubelet[2317]: I0620 19:28:41.924137 2317 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jun 20 19:28:41.924175 kubelet[2317]: I0620 19:28:41.924178 2317 status_manager.go:227] "Starting to sync pod status with apiserver"
Jun 20 19:28:41.924312 kubelet[2317]: I0620 19:28:41.924209 2317 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jun 20 19:28:41.924312 kubelet[2317]: I0620 19:28:41.924215 2317 kubelet.go:2382] "Starting kubelet main sync loop"
Jun 20 19:28:41.924312 kubelet[2317]: E0620 19:28:41.924263 2317 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jun 20 19:28:41.929535 kubelet[2317]: W0620 19:28:41.929482 2317 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.126:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.126:6443: connect: connection refused
Jun 20 19:28:41.929589 kubelet[2317]: E0620 19:28:41.929536 2317 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.126:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.126:6443: connect: connection refused" logger="UnhandledError"
Jun 20 19:28:41.930377 kubelet[2317]: I0620 19:28:41.930357 2317 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jun 20 19:28:41.930377 kubelet[2317]: I0620 19:28:41.930371 2317 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jun 20 19:28:41.930458 kubelet[2317]: I0620 19:28:41.930388 2317 state_mem.go:36] "Initialized new in-memory state store"
Jun 20 19:28:42.006694 kubelet[2317]: E0620 19:28:42.006582 2317 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jun 20 19:28:42.025145 kubelet[2317]: E0620 19:28:42.025055 2317 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jun 20 19:28:42.107571 kubelet[2317]: E0620 19:28:42.107514 2317 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jun 20 19:28:42.108323 kubelet[2317]: E0620 19:28:42.108276 2317 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.126:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.126:6443: connect: connection refused" interval="400ms"
Jun 20 19:28:42.208691 kubelet[2317]: E0620 19:28:42.208630 2317 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jun 20 19:28:42.225966 kubelet[2317]: E0620 19:28:42.225913 2317 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jun 20 19:28:42.309853 kubelet[2317]: E0620 19:28:42.309677 2317 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jun 20 19:28:42.410213 kubelet[2317]: I0620 19:28:42.410160 2317 policy_none.go:49] "None policy: Start"
Jun 20 19:28:42.410213 kubelet[2317]: I0620 19:28:42.410214 2317 memory_manager.go:186] "Starting memorymanager" policy="None"
Jun 20 19:28:42.410353 kubelet[2317]: I0620 19:28:42.410235 2317 state_mem.go:35] "Initializing new in-memory state store"
Jun 20 19:28:42.410699 kubelet[2317]: E0620 19:28:42.410669 2317 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jun 20 19:28:42.419133 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jun 20 19:28:42.433558 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jun 20 19:28:42.437016 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jun 20 19:28:42.457843 kubelet[2317]: I0620 19:28:42.456066 2317 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jun 20 19:28:42.457843 kubelet[2317]: I0620 19:28:42.456561 2317 eviction_manager.go:189] "Eviction manager: starting control loop"
Jun 20 19:28:42.457843 kubelet[2317]: I0620 19:28:42.456582 2317 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jun 20 19:28:42.457843 kubelet[2317]: I0620 19:28:42.456797 2317 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jun 20 19:28:42.461358 kubelet[2317]: E0620 19:28:42.461326 2317 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jun 20 19:28:42.461517 kubelet[2317]: E0620 19:28:42.461503 2317 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Jun 20 19:28:42.509697 kubelet[2317]: E0620 19:28:42.509587 2317 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.126:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.126:6443: connect: connection refused" interval="800ms"
Jun 20 19:28:42.558104 kubelet[2317]: I0620 19:28:42.558039 2317 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jun 20 19:28:42.558576 kubelet[2317]: E0620 19:28:42.558432 2317 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.126:6443/api/v1/nodes\": dial tcp 10.0.0.126:6443: connect: connection refused" node="localhost"
Jun 20 19:28:42.635519 systemd[1]: Created slice kubepods-burstable-pod8a75e163f27396b2168da0f88f85f8a5.slice - libcontainer container kubepods-burstable-pod8a75e163f27396b2168da0f88f85f8a5.slice.
Jun 20 19:28:42.643788 kubelet[2317]: E0620 19:28:42.643747 2317 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jun 20 19:28:42.645802 systemd[1]: Created slice kubepods-burstable-pod0b78071d34d6c8c1eb572d230c3437fd.slice - libcontainer container kubepods-burstable-pod0b78071d34d6c8c1eb572d230c3437fd.slice.
Jun 20 19:28:42.655158 kubelet[2317]: E0620 19:28:42.655126 2317 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jun 20 19:28:42.658074 systemd[1]: Created slice kubepods-burstable-podd1af03769b64da1b1e8089a7035018fc.slice - libcontainer container kubepods-burstable-podd1af03769b64da1b1e8089a7035018fc.slice.
Jun 20 19:28:42.660080 kubelet[2317]: E0620 19:28:42.660054 2317 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jun 20 19:28:42.710470 kubelet[2317]: I0620 19:28:42.710432 2317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost"
Jun 20 19:28:42.710470 kubelet[2317]: I0620 19:28:42.710461 2317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0b78071d34d6c8c1eb572d230c3437fd-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"0b78071d34d6c8c1eb572d230c3437fd\") " pod="kube-system/kube-apiserver-localhost"
Jun 20 19:28:42.710613 kubelet[2317]: I0620 19:28:42.710481 2317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0b78071d34d6c8c1eb572d230c3437fd-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"0b78071d34d6c8c1eb572d230c3437fd\") " pod="kube-system/kube-apiserver-localhost"
Jun 20 19:28:42.710613 kubelet[2317]: I0620 19:28:42.710498 2317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost"
Jun 20 19:28:42.710613 kubelet[2317]: I0620 19:28:42.710515 2317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost"
Jun 20 19:28:42.710613 kubelet[2317]: I0620 19:28:42.710528 2317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost"
Jun 20 19:28:42.710613 kubelet[2317]: I0620 19:28:42.710541 2317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a75e163f27396b2168da0f88f85f8a5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8a75e163f27396b2168da0f88f85f8a5\") " pod="kube-system/kube-scheduler-localhost"
Jun 20 19:28:42.710777 kubelet[2317]: I0620 19:28:42.710570 2317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0b78071d34d6c8c1eb572d230c3437fd-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"0b78071d34d6c8c1eb572d230c3437fd\") " pod="kube-system/kube-apiserver-localhost"
Jun 20 19:28:42.710777 kubelet[2317]: I0620 19:28:42.710594 2317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost"
Jun 20 19:28:42.760145 kubelet[2317]: I0620 19:28:42.760100 2317 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jun 20 19:28:42.760548 kubelet[2317]: E0620 19:28:42.760519 2317 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.126:6443/api/v1/nodes\": dial tcp 10.0.0.126:6443: connect: connection refused" node="localhost"
Jun 20 19:28:42.834759 kubelet[2317]: W0620 19:28:42.834697 2317 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.126:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.126:6443: connect: connection refused
Jun 20 19:28:42.834759 kubelet[2317]: E0620 19:28:42.834767 2317 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.126:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.126:6443: connect: connection refused" logger="UnhandledError"
Jun 20 19:28:42.945197 kubelet[2317]: E0620 19:28:42.945063 2317 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 20 19:28:42.945766 containerd[1577]: time="2025-06-20T19:28:42.945722879Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8a75e163f27396b2168da0f88f85f8a5,Namespace:kube-system,Attempt:0,}"
Jun 20 19:28:42.956090 kubelet[2317]: E0620 19:28:42.956042 2317 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 20 19:28:42.956436 containerd[1577]: time="2025-06-20T19:28:42.956401433Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:0b78071d34d6c8c1eb572d230c3437fd,Namespace:kube-system,Attempt:0,}"
Jun 20 19:28:42.960719 kubelet[2317]: E0620 19:28:42.960677 2317 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 20 19:28:42.961022 containerd[1577]: time="2025-06-20T19:28:42.960983277Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d1af03769b64da1b1e8089a7035018fc,Namespace:kube-system,Attempt:0,}"
Jun 20 19:28:42.980288 containerd[1577]: time="2025-06-20T19:28:42.980193665Z" level=info msg="connecting to shim 3cd29ca956807e9b9857509987ae90f9026ddd42835827ab9c6443a63557a441" address="unix:///run/containerd/s/10d91fbc9bd3d5c2e49d32ce80b760fce5dee3dec79d4d2d5030c70f3cdc1ccf" namespace=k8s.io protocol=ttrpc version=3
Jun 20 19:28:42.989273 containerd[1577]: time="2025-06-20T19:28:42.989183076Z" level=info msg="connecting to shim e64cd411b5cdc449eea24169b484ac886b41da92b86c7c90532aa032ab3acae2" address="unix:///run/containerd/s/a0f9bbf398bed78ddfa375e367a2168111ff170f570317c99ca5ea79b4626fd9" namespace=k8s.io protocol=ttrpc version=3
Jun 20 19:28:43.012696 containerd[1577]: time="2025-06-20T19:28:43.012634734Z" level=info msg="connecting to shim cad08eacb369ba695f0c1c8a2b2ac9a226a25f29320e5120c42f8f0e440b2dae" address="unix:///run/containerd/s/8dbdd664605328c1e673f9fbfaafb74149b43f468956c6b4ad71ab3ec02e75bf" namespace=k8s.io protocol=ttrpc version=3
Jun 20 19:28:43.045157 systemd[1]: Started cri-containerd-3cd29ca956807e9b9857509987ae90f9026ddd42835827ab9c6443a63557a441.scope - libcontainer container 3cd29ca956807e9b9857509987ae90f9026ddd42835827ab9c6443a63557a441.
Jun 20 19:28:43.050279 systemd[1]: Started cri-containerd-e64cd411b5cdc449eea24169b484ac886b41da92b86c7c90532aa032ab3acae2.scope - libcontainer container e64cd411b5cdc449eea24169b484ac886b41da92b86c7c90532aa032ab3acae2.
Jun 20 19:28:43.058779 systemd[1]: Started cri-containerd-cad08eacb369ba695f0c1c8a2b2ac9a226a25f29320e5120c42f8f0e440b2dae.scope - libcontainer container cad08eacb369ba695f0c1c8a2b2ac9a226a25f29320e5120c42f8f0e440b2dae.
Jun 20 19:28:43.123318 containerd[1577]: time="2025-06-20T19:28:43.123023936Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d1af03769b64da1b1e8089a7035018fc,Namespace:kube-system,Attempt:0,} returns sandbox id \"cad08eacb369ba695f0c1c8a2b2ac9a226a25f29320e5120c42f8f0e440b2dae\""
Jun 20 19:28:43.123318 containerd[1577]: time="2025-06-20T19:28:43.123184551Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:0b78071d34d6c8c1eb572d230c3437fd,Namespace:kube-system,Attempt:0,} returns sandbox id \"e64cd411b5cdc449eea24169b484ac886b41da92b86c7c90532aa032ab3acae2\""
Jun 20 19:28:43.124733 kubelet[2317]: E0620 19:28:43.124684 2317 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 20 19:28:43.125247 kubelet[2317]: E0620 19:28:43.124697 2317 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 20 19:28:43.127621 containerd[1577]: time="2025-06-20T19:28:43.127566889Z" level=info msg="CreateContainer within sandbox \"cad08eacb369ba695f0c1c8a2b2ac9a226a25f29320e5120c42f8f0e440b2dae\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jun 20 19:28:43.130234 containerd[1577]: time="2025-06-20T19:28:43.130193313Z" level=info msg="CreateContainer within sandbox \"e64cd411b5cdc449eea24169b484ac886b41da92b86c7c90532aa032ab3acae2\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jun 20 19:28:43.162043 kubelet[2317]: I0620 19:28:43.162024 2317 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jun 20 19:28:43.162691 kubelet[2317]: E0620 19:28:43.162631 2317 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.126:6443/api/v1/nodes\": dial tcp 10.0.0.126:6443: connect: connection refused" node="localhost" Jun 20 19:28:43.164578 containerd[1577]: time="2025-06-20T19:28:43.164526253Z" level=info msg="Container 90f958b93d5f95e1f7d216f35ebf0350f586dd026de396c0a6224a336b2afbde: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:28:43.167238 containerd[1577]: time="2025-06-20T19:28:43.167198637Z" level=info msg="Container 08ca206412f2c470f3d2fa6fccf0f9fbb2f386614365ec007a5eeb6ded7dfedc: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:28:43.167454 containerd[1577]: time="2025-06-20T19:28:43.167428780Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8a75e163f27396b2168da0f88f85f8a5,Namespace:kube-system,Attempt:0,} returns sandbox id \"3cd29ca956807e9b9857509987ae90f9026ddd42835827ab9c6443a63557a441\"" Jun 20 19:28:43.168129 kubelet[2317]: E0620 19:28:43.168097 2317 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:28:43.169515 containerd[1577]: time="2025-06-20T19:28:43.169489893Z" 
level=info msg="CreateContainer within sandbox \"3cd29ca956807e9b9857509987ae90f9026ddd42835827ab9c6443a63557a441\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jun 20 19:28:43.175015 containerd[1577]: time="2025-06-20T19:28:43.174984009Z" level=info msg="CreateContainer within sandbox \"cad08eacb369ba695f0c1c8a2b2ac9a226a25f29320e5120c42f8f0e440b2dae\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"90f958b93d5f95e1f7d216f35ebf0350f586dd026de396c0a6224a336b2afbde\"" Jun 20 19:28:43.178467 containerd[1577]: time="2025-06-20T19:28:43.178416549Z" level=info msg="StartContainer for \"90f958b93d5f95e1f7d216f35ebf0350f586dd026de396c0a6224a336b2afbde\"" Jun 20 19:28:43.179483 containerd[1577]: time="2025-06-20T19:28:43.179446198Z" level=info msg="connecting to shim 90f958b93d5f95e1f7d216f35ebf0350f586dd026de396c0a6224a336b2afbde" address="unix:///run/containerd/s/8dbdd664605328c1e673f9fbfaafb74149b43f468956c6b4ad71ab3ec02e75bf" protocol=ttrpc version=3 Jun 20 19:28:43.180448 containerd[1577]: time="2025-06-20T19:28:43.180411366Z" level=info msg="CreateContainer within sandbox \"e64cd411b5cdc449eea24169b484ac886b41da92b86c7c90532aa032ab3acae2\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"08ca206412f2c470f3d2fa6fccf0f9fbb2f386614365ec007a5eeb6ded7dfedc\"" Jun 20 19:28:43.180990 containerd[1577]: time="2025-06-20T19:28:43.180957122Z" level=info msg="StartContainer for \"08ca206412f2c470f3d2fa6fccf0f9fbb2f386614365ec007a5eeb6ded7dfedc\"" Jun 20 19:28:43.181765 containerd[1577]: time="2025-06-20T19:28:43.181726037Z" level=info msg="Container 129bb254bb150c7f0f882fb7ff92edfc39eb75450ae2f2e561b2ac0b55c7455e: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:28:43.181889 containerd[1577]: time="2025-06-20T19:28:43.181862906Z" level=info msg="connecting to shim 08ca206412f2c470f3d2fa6fccf0f9fbb2f386614365ec007a5eeb6ded7dfedc" 
address="unix:///run/containerd/s/a0f9bbf398bed78ddfa375e367a2168111ff170f570317c99ca5ea79b4626fd9" protocol=ttrpc version=3 Jun 20 19:28:43.190215 containerd[1577]: time="2025-06-20T19:28:43.190179075Z" level=info msg="CreateContainer within sandbox \"3cd29ca956807e9b9857509987ae90f9026ddd42835827ab9c6443a63557a441\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"129bb254bb150c7f0f882fb7ff92edfc39eb75450ae2f2e561b2ac0b55c7455e\"" Jun 20 19:28:43.190725 containerd[1577]: time="2025-06-20T19:28:43.190686926Z" level=info msg="StartContainer for \"129bb254bb150c7f0f882fb7ff92edfc39eb75450ae2f2e561b2ac0b55c7455e\"" Jun 20 19:28:43.197385 containerd[1577]: time="2025-06-20T19:28:43.197230986Z" level=info msg="connecting to shim 129bb254bb150c7f0f882fb7ff92edfc39eb75450ae2f2e561b2ac0b55c7455e" address="unix:///run/containerd/s/10d91fbc9bd3d5c2e49d32ce80b760fce5dee3dec79d4d2d5030c70f3cdc1ccf" protocol=ttrpc version=3 Jun 20 19:28:43.202966 systemd[1]: Started cri-containerd-08ca206412f2c470f3d2fa6fccf0f9fbb2f386614365ec007a5eeb6ded7dfedc.scope - libcontainer container 08ca206412f2c470f3d2fa6fccf0f9fbb2f386614365ec007a5eeb6ded7dfedc. Jun 20 19:28:43.207683 systemd[1]: Started cri-containerd-90f958b93d5f95e1f7d216f35ebf0350f586dd026de396c0a6224a336b2afbde.scope - libcontainer container 90f958b93d5f95e1f7d216f35ebf0350f586dd026de396c0a6224a336b2afbde. Jun 20 19:28:43.224950 systemd[1]: Started cri-containerd-129bb254bb150c7f0f882fb7ff92edfc39eb75450ae2f2e561b2ac0b55c7455e.scope - libcontainer container 129bb254bb150c7f0f882fb7ff92edfc39eb75450ae2f2e561b2ac0b55c7455e. 
Jun 20 19:28:43.256731 kubelet[2317]: W0620 19:28:43.256542 2317 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.126:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.126:6443: connect: connection refused Jun 20 19:28:43.256731 kubelet[2317]: E0620 19:28:43.256606 2317 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.126:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.126:6443: connect: connection refused" logger="UnhandledError" Jun 20 19:28:43.286932 containerd[1577]: time="2025-06-20T19:28:43.286778298Z" level=info msg="StartContainer for \"08ca206412f2c470f3d2fa6fccf0f9fbb2f386614365ec007a5eeb6ded7dfedc\" returns successfully" Jun 20 19:28:43.287917 containerd[1577]: time="2025-06-20T19:28:43.287881537Z" level=info msg="StartContainer for \"129bb254bb150c7f0f882fb7ff92edfc39eb75450ae2f2e561b2ac0b55c7455e\" returns successfully" Jun 20 19:28:43.288000 containerd[1577]: time="2025-06-20T19:28:43.287980130Z" level=info msg="StartContainer for \"90f958b93d5f95e1f7d216f35ebf0350f586dd026de396c0a6224a336b2afbde\" returns successfully" Jun 20 19:28:43.310432 kubelet[2317]: E0620 19:28:43.310381 2317 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.126:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.126:6443: connect: connection refused" interval="1.6s" Jun 20 19:28:43.360889 kubelet[2317]: W0620 19:28:43.360565 2317 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.126:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.126:6443: connect: connection refused Jun 20 19:28:43.361088 kubelet[2317]: E0620 
19:28:43.361012 2317 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.126:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.126:6443: connect: connection refused" logger="UnhandledError" Jun 20 19:28:43.938425 kubelet[2317]: E0620 19:28:43.938123 2317 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jun 20 19:28:43.938425 kubelet[2317]: E0620 19:28:43.938343 2317 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:28:43.975543 kubelet[2317]: I0620 19:28:43.966927 2317 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jun 20 19:28:43.983360 kubelet[2317]: E0620 19:28:43.983319 2317 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jun 20 19:28:43.983495 kubelet[2317]: E0620 19:28:43.983485 2317 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:28:43.986768 kubelet[2317]: E0620 19:28:43.986551 2317 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jun 20 19:28:43.986768 kubelet[2317]: E0620 19:28:43.986706 2317 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:28:44.914119 kubelet[2317]: E0620 19:28:44.914067 2317 nodelease.go:49] "Failed to get node when trying to set owner ref to the node 
lease" err="nodes \"localhost\" not found" node="localhost" Jun 20 19:28:44.981618 kubelet[2317]: E0620 19:28:44.981583 2317 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jun 20 19:28:44.982204 kubelet[2317]: E0620 19:28:44.981719 2317 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:28:44.982204 kubelet[2317]: E0620 19:28:44.981805 2317 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jun 20 19:28:44.982204 kubelet[2317]: E0620 19:28:44.982048 2317 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:28:45.019321 kubelet[2317]: I0620 19:28:45.019231 2317 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jun 20 19:28:45.019321 kubelet[2317]: E0620 19:28:45.019293 2317 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jun 20 19:28:45.032129 kubelet[2317]: E0620 19:28:45.032059 2317 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 20 19:28:45.132583 kubelet[2317]: E0620 19:28:45.132486 2317 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 20 19:28:45.233571 kubelet[2317]: E0620 19:28:45.233423 2317 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 20 19:28:45.307775 kubelet[2317]: I0620 19:28:45.307700 2317 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jun 20 
19:28:45.312658 kubelet[2317]: E0620 19:28:45.312621 2317 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jun 20 19:28:45.312658 kubelet[2317]: I0620 19:28:45.312643 2317 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jun 20 19:28:45.314341 kubelet[2317]: E0620 19:28:45.314301 2317 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jun 20 19:28:45.314341 kubelet[2317]: I0620 19:28:45.314340 2317 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jun 20 19:28:45.315681 kubelet[2317]: E0620 19:28:45.315648 2317 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jun 20 19:28:45.897892 kubelet[2317]: I0620 19:28:45.897845 2317 apiserver.go:52] "Watching apiserver" Jun 20 19:28:45.907106 kubelet[2317]: I0620 19:28:45.907058 2317 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jun 20 19:28:47.395160 systemd[1]: Reload requested from client PID 2594 ('systemctl') (unit session-7.scope)... Jun 20 19:28:47.395182 systemd[1]: Reloading... Jun 20 19:28:47.525861 zram_generator::config[2637]: No configuration found. 
Jun 20 19:28:47.595837 kubelet[2317]: I0620 19:28:47.595749 2317 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jun 20 19:28:47.724473 kubelet[2317]: E0620 19:28:47.724330 2317 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:28:47.781384 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 20 19:28:47.935832 systemd[1]: Reloading finished in 540 ms. Jun 20 19:28:47.961090 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:28:47.983648 systemd[1]: kubelet.service: Deactivated successfully. Jun 20 19:28:47.984044 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:28:47.984121 systemd[1]: kubelet.service: Consumed 1.036s CPU time, 130.6M memory peak. Jun 20 19:28:47.986390 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:28:48.205159 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:28:48.210037 (kubelet)[2682]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jun 20 19:28:48.253857 kubelet[2682]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 20 19:28:48.253857 kubelet[2682]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Jun 20 19:28:48.253857 kubelet[2682]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 20 19:28:48.254233 kubelet[2682]: I0620 19:28:48.254093 2682 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 20 19:28:48.261404 kubelet[2682]: I0620 19:28:48.261363 2682 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jun 20 19:28:48.261404 kubelet[2682]: I0620 19:28:48.261384 2682 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 20 19:28:48.261601 kubelet[2682]: I0620 19:28:48.261578 2682 server.go:954] "Client rotation is on, will bootstrap in background" Jun 20 19:28:48.262706 kubelet[2682]: I0620 19:28:48.262680 2682 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jun 20 19:28:48.264603 kubelet[2682]: I0620 19:28:48.264576 2682 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 20 19:28:48.270550 kubelet[2682]: I0620 19:28:48.270521 2682 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jun 20 19:28:48.275693 kubelet[2682]: I0620 19:28:48.275659 2682 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jun 20 19:28:48.275973 kubelet[2682]: I0620 19:28:48.275924 2682 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 20 19:28:48.276145 kubelet[2682]: I0620 19:28:48.275962 2682 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jun 20 19:28:48.276216 kubelet[2682]: I0620 19:28:48.276156 2682 topology_manager.go:138] "Creating topology manager with none policy" 
Jun 20 19:28:48.276216 kubelet[2682]: I0620 19:28:48.276167 2682 container_manager_linux.go:304] "Creating device plugin manager" Jun 20 19:28:48.276280 kubelet[2682]: I0620 19:28:48.276232 2682 state_mem.go:36] "Initialized new in-memory state store" Jun 20 19:28:48.276437 kubelet[2682]: I0620 19:28:48.276411 2682 kubelet.go:446] "Attempting to sync node with API server" Jun 20 19:28:48.276473 kubelet[2682]: I0620 19:28:48.276443 2682 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 20 19:28:48.276473 kubelet[2682]: I0620 19:28:48.276465 2682 kubelet.go:352] "Adding apiserver pod source" Jun 20 19:28:48.276473 kubelet[2682]: I0620 19:28:48.276474 2682 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 20 19:28:48.277787 kubelet[2682]: I0620 19:28:48.277760 2682 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jun 20 19:28:48.278167 kubelet[2682]: I0620 19:28:48.278131 2682 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jun 20 19:28:48.278777 kubelet[2682]: I0620 19:28:48.278692 2682 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jun 20 19:28:48.278777 kubelet[2682]: I0620 19:28:48.278742 2682 server.go:1287] "Started kubelet" Jun 20 19:28:48.280281 kubelet[2682]: I0620 19:28:48.280126 2682 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jun 20 19:28:48.280697 kubelet[2682]: I0620 19:28:48.280683 2682 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 20 19:28:48.284784 kubelet[2682]: I0620 19:28:48.282727 2682 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 20 19:28:48.284784 kubelet[2682]: I0620 19:28:48.283327 2682 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jun 20 19:28:48.284784 kubelet[2682]: I0620 19:28:48.283448 2682 
dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jun 20 19:28:48.284784 kubelet[2682]: I0620 19:28:48.284203 2682 server.go:479] "Adding debug handlers to kubelet server" Jun 20 19:28:48.287788 kubelet[2682]: I0620 19:28:48.287757 2682 volume_manager.go:297] "Starting Kubelet Volume Manager" Jun 20 19:28:48.290010 kubelet[2682]: I0620 19:28:48.289989 2682 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jun 20 19:28:48.741534 kubelet[2682]: E0620 19:28:48.290229 2682 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 20 19:28:48.741534 kubelet[2682]: I0620 19:28:48.292178 2682 factory.go:221] Registration of the containerd container factory successfully Jun 20 19:28:48.741534 kubelet[2682]: I0620 19:28:48.292191 2682 factory.go:221] Registration of the systemd container factory successfully Jun 20 19:28:48.741534 kubelet[2682]: I0620 19:28:48.292258 2682 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jun 20 19:28:48.741534 kubelet[2682]: I0620 19:28:48.298357 2682 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jun 20 19:28:48.741534 kubelet[2682]: I0620 19:28:48.299639 2682 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jun 20 19:28:48.741534 kubelet[2682]: I0620 19:28:48.299663 2682 status_manager.go:227] "Starting to sync pod status with apiserver" Jun 20 19:28:48.741534 kubelet[2682]: I0620 19:28:48.299683 2682 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jun 20 19:28:48.741534 kubelet[2682]: I0620 19:28:48.299690 2682 kubelet.go:2382] "Starting kubelet main sync loop" Jun 20 19:28:48.741534 kubelet[2682]: E0620 19:28:48.299737 2682 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 20 19:28:48.741534 kubelet[2682]: I0620 19:28:48.326308 2682 cpu_manager.go:221] "Starting CPU manager" policy="none" Jun 20 19:28:48.741534 kubelet[2682]: I0620 19:28:48.326319 2682 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jun 20 19:28:48.741534 kubelet[2682]: I0620 19:28:48.326344 2682 state_mem.go:36] "Initialized new in-memory state store" Jun 20 19:28:48.741534 kubelet[2682]: E0620 19:28:48.400180 2682 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jun 20 19:28:48.741534 kubelet[2682]: E0620 19:28:48.600935 2682 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jun 20 19:28:48.741534 kubelet[2682]: I0620 19:28:48.740979 2682 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jun 20 19:28:48.742317 kubelet[2682]: I0620 19:28:48.741004 2682 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jun 20 19:28:48.742317 kubelet[2682]: I0620 19:28:48.741025 2682 policy_none.go:49] "None policy: Start" Jun 20 19:28:48.742317 kubelet[2682]: I0620 19:28:48.741043 2682 memory_manager.go:186] "Starting memorymanager" policy="None" Jun 20 19:28:48.742317 kubelet[2682]: I0620 19:28:48.741054 2682 state_mem.go:35] "Initializing new in-memory state store" Jun 20 19:28:48.742317 kubelet[2682]: I0620 19:28:48.741097 2682 reconciler.go:26] "Reconciler: start to sync state" Jun 20 19:28:48.742317 kubelet[2682]: I0620 19:28:48.741161 2682 state_mem.go:75] "Updated machine memory state" Jun 20 19:28:48.746157 kubelet[2682]: I0620 19:28:48.746134 2682 manager.go:519] "Failed to read data 
from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 20 19:28:48.746317 kubelet[2682]: I0620 19:28:48.746295 2682 eviction_manager.go:189] "Eviction manager: starting control loop" Jun 20 19:28:48.746348 kubelet[2682]: I0620 19:28:48.746314 2682 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jun 20 19:28:48.747834 kubelet[2682]: I0620 19:28:48.746741 2682 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 20 19:28:48.751439 kubelet[2682]: E0620 19:28:48.751404 2682 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jun 20 19:28:48.853930 kubelet[2682]: I0620 19:28:48.853893 2682 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jun 20 19:28:49.001779 kubelet[2682]: I0620 19:28:49.001674 2682 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jun 20 19:28:49.001779 kubelet[2682]: I0620 19:28:49.001719 2682 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jun 20 19:28:49.002629 kubelet[2682]: I0620 19:28:49.002562 2682 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jun 20 19:28:49.143414 kubelet[2682]: I0620 19:28:49.143368 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jun 20 19:28:49.143414 kubelet[2682]: I0620 19:28:49.143413 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jun 20 19:28:49.143617 kubelet[2682]: I0620 19:28:49.143442 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jun 20 19:28:49.143617 kubelet[2682]: I0620 19:28:49.143464 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jun 20 19:28:49.143617 kubelet[2682]: I0620 19:28:49.143483 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jun 20 19:28:49.143617 kubelet[2682]: I0620 19:28:49.143502 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a75e163f27396b2168da0f88f85f8a5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8a75e163f27396b2168da0f88f85f8a5\") " pod="kube-system/kube-scheduler-localhost" Jun 20 19:28:49.143617 kubelet[2682]: I0620 19:28:49.143517 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0b78071d34d6c8c1eb572d230c3437fd-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"0b78071d34d6c8c1eb572d230c3437fd\") " pod="kube-system/kube-apiserver-localhost"
Jun 20 19:28:49.143726 kubelet[2682]: I0620 19:28:49.143531 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0b78071d34d6c8c1eb572d230c3437fd-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"0b78071d34d6c8c1eb572d230c3437fd\") " pod="kube-system/kube-apiserver-localhost"
Jun 20 19:28:49.143726 kubelet[2682]: I0620 19:28:49.143545 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0b78071d34d6c8c1eb572d230c3437fd-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"0b78071d34d6c8c1eb572d230c3437fd\") " pod="kube-system/kube-apiserver-localhost"
Jun 20 19:28:49.171580 kubelet[2682]: I0620 19:28:49.171541 2682 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
Jun 20 19:28:49.171662 kubelet[2682]: I0620 19:28:49.171640 2682 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Jun 20 19:28:49.172215 kubelet[2682]: E0620 19:28:49.172184 2682 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Jun 20 19:28:49.277769 kubelet[2682]: I0620 19:28:49.277639 2682 apiserver.go:52] "Watching apiserver"
Jun 20 19:28:49.291105 kubelet[2682]: I0620 19:28:49.291069 2682 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Jun 20 19:28:49.438849 sudo[2720]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Jun 20 19:28:49.439284 sudo[2720]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Jun 20 19:28:49.473601 kubelet[2682]: E0620 19:28:49.473194 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 20 19:28:49.473601 kubelet[2682]: E0620 19:28:49.473344 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 20 19:28:49.476844 kubelet[2682]: E0620 19:28:49.474416 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 20 19:28:49.498803 kubelet[2682]: I0620 19:28:49.498720 2682 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=0.498694152 podStartE2EDuration="498.694152ms" podCreationTimestamp="2025-06-20 19:28:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:28:49.49805834 +0000 UTC m=+1.281679781" watchObservedRunningTime="2025-06-20 19:28:49.498694152 +0000 UTC m=+1.282315593"
Jun 20 19:28:49.511716 kubelet[2682]: I0620 19:28:49.511567 2682 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=0.511545575 podStartE2EDuration="511.545575ms" podCreationTimestamp="2025-06-20 19:28:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:28:49.505882371 +0000 UTC m=+1.289503822" watchObservedRunningTime="2025-06-20 19:28:49.511545575 +0000 UTC m=+1.295167016"
Jun 20 19:28:49.511716 kubelet[2682]: I0620 19:28:49.511694 2682 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.511689696 podStartE2EDuration="2.511689696s" podCreationTimestamp="2025-06-20 19:28:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:28:49.511376843 +0000 UTC m=+1.294998284" watchObservedRunningTime="2025-06-20 19:28:49.511689696 +0000 UTC m=+1.295311137"
Jun 20 19:28:49.955796 sudo[2720]: pam_unix(sudo:session): session closed for user root
Jun 20 19:28:50.313960 kubelet[2682]: E0620 19:28:50.313498 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 20 19:28:50.313960 kubelet[2682]: E0620 19:28:50.313720 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 20 19:28:50.314971 kubelet[2682]: E0620 19:28:50.314912 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 20 19:28:51.315054 kubelet[2682]: E0620 19:28:51.314966 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 20 19:28:51.315732 kubelet[2682]: E0620 19:28:51.315404 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 20 19:28:51.354325 sudo[1766]: pam_unix(sudo:session): session closed for user root
Jun 20 19:28:51.355928 sshd[1765]: Connection closed by 10.0.0.1 port 54640
Jun 20 19:28:51.356600 sshd-session[1763]: pam_unix(sshd:session): session closed for user core
Jun 20 19:28:51.361399 systemd[1]: sshd@6-10.0.0.126:22-10.0.0.1:54640.service: Deactivated successfully.
Jun 20 19:28:51.363721 systemd[1]: session-7.scope: Deactivated successfully.
Jun 20 19:28:51.363979 systemd[1]: session-7.scope: Consumed 4.468s CPU time, 263M memory peak.
Jun 20 19:28:51.365259 systemd-logind[1547]: Session 7 logged out. Waiting for processes to exit.
Jun 20 19:28:51.366622 systemd-logind[1547]: Removed session 7.
Jun 20 19:28:52.547868 kubelet[2682]: I0620 19:28:52.547808 2682 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jun 20 19:28:52.548385 containerd[1577]: time="2025-06-20T19:28:52.548197891Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jun 20 19:28:52.548620 kubelet[2682]: I0620 19:28:52.548391 2682 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jun 20 19:28:52.661939 systemd[1]: Created slice kubepods-besteffort-podc98f3d8c_e50d_4e3d_96f2_7d07c6c8c56e.slice - libcontainer container kubepods-besteffort-podc98f3d8c_e50d_4e3d_96f2_7d07c6c8c56e.slice.
Jun 20 19:28:52.679776 systemd[1]: Created slice kubepods-burstable-podbf6fcb13_d682_471c_ba56_4cc814165f9b.slice - libcontainer container kubepods-burstable-podbf6fcb13_d682_471c_ba56_4cc814165f9b.slice.
Jun 20 19:28:52.764172 kubelet[2682]: I0620 19:28:52.764135 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bf6fcb13-d682-471c-ba56-4cc814165f9b-cilium-config-path\") pod \"cilium-g4q5s\" (UID: \"bf6fcb13-d682-471c-ba56-4cc814165f9b\") " pod="kube-system/cilium-g4q5s"
Jun 20 19:28:52.764172 kubelet[2682]: I0620 19:28:52.764168 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bf6fcb13-d682-471c-ba56-4cc814165f9b-hostproc\") pod \"cilium-g4q5s\" (UID: \"bf6fcb13-d682-471c-ba56-4cc814165f9b\") " pod="kube-system/cilium-g4q5s"
Jun 20 19:28:52.764172 kubelet[2682]: I0620 19:28:52.764189 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bf6fcb13-d682-471c-ba56-4cc814165f9b-xtables-lock\") pod \"cilium-g4q5s\" (UID: \"bf6fcb13-d682-471c-ba56-4cc814165f9b\") " pod="kube-system/cilium-g4q5s"
Jun 20 19:28:52.764381 kubelet[2682]: I0620 19:28:52.764205 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c98f3d8c-e50d-4e3d-96f2-7d07c6c8c56e-kube-proxy\") pod \"kube-proxy-klh9b\" (UID: \"c98f3d8c-e50d-4e3d-96f2-7d07c6c8c56e\") " pod="kube-system/kube-proxy-klh9b"
Jun 20 19:28:52.764381 kubelet[2682]: I0620 19:28:52.764219 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bf6fcb13-d682-471c-ba56-4cc814165f9b-cilium-cgroup\") pod \"cilium-g4q5s\" (UID: \"bf6fcb13-d682-471c-ba56-4cc814165f9b\") " pod="kube-system/cilium-g4q5s"
Jun 20 19:28:52.764381 kubelet[2682]: I0620 19:28:52.764232 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c98f3d8c-e50d-4e3d-96f2-7d07c6c8c56e-xtables-lock\") pod \"kube-proxy-klh9b\" (UID: \"c98f3d8c-e50d-4e3d-96f2-7d07c6c8c56e\") " pod="kube-system/kube-proxy-klh9b"
Jun 20 19:28:52.764381 kubelet[2682]: I0620 19:28:52.764249 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bf6fcb13-d682-471c-ba56-4cc814165f9b-hubble-tls\") pod \"cilium-g4q5s\" (UID: \"bf6fcb13-d682-471c-ba56-4cc814165f9b\") " pod="kube-system/cilium-g4q5s"
Jun 20 19:28:52.764381 kubelet[2682]: I0620 19:28:52.764263 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bf6fcb13-d682-471c-ba56-4cc814165f9b-etc-cni-netd\") pod \"cilium-g4q5s\" (UID: \"bf6fcb13-d682-471c-ba56-4cc814165f9b\") " pod="kube-system/cilium-g4q5s"
Jun 20 19:28:52.764381 kubelet[2682]: I0620 19:28:52.764278 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bf6fcb13-d682-471c-ba56-4cc814165f9b-lib-modules\") pod \"cilium-g4q5s\" (UID: \"bf6fcb13-d682-471c-ba56-4cc814165f9b\") " pod="kube-system/cilium-g4q5s"
Jun 20 19:28:52.764534 kubelet[2682]: I0620 19:28:52.764336 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bf6fcb13-d682-471c-ba56-4cc814165f9b-cilium-run\") pod \"cilium-g4q5s\" (UID: \"bf6fcb13-d682-471c-ba56-4cc814165f9b\") " pod="kube-system/cilium-g4q5s"
Jun 20 19:28:52.764534 kubelet[2682]: I0620 19:28:52.764370 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bf6fcb13-d682-471c-ba56-4cc814165f9b-clustermesh-secrets\") pod \"cilium-g4q5s\" (UID: \"bf6fcb13-d682-471c-ba56-4cc814165f9b\") " pod="kube-system/cilium-g4q5s"
Jun 20 19:28:52.764534 kubelet[2682]: I0620 19:28:52.764398 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bf6fcb13-d682-471c-ba56-4cc814165f9b-host-proc-sys-kernel\") pod \"cilium-g4q5s\" (UID: \"bf6fcb13-d682-471c-ba56-4cc814165f9b\") " pod="kube-system/cilium-g4q5s"
Jun 20 19:28:52.764534 kubelet[2682]: I0620 19:28:52.764416 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-644vm\" (UniqueName: \"kubernetes.io/projected/c98f3d8c-e50d-4e3d-96f2-7d07c6c8c56e-kube-api-access-644vm\") pod \"kube-proxy-klh9b\" (UID: \"c98f3d8c-e50d-4e3d-96f2-7d07c6c8c56e\") " pod="kube-system/kube-proxy-klh9b"
Jun 20 19:28:52.764534 kubelet[2682]: I0620 19:28:52.764434 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bf6fcb13-d682-471c-ba56-4cc814165f9b-bpf-maps\") pod \"cilium-g4q5s\" (UID: \"bf6fcb13-d682-471c-ba56-4cc814165f9b\") " pod="kube-system/cilium-g4q5s"
Jun 20 19:28:52.764534 kubelet[2682]: I0620 19:28:52.764447 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bf6fcb13-d682-471c-ba56-4cc814165f9b-cni-path\") pod \"cilium-g4q5s\" (UID: \"bf6fcb13-d682-471c-ba56-4cc814165f9b\") " pod="kube-system/cilium-g4q5s"
Jun 20 19:28:52.764681 kubelet[2682]: I0620 19:28:52.764464 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bf6fcb13-d682-471c-ba56-4cc814165f9b-host-proc-sys-net\") pod \"cilium-g4q5s\" (UID: \"bf6fcb13-d682-471c-ba56-4cc814165f9b\") " pod="kube-system/cilium-g4q5s"
Jun 20 19:28:52.764681 kubelet[2682]: I0620 19:28:52.764479 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lrlp\" (UniqueName: \"kubernetes.io/projected/bf6fcb13-d682-471c-ba56-4cc814165f9b-kube-api-access-4lrlp\") pod \"cilium-g4q5s\" (UID: \"bf6fcb13-d682-471c-ba56-4cc814165f9b\") " pod="kube-system/cilium-g4q5s"
Jun 20 19:28:52.764681 kubelet[2682]: I0620 19:28:52.764493 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c98f3d8c-e50d-4e3d-96f2-7d07c6c8c56e-lib-modules\") pod \"kube-proxy-klh9b\" (UID: \"c98f3d8c-e50d-4e3d-96f2-7d07c6c8c56e\") " pod="kube-system/kube-proxy-klh9b"
Jun 20 19:28:52.898593 kubelet[2682]: E0620 19:28:52.898476 2682 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Jun 20 19:28:52.898593 kubelet[2682]: E0620 19:28:52.898512 2682 projected.go:194] Error preparing data for projected volume kube-api-access-644vm for pod kube-system/kube-proxy-klh9b: configmap "kube-root-ca.crt" not found
Jun 20 19:28:52.898593 kubelet[2682]: E0620 19:28:52.898573 2682 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c98f3d8c-e50d-4e3d-96f2-7d07c6c8c56e-kube-api-access-644vm podName:c98f3d8c-e50d-4e3d-96f2-7d07c6c8c56e nodeName:}" failed. No retries permitted until 2025-06-20 19:28:53.398553155 +0000 UTC m=+5.182174586 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-644vm" (UniqueName: "kubernetes.io/projected/c98f3d8c-e50d-4e3d-96f2-7d07c6c8c56e-kube-api-access-644vm") pod "kube-proxy-klh9b" (UID: "c98f3d8c-e50d-4e3d-96f2-7d07c6c8c56e") : configmap "kube-root-ca.crt" not found
Jun 20 19:28:52.898937 kubelet[2682]: E0620 19:28:52.898884 2682 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Jun 20 19:28:52.898937 kubelet[2682]: E0620 19:28:52.898924 2682 projected.go:194] Error preparing data for projected volume kube-api-access-4lrlp for pod kube-system/cilium-g4q5s: configmap "kube-root-ca.crt" not found
Jun 20 19:28:52.899167 kubelet[2682]: E0620 19:28:52.898991 2682 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bf6fcb13-d682-471c-ba56-4cc814165f9b-kube-api-access-4lrlp podName:bf6fcb13-d682-471c-ba56-4cc814165f9b nodeName:}" failed. No retries permitted until 2025-06-20 19:28:53.398970705 +0000 UTC m=+5.182592146 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-4lrlp" (UniqueName: "kubernetes.io/projected/bf6fcb13-d682-471c-ba56-4cc814165f9b-kube-api-access-4lrlp") pod "cilium-g4q5s" (UID: "bf6fcb13-d682-471c-ba56-4cc814165f9b") : configmap "kube-root-ca.crt" not found
Jun 20 19:28:53.573329 kubelet[2682]: E0620 19:28:53.573266 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 20 19:28:53.574017 containerd[1577]: time="2025-06-20T19:28:53.573968215Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-klh9b,Uid:c98f3d8c-e50d-4e3d-96f2-7d07c6c8c56e,Namespace:kube-system,Attempt:0,}"
Jun 20 19:28:53.584132 kubelet[2682]: E0620 19:28:53.584102 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 20 19:28:53.584557 containerd[1577]: time="2025-06-20T19:28:53.584499392Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-g4q5s,Uid:bf6fcb13-d682-471c-ba56-4cc814165f9b,Namespace:kube-system,Attempt:0,}"
Jun 20 19:28:53.634463 containerd[1577]: time="2025-06-20T19:28:53.634410614Z" level=info msg="connecting to shim f98ae6a79219e75824af8ab7f370449f2efb23b3676c5d9e12f61ed056de649a" address="unix:///run/containerd/s/7a0dd6ac7c024e90f82f792a4a5288eb8d144b6e1947e7471df48e345dcd82ae" namespace=k8s.io protocol=ttrpc version=3
Jun 20 19:28:53.636389 containerd[1577]: time="2025-06-20T19:28:53.636333119Z" level=info msg="connecting to shim 0f5ca81dd03caa1ea58ae280e111cf7d92e9c779275b11f513fd2ad169fe36da" address="unix:///run/containerd/s/03cf39ec48aa1872eae8201633e2dc94f1870aa8321b1ef51e8316f9143dcadb" namespace=k8s.io protocol=ttrpc version=3
Jun 20 19:28:53.683276 systemd[1]: Started cri-containerd-0f5ca81dd03caa1ea58ae280e111cf7d92e9c779275b11f513fd2ad169fe36da.scope - libcontainer container 0f5ca81dd03caa1ea58ae280e111cf7d92e9c779275b11f513fd2ad169fe36da.
Jun 20 19:28:53.689238 systemd[1]: Started cri-containerd-f98ae6a79219e75824af8ab7f370449f2efb23b3676c5d9e12f61ed056de649a.scope - libcontainer container f98ae6a79219e75824af8ab7f370449f2efb23b3676c5d9e12f61ed056de649a.
Jun 20 19:28:53.695035 systemd[1]: Created slice kubepods-besteffort-pod85efc7db_489b_427b_acda_5c79ac2e7a72.slice - libcontainer container kubepods-besteffort-pod85efc7db_489b_427b_acda_5c79ac2e7a72.slice.
Jun 20 19:28:53.718314 containerd[1577]: time="2025-06-20T19:28:53.718269634Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-klh9b,Uid:c98f3d8c-e50d-4e3d-96f2-7d07c6c8c56e,Namespace:kube-system,Attempt:0,} returns sandbox id \"0f5ca81dd03caa1ea58ae280e111cf7d92e9c779275b11f513fd2ad169fe36da\""
Jun 20 19:28:53.719182 kubelet[2682]: E0620 19:28:53.719157 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 20 19:28:53.723562 containerd[1577]: time="2025-06-20T19:28:53.723525716Z" level=info msg="CreateContainer within sandbox \"0f5ca81dd03caa1ea58ae280e111cf7d92e9c779275b11f513fd2ad169fe36da\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jun 20 19:28:53.727147 containerd[1577]: time="2025-06-20T19:28:53.727112460Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-g4q5s,Uid:bf6fcb13-d682-471c-ba56-4cc814165f9b,Namespace:kube-system,Attempt:0,} returns sandbox id \"f98ae6a79219e75824af8ab7f370449f2efb23b3676c5d9e12f61ed056de649a\""
Jun 20 19:28:53.729211 kubelet[2682]: E0620 19:28:53.728102 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 20 19:28:53.729284 containerd[1577]: time="2025-06-20T19:28:53.728959534Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Jun 20 19:28:53.739778 containerd[1577]: time="2025-06-20T19:28:53.739734016Z" level=info msg="Container 4fc2fd021e56cd93c057d6e4fbbcdf300abd06ef664ef42a91a5118077469441: CDI devices from CRI Config.CDIDevices: []"
Jun 20 19:28:53.747826 containerd[1577]: time="2025-06-20T19:28:53.747788294Z" level=info msg="CreateContainer within sandbox \"0f5ca81dd03caa1ea58ae280e111cf7d92e9c779275b11f513fd2ad169fe36da\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"4fc2fd021e56cd93c057d6e4fbbcdf300abd06ef664ef42a91a5118077469441\""
Jun 20 19:28:53.748361 containerd[1577]: time="2025-06-20T19:28:53.748336436Z" level=info msg="StartContainer for \"4fc2fd021e56cd93c057d6e4fbbcdf300abd06ef664ef42a91a5118077469441\""
Jun 20 19:28:53.749692 containerd[1577]: time="2025-06-20T19:28:53.749620557Z" level=info msg="connecting to shim 4fc2fd021e56cd93c057d6e4fbbcdf300abd06ef664ef42a91a5118077469441" address="unix:///run/containerd/s/03cf39ec48aa1872eae8201633e2dc94f1870aa8321b1ef51e8316f9143dcadb" protocol=ttrpc version=3
Jun 20 19:28:53.771062 kubelet[2682]: I0620 19:28:53.770941 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/85efc7db-489b-427b-acda-5c79ac2e7a72-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-2sc7n\" (UID: \"85efc7db-489b-427b-acda-5c79ac2e7a72\") " pod="kube-system/cilium-operator-6c4d7847fc-2sc7n"
Jun 20 19:28:53.771062 kubelet[2682]: I0620 19:28:53.770984 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpgzx\" (UniqueName: \"kubernetes.io/projected/85efc7db-489b-427b-acda-5c79ac2e7a72-kube-api-access-tpgzx\") pod \"cilium-operator-6c4d7847fc-2sc7n\" (UID: \"85efc7db-489b-427b-acda-5c79ac2e7a72\") " pod="kube-system/cilium-operator-6c4d7847fc-2sc7n"
Jun 20 19:28:53.775042 systemd[1]: Started cri-containerd-4fc2fd021e56cd93c057d6e4fbbcdf300abd06ef664ef42a91a5118077469441.scope - libcontainer container 4fc2fd021e56cd93c057d6e4fbbcdf300abd06ef664ef42a91a5118077469441.
Jun 20 19:28:53.848257 containerd[1577]: time="2025-06-20T19:28:53.848112663Z" level=info msg="StartContainer for \"4fc2fd021e56cd93c057d6e4fbbcdf300abd06ef664ef42a91a5118077469441\" returns successfully"
Jun 20 19:28:53.997951 kubelet[2682]: E0620 19:28:53.997902 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 20 19:28:53.998639 containerd[1577]: time="2025-06-20T19:28:53.998338874Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-2sc7n,Uid:85efc7db-489b-427b-acda-5c79ac2e7a72,Namespace:kube-system,Attempt:0,}"
Jun 20 19:28:54.091387 containerd[1577]: time="2025-06-20T19:28:54.091329249Z" level=info msg="connecting to shim 74e40d21ae2eae7f8e1284ac431f7d8ca6ec579f56ec149d9113a5ef37d40c2c" address="unix:///run/containerd/s/a618e6f3c75e6f390934b77cd1c0620fae08beae8ceec672c2d8124934d7b485" namespace=k8s.io protocol=ttrpc version=3
Jun 20 19:28:54.155073 systemd[1]: Started cri-containerd-74e40d21ae2eae7f8e1284ac431f7d8ca6ec579f56ec149d9113a5ef37d40c2c.scope - libcontainer container 74e40d21ae2eae7f8e1284ac431f7d8ca6ec579f56ec149d9113a5ef37d40c2c.
Jun 20 19:28:54.203455 containerd[1577]: time="2025-06-20T19:28:54.203416189Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-2sc7n,Uid:85efc7db-489b-427b-acda-5c79ac2e7a72,Namespace:kube-system,Attempt:0,} returns sandbox id \"74e40d21ae2eae7f8e1284ac431f7d8ca6ec579f56ec149d9113a5ef37d40c2c\""
Jun 20 19:28:54.204241 kubelet[2682]: E0620 19:28:54.204217 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 20 19:28:54.324264 kubelet[2682]: E0620 19:28:54.324227 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 20 19:28:54.333291 kubelet[2682]: I0620 19:28:54.333207 2682 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-klh9b" podStartSLOduration=2.333188101 podStartE2EDuration="2.333188101s" podCreationTimestamp="2025-06-20 19:28:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:28:54.332555557 +0000 UTC m=+6.116176998" watchObservedRunningTime="2025-06-20 19:28:54.333188101 +0000 UTC m=+6.116809532"
Jun 20 19:28:55.731443 kubelet[2682]: E0620 19:28:55.731380 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 20 19:28:56.328684 kubelet[2682]: E0620 19:28:56.328643 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 20 19:28:56.421162 kubelet[2682]: E0620 19:28:56.421129 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 20 19:28:57.330404 kubelet[2682]: E0620 19:28:57.330330 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 20 19:28:57.723002 kubelet[2682]: E0620 19:28:57.722843 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 20 19:28:58.332568 kubelet[2682]: E0620 19:28:58.332528 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 20 19:28:59.562021 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount617817016.mount: Deactivated successfully.
Jun 20 19:29:02.612233 update_engine[1551]: I20250620 19:29:02.612110 1551 update_attempter.cc:509] Updating boot flags...
Jun 20 19:29:02.614667 containerd[1577]: time="2025-06-20T19:29:02.614600432Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:29:02.615617 containerd[1577]: time="2025-06-20T19:29:02.615341763Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503"
Jun 20 19:29:02.616538 containerd[1577]: time="2025-06-20T19:29:02.616492124Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:29:02.618026 containerd[1577]: time="2025-06-20T19:29:02.617980490Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 8.888996158s"
Jun 20 19:29:02.618026 containerd[1577]: time="2025-06-20T19:29:02.618014098Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Jun 20 19:29:02.619410 containerd[1577]: time="2025-06-20T19:29:02.619366082Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Jun 20 19:29:02.620711 containerd[1577]: time="2025-06-20T19:29:02.620673802Z" level=info msg="CreateContainer within sandbox \"f98ae6a79219e75824af8ab7f370449f2efb23b3676c5d9e12f61ed056de649a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jun 20 19:29:02.634505 containerd[1577]: time="2025-06-20T19:29:02.634439633Z" level=info msg="Container 99f5f47a933bd649cfaf960ce98234e811e155f8e954605bebdbd658558db1f4: CDI devices from CRI Config.CDIDevices: []"
Jun 20 19:29:02.644194 containerd[1577]: time="2025-06-20T19:29:02.643990640Z" level=info msg="CreateContainer within sandbox \"f98ae6a79219e75824af8ab7f370449f2efb23b3676c5d9e12f61ed056de649a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"99f5f47a933bd649cfaf960ce98234e811e155f8e954605bebdbd658558db1f4\""
Jun 20 19:29:02.645185 containerd[1577]: time="2025-06-20T19:29:02.644883934Z" level=info msg="StartContainer for \"99f5f47a933bd649cfaf960ce98234e811e155f8e954605bebdbd658558db1f4\""
Jun 20 19:29:02.646540 containerd[1577]: time="2025-06-20T19:29:02.646500513Z" level=info msg="connecting to shim 99f5f47a933bd649cfaf960ce98234e811e155f8e954605bebdbd658558db1f4" address="unix:///run/containerd/s/7a0dd6ac7c024e90f82f792a4a5288eb8d144b6e1947e7471df48e345dcd82ae" protocol=ttrpc version=3
Jun 20 19:29:02.698220 systemd[1]: Started cri-containerd-99f5f47a933bd649cfaf960ce98234e811e155f8e954605bebdbd658558db1f4.scope - libcontainer container 99f5f47a933bd649cfaf960ce98234e811e155f8e954605bebdbd658558db1f4.
Jun 20 19:29:02.792568 containerd[1577]: time="2025-06-20T19:29:02.792459620Z" level=info msg="StartContainer for \"99f5f47a933bd649cfaf960ce98234e811e155f8e954605bebdbd658558db1f4\" returns successfully"
Jun 20 19:29:02.802490 systemd[1]: cri-containerd-99f5f47a933bd649cfaf960ce98234e811e155f8e954605bebdbd658558db1f4.scope: Deactivated successfully.
Jun 20 19:29:02.803128 systemd[1]: cri-containerd-99f5f47a933bd649cfaf960ce98234e811e155f8e954605bebdbd658558db1f4.scope: Consumed 29ms CPU time, 6.6M memory peak, 3.2M written to disk.
Jun 20 19:29:02.804944 containerd[1577]: time="2025-06-20T19:29:02.804900168Z" level=info msg="received exit event container_id:\"99f5f47a933bd649cfaf960ce98234e811e155f8e954605bebdbd658558db1f4\" id:\"99f5f47a933bd649cfaf960ce98234e811e155f8e954605bebdbd658558db1f4\" pid:3122 exited_at:{seconds:1750447742 nanos:804426174}"
Jun 20 19:29:02.805045 containerd[1577]: time="2025-06-20T19:29:02.805000979Z" level=info msg="TaskExit event in podsandbox handler container_id:\"99f5f47a933bd649cfaf960ce98234e811e155f8e954605bebdbd658558db1f4\" id:\"99f5f47a933bd649cfaf960ce98234e811e155f8e954605bebdbd658558db1f4\" pid:3122 exited_at:{seconds:1750447742 nanos:804426174}"
Jun 20 19:29:02.824741 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-99f5f47a933bd649cfaf960ce98234e811e155f8e954605bebdbd658558db1f4-rootfs.mount: Deactivated successfully.
Jun 20 19:29:03.344232 kubelet[2682]: E0620 19:29:03.344194 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 20 19:29:04.336417 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount728930803.mount: Deactivated successfully.
Jun 20 19:29:04.346900 kubelet[2682]: E0620 19:29:04.346738 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 20 19:29:04.348761 containerd[1577]: time="2025-06-20T19:29:04.348687308Z" level=info msg="CreateContainer within sandbox \"f98ae6a79219e75824af8ab7f370449f2efb23b3676c5d9e12f61ed056de649a\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jun 20 19:29:04.359721 containerd[1577]: time="2025-06-20T19:29:04.359642675Z" level=info msg="Container e5c065bbb697d2794a98b2d0f08868bf7ffc5939e471325df32a202725acfc79: CDI devices from CRI Config.CDIDevices: []"
Jun 20 19:29:04.367641 containerd[1577]: time="2025-06-20T19:29:04.367598328Z" level=info msg="CreateContainer within sandbox \"f98ae6a79219e75824af8ab7f370449f2efb23b3676c5d9e12f61ed056de649a\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"e5c065bbb697d2794a98b2d0f08868bf7ffc5939e471325df32a202725acfc79\""
Jun 20 19:29:04.368227 containerd[1577]: time="2025-06-20T19:29:04.368128031Z" level=info msg="StartContainer for \"e5c065bbb697d2794a98b2d0f08868bf7ffc5939e471325df32a202725acfc79\""
Jun 20 19:29:04.369186 containerd[1577]: time="2025-06-20T19:29:04.369159792Z" level=info msg="connecting to shim e5c065bbb697d2794a98b2d0f08868bf7ffc5939e471325df32a202725acfc79" address="unix:///run/containerd/s/7a0dd6ac7c024e90f82f792a4a5288eb8d144b6e1947e7471df48e345dcd82ae" protocol=ttrpc version=3
Jun 20 19:29:04.390956 systemd[1]: Started cri-containerd-e5c065bbb697d2794a98b2d0f08868bf7ffc5939e471325df32a202725acfc79.scope - libcontainer container e5c065bbb697d2794a98b2d0f08868bf7ffc5939e471325df32a202725acfc79.
Jun 20 19:29:04.427733 containerd[1577]: time="2025-06-20T19:29:04.427671346Z" level=info msg="StartContainer for \"e5c065bbb697d2794a98b2d0f08868bf7ffc5939e471325df32a202725acfc79\" returns successfully"
Jun 20 19:29:04.449052 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jun 20 19:29:04.449610 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jun 20 19:29:04.449950 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Jun 20 19:29:04.452381 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jun 20 19:29:04.454021 systemd[1]: cri-containerd-e5c065bbb697d2794a98b2d0f08868bf7ffc5939e471325df32a202725acfc79.scope: Deactivated successfully.
Jun 20 19:29:04.454952 containerd[1577]: time="2025-06-20T19:29:04.454899389Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e5c065bbb697d2794a98b2d0f08868bf7ffc5939e471325df32a202725acfc79\" id:\"e5c065bbb697d2794a98b2d0f08868bf7ffc5939e471325df32a202725acfc79\" pid:3180 exited_at:{seconds:1750447744 nanos:454325286}"
Jun 20 19:29:04.455558 containerd[1577]: time="2025-06-20T19:29:04.455524179Z" level=info msg="received exit event container_id:\"e5c065bbb697d2794a98b2d0f08868bf7ffc5939e471325df32a202725acfc79\" id:\"e5c065bbb697d2794a98b2d0f08868bf7ffc5939e471325df32a202725acfc79\" pid:3180 exited_at:{seconds:1750447744 nanos:454325286}"
Jun 20 19:29:04.485606 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jun 20 19:29:04.736965 containerd[1577]: time="2025-06-20T19:29:04.736908331Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:29:04.737805 containerd[1577]: time="2025-06-20T19:29:04.737770667Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jun 20 19:29:04.738929 containerd[1577]: time="2025-06-20T19:29:04.738878788Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:29:04.740239 containerd[1577]: time="2025-06-20T19:29:04.740157614Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.120739597s" Jun 20 19:29:04.740239 containerd[1577]: time="2025-06-20T19:29:04.740213625Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jun 20 19:29:04.742314 containerd[1577]: time="2025-06-20T19:29:04.742271641Z" level=info msg="CreateContainer within sandbox \"74e40d21ae2eae7f8e1284ac431f7d8ca6ec579f56ec149d9113a5ef37d40c2c\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jun 20 19:29:04.753329 containerd[1577]: time="2025-06-20T19:29:04.753285362Z" level=info msg="Container a23b46d6a499f7354c95d17dccda73815f1049174c41a93694c9476dcdebb25d: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:29:04.761461 containerd[1577]: time="2025-06-20T19:29:04.761414343Z" level=info msg="CreateContainer within sandbox \"74e40d21ae2eae7f8e1284ac431f7d8ca6ec579f56ec149d9113a5ef37d40c2c\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"a23b46d6a499f7354c95d17dccda73815f1049174c41a93694c9476dcdebb25d\"" Jun 20 19:29:04.761908 containerd[1577]: time="2025-06-20T19:29:04.761871149Z" level=info msg="StartContainer for \"a23b46d6a499f7354c95d17dccda73815f1049174c41a93694c9476dcdebb25d\"" Jun 20 19:29:04.762785 containerd[1577]: time="2025-06-20T19:29:04.762746767Z" level=info msg="connecting to shim a23b46d6a499f7354c95d17dccda73815f1049174c41a93694c9476dcdebb25d" address="unix:///run/containerd/s/a618e6f3c75e6f390934b77cd1c0620fae08beae8ceec672c2d8124934d7b485" protocol=ttrpc version=3 Jun 20 19:29:04.794004 systemd[1]: Started cri-containerd-a23b46d6a499f7354c95d17dccda73815f1049174c41a93694c9476dcdebb25d.scope - libcontainer container a23b46d6a499f7354c95d17dccda73815f1049174c41a93694c9476dcdebb25d. Jun 20 19:29:04.828537 containerd[1577]: time="2025-06-20T19:29:04.828470847Z" level=info msg="StartContainer for \"a23b46d6a499f7354c95d17dccda73815f1049174c41a93694c9476dcdebb25d\" returns successfully" Jun 20 19:29:05.333585 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e5c065bbb697d2794a98b2d0f08868bf7ffc5939e471325df32a202725acfc79-rootfs.mount: Deactivated successfully.
Jun 20 19:29:05.355688 kubelet[2682]: E0620 19:29:05.355613 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:29:05.357789 containerd[1577]: time="2025-06-20T19:29:05.357741365Z" level=info msg="CreateContainer within sandbox \"f98ae6a79219e75824af8ab7f370449f2efb23b3676c5d9e12f61ed056de649a\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jun 20 19:29:05.358434 kubelet[2682]: E0620 19:29:05.358412 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:29:05.614301 containerd[1577]: time="2025-06-20T19:29:05.614155802Z" level=info msg="Container 52c7db4b75f39ea35a78cfde563ba9b64fbef0938a12a7f8f4fc9dd43870fd88: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:29:05.619423 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3012039538.mount: Deactivated successfully. 
Jun 20 19:29:05.625724 containerd[1577]: time="2025-06-20T19:29:05.625666446Z" level=info msg="CreateContainer within sandbox \"f98ae6a79219e75824af8ab7f370449f2efb23b3676c5d9e12f61ed056de649a\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"52c7db4b75f39ea35a78cfde563ba9b64fbef0938a12a7f8f4fc9dd43870fd88\"" Jun 20 19:29:05.626851 containerd[1577]: time="2025-06-20T19:29:05.626531434Z" level=info msg="StartContainer for \"52c7db4b75f39ea35a78cfde563ba9b64fbef0938a12a7f8f4fc9dd43870fd88\"" Jun 20 19:29:05.628169 containerd[1577]: time="2025-06-20T19:29:05.628127252Z" level=info msg="connecting to shim 52c7db4b75f39ea35a78cfde563ba9b64fbef0938a12a7f8f4fc9dd43870fd88" address="unix:///run/containerd/s/7a0dd6ac7c024e90f82f792a4a5288eb8d144b6e1947e7471df48e345dcd82ae" protocol=ttrpc version=3 Jun 20 19:29:05.653980 systemd[1]: Started cri-containerd-52c7db4b75f39ea35a78cfde563ba9b64fbef0938a12a7f8f4fc9dd43870fd88.scope - libcontainer container 52c7db4b75f39ea35a78cfde563ba9b64fbef0938a12a7f8f4fc9dd43870fd88. Jun 20 19:29:05.734605 systemd[1]: cri-containerd-52c7db4b75f39ea35a78cfde563ba9b64fbef0938a12a7f8f4fc9dd43870fd88.scope: Deactivated successfully. 
Jun 20 19:29:05.734790 containerd[1577]: time="2025-06-20T19:29:05.734655802Z" level=info msg="StartContainer for \"52c7db4b75f39ea35a78cfde563ba9b64fbef0938a12a7f8f4fc9dd43870fd88\" returns successfully" Jun 20 19:29:05.735603 containerd[1577]: time="2025-06-20T19:29:05.735566855Z" level=info msg="received exit event container_id:\"52c7db4b75f39ea35a78cfde563ba9b64fbef0938a12a7f8f4fc9dd43870fd88\" id:\"52c7db4b75f39ea35a78cfde563ba9b64fbef0938a12a7f8f4fc9dd43870fd88\" pid:3267 exited_at:{seconds:1750447745 nanos:735395361}" Jun 20 19:29:05.735867 containerd[1577]: time="2025-06-20T19:29:05.735844251Z" level=info msg="TaskExit event in podsandbox handler container_id:\"52c7db4b75f39ea35a78cfde563ba9b64fbef0938a12a7f8f4fc9dd43870fd88\" id:\"52c7db4b75f39ea35a78cfde563ba9b64fbef0938a12a7f8f4fc9dd43870fd88\" pid:3267 exited_at:{seconds:1750447745 nanos:735395361}" Jun 20 19:29:05.759938 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-52c7db4b75f39ea35a78cfde563ba9b64fbef0938a12a7f8f4fc9dd43870fd88-rootfs.mount: Deactivated successfully. 
Jun 20 19:29:06.364883 kubelet[2682]: E0620 19:29:06.363846 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:29:06.364883 kubelet[2682]: E0620 19:29:06.363874 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:29:06.369903 containerd[1577]: time="2025-06-20T19:29:06.369859708Z" level=info msg="CreateContainer within sandbox \"f98ae6a79219e75824af8ab7f370449f2efb23b3676c5d9e12f61ed056de649a\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jun 20 19:29:06.382093 kubelet[2682]: I0620 19:29:06.381752 2682 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-2sc7n" podStartSLOduration=2.845370156 podStartE2EDuration="13.381734755s" podCreationTimestamp="2025-06-20 19:28:53 +0000 UTC" firstStartedPulling="2025-06-20 19:28:54.204640307 +0000 UTC m=+5.988261748" lastFinishedPulling="2025-06-20 19:29:04.741004916 +0000 UTC m=+16.524626347" observedRunningTime="2025-06-20 19:29:05.618077327 +0000 UTC m=+17.401698768" watchObservedRunningTime="2025-06-20 19:29:06.381734755 +0000 UTC m=+18.165356196" Jun 20 19:29:06.383132 containerd[1577]: time="2025-06-20T19:29:06.383067208Z" level=info msg="Container b71e1fd6290ebe02596efe0277d598fe934e08c2d3d1ba11da249ca80cbb188f: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:29:06.391013 containerd[1577]: time="2025-06-20T19:29:06.390965775Z" level=info msg="CreateContainer within sandbox \"f98ae6a79219e75824af8ab7f370449f2efb23b3676c5d9e12f61ed056de649a\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"b71e1fd6290ebe02596efe0277d598fe934e08c2d3d1ba11da249ca80cbb188f\"" Jun 20 19:29:06.391551 containerd[1577]: time="2025-06-20T19:29:06.391522762Z" level=info msg="StartContainer for \"b71e1fd6290ebe02596efe0277d598fe934e08c2d3d1ba11da249ca80cbb188f\"" Jun 20 19:29:06.392344 containerd[1577]: time="2025-06-20T19:29:06.392295366Z" level=info msg="connecting to shim b71e1fd6290ebe02596efe0277d598fe934e08c2d3d1ba11da249ca80cbb188f" address="unix:///run/containerd/s/7a0dd6ac7c024e90f82f792a4a5288eb8d144b6e1947e7471df48e345dcd82ae" protocol=ttrpc version=3 Jun 20 19:29:06.415962 systemd[1]: Started cri-containerd-b71e1fd6290ebe02596efe0277d598fe934e08c2d3d1ba11da249ca80cbb188f.scope - libcontainer container b71e1fd6290ebe02596efe0277d598fe934e08c2d3d1ba11da249ca80cbb188f. Jun 20 19:29:06.445805 systemd[1]: cri-containerd-b71e1fd6290ebe02596efe0277d598fe934e08c2d3d1ba11da249ca80cbb188f.scope: Deactivated successfully. Jun 20 19:29:06.447218 containerd[1577]: time="2025-06-20T19:29:06.447171956Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b71e1fd6290ebe02596efe0277d598fe934e08c2d3d1ba11da249ca80cbb188f\" id:\"b71e1fd6290ebe02596efe0277d598fe934e08c2d3d1ba11da249ca80cbb188f\" pid:3306 exited_at:{seconds:1750447746 nanos:446721745}" Jun 20 19:29:06.450553 containerd[1577]: time="2025-06-20T19:29:06.450494622Z" level=info msg="received exit event container_id:\"b71e1fd6290ebe02596efe0277d598fe934e08c2d3d1ba11da249ca80cbb188f\" id:\"b71e1fd6290ebe02596efe0277d598fe934e08c2d3d1ba11da249ca80cbb188f\" pid:3306 exited_at:{seconds:1750447746 nanos:446721745}" Jun 20 19:29:06.458952 containerd[1577]: time="2025-06-20T19:29:06.458906322Z" level=info msg="StartContainer for \"b71e1fd6290ebe02596efe0277d598fe934e08c2d3d1ba11da249ca80cbb188f\" returns successfully" Jun 20 19:29:06.473117 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b71e1fd6290ebe02596efe0277d598fe934e08c2d3d1ba11da249ca80cbb188f-rootfs.mount: Deactivated successfully.
Jun 20 19:29:07.369228 kubelet[2682]: E0620 19:29:07.369190 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:29:07.371766 containerd[1577]: time="2025-06-20T19:29:07.371574769Z" level=info msg="CreateContainer within sandbox \"f98ae6a79219e75824af8ab7f370449f2efb23b3676c5d9e12f61ed056de649a\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jun 20 19:29:07.386925 containerd[1577]: time="2025-06-20T19:29:07.386876716Z" level=info msg="Container 534592dc240f5dd6344871f534417b87dff006328293a6186876ce65e2f97db8: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:29:07.388967 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3461469794.mount: Deactivated successfully. Jun 20 19:29:07.396136 containerd[1577]: time="2025-06-20T19:29:07.396097349Z" level=info msg="CreateContainer within sandbox \"f98ae6a79219e75824af8ab7f370449f2efb23b3676c5d9e12f61ed056de649a\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"534592dc240f5dd6344871f534417b87dff006328293a6186876ce65e2f97db8\"" Jun 20 19:29:07.396701 containerd[1577]: time="2025-06-20T19:29:07.396661656Z" level=info msg="StartContainer for \"534592dc240f5dd6344871f534417b87dff006328293a6186876ce65e2f97db8\"" Jun 20 19:29:07.397700 containerd[1577]: time="2025-06-20T19:29:07.397658655Z" level=info msg="connecting to shim 534592dc240f5dd6344871f534417b87dff006328293a6186876ce65e2f97db8" address="unix:///run/containerd/s/7a0dd6ac7c024e90f82f792a4a5288eb8d144b6e1947e7471df48e345dcd82ae" protocol=ttrpc version=3 Jun 20 19:29:07.420951 systemd[1]: Started cri-containerd-534592dc240f5dd6344871f534417b87dff006328293a6186876ce65e2f97db8.scope - libcontainer container 534592dc240f5dd6344871f534417b87dff006328293a6186876ce65e2f97db8. 
Jun 20 19:29:07.457486 containerd[1577]: time="2025-06-20T19:29:07.457444090Z" level=info msg="StartContainer for \"534592dc240f5dd6344871f534417b87dff006328293a6186876ce65e2f97db8\" returns successfully" Jun 20 19:29:07.527336 containerd[1577]: time="2025-06-20T19:29:07.527287029Z" level=info msg="TaskExit event in podsandbox handler container_id:\"534592dc240f5dd6344871f534417b87dff006328293a6186876ce65e2f97db8\" id:\"24f50fb3cc6931b52875b790402159c32f1eb27d81fdd179679b2f56a8ae6852\" pid:3374 exited_at:{seconds:1750447747 nanos:526724372}" Jun 20 19:29:07.629634 kubelet[2682]: I0620 19:29:07.629526 2682 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jun 20 19:29:07.664529 systemd[1]: Created slice kubepods-burstable-podf2aa04bd_7587_436c_9d4c_5d3dd1f94024.slice - libcontainer container kubepods-burstable-podf2aa04bd_7587_436c_9d4c_5d3dd1f94024.slice. Jun 20 19:29:07.672718 systemd[1]: Created slice kubepods-burstable-podd3592af3_cfdf_46f2_9805_86da86c425fc.slice - libcontainer container kubepods-burstable-podd3592af3_cfdf_46f2_9805_86da86c425fc.slice. 
Jun 20 19:29:07.762441 kubelet[2682]: I0620 19:29:07.762349 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlb4v\" (UniqueName: \"kubernetes.io/projected/d3592af3-cfdf-46f2-9805-86da86c425fc-kube-api-access-qlb4v\") pod \"coredns-668d6bf9bc-2c4kh\" (UID: \"d3592af3-cfdf-46f2-9805-86da86c425fc\") " pod="kube-system/coredns-668d6bf9bc-2c4kh" Jun 20 19:29:07.762441 kubelet[2682]: I0620 19:29:07.762416 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdvwg\" (UniqueName: \"kubernetes.io/projected/f2aa04bd-7587-436c-9d4c-5d3dd1f94024-kube-api-access-zdvwg\") pod \"coredns-668d6bf9bc-8c9kb\" (UID: \"f2aa04bd-7587-436c-9d4c-5d3dd1f94024\") " pod="kube-system/coredns-668d6bf9bc-8c9kb" Jun 20 19:29:07.762441 kubelet[2682]: I0620 19:29:07.762445 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f2aa04bd-7587-436c-9d4c-5d3dd1f94024-config-volume\") pod \"coredns-668d6bf9bc-8c9kb\" (UID: \"f2aa04bd-7587-436c-9d4c-5d3dd1f94024\") " pod="kube-system/coredns-668d6bf9bc-8c9kb" Jun 20 19:29:07.762696 kubelet[2682]: I0620 19:29:07.762485 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d3592af3-cfdf-46f2-9805-86da86c425fc-config-volume\") pod \"coredns-668d6bf9bc-2c4kh\" (UID: \"d3592af3-cfdf-46f2-9805-86da86c425fc\") " pod="kube-system/coredns-668d6bf9bc-2c4kh" Jun 20 19:29:07.969949 kubelet[2682]: E0620 19:29:07.969907 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:29:07.970623 containerd[1577]: time="2025-06-20T19:29:07.970423823Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-8c9kb,Uid:f2aa04bd-7587-436c-9d4c-5d3dd1f94024,Namespace:kube-system,Attempt:0,}" Jun 20 19:29:07.976341 kubelet[2682]: E0620 19:29:07.976318 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:29:07.976647 containerd[1577]: time="2025-06-20T19:29:07.976623269Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2c4kh,Uid:d3592af3-cfdf-46f2-9805-86da86c425fc,Namespace:kube-system,Attempt:0,}" Jun 20 19:29:08.375627 kubelet[2682]: E0620 19:29:08.375509 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:29:08.395829 kubelet[2682]: I0620 19:29:08.395544 2682 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-g4q5s" podStartSLOduration=7.505042918 podStartE2EDuration="16.395526731s" podCreationTimestamp="2025-06-20 19:28:52 +0000 UTC" firstStartedPulling="2025-06-20 19:28:53.728585119 +0000 UTC m=+5.512206560" lastFinishedPulling="2025-06-20 19:29:02.619068941 +0000 UTC m=+14.402690373" observedRunningTime="2025-06-20 19:29:08.394204905 +0000 UTC m=+20.177826336" watchObservedRunningTime="2025-06-20 19:29:08.395526731 +0000 UTC m=+20.179148172" Jun 20 19:29:09.265556 systemd-networkd[1483]: cilium_host: Link UP Jun 20 19:29:09.265775 systemd-networkd[1483]: cilium_net: Link UP Jun 20 19:29:09.266073 systemd-networkd[1483]: cilium_net: Gained carrier Jun 20 19:29:09.268441 systemd-networkd[1483]: cilium_host: Gained carrier Jun 20 19:29:09.376930 systemd-networkd[1483]: cilium_vxlan: Link UP Jun 20 19:29:09.377295 systemd-networkd[1483]: cilium_vxlan: Gained carrier Jun 20 19:29:09.379447 kubelet[2682]: E0620 19:29:09.379417 2682 dns.go:153] "Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:29:09.433034 systemd-networkd[1483]: cilium_host: Gained IPv6LL Jun 20 19:29:09.594869 kernel: NET: Registered PF_ALG protocol family Jun 20 19:29:10.218006 systemd-networkd[1483]: cilium_net: Gained IPv6LL Jun 20 19:29:10.277861 systemd-networkd[1483]: lxc_health: Link UP Jun 20 19:29:10.278366 systemd-networkd[1483]: lxc_health: Gained carrier Jun 20 19:29:10.381422 kubelet[2682]: E0620 19:29:10.381362 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:29:10.537049 systemd-networkd[1483]: cilium_vxlan: Gained IPv6LL Jun 20 19:29:10.557911 kernel: eth0: renamed from tmp230f1 Jun 20 19:29:10.560301 systemd-networkd[1483]: lxc56ea1236ebd4: Link UP Jun 20 19:29:10.568871 kernel: eth0: renamed from tmpfe086 Jun 20 19:29:10.570552 systemd-networkd[1483]: lxc9990c127e0b1: Link UP Jun 20 19:29:10.571227 systemd-networkd[1483]: lxc56ea1236ebd4: Gained carrier Jun 20 19:29:10.573498 systemd-networkd[1483]: lxc9990c127e0b1: Gained carrier Jun 20 19:29:11.369030 systemd-networkd[1483]: lxc_health: Gained IPv6LL Jun 20 19:29:11.585931 kubelet[2682]: E0620 19:29:11.585861 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:29:12.073113 systemd-networkd[1483]: lxc56ea1236ebd4: Gained IPv6LL Jun 20 19:29:12.201047 systemd-networkd[1483]: lxc9990c127e0b1: Gained IPv6LL Jun 20 19:29:12.384325 kubelet[2682]: E0620 19:29:12.384200 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:29:13.385676 kubelet[2682]: E0620 19:29:13.385646 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:29:13.882269 containerd[1577]: time="2025-06-20T19:29:13.882184747Z" level=info msg="connecting to shim fe08632fa479c64a8eaf31bc5f4eb1f41a5fd0f81eddf3f962ce5e9eac539056" address="unix:///run/containerd/s/0e02614d8408b09f9587e6f932d76da7533243f70e5f434be9cb232336ee7e73" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:29:13.909763 containerd[1577]: time="2025-06-20T19:29:13.909688204Z" level=info msg="connecting to shim 230f13f45e933993ff7e3995e0c67b7e07d4b8dfcce99e288bd17414f50c0217" address="unix:///run/containerd/s/3dc014b711155d3ddbd7d0abf1a3bb97f6ce95347e3d83252844d7e1b836f1b0" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:29:13.910985 systemd[1]: Started cri-containerd-fe08632fa479c64a8eaf31bc5f4eb1f41a5fd0f81eddf3f962ce5e9eac539056.scope - libcontainer container fe08632fa479c64a8eaf31bc5f4eb1f41a5fd0f81eddf3f962ce5e9eac539056. Jun 20 19:29:13.927134 systemd-resolved[1404]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 20 19:29:13.945973 systemd[1]: Started cri-containerd-230f13f45e933993ff7e3995e0c67b7e07d4b8dfcce99e288bd17414f50c0217.scope - libcontainer container 230f13f45e933993ff7e3995e0c67b7e07d4b8dfcce99e288bd17414f50c0217.
Jun 20 19:29:13.962657 containerd[1577]: time="2025-06-20T19:29:13.962601761Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2c4kh,Uid:d3592af3-cfdf-46f2-9805-86da86c425fc,Namespace:kube-system,Attempt:0,} returns sandbox id \"fe08632fa479c64a8eaf31bc5f4eb1f41a5fd0f81eddf3f962ce5e9eac539056\"" Jun 20 19:29:13.964832 kubelet[2682]: E0620 19:29:13.964794 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:29:13.968038 systemd-resolved[1404]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 20 19:29:13.968171 containerd[1577]: time="2025-06-20T19:29:13.967964661Z" level=info msg="CreateContainer within sandbox \"fe08632fa479c64a8eaf31bc5f4eb1f41a5fd0f81eddf3f962ce5e9eac539056\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 20 19:29:13.984330 containerd[1577]: time="2025-06-20T19:29:13.984265127Z" level=info msg="Container 594744667fb163dcd4760d54f7a1ff1dddee4c14dea28cb1f86c830aed869212: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:29:13.991278 containerd[1577]: time="2025-06-20T19:29:13.991158840Z" level=info msg="CreateContainer within sandbox \"fe08632fa479c64a8eaf31bc5f4eb1f41a5fd0f81eddf3f962ce5e9eac539056\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"594744667fb163dcd4760d54f7a1ff1dddee4c14dea28cb1f86c830aed869212\"" Jun 20 19:29:13.992860 containerd[1577]: time="2025-06-20T19:29:13.992556578Z" level=info msg="StartContainer for \"594744667fb163dcd4760d54f7a1ff1dddee4c14dea28cb1f86c830aed869212\"" Jun 20 19:29:14.000177 containerd[1577]: time="2025-06-20T19:29:14.000129292Z" level=info msg="connecting to shim 594744667fb163dcd4760d54f7a1ff1dddee4c14dea28cb1f86c830aed869212" address="unix:///run/containerd/s/0e02614d8408b09f9587e6f932d76da7533243f70e5f434be9cb232336ee7e73" protocol=ttrpc version=3 
Jun 20 19:29:14.002573 containerd[1577]: time="2025-06-20T19:29:14.002523146Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-8c9kb,Uid:f2aa04bd-7587-436c-9d4c-5d3dd1f94024,Namespace:kube-system,Attempt:0,} returns sandbox id \"230f13f45e933993ff7e3995e0c67b7e07d4b8dfcce99e288bd17414f50c0217\"" Jun 20 19:29:14.003369 kubelet[2682]: E0620 19:29:14.003344 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:29:14.006026 containerd[1577]: time="2025-06-20T19:29:14.005627684Z" level=info msg="CreateContainer within sandbox \"230f13f45e933993ff7e3995e0c67b7e07d4b8dfcce99e288bd17414f50c0217\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 20 19:29:14.017569 containerd[1577]: time="2025-06-20T19:29:14.017527915Z" level=info msg="Container 7c2a8b5444a0e5b2542f5203c7c76f018ca86951bb415906f1cf6760be0acfcf: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:29:14.024110 containerd[1577]: time="2025-06-20T19:29:14.024037151Z" level=info msg="CreateContainer within sandbox \"230f13f45e933993ff7e3995e0c67b7e07d4b8dfcce99e288bd17414f50c0217\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7c2a8b5444a0e5b2542f5203c7c76f018ca86951bb415906f1cf6760be0acfcf\"" Jun 20 19:29:14.024452 containerd[1577]: time="2025-06-20T19:29:14.024420495Z" level=info msg="StartContainer for \"7c2a8b5444a0e5b2542f5203c7c76f018ca86951bb415906f1cf6760be0acfcf\"" Jun 20 19:29:14.025367 containerd[1577]: time="2025-06-20T19:29:14.025337673Z" level=info msg="connecting to shim 7c2a8b5444a0e5b2542f5203c7c76f018ca86951bb415906f1cf6760be0acfcf" address="unix:///run/containerd/s/3dc014b711155d3ddbd7d0abf1a3bb97f6ce95347e3d83252844d7e1b836f1b0" protocol=ttrpc version=3 Jun 20 19:29:14.026083 systemd[1]: Started cri-containerd-594744667fb163dcd4760d54f7a1ff1dddee4c14dea28cb1f86c830aed869212.scope - libcontainer container 594744667fb163dcd4760d54f7a1ff1dddee4c14dea28cb1f86c830aed869212. Jun 20 19:29:14.052118 systemd[1]: Started cri-containerd-7c2a8b5444a0e5b2542f5203c7c76f018ca86951bb415906f1cf6760be0acfcf.scope - libcontainer container 7c2a8b5444a0e5b2542f5203c7c76f018ca86951bb415906f1cf6760be0acfcf. Jun 20 19:29:14.076944 containerd[1577]: time="2025-06-20T19:29:14.076847200Z" level=info msg="StartContainer for \"594744667fb163dcd4760d54f7a1ff1dddee4c14dea28cb1f86c830aed869212\" returns successfully" Jun 20 19:29:14.089588 containerd[1577]: time="2025-06-20T19:29:14.089555021Z" level=info msg="StartContainer for \"7c2a8b5444a0e5b2542f5203c7c76f018ca86951bb415906f1cf6760be0acfcf\" returns successfully" Jun 20 19:29:14.389008 kubelet[2682]: E0620 19:29:14.388912 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:29:14.392069 kubelet[2682]: E0620 19:29:14.392024 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:29:14.853935 kubelet[2682]: I0620 19:29:14.853854 2682 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-8c9kb" podStartSLOduration=21.853836315 podStartE2EDuration="21.853836315s" podCreationTimestamp="2025-06-20 19:28:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:29:14.686254557 +0000 UTC m=+26.469875999" watchObservedRunningTime="2025-06-20 19:29:14.853836315 +0000 UTC m=+26.637457766" Jun 20 19:29:14.874745 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1289676640.mount: Deactivated successfully.
Jun 20 19:29:15.394030 kubelet[2682]: E0620 19:29:15.393991 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:29:15.394519 kubelet[2682]: E0620 19:29:15.394104 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:29:16.396740 kubelet[2682]: E0620 19:29:16.396694 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:29:16.397350 kubelet[2682]: E0620 19:29:16.396897 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:29:19.887595 systemd[1]: Started sshd@7-10.0.0.126:22-10.0.0.1:53828.service - OpenSSH per-connection server daemon (10.0.0.1:53828). Jun 20 19:29:19.946918 sshd[4037]: Accepted publickey for core from 10.0.0.1 port 53828 ssh2: RSA SHA256:YoQ4GiRtY5Hu7FaS/OnNYeCnsR+r8YS2g6Qh7XD/NPE Jun 20 19:29:19.948661 sshd-session[4037]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:29:19.953912 systemd-logind[1547]: New session 8 of user core. Jun 20 19:29:19.966020 systemd[1]: Started session-8.scope - Session 8 of User core. Jun 20 19:29:20.100700 sshd[4039]: Connection closed by 10.0.0.1 port 53828 Jun 20 19:29:20.101139 sshd-session[4037]: pam_unix(sshd:session): session closed for user core Jun 20 19:29:20.106963 systemd[1]: sshd@7-10.0.0.126:22-10.0.0.1:53828.service: Deactivated successfully. Jun 20 19:29:20.109483 systemd[1]: session-8.scope: Deactivated successfully. Jun 20 19:29:20.110489 systemd-logind[1547]: Session 8 logged out. Waiting for processes to exit. 
Jun 20 19:29:20.112222 systemd-logind[1547]: Removed session 8. Jun 20 19:29:25.115192 systemd[1]: Started sshd@8-10.0.0.126:22-10.0.0.1:54468.service - OpenSSH per-connection server daemon (10.0.0.1:54468). Jun 20 19:29:25.172102 sshd[4060]: Accepted publickey for core from 10.0.0.1 port 54468 ssh2: RSA SHA256:YoQ4GiRtY5Hu7FaS/OnNYeCnsR+r8YS2g6Qh7XD/NPE Jun 20 19:29:25.173714 sshd-session[4060]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:29:25.178296 systemd-logind[1547]: New session 9 of user core. Jun 20 19:29:25.187956 systemd[1]: Started session-9.scope - Session 9 of User core. Jun 20 19:29:25.308691 sshd[4062]: Connection closed by 10.0.0.1 port 54468 Jun 20 19:29:25.309070 sshd-session[4060]: pam_unix(sshd:session): session closed for user core Jun 20 19:29:25.312247 systemd[1]: sshd@8-10.0.0.126:22-10.0.0.1:54468.service: Deactivated successfully. Jun 20 19:29:25.314527 systemd[1]: session-9.scope: Deactivated successfully. Jun 20 19:29:25.316102 systemd-logind[1547]: Session 9 logged out. Waiting for processes to exit. Jun 20 19:29:25.317962 systemd-logind[1547]: Removed session 9. Jun 20 19:29:30.324037 systemd[1]: Started sshd@9-10.0.0.126:22-10.0.0.1:54478.service - OpenSSH per-connection server daemon (10.0.0.1:54478). Jun 20 19:29:30.452505 sshd[4076]: Accepted publickey for core from 10.0.0.1 port 54478 ssh2: RSA SHA256:YoQ4GiRtY5Hu7FaS/OnNYeCnsR+r8YS2g6Qh7XD/NPE Jun 20 19:29:30.454382 sshd-session[4076]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:29:30.460257 systemd-logind[1547]: New session 10 of user core. Jun 20 19:29:30.475284 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jun 20 19:29:30.588782 sshd[4078]: Connection closed by 10.0.0.1 port 54478 Jun 20 19:29:30.589100 sshd-session[4076]: pam_unix(sshd:session): session closed for user core Jun 20 19:29:30.594207 systemd[1]: sshd@9-10.0.0.126:22-10.0.0.1:54478.service: Deactivated successfully. Jun 20 19:29:30.596906 systemd[1]: session-10.scope: Deactivated successfully. Jun 20 19:29:30.597937 systemd-logind[1547]: Session 10 logged out. Waiting for processes to exit. Jun 20 19:29:30.599635 systemd-logind[1547]: Removed session 10. Jun 20 19:29:35.604251 systemd[1]: Started sshd@10-10.0.0.126:22-10.0.0.1:48262.service - OpenSSH per-connection server daemon (10.0.0.1:48262). Jun 20 19:29:35.655086 sshd[4092]: Accepted publickey for core from 10.0.0.1 port 48262 ssh2: RSA SHA256:YoQ4GiRtY5Hu7FaS/OnNYeCnsR+r8YS2g6Qh7XD/NPE Jun 20 19:29:35.656680 sshd-session[4092]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:29:35.661142 systemd-logind[1547]: New session 11 of user core. Jun 20 19:29:35.674986 systemd[1]: Started session-11.scope - Session 11 of User core. Jun 20 19:29:35.796979 sshd[4094]: Connection closed by 10.0.0.1 port 48262 Jun 20 19:29:35.797390 sshd-session[4092]: pam_unix(sshd:session): session closed for user core Jun 20 19:29:35.809650 systemd[1]: sshd@10-10.0.0.126:22-10.0.0.1:48262.service: Deactivated successfully. Jun 20 19:29:35.811744 systemd[1]: session-11.scope: Deactivated successfully. Jun 20 19:29:35.812721 systemd-logind[1547]: Session 11 logged out. Waiting for processes to exit. Jun 20 19:29:35.816395 systemd[1]: Started sshd@11-10.0.0.126:22-10.0.0.1:48264.service - OpenSSH per-connection server daemon (10.0.0.1:48264). Jun 20 19:29:35.817293 systemd-logind[1547]: Removed session 11. 
Jun 20 19:29:35.873296 sshd[4109]: Accepted publickey for core from 10.0.0.1 port 48264 ssh2: RSA SHA256:YoQ4GiRtY5Hu7FaS/OnNYeCnsR+r8YS2g6Qh7XD/NPE Jun 20 19:29:35.874898 sshd-session[4109]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:29:35.879996 systemd-logind[1547]: New session 12 of user core. Jun 20 19:29:35.889968 systemd[1]: Started session-12.scope - Session 12 of User core. Jun 20 19:29:36.050666 sshd[4111]: Connection closed by 10.0.0.1 port 48264 Jun 20 19:29:36.051117 sshd-session[4109]: pam_unix(sshd:session): session closed for user core Jun 20 19:29:36.059372 systemd[1]: sshd@11-10.0.0.126:22-10.0.0.1:48264.service: Deactivated successfully. Jun 20 19:29:36.061173 systemd[1]: session-12.scope: Deactivated successfully. Jun 20 19:29:36.062035 systemd-logind[1547]: Session 12 logged out. Waiting for processes to exit. Jun 20 19:29:36.064853 systemd[1]: Started sshd@12-10.0.0.126:22-10.0.0.1:48278.service - OpenSSH per-connection server daemon (10.0.0.1:48278). Jun 20 19:29:36.065525 systemd-logind[1547]: Removed session 12. Jun 20 19:29:36.129956 sshd[4122]: Accepted publickey for core from 10.0.0.1 port 48278 ssh2: RSA SHA256:YoQ4GiRtY5Hu7FaS/OnNYeCnsR+r8YS2g6Qh7XD/NPE Jun 20 19:29:36.131658 sshd-session[4122]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:29:36.136604 systemd-logind[1547]: New session 13 of user core. Jun 20 19:29:36.148006 systemd[1]: Started session-13.scope - Session 13 of User core. Jun 20 19:29:36.270807 sshd[4124]: Connection closed by 10.0.0.1 port 48278 Jun 20 19:29:36.271202 sshd-session[4122]: pam_unix(sshd:session): session closed for user core Jun 20 19:29:36.276062 systemd[1]: sshd@12-10.0.0.126:22-10.0.0.1:48278.service: Deactivated successfully. Jun 20 19:29:36.278722 systemd[1]: session-13.scope: Deactivated successfully. Jun 20 19:29:36.279697 systemd-logind[1547]: Session 13 logged out. Waiting for processes to exit. 
Jun 20 19:29:36.281515 systemd-logind[1547]: Removed session 13. Jun 20 19:29:41.284918 systemd[1]: Started sshd@13-10.0.0.126:22-10.0.0.1:48282.service - OpenSSH per-connection server daemon (10.0.0.1:48282). Jun 20 19:29:41.342914 sshd[4138]: Accepted publickey for core from 10.0.0.1 port 48282 ssh2: RSA SHA256:YoQ4GiRtY5Hu7FaS/OnNYeCnsR+r8YS2g6Qh7XD/NPE Jun 20 19:29:41.344265 sshd-session[4138]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:29:41.348288 systemd-logind[1547]: New session 14 of user core. Jun 20 19:29:41.357979 systemd[1]: Started session-14.scope - Session 14 of User core. Jun 20 19:29:41.462395 sshd[4140]: Connection closed by 10.0.0.1 port 48282 Jun 20 19:29:41.462747 sshd-session[4138]: pam_unix(sshd:session): session closed for user core Jun 20 19:29:41.466676 systemd[1]: sshd@13-10.0.0.126:22-10.0.0.1:48282.service: Deactivated successfully. Jun 20 19:29:41.468803 systemd[1]: session-14.scope: Deactivated successfully. Jun 20 19:29:41.469622 systemd-logind[1547]: Session 14 logged out. Waiting for processes to exit. Jun 20 19:29:41.470901 systemd-logind[1547]: Removed session 14. Jun 20 19:29:46.476966 systemd[1]: Started sshd@14-10.0.0.126:22-10.0.0.1:42422.service - OpenSSH per-connection server daemon (10.0.0.1:42422). Jun 20 19:29:46.533430 sshd[4154]: Accepted publickey for core from 10.0.0.1 port 42422 ssh2: RSA SHA256:YoQ4GiRtY5Hu7FaS/OnNYeCnsR+r8YS2g6Qh7XD/NPE Jun 20 19:29:46.535574 sshd-session[4154]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:29:46.541334 systemd-logind[1547]: New session 15 of user core. Jun 20 19:29:46.551046 systemd[1]: Started session-15.scope - Session 15 of User core. 
Jun 20 19:29:46.670454 sshd[4156]: Connection closed by 10.0.0.1 port 42422 Jun 20 19:29:46.670800 sshd-session[4154]: pam_unix(sshd:session): session closed for user core Jun 20 19:29:46.674927 systemd[1]: sshd@14-10.0.0.126:22-10.0.0.1:42422.service: Deactivated successfully. Jun 20 19:29:46.677480 systemd[1]: session-15.scope: Deactivated successfully. Jun 20 19:29:46.678435 systemd-logind[1547]: Session 15 logged out. Waiting for processes to exit. Jun 20 19:29:46.679838 systemd-logind[1547]: Removed session 15. Jun 20 19:29:51.681567 systemd[1]: Started sshd@15-10.0.0.126:22-10.0.0.1:42436.service - OpenSSH per-connection server daemon (10.0.0.1:42436). Jun 20 19:29:51.737282 sshd[4172]: Accepted publickey for core from 10.0.0.1 port 42436 ssh2: RSA SHA256:YoQ4GiRtY5Hu7FaS/OnNYeCnsR+r8YS2g6Qh7XD/NPE Jun 20 19:29:51.738756 sshd-session[4172]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:29:51.743171 systemd-logind[1547]: New session 16 of user core. Jun 20 19:29:51.752953 systemd[1]: Started session-16.scope - Session 16 of User core. Jun 20 19:29:51.863231 sshd[4174]: Connection closed by 10.0.0.1 port 42436 Jun 20 19:29:51.863583 sshd-session[4172]: pam_unix(sshd:session): session closed for user core Jun 20 19:29:51.875369 systemd[1]: sshd@15-10.0.0.126:22-10.0.0.1:42436.service: Deactivated successfully. Jun 20 19:29:51.877147 systemd[1]: session-16.scope: Deactivated successfully. Jun 20 19:29:51.877904 systemd-logind[1547]: Session 16 logged out. Waiting for processes to exit. Jun 20 19:29:51.880672 systemd[1]: Started sshd@16-10.0.0.126:22-10.0.0.1:42450.service - OpenSSH per-connection server daemon (10.0.0.1:42450). Jun 20 19:29:51.881346 systemd-logind[1547]: Removed session 16. 
Jun 20 19:29:51.935667 sshd[4187]: Accepted publickey for core from 10.0.0.1 port 42450 ssh2: RSA SHA256:YoQ4GiRtY5Hu7FaS/OnNYeCnsR+r8YS2g6Qh7XD/NPE Jun 20 19:29:51.937487 sshd-session[4187]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:29:51.942341 systemd-logind[1547]: New session 17 of user core. Jun 20 19:29:51.956949 systemd[1]: Started session-17.scope - Session 17 of User core. Jun 20 19:29:52.519157 sshd[4189]: Connection closed by 10.0.0.1 port 42450 Jun 20 19:29:52.519568 sshd-session[4187]: pam_unix(sshd:session): session closed for user core Jun 20 19:29:52.528659 systemd[1]: sshd@16-10.0.0.126:22-10.0.0.1:42450.service: Deactivated successfully. Jun 20 19:29:52.530689 systemd[1]: session-17.scope: Deactivated successfully. Jun 20 19:29:52.531481 systemd-logind[1547]: Session 17 logged out. Waiting for processes to exit. Jun 20 19:29:52.534784 systemd[1]: Started sshd@17-10.0.0.126:22-10.0.0.1:42460.service - OpenSSH per-connection server daemon (10.0.0.1:42460). Jun 20 19:29:52.535529 systemd-logind[1547]: Removed session 17. Jun 20 19:29:52.601692 sshd[4201]: Accepted publickey for core from 10.0.0.1 port 42460 ssh2: RSA SHA256:YoQ4GiRtY5Hu7FaS/OnNYeCnsR+r8YS2g6Qh7XD/NPE Jun 20 19:29:52.603676 sshd-session[4201]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:29:52.608831 systemd-logind[1547]: New session 18 of user core. Jun 20 19:29:52.619010 systemd[1]: Started session-18.scope - Session 18 of User core. Jun 20 19:29:53.640105 sshd[4203]: Connection closed by 10.0.0.1 port 42460 Jun 20 19:29:53.640590 sshd-session[4201]: pam_unix(sshd:session): session closed for user core Jun 20 19:29:53.654404 systemd[1]: sshd@17-10.0.0.126:22-10.0.0.1:42460.service: Deactivated successfully. Jun 20 19:29:53.656710 systemd[1]: session-18.scope: Deactivated successfully. Jun 20 19:29:53.659662 systemd-logind[1547]: Session 18 logged out. Waiting for processes to exit. 
Jun 20 19:29:53.662723 systemd[1]: Started sshd@18-10.0.0.126:22-10.0.0.1:56462.service - OpenSSH per-connection server daemon (10.0.0.1:56462). Jun 20 19:29:53.664004 systemd-logind[1547]: Removed session 18. Jun 20 19:29:53.714493 sshd[4222]: Accepted publickey for core from 10.0.0.1 port 56462 ssh2: RSA SHA256:YoQ4GiRtY5Hu7FaS/OnNYeCnsR+r8YS2g6Qh7XD/NPE Jun 20 19:29:53.716170 sshd-session[4222]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:29:53.720827 systemd-logind[1547]: New session 19 of user core. Jun 20 19:29:53.730975 systemd[1]: Started session-19.scope - Session 19 of User core. Jun 20 19:29:53.943506 sshd[4224]: Connection closed by 10.0.0.1 port 56462 Jun 20 19:29:53.943907 sshd-session[4222]: pam_unix(sshd:session): session closed for user core Jun 20 19:29:53.952975 systemd[1]: sshd@18-10.0.0.126:22-10.0.0.1:56462.service: Deactivated successfully. Jun 20 19:29:53.955006 systemd[1]: session-19.scope: Deactivated successfully. Jun 20 19:29:53.955905 systemd-logind[1547]: Session 19 logged out. Waiting for processes to exit. Jun 20 19:29:53.959138 systemd[1]: Started sshd@19-10.0.0.126:22-10.0.0.1:56472.service - OpenSSH per-connection server daemon (10.0.0.1:56472). Jun 20 19:29:53.959731 systemd-logind[1547]: Removed session 19. Jun 20 19:29:54.016814 sshd[4236]: Accepted publickey for core from 10.0.0.1 port 56472 ssh2: RSA SHA256:YoQ4GiRtY5Hu7FaS/OnNYeCnsR+r8YS2g6Qh7XD/NPE Jun 20 19:29:54.018732 sshd-session[4236]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:29:54.023460 systemd-logind[1547]: New session 20 of user core. Jun 20 19:29:54.032945 systemd[1]: Started session-20.scope - Session 20 of User core. 
Jun 20 19:29:54.145359 sshd[4240]: Connection closed by 10.0.0.1 port 56472 Jun 20 19:29:54.145682 sshd-session[4236]: pam_unix(sshd:session): session closed for user core Jun 20 19:29:54.150618 systemd[1]: sshd@19-10.0.0.126:22-10.0.0.1:56472.service: Deactivated successfully. Jun 20 19:29:54.153278 systemd[1]: session-20.scope: Deactivated successfully. Jun 20 19:29:54.154269 systemd-logind[1547]: Session 20 logged out. Waiting for processes to exit. Jun 20 19:29:54.156086 systemd-logind[1547]: Removed session 20. Jun 20 19:29:59.169208 systemd[1]: Started sshd@20-10.0.0.126:22-10.0.0.1:56482.service - OpenSSH per-connection server daemon (10.0.0.1:56482). Jun 20 19:29:59.234930 sshd[4253]: Accepted publickey for core from 10.0.0.1 port 56482 ssh2: RSA SHA256:YoQ4GiRtY5Hu7FaS/OnNYeCnsR+r8YS2g6Qh7XD/NPE Jun 20 19:29:59.236351 sshd-session[4253]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:29:59.241275 systemd-logind[1547]: New session 21 of user core. Jun 20 19:29:59.255005 systemd[1]: Started session-21.scope - Session 21 of User core. Jun 20 19:29:59.301025 kubelet[2682]: E0620 19:29:59.300966 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:29:59.369425 sshd[4255]: Connection closed by 10.0.0.1 port 56482 Jun 20 19:29:59.369871 sshd-session[4253]: pam_unix(sshd:session): session closed for user core Jun 20 19:29:59.374934 systemd[1]: sshd@20-10.0.0.126:22-10.0.0.1:56482.service: Deactivated successfully. Jun 20 19:29:59.376807 systemd[1]: session-21.scope: Deactivated successfully. Jun 20 19:29:59.377717 systemd-logind[1547]: Session 21 logged out. Waiting for processes to exit. Jun 20 19:29:59.379698 systemd-logind[1547]: Removed session 21. 
Jun 20 19:30:04.387029 systemd[1]: Started sshd@21-10.0.0.126:22-10.0.0.1:32876.service - OpenSSH per-connection server daemon (10.0.0.1:32876). Jun 20 19:30:04.440325 sshd[4270]: Accepted publickey for core from 10.0.0.1 port 32876 ssh2: RSA SHA256:YoQ4GiRtY5Hu7FaS/OnNYeCnsR+r8YS2g6Qh7XD/NPE Jun 20 19:30:04.441943 sshd-session[4270]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:30:04.446396 systemd-logind[1547]: New session 22 of user core. Jun 20 19:30:04.456957 systemd[1]: Started session-22.scope - Session 22 of User core. Jun 20 19:30:04.617101 sshd[4272]: Connection closed by 10.0.0.1 port 32876 Jun 20 19:30:04.617435 sshd-session[4270]: pam_unix(sshd:session): session closed for user core Jun 20 19:30:04.622137 systemd[1]: sshd@21-10.0.0.126:22-10.0.0.1:32876.service: Deactivated successfully. Jun 20 19:30:04.624277 systemd[1]: session-22.scope: Deactivated successfully. Jun 20 19:30:04.625160 systemd-logind[1547]: Session 22 logged out. Waiting for processes to exit. Jun 20 19:30:04.626363 systemd-logind[1547]: Removed session 22. Jun 20 19:30:09.631495 systemd[1]: Started sshd@22-10.0.0.126:22-10.0.0.1:32882.service - OpenSSH per-connection server daemon (10.0.0.1:32882). Jun 20 19:30:09.685496 sshd[4286]: Accepted publickey for core from 10.0.0.1 port 32882 ssh2: RSA SHA256:YoQ4GiRtY5Hu7FaS/OnNYeCnsR+r8YS2g6Qh7XD/NPE Jun 20 19:30:09.687473 sshd-session[4286]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:30:09.692605 systemd-logind[1547]: New session 23 of user core. Jun 20 19:30:09.700121 systemd[1]: Started session-23.scope - Session 23 of User core. Jun 20 19:30:09.815786 sshd[4288]: Connection closed by 10.0.0.1 port 32882 Jun 20 19:30:09.816148 sshd-session[4286]: pam_unix(sshd:session): session closed for user core Jun 20 19:30:09.820964 systemd[1]: sshd@22-10.0.0.126:22-10.0.0.1:32882.service: Deactivated successfully. 
Jun 20 19:30:09.823082 systemd[1]: session-23.scope: Deactivated successfully. Jun 20 19:30:09.823901 systemd-logind[1547]: Session 23 logged out. Waiting for processes to exit. Jun 20 19:30:09.825212 systemd-logind[1547]: Removed session 23. Jun 20 19:30:14.301237 kubelet[2682]: E0620 19:30:14.301183 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:30:14.833433 systemd[1]: Started sshd@23-10.0.0.126:22-10.0.0.1:51792.service - OpenSSH per-connection server daemon (10.0.0.1:51792). Jun 20 19:30:14.893700 sshd[4301]: Accepted publickey for core from 10.0.0.1 port 51792 ssh2: RSA SHA256:YoQ4GiRtY5Hu7FaS/OnNYeCnsR+r8YS2g6Qh7XD/NPE Jun 20 19:30:14.895257 sshd-session[4301]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:30:14.899722 systemd-logind[1547]: New session 24 of user core. Jun 20 19:30:14.907954 systemd[1]: Started session-24.scope - Session 24 of User core. Jun 20 19:30:15.017506 sshd[4303]: Connection closed by 10.0.0.1 port 51792 Jun 20 19:30:15.018060 sshd-session[4301]: pam_unix(sshd:session): session closed for user core Jun 20 19:30:15.029809 systemd[1]: sshd@23-10.0.0.126:22-10.0.0.1:51792.service: Deactivated successfully. Jun 20 19:30:15.031956 systemd[1]: session-24.scope: Deactivated successfully. Jun 20 19:30:15.032952 systemd-logind[1547]: Session 24 logged out. Waiting for processes to exit. Jun 20 19:30:15.037180 systemd[1]: Started sshd@24-10.0.0.126:22-10.0.0.1:51798.service - OpenSSH per-connection server daemon (10.0.0.1:51798). Jun 20 19:30:15.037866 systemd-logind[1547]: Removed session 24. 
Jun 20 19:30:15.093997 sshd[4316]: Accepted publickey for core from 10.0.0.1 port 51798 ssh2: RSA SHA256:YoQ4GiRtY5Hu7FaS/OnNYeCnsR+r8YS2g6Qh7XD/NPE Jun 20 19:30:15.095595 sshd-session[4316]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:30:15.100379 systemd-logind[1547]: New session 25 of user core. Jun 20 19:30:15.117961 systemd[1]: Started session-25.scope - Session 25 of User core. Jun 20 19:30:16.300602 kubelet[2682]: E0620 19:30:16.300555 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:30:16.452980 kubelet[2682]: I0620 19:30:16.452840 2682 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-2c4kh" podStartSLOduration=83.452801216 podStartE2EDuration="1m23.452801216s" podCreationTimestamp="2025-06-20 19:28:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:29:15.050224831 +0000 UTC m=+26.833846272" watchObservedRunningTime="2025-06-20 19:30:16.452801216 +0000 UTC m=+88.236422657" Jun 20 19:30:16.461729 containerd[1577]: time="2025-06-20T19:30:16.461588640Z" level=info msg="StopContainer for \"a23b46d6a499f7354c95d17dccda73815f1049174c41a93694c9476dcdebb25d\" with timeout 30 (s)" Jun 20 19:30:16.471134 containerd[1577]: time="2025-06-20T19:30:16.471070221Z" level=info msg="Stop container \"a23b46d6a499f7354c95d17dccda73815f1049174c41a93694c9476dcdebb25d\" with signal terminated" Jun 20 19:30:16.487365 systemd[1]: cri-containerd-a23b46d6a499f7354c95d17dccda73815f1049174c41a93694c9476dcdebb25d.scope: Deactivated successfully. 
Jun 20 19:30:16.489198 containerd[1577]: time="2025-06-20T19:30:16.488996446Z" level=info msg="received exit event container_id:\"a23b46d6a499f7354c95d17dccda73815f1049174c41a93694c9476dcdebb25d\" id:\"a23b46d6a499f7354c95d17dccda73815f1049174c41a93694c9476dcdebb25d\" pid:3233 exited_at:{seconds:1750447816 nanos:488522229}" Jun 20 19:30:16.495007 containerd[1577]: time="2025-06-20T19:30:16.494942717Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a23b46d6a499f7354c95d17dccda73815f1049174c41a93694c9476dcdebb25d\" id:\"a23b46d6a499f7354c95d17dccda73815f1049174c41a93694c9476dcdebb25d\" pid:3233 exited_at:{seconds:1750447816 nanos:488522229}" Jun 20 19:30:16.504636 containerd[1577]: time="2025-06-20T19:30:16.504576696Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jun 20 19:30:16.505605 containerd[1577]: time="2025-06-20T19:30:16.505572953Z" level=info msg="TaskExit event in podsandbox handler container_id:\"534592dc240f5dd6344871f534417b87dff006328293a6186876ce65e2f97db8\" id:\"a65f8bf36c663603b0b6f21aeb8ec19051ce9eb506c4cc3229aa9a7bb6b4b2f7\" pid:4344 exited_at:{seconds:1750447816 nanos:505216276}" Jun 20 19:30:16.507709 containerd[1577]: time="2025-06-20T19:30:16.507675362Z" level=info msg="StopContainer for \"534592dc240f5dd6344871f534417b87dff006328293a6186876ce65e2f97db8\" with timeout 2 (s)" Jun 20 19:30:16.507964 containerd[1577]: time="2025-06-20T19:30:16.507935811Z" level=info msg="Stop container \"534592dc240f5dd6344871f534417b87dff006328293a6186876ce65e2f97db8\" with signal terminated" Jun 20 19:30:16.516602 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a23b46d6a499f7354c95d17dccda73815f1049174c41a93694c9476dcdebb25d-rootfs.mount: Deactivated successfully. 
Jun 20 19:30:16.517773 systemd-networkd[1483]: lxc_health: Link DOWN Jun 20 19:30:16.517780 systemd-networkd[1483]: lxc_health: Lost carrier Jun 20 19:30:16.534660 containerd[1577]: time="2025-06-20T19:30:16.534620093Z" level=info msg="StopContainer for \"a23b46d6a499f7354c95d17dccda73815f1049174c41a93694c9476dcdebb25d\" returns successfully" Jun 20 19:30:16.535302 containerd[1577]: time="2025-06-20T19:30:16.535276496Z" level=info msg="StopPodSandbox for \"74e40d21ae2eae7f8e1284ac431f7d8ca6ec579f56ec149d9113a5ef37d40c2c\"" Jun 20 19:30:16.535360 containerd[1577]: time="2025-06-20T19:30:16.535341913Z" level=info msg="Container to stop \"a23b46d6a499f7354c95d17dccda73815f1049174c41a93694c9476dcdebb25d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 20 19:30:16.543467 systemd[1]: cri-containerd-74e40d21ae2eae7f8e1284ac431f7d8ca6ec579f56ec149d9113a5ef37d40c2c.scope: Deactivated successfully. Jun 20 19:30:16.545203 systemd[1]: cri-containerd-534592dc240f5dd6344871f534417b87dff006328293a6186876ce65e2f97db8.scope: Deactivated successfully. Jun 20 19:30:16.545621 systemd[1]: cri-containerd-534592dc240f5dd6344871f534417b87dff006328293a6186876ce65e2f97db8.scope: Consumed 6.633s CPU time, 124.4M memory peak, 212K read from disk, 13.3M written to disk. 
Jun 20 19:30:16.546640 containerd[1577]: time="2025-06-20T19:30:16.546258489Z" level=info msg="TaskExit event in podsandbox handler container_id:\"74e40d21ae2eae7f8e1284ac431f7d8ca6ec579f56ec149d9113a5ef37d40c2c\" id:\"74e40d21ae2eae7f8e1284ac431f7d8ca6ec579f56ec149d9113a5ef37d40c2c\" pid:3001 exit_status:137 exited_at:{seconds:1750447816 nanos:544776152}" Jun 20 19:30:16.549098 containerd[1577]: time="2025-06-20T19:30:16.549015036Z" level=info msg="received exit event container_id:\"534592dc240f5dd6344871f534417b87dff006328293a6186876ce65e2f97db8\" id:\"534592dc240f5dd6344871f534417b87dff006328293a6186876ce65e2f97db8\" pid:3344 exited_at:{seconds:1750447816 nanos:548736011}" Jun 20 19:30:16.574304 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-534592dc240f5dd6344871f534417b87dff006328293a6186876ce65e2f97db8-rootfs.mount: Deactivated successfully. Jun 20 19:30:16.581554 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-74e40d21ae2eae7f8e1284ac431f7d8ca6ec579f56ec149d9113a5ef37d40c2c-rootfs.mount: Deactivated successfully. 
Jun 20 19:30:16.626258 containerd[1577]: time="2025-06-20T19:30:16.626201200Z" level=info msg="shim disconnected" id=74e40d21ae2eae7f8e1284ac431f7d8ca6ec579f56ec149d9113a5ef37d40c2c namespace=k8s.io Jun 20 19:30:16.626258 containerd[1577]: time="2025-06-20T19:30:16.626248673Z" level=warning msg="cleaning up after shim disconnected" id=74e40d21ae2eae7f8e1284ac431f7d8ca6ec579f56ec149d9113a5ef37d40c2c namespace=k8s.io Jun 20 19:30:16.655671 containerd[1577]: time="2025-06-20T19:30:16.626259775Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 20 19:30:16.655869 containerd[1577]: time="2025-06-20T19:30:16.632258739Z" level=info msg="StopContainer for \"534592dc240f5dd6344871f534417b87dff006328293a6186876ce65e2f97db8\" returns successfully" Jun 20 19:30:16.656448 containerd[1577]: time="2025-06-20T19:30:16.656402434Z" level=info msg="StopPodSandbox for \"f98ae6a79219e75824af8ab7f370449f2efb23b3676c5d9e12f61ed056de649a\"" Jun 20 19:30:16.656627 containerd[1577]: time="2025-06-20T19:30:16.656498703Z" level=info msg="Container to stop \"b71e1fd6290ebe02596efe0277d598fe934e08c2d3d1ba11da249ca80cbb188f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 20 19:30:16.656627 containerd[1577]: time="2025-06-20T19:30:16.656518681Z" level=info msg="Container to stop \"534592dc240f5dd6344871f534417b87dff006328293a6186876ce65e2f97db8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 20 19:30:16.656627 containerd[1577]: time="2025-06-20T19:30:16.656530455Z" level=info msg="Container to stop \"99f5f47a933bd649cfaf960ce98234e811e155f8e954605bebdbd658558db1f4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 20 19:30:16.656627 containerd[1577]: time="2025-06-20T19:30:16.656542688Z" level=info msg="Container to stop \"e5c065bbb697d2794a98b2d0f08868bf7ffc5939e471325df32a202725acfc79\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 20 19:30:16.656627 
containerd[1577]: time="2025-06-20T19:30:16.656554231Z" level=info msg="Container to stop \"52c7db4b75f39ea35a78cfde563ba9b64fbef0938a12a7f8f4fc9dd43870fd88\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 20 19:30:16.668237 systemd[1]: cri-containerd-f98ae6a79219e75824af8ab7f370449f2efb23b3676c5d9e12f61ed056de649a.scope: Deactivated successfully. Jun 20 19:30:16.684973 containerd[1577]: time="2025-06-20T19:30:16.684846336Z" level=info msg="TaskExit event in podsandbox handler container_id:\"534592dc240f5dd6344871f534417b87dff006328293a6186876ce65e2f97db8\" id:\"534592dc240f5dd6344871f534417b87dff006328293a6186876ce65e2f97db8\" pid:3344 exited_at:{seconds:1750447816 nanos:548736011}" Jun 20 19:30:16.686410 containerd[1577]: time="2025-06-20T19:30:16.685093408Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f98ae6a79219e75824af8ab7f370449f2efb23b3676c5d9e12f61ed056de649a\" id:\"f98ae6a79219e75824af8ab7f370449f2efb23b3676c5d9e12f61ed056de649a\" pid:2836 exit_status:137 exited_at:{seconds:1750447816 nanos:669458772}" Jun 20 19:30:16.689184 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-74e40d21ae2eae7f8e1284ac431f7d8ca6ec579f56ec149d9113a5ef37d40c2c-shm.mount: Deactivated successfully. Jun 20 19:30:16.694811 containerd[1577]: time="2025-06-20T19:30:16.694685515Z" level=info msg="received exit event sandbox_id:\"74e40d21ae2eae7f8e1284ac431f7d8ca6ec579f56ec149d9113a5ef37d40c2c\" exit_status:137 exited_at:{seconds:1750447816 nanos:544776152}" Jun 20 19:30:16.698681 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f98ae6a79219e75824af8ab7f370449f2efb23b3676c5d9e12f61ed056de649a-rootfs.mount: Deactivated successfully. 
Jun 20 19:30:16.700648 containerd[1577]: time="2025-06-20T19:30:16.700589284Z" level=info msg="TearDown network for sandbox \"74e40d21ae2eae7f8e1284ac431f7d8ca6ec579f56ec149d9113a5ef37d40c2c\" successfully" Jun 20 19:30:16.700648 containerd[1577]: time="2025-06-20T19:30:16.700642097Z" level=info msg="StopPodSandbox for \"74e40d21ae2eae7f8e1284ac431f7d8ca6ec579f56ec149d9113a5ef37d40c2c\" returns successfully" Jun 20 19:30:16.705072 containerd[1577]: time="2025-06-20T19:30:16.704966969Z" level=info msg="received exit event sandbox_id:\"f98ae6a79219e75824af8ab7f370449f2efb23b3676c5d9e12f61ed056de649a\" exit_status:137 exited_at:{seconds:1750447816 nanos:669458772}" Jun 20 19:30:16.706558 containerd[1577]: time="2025-06-20T19:30:16.706464085Z" level=info msg="shim disconnected" id=f98ae6a79219e75824af8ab7f370449f2efb23b3676c5d9e12f61ed056de649a namespace=k8s.io Jun 20 19:30:16.706558 containerd[1577]: time="2025-06-20T19:30:16.706502480Z" level=warning msg="cleaning up after shim disconnected" id=f98ae6a79219e75824af8ab7f370449f2efb23b3676c5d9e12f61ed056de649a namespace=k8s.io Jun 20 19:30:16.706558 containerd[1577]: time="2025-06-20T19:30:16.706510877Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 20 19:30:16.712808 containerd[1577]: time="2025-06-20T19:30:16.712749360Z" level=info msg="TearDown network for sandbox \"f98ae6a79219e75824af8ab7f370449f2efb23b3676c5d9e12f61ed056de649a\" successfully" Jun 20 19:30:16.712808 containerd[1577]: time="2025-06-20T19:30:16.712782564Z" level=info msg="StopPodSandbox for \"f98ae6a79219e75824af8ab7f370449f2efb23b3676c5d9e12f61ed056de649a\" returns successfully" Jun 20 19:30:16.779916 kubelet[2682]: I0620 19:30:16.779860 2682 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4lrlp\" (UniqueName: \"kubernetes.io/projected/bf6fcb13-d682-471c-ba56-4cc814165f9b-kube-api-access-4lrlp\") pod \"bf6fcb13-d682-471c-ba56-4cc814165f9b\" (UID: \"bf6fcb13-d682-471c-ba56-4cc814165f9b\") 
" Jun 20 19:30:16.779916 kubelet[2682]: I0620 19:30:16.779909 2682 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/85efc7db-489b-427b-acda-5c79ac2e7a72-cilium-config-path\") pod \"85efc7db-489b-427b-acda-5c79ac2e7a72\" (UID: \"85efc7db-489b-427b-acda-5c79ac2e7a72\") " Jun 20 19:30:16.780128 kubelet[2682]: I0620 19:30:16.779931 2682 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bf6fcb13-d682-471c-ba56-4cc814165f9b-cilium-run\") pod \"bf6fcb13-d682-471c-ba56-4cc814165f9b\" (UID: \"bf6fcb13-d682-471c-ba56-4cc814165f9b\") " Jun 20 19:30:16.780128 kubelet[2682]: I0620 19:30:16.779947 2682 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bf6fcb13-d682-471c-ba56-4cc814165f9b-host-proc-sys-kernel\") pod \"bf6fcb13-d682-471c-ba56-4cc814165f9b\" (UID: \"bf6fcb13-d682-471c-ba56-4cc814165f9b\") " Jun 20 19:30:16.780128 kubelet[2682]: I0620 19:30:16.779964 2682 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bf6fcb13-d682-471c-ba56-4cc814165f9b-bpf-maps\") pod \"bf6fcb13-d682-471c-ba56-4cc814165f9b\" (UID: \"bf6fcb13-d682-471c-ba56-4cc814165f9b\") " Jun 20 19:30:16.780128 kubelet[2682]: I0620 19:30:16.779981 2682 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bf6fcb13-d682-471c-ba56-4cc814165f9b-etc-cni-netd\") pod \"bf6fcb13-d682-471c-ba56-4cc814165f9b\" (UID: \"bf6fcb13-d682-471c-ba56-4cc814165f9b\") " Jun 20 19:30:16.780128 kubelet[2682]: I0620 19:30:16.779995 2682 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bf6fcb13-d682-471c-ba56-4cc814165f9b-cni-path\") 
pod \"bf6fcb13-d682-471c-ba56-4cc814165f9b\" (UID: \"bf6fcb13-d682-471c-ba56-4cc814165f9b\") " Jun 20 19:30:16.780128 kubelet[2682]: I0620 19:30:16.780013 2682 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bf6fcb13-d682-471c-ba56-4cc814165f9b-xtables-lock\") pod \"bf6fcb13-d682-471c-ba56-4cc814165f9b\" (UID: \"bf6fcb13-d682-471c-ba56-4cc814165f9b\") " Jun 20 19:30:16.780282 kubelet[2682]: I0620 19:30:16.780032 2682 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tpgzx\" (UniqueName: \"kubernetes.io/projected/85efc7db-489b-427b-acda-5c79ac2e7a72-kube-api-access-tpgzx\") pod \"85efc7db-489b-427b-acda-5c79ac2e7a72\" (UID: \"85efc7db-489b-427b-acda-5c79ac2e7a72\") " Jun 20 19:30:16.780282 kubelet[2682]: I0620 19:30:16.780050 2682 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bf6fcb13-d682-471c-ba56-4cc814165f9b-hostproc\") pod \"bf6fcb13-d682-471c-ba56-4cc814165f9b\" (UID: \"bf6fcb13-d682-471c-ba56-4cc814165f9b\") " Jun 20 19:30:16.780282 kubelet[2682]: I0620 19:30:16.780070 2682 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bf6fcb13-d682-471c-ba56-4cc814165f9b-cilium-config-path\") pod \"bf6fcb13-d682-471c-ba56-4cc814165f9b\" (UID: \"bf6fcb13-d682-471c-ba56-4cc814165f9b\") " Jun 20 19:30:16.780282 kubelet[2682]: I0620 19:30:16.780084 2682 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bf6fcb13-d682-471c-ba56-4cc814165f9b-cilium-cgroup\") pod \"bf6fcb13-d682-471c-ba56-4cc814165f9b\" (UID: \"bf6fcb13-d682-471c-ba56-4cc814165f9b\") " Jun 20 19:30:16.780282 kubelet[2682]: I0620 19:30:16.780099 2682 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bf6fcb13-d682-471c-ba56-4cc814165f9b-hubble-tls\") pod \"bf6fcb13-d682-471c-ba56-4cc814165f9b\" (UID: \"bf6fcb13-d682-471c-ba56-4cc814165f9b\") " Jun 20 19:30:16.780282 kubelet[2682]: I0620 19:30:16.780126 2682 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bf6fcb13-d682-471c-ba56-4cc814165f9b-clustermesh-secrets\") pod \"bf6fcb13-d682-471c-ba56-4cc814165f9b\" (UID: \"bf6fcb13-d682-471c-ba56-4cc814165f9b\") " Jun 20 19:30:16.780420 kubelet[2682]: I0620 19:30:16.780144 2682 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bf6fcb13-d682-471c-ba56-4cc814165f9b-lib-modules\") pod \"bf6fcb13-d682-471c-ba56-4cc814165f9b\" (UID: \"bf6fcb13-d682-471c-ba56-4cc814165f9b\") " Jun 20 19:30:16.780420 kubelet[2682]: I0620 19:30:16.780160 2682 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bf6fcb13-d682-471c-ba56-4cc814165f9b-host-proc-sys-net\") pod \"bf6fcb13-d682-471c-ba56-4cc814165f9b\" (UID: \"bf6fcb13-d682-471c-ba56-4cc814165f9b\") " Jun 20 19:30:16.780420 kubelet[2682]: I0620 19:30:16.780232 2682 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bf6fcb13-d682-471c-ba56-4cc814165f9b-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "bf6fcb13-d682-471c-ba56-4cc814165f9b" (UID: "bf6fcb13-d682-471c-ba56-4cc814165f9b"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jun 20 19:30:16.780420 kubelet[2682]: I0620 19:30:16.780271 2682 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bf6fcb13-d682-471c-ba56-4cc814165f9b-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "bf6fcb13-d682-471c-ba56-4cc814165f9b" (UID: "bf6fcb13-d682-471c-ba56-4cc814165f9b"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jun 20 19:30:16.780420 kubelet[2682]: I0620 19:30:16.780289 2682 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bf6fcb13-d682-471c-ba56-4cc814165f9b-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "bf6fcb13-d682-471c-ba56-4cc814165f9b" (UID: "bf6fcb13-d682-471c-ba56-4cc814165f9b"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jun 20 19:30:16.780531 kubelet[2682]: I0620 19:30:16.780304 2682 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bf6fcb13-d682-471c-ba56-4cc814165f9b-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "bf6fcb13-d682-471c-ba56-4cc814165f9b" (UID: "bf6fcb13-d682-471c-ba56-4cc814165f9b"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jun 20 19:30:16.780531 kubelet[2682]: I0620 19:30:16.780319 2682 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bf6fcb13-d682-471c-ba56-4cc814165f9b-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "bf6fcb13-d682-471c-ba56-4cc814165f9b" (UID: "bf6fcb13-d682-471c-ba56-4cc814165f9b"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jun 20 19:30:16.780531 kubelet[2682]: I0620 19:30:16.780334 2682 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bf6fcb13-d682-471c-ba56-4cc814165f9b-cni-path" (OuterVolumeSpecName: "cni-path") pod "bf6fcb13-d682-471c-ba56-4cc814165f9b" (UID: "bf6fcb13-d682-471c-ba56-4cc814165f9b"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jun 20 19:30:16.780531 kubelet[2682]: I0620 19:30:16.780349 2682 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bf6fcb13-d682-471c-ba56-4cc814165f9b-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "bf6fcb13-d682-471c-ba56-4cc814165f9b" (UID: "bf6fcb13-d682-471c-ba56-4cc814165f9b"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jun 20 19:30:16.780949 kubelet[2682]: I0620 19:30:16.780611 2682 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bf6fcb13-d682-471c-ba56-4cc814165f9b-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "bf6fcb13-d682-471c-ba56-4cc814165f9b" (UID: "bf6fcb13-d682-471c-ba56-4cc814165f9b"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jun 20 19:30:16.780949 kubelet[2682]: I0620 19:30:16.780729 2682 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bf6fcb13-d682-471c-ba56-4cc814165f9b-hostproc" (OuterVolumeSpecName: "hostproc") pod "bf6fcb13-d682-471c-ba56-4cc814165f9b" (UID: "bf6fcb13-d682-471c-ba56-4cc814165f9b"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jun 20 19:30:16.784435 kubelet[2682]: I0620 19:30:16.784379 2682 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85efc7db-489b-427b-acda-5c79ac2e7a72-kube-api-access-tpgzx" (OuterVolumeSpecName: "kube-api-access-tpgzx") pod "85efc7db-489b-427b-acda-5c79ac2e7a72" (UID: "85efc7db-489b-427b-acda-5c79ac2e7a72"). InnerVolumeSpecName "kube-api-access-tpgzx". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jun 20 19:30:16.784695 kubelet[2682]: I0620 19:30:16.784551 2682 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85efc7db-489b-427b-acda-5c79ac2e7a72-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "85efc7db-489b-427b-acda-5c79ac2e7a72" (UID: "85efc7db-489b-427b-acda-5c79ac2e7a72"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jun 20 19:30:16.784859 kubelet[2682]: I0620 19:30:16.784786 2682 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bf6fcb13-d682-471c-ba56-4cc814165f9b-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "bf6fcb13-d682-471c-ba56-4cc814165f9b" (UID: "bf6fcb13-d682-471c-ba56-4cc814165f9b"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jun 20 19:30:16.784984 kubelet[2682]: I0620 19:30:16.784902 2682 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf6fcb13-d682-471c-ba56-4cc814165f9b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "bf6fcb13-d682-471c-ba56-4cc814165f9b" (UID: "bf6fcb13-d682-471c-ba56-4cc814165f9b"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jun 20 19:30:16.785620 kubelet[2682]: I0620 19:30:16.785579 2682 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf6fcb13-d682-471c-ba56-4cc814165f9b-kube-api-access-4lrlp" (OuterVolumeSpecName: "kube-api-access-4lrlp") pod "bf6fcb13-d682-471c-ba56-4cc814165f9b" (UID: "bf6fcb13-d682-471c-ba56-4cc814165f9b"). InnerVolumeSpecName "kube-api-access-4lrlp". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jun 20 19:30:16.786540 kubelet[2682]: I0620 19:30:16.786518 2682 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf6fcb13-d682-471c-ba56-4cc814165f9b-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "bf6fcb13-d682-471c-ba56-4cc814165f9b" (UID: "bf6fcb13-d682-471c-ba56-4cc814165f9b"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jun 20 19:30:16.788464 kubelet[2682]: I0620 19:30:16.788419 2682 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf6fcb13-d682-471c-ba56-4cc814165f9b-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "bf6fcb13-d682-471c-ba56-4cc814165f9b" (UID: "bf6fcb13-d682-471c-ba56-4cc814165f9b"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jun 20 19:30:16.881144 kubelet[2682]: I0620 19:30:16.880991 2682 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bf6fcb13-d682-471c-ba56-4cc814165f9b-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jun 20 19:30:16.881144 kubelet[2682]: I0620 19:30:16.881026 2682 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bf6fcb13-d682-471c-ba56-4cc814165f9b-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Jun 20 19:30:16.881144 kubelet[2682]: I0620 19:30:16.881039 2682 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bf6fcb13-d682-471c-ba56-4cc814165f9b-hostproc\") on node \"localhost\" DevicePath \"\"" Jun 20 19:30:16.881144 kubelet[2682]: I0620 19:30:16.881047 2682 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bf6fcb13-d682-471c-ba56-4cc814165f9b-hubble-tls\") on node \"localhost\" DevicePath \"\"" Jun 20 19:30:16.881144 kubelet[2682]: I0620 19:30:16.881055 2682 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bf6fcb13-d682-471c-ba56-4cc814165f9b-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Jun 20 19:30:16.881144 kubelet[2682]: I0620 19:30:16.881063 2682 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bf6fcb13-d682-471c-ba56-4cc814165f9b-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Jun 20 19:30:16.881144 kubelet[2682]: I0620 19:30:16.881070 2682 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bf6fcb13-d682-471c-ba56-4cc814165f9b-lib-modules\") on node \"localhost\" DevicePath \"\"" Jun 20 19:30:16.881144 kubelet[2682]: I0620 
19:30:16.881079 2682 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bf6fcb13-d682-471c-ba56-4cc814165f9b-bpf-maps\") on node \"localhost\" DevicePath \"\"" Jun 20 19:30:16.881478 kubelet[2682]: I0620 19:30:16.881087 2682 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4lrlp\" (UniqueName: \"kubernetes.io/projected/bf6fcb13-d682-471c-ba56-4cc814165f9b-kube-api-access-4lrlp\") on node \"localhost\" DevicePath \"\"" Jun 20 19:30:16.881478 kubelet[2682]: I0620 19:30:16.881096 2682 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/85efc7db-489b-427b-acda-5c79ac2e7a72-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jun 20 19:30:16.881478 kubelet[2682]: I0620 19:30:16.881103 2682 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bf6fcb13-d682-471c-ba56-4cc814165f9b-cilium-run\") on node \"localhost\" DevicePath \"\"" Jun 20 19:30:16.881478 kubelet[2682]: I0620 19:30:16.881111 2682 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bf6fcb13-d682-471c-ba56-4cc814165f9b-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Jun 20 19:30:16.881478 kubelet[2682]: I0620 19:30:16.881127 2682 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bf6fcb13-d682-471c-ba56-4cc814165f9b-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Jun 20 19:30:16.881478 kubelet[2682]: I0620 19:30:16.881136 2682 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bf6fcb13-d682-471c-ba56-4cc814165f9b-cni-path\") on node \"localhost\" DevicePath \"\"" Jun 20 19:30:16.881478 kubelet[2682]: I0620 19:30:16.881144 2682 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/bf6fcb13-d682-471c-ba56-4cc814165f9b-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jun 20 19:30:16.881478 kubelet[2682]: I0620 19:30:16.881152 2682 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tpgzx\" (UniqueName: \"kubernetes.io/projected/85efc7db-489b-427b-acda-5c79ac2e7a72-kube-api-access-tpgzx\") on node \"localhost\" DevicePath \"\"" Jun 20 19:30:17.516078 systemd[1]: var-lib-kubelet-pods-85efc7db\x2d489b\x2d427b\x2dacda\x2d5c79ac2e7a72-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dtpgzx.mount: Deactivated successfully. Jun 20 19:30:17.516219 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f98ae6a79219e75824af8ab7f370449f2efb23b3676c5d9e12f61ed056de649a-shm.mount: Deactivated successfully. Jun 20 19:30:17.516316 systemd[1]: var-lib-kubelet-pods-bf6fcb13\x2dd682\x2d471c\x2dba56\x2d4cc814165f9b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4lrlp.mount: Deactivated successfully. Jun 20 19:30:17.516403 systemd[1]: var-lib-kubelet-pods-bf6fcb13\x2dd682\x2d471c\x2dba56\x2d4cc814165f9b-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jun 20 19:30:17.516477 systemd[1]: var-lib-kubelet-pods-bf6fcb13\x2dd682\x2d471c\x2dba56\x2d4cc814165f9b-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Jun 20 19:30:17.523355 kubelet[2682]: I0620 19:30:17.523272 2682 scope.go:117] "RemoveContainer" containerID="a23b46d6a499f7354c95d17dccda73815f1049174c41a93694c9476dcdebb25d" Jun 20 19:30:17.525770 containerd[1577]: time="2025-06-20T19:30:17.525716636Z" level=info msg="RemoveContainer for \"a23b46d6a499f7354c95d17dccda73815f1049174c41a93694c9476dcdebb25d\"" Jun 20 19:30:17.533358 containerd[1577]: time="2025-06-20T19:30:17.533312875Z" level=info msg="RemoveContainer for \"a23b46d6a499f7354c95d17dccda73815f1049174c41a93694c9476dcdebb25d\" returns successfully" Jun 20 19:30:17.537938 systemd[1]: Removed slice kubepods-besteffort-pod85efc7db_489b_427b_acda_5c79ac2e7a72.slice - libcontainer container kubepods-besteffort-pod85efc7db_489b_427b_acda_5c79ac2e7a72.slice. Jun 20 19:30:17.539987 systemd[1]: Removed slice kubepods-burstable-podbf6fcb13_d682_471c_ba56_4cc814165f9b.slice - libcontainer container kubepods-burstable-podbf6fcb13_d682_471c_ba56_4cc814165f9b.slice. Jun 20 19:30:17.540097 systemd[1]: kubepods-burstable-podbf6fcb13_d682_471c_ba56_4cc814165f9b.slice: Consumed 6.754s CPU time, 124.7M memory peak, 220K read from disk, 16.6M written to disk. 
Jun 20 19:30:17.543262 kubelet[2682]: I0620 19:30:17.543215 2682 scope.go:117] "RemoveContainer" containerID="a23b46d6a499f7354c95d17dccda73815f1049174c41a93694c9476dcdebb25d" Jun 20 19:30:17.543637 containerd[1577]: time="2025-06-20T19:30:17.543517540Z" level=error msg="ContainerStatus for \"a23b46d6a499f7354c95d17dccda73815f1049174c41a93694c9476dcdebb25d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a23b46d6a499f7354c95d17dccda73815f1049174c41a93694c9476dcdebb25d\": not found" Jun 20 19:30:17.547117 kubelet[2682]: E0620 19:30:17.547083 2682 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a23b46d6a499f7354c95d17dccda73815f1049174c41a93694c9476dcdebb25d\": not found" containerID="a23b46d6a499f7354c95d17dccda73815f1049174c41a93694c9476dcdebb25d" Jun 20 19:30:17.547215 kubelet[2682]: I0620 19:30:17.547126 2682 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a23b46d6a499f7354c95d17dccda73815f1049174c41a93694c9476dcdebb25d"} err="failed to get container status \"a23b46d6a499f7354c95d17dccda73815f1049174c41a93694c9476dcdebb25d\": rpc error: code = NotFound desc = an error occurred when try to find container \"a23b46d6a499f7354c95d17dccda73815f1049174c41a93694c9476dcdebb25d\": not found" Jun 20 19:30:17.547215 kubelet[2682]: I0620 19:30:17.547196 2682 scope.go:117] "RemoveContainer" containerID="534592dc240f5dd6344871f534417b87dff006328293a6186876ce65e2f97db8" Jun 20 19:30:17.549914 containerd[1577]: time="2025-06-20T19:30:17.549865258Z" level=info msg="RemoveContainer for \"534592dc240f5dd6344871f534417b87dff006328293a6186876ce65e2f97db8\"" Jun 20 19:30:17.562647 containerd[1577]: time="2025-06-20T19:30:17.562612313Z" level=info msg="RemoveContainer for \"534592dc240f5dd6344871f534417b87dff006328293a6186876ce65e2f97db8\" returns successfully" Jun 20 19:30:17.563356 
kubelet[2682]: I0620 19:30:17.562907 2682 scope.go:117] "RemoveContainer" containerID="b71e1fd6290ebe02596efe0277d598fe934e08c2d3d1ba11da249ca80cbb188f" Jun 20 19:30:17.564802 containerd[1577]: time="2025-06-20T19:30:17.564768331Z" level=info msg="RemoveContainer for \"b71e1fd6290ebe02596efe0277d598fe934e08c2d3d1ba11da249ca80cbb188f\"" Jun 20 19:30:17.569583 containerd[1577]: time="2025-06-20T19:30:17.569550350Z" level=info msg="RemoveContainer for \"b71e1fd6290ebe02596efe0277d598fe934e08c2d3d1ba11da249ca80cbb188f\" returns successfully" Jun 20 19:30:17.569828 kubelet[2682]: I0620 19:30:17.569777 2682 scope.go:117] "RemoveContainer" containerID="52c7db4b75f39ea35a78cfde563ba9b64fbef0938a12a7f8f4fc9dd43870fd88" Jun 20 19:30:17.572069 containerd[1577]: time="2025-06-20T19:30:17.572043213Z" level=info msg="RemoveContainer for \"52c7db4b75f39ea35a78cfde563ba9b64fbef0938a12a7f8f4fc9dd43870fd88\"" Jun 20 19:30:17.576125 containerd[1577]: time="2025-06-20T19:30:17.576087507Z" level=info msg="RemoveContainer for \"52c7db4b75f39ea35a78cfde563ba9b64fbef0938a12a7f8f4fc9dd43870fd88\" returns successfully" Jun 20 19:30:17.576376 kubelet[2682]: I0620 19:30:17.576322 2682 scope.go:117] "RemoveContainer" containerID="e5c065bbb697d2794a98b2d0f08868bf7ffc5939e471325df32a202725acfc79" Jun 20 19:30:17.577938 containerd[1577]: time="2025-06-20T19:30:17.577889946Z" level=info msg="RemoveContainer for \"e5c065bbb697d2794a98b2d0f08868bf7ffc5939e471325df32a202725acfc79\"" Jun 20 19:30:17.641586 containerd[1577]: time="2025-06-20T19:30:17.641539577Z" level=info msg="RemoveContainer for \"e5c065bbb697d2794a98b2d0f08868bf7ffc5939e471325df32a202725acfc79\" returns successfully" Jun 20 19:30:17.641883 kubelet[2682]: I0620 19:30:17.641839 2682 scope.go:117] "RemoveContainer" containerID="99f5f47a933bd649cfaf960ce98234e811e155f8e954605bebdbd658558db1f4" Jun 20 19:30:17.643439 containerd[1577]: time="2025-06-20T19:30:17.643401993Z" level=info msg="RemoveContainer for 
\"99f5f47a933bd649cfaf960ce98234e811e155f8e954605bebdbd658558db1f4\"" Jun 20 19:30:17.647776 containerd[1577]: time="2025-06-20T19:30:17.647720850Z" level=info msg="RemoveContainer for \"99f5f47a933bd649cfaf960ce98234e811e155f8e954605bebdbd658558db1f4\" returns successfully" Jun 20 19:30:17.647981 kubelet[2682]: I0620 19:30:17.647939 2682 scope.go:117] "RemoveContainer" containerID="534592dc240f5dd6344871f534417b87dff006328293a6186876ce65e2f97db8" Jun 20 19:30:17.648198 containerd[1577]: time="2025-06-20T19:30:17.648150477Z" level=error msg="ContainerStatus for \"534592dc240f5dd6344871f534417b87dff006328293a6186876ce65e2f97db8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"534592dc240f5dd6344871f534417b87dff006328293a6186876ce65e2f97db8\": not found" Jun 20 19:30:17.648382 kubelet[2682]: E0620 19:30:17.648339 2682 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"534592dc240f5dd6344871f534417b87dff006328293a6186876ce65e2f97db8\": not found" containerID="534592dc240f5dd6344871f534417b87dff006328293a6186876ce65e2f97db8" Jun 20 19:30:17.648423 kubelet[2682]: I0620 19:30:17.648386 2682 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"534592dc240f5dd6344871f534417b87dff006328293a6186876ce65e2f97db8"} err="failed to get container status \"534592dc240f5dd6344871f534417b87dff006328293a6186876ce65e2f97db8\": rpc error: code = NotFound desc = an error occurred when try to find container \"534592dc240f5dd6344871f534417b87dff006328293a6186876ce65e2f97db8\": not found" Jun 20 19:30:17.648423 kubelet[2682]: I0620 19:30:17.648415 2682 scope.go:117] "RemoveContainer" containerID="b71e1fd6290ebe02596efe0277d598fe934e08c2d3d1ba11da249ca80cbb188f" Jun 20 19:30:17.648668 containerd[1577]: time="2025-06-20T19:30:17.648629028Z" level=error msg="ContainerStatus for 
\"b71e1fd6290ebe02596efe0277d598fe934e08c2d3d1ba11da249ca80cbb188f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b71e1fd6290ebe02596efe0277d598fe934e08c2d3d1ba11da249ca80cbb188f\": not found" Jun 20 19:30:17.648847 kubelet[2682]: E0620 19:30:17.648799 2682 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b71e1fd6290ebe02596efe0277d598fe934e08c2d3d1ba11da249ca80cbb188f\": not found" containerID="b71e1fd6290ebe02596efe0277d598fe934e08c2d3d1ba11da249ca80cbb188f" Jun 20 19:30:17.648895 kubelet[2682]: I0620 19:30:17.648859 2682 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b71e1fd6290ebe02596efe0277d598fe934e08c2d3d1ba11da249ca80cbb188f"} err="failed to get container status \"b71e1fd6290ebe02596efe0277d598fe934e08c2d3d1ba11da249ca80cbb188f\": rpc error: code = NotFound desc = an error occurred when try to find container \"b71e1fd6290ebe02596efe0277d598fe934e08c2d3d1ba11da249ca80cbb188f\": not found" Jun 20 19:30:17.648895 kubelet[2682]: I0620 19:30:17.648891 2682 scope.go:117] "RemoveContainer" containerID="52c7db4b75f39ea35a78cfde563ba9b64fbef0938a12a7f8f4fc9dd43870fd88" Jun 20 19:30:17.649236 containerd[1577]: time="2025-06-20T19:30:17.649188949Z" level=error msg="ContainerStatus for \"52c7db4b75f39ea35a78cfde563ba9b64fbef0938a12a7f8f4fc9dd43870fd88\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"52c7db4b75f39ea35a78cfde563ba9b64fbef0938a12a7f8f4fc9dd43870fd88\": not found" Jun 20 19:30:17.649370 kubelet[2682]: E0620 19:30:17.649345 2682 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"52c7db4b75f39ea35a78cfde563ba9b64fbef0938a12a7f8f4fc9dd43870fd88\": not found" 
containerID="52c7db4b75f39ea35a78cfde563ba9b64fbef0938a12a7f8f4fc9dd43870fd88" Jun 20 19:30:17.649417 kubelet[2682]: I0620 19:30:17.649367 2682 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"52c7db4b75f39ea35a78cfde563ba9b64fbef0938a12a7f8f4fc9dd43870fd88"} err="failed to get container status \"52c7db4b75f39ea35a78cfde563ba9b64fbef0938a12a7f8f4fc9dd43870fd88\": rpc error: code = NotFound desc = an error occurred when try to find container \"52c7db4b75f39ea35a78cfde563ba9b64fbef0938a12a7f8f4fc9dd43870fd88\": not found" Jun 20 19:30:17.649417 kubelet[2682]: I0620 19:30:17.649381 2682 scope.go:117] "RemoveContainer" containerID="e5c065bbb697d2794a98b2d0f08868bf7ffc5939e471325df32a202725acfc79" Jun 20 19:30:17.649567 containerd[1577]: time="2025-06-20T19:30:17.649534120Z" level=error msg="ContainerStatus for \"e5c065bbb697d2794a98b2d0f08868bf7ffc5939e471325df32a202725acfc79\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e5c065bbb697d2794a98b2d0f08868bf7ffc5939e471325df32a202725acfc79\": not found" Jun 20 19:30:17.649735 kubelet[2682]: E0620 19:30:17.649707 2682 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e5c065bbb697d2794a98b2d0f08868bf7ffc5939e471325df32a202725acfc79\": not found" containerID="e5c065bbb697d2794a98b2d0f08868bf7ffc5939e471325df32a202725acfc79" Jun 20 19:30:17.649777 kubelet[2682]: I0620 19:30:17.649743 2682 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e5c065bbb697d2794a98b2d0f08868bf7ffc5939e471325df32a202725acfc79"} err="failed to get container status \"e5c065bbb697d2794a98b2d0f08868bf7ffc5939e471325df32a202725acfc79\": rpc error: code = NotFound desc = an error occurred when try to find container \"e5c065bbb697d2794a98b2d0f08868bf7ffc5939e471325df32a202725acfc79\": not found" Jun 20 
19:30:17.649777 kubelet[2682]: I0620 19:30:17.649774 2682 scope.go:117] "RemoveContainer" containerID="99f5f47a933bd649cfaf960ce98234e811e155f8e954605bebdbd658558db1f4" Jun 20 19:30:17.650037 containerd[1577]: time="2025-06-20T19:30:17.649990088Z" level=error msg="ContainerStatus for \"99f5f47a933bd649cfaf960ce98234e811e155f8e954605bebdbd658558db1f4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"99f5f47a933bd649cfaf960ce98234e811e155f8e954605bebdbd658558db1f4\": not found" Jun 20 19:30:17.650173 kubelet[2682]: E0620 19:30:17.650138 2682 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"99f5f47a933bd649cfaf960ce98234e811e155f8e954605bebdbd658558db1f4\": not found" containerID="99f5f47a933bd649cfaf960ce98234e811e155f8e954605bebdbd658558db1f4" Jun 20 19:30:17.650226 kubelet[2682]: I0620 19:30:17.650175 2682 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"99f5f47a933bd649cfaf960ce98234e811e155f8e954605bebdbd658558db1f4"} err="failed to get container status \"99f5f47a933bd649cfaf960ce98234e811e155f8e954605bebdbd658558db1f4\": rpc error: code = NotFound desc = an error occurred when try to find container \"99f5f47a933bd649cfaf960ce98234e811e155f8e954605bebdbd658558db1f4\": not found" Jun 20 19:30:18.302648 kubelet[2682]: I0620 19:30:18.302593 2682 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85efc7db-489b-427b-acda-5c79ac2e7a72" path="/var/lib/kubelet/pods/85efc7db-489b-427b-acda-5c79ac2e7a72/volumes" Jun 20 19:30:18.303219 kubelet[2682]: I0620 19:30:18.303180 2682 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf6fcb13-d682-471c-ba56-4cc814165f9b" path="/var/lib/kubelet/pods/bf6fcb13-d682-471c-ba56-4cc814165f9b/volumes" Jun 20 19:30:18.424463 sshd[4318]: Connection closed by 10.0.0.1 port 51798 Jun 20 19:30:18.425079 
sshd-session[4316]: pam_unix(sshd:session): session closed for user core Jun 20 19:30:18.440108 systemd[1]: sshd@24-10.0.0.126:22-10.0.0.1:51798.service: Deactivated successfully. Jun 20 19:30:18.442394 systemd[1]: session-25.scope: Deactivated successfully. Jun 20 19:30:18.443468 systemd-logind[1547]: Session 25 logged out. Waiting for processes to exit. Jun 20 19:30:18.446787 systemd[1]: Started sshd@25-10.0.0.126:22-10.0.0.1:51804.service - OpenSSH per-connection server daemon (10.0.0.1:51804). Jun 20 19:30:18.447534 systemd-logind[1547]: Removed session 25. Jun 20 19:30:18.503175 sshd[4468]: Accepted publickey for core from 10.0.0.1 port 51804 ssh2: RSA SHA256:YoQ4GiRtY5Hu7FaS/OnNYeCnsR+r8YS2g6Qh7XD/NPE Jun 20 19:30:18.504588 sshd-session[4468]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:30:18.509149 systemd-logind[1547]: New session 26 of user core. Jun 20 19:30:18.517959 systemd[1]: Started session-26.scope - Session 26 of User core. Jun 20 19:30:18.767761 kubelet[2682]: E0620 19:30:18.767692 2682 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jun 20 19:30:18.941093 sshd[4470]: Connection closed by 10.0.0.1 port 51804 Jun 20 19:30:18.942126 sshd-session[4468]: pam_unix(sshd:session): session closed for user core Jun 20 19:30:18.954705 kubelet[2682]: I0620 19:30:18.954638 2682 memory_manager.go:355] "RemoveStaleState removing state" podUID="85efc7db-489b-427b-acda-5c79ac2e7a72" containerName="cilium-operator" Jun 20 19:30:18.954705 kubelet[2682]: I0620 19:30:18.954674 2682 memory_manager.go:355] "RemoveStaleState removing state" podUID="bf6fcb13-d682-471c-ba56-4cc814165f9b" containerName="cilium-agent" Jun 20 19:30:18.955721 systemd[1]: sshd@25-10.0.0.126:22-10.0.0.1:51804.service: Deactivated successfully. 
Jun 20 19:30:18.960709 systemd[1]: session-26.scope: Deactivated successfully. Jun 20 19:30:18.963809 systemd-logind[1547]: Session 26 logged out. Waiting for processes to exit. Jun 20 19:30:18.971007 systemd[1]: Started sshd@26-10.0.0.126:22-10.0.0.1:51808.service - OpenSSH per-connection server daemon (10.0.0.1:51808). Jun 20 19:30:18.974195 systemd-logind[1547]: Removed session 26. Jun 20 19:30:18.974998 kubelet[2682]: W0620 19:30:18.974806 2682 reflector.go:569] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Jun 20 19:30:18.975134 kubelet[2682]: E0620 19:30:18.975104 2682 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-clustermesh\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:localhost\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Jun 20 19:30:18.980776 systemd[1]: Created slice kubepods-burstable-pod3d1bf63f_703c_4f71_9501_590141af69dc.slice - libcontainer container kubepods-burstable-pod3d1bf63f_703c_4f71_9501_590141af69dc.slice. Jun 20 19:30:19.026725 sshd[4482]: Accepted publickey for core from 10.0.0.1 port 51808 ssh2: RSA SHA256:YoQ4GiRtY5Hu7FaS/OnNYeCnsR+r8YS2g6Qh7XD/NPE Jun 20 19:30:19.028536 sshd-session[4482]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:30:19.033435 systemd-logind[1547]: New session 27 of user core. Jun 20 19:30:19.041015 systemd[1]: Started session-27.scope - Session 27 of User core. 
Jun 20 19:30:19.093046 kubelet[2682]: I0620 19:30:19.092996 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3d1bf63f-703c-4f71-9501-590141af69dc-cni-path\") pod \"cilium-z5q6s\" (UID: \"3d1bf63f-703c-4f71-9501-590141af69dc\") " pod="kube-system/cilium-z5q6s" Jun 20 19:30:19.093046 kubelet[2682]: I0620 19:30:19.093052 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/3d1bf63f-703c-4f71-9501-590141af69dc-cilium-ipsec-secrets\") pod \"cilium-z5q6s\" (UID: \"3d1bf63f-703c-4f71-9501-590141af69dc\") " pod="kube-system/cilium-z5q6s" Jun 20 19:30:19.093195 kubelet[2682]: I0620 19:30:19.093080 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3d1bf63f-703c-4f71-9501-590141af69dc-hubble-tls\") pod \"cilium-z5q6s\" (UID: \"3d1bf63f-703c-4f71-9501-590141af69dc\") " pod="kube-system/cilium-z5q6s" Jun 20 19:30:19.093195 kubelet[2682]: I0620 19:30:19.093107 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3d1bf63f-703c-4f71-9501-590141af69dc-bpf-maps\") pod \"cilium-z5q6s\" (UID: \"3d1bf63f-703c-4f71-9501-590141af69dc\") " pod="kube-system/cilium-z5q6s" Jun 20 19:30:19.093195 kubelet[2682]: I0620 19:30:19.093127 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3d1bf63f-703c-4f71-9501-590141af69dc-cilium-cgroup\") pod \"cilium-z5q6s\" (UID: \"3d1bf63f-703c-4f71-9501-590141af69dc\") " pod="kube-system/cilium-z5q6s" Jun 20 19:30:19.093195 kubelet[2682]: I0620 19:30:19.093146 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3d1bf63f-703c-4f71-9501-590141af69dc-etc-cni-netd\") pod \"cilium-z5q6s\" (UID: \"3d1bf63f-703c-4f71-9501-590141af69dc\") " pod="kube-system/cilium-z5q6s" Jun 20 19:30:19.093195 kubelet[2682]: I0620 19:30:19.093166 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3d1bf63f-703c-4f71-9501-590141af69dc-host-proc-sys-net\") pod \"cilium-z5q6s\" (UID: \"3d1bf63f-703c-4f71-9501-590141af69dc\") " pod="kube-system/cilium-z5q6s" Jun 20 19:30:19.093324 kubelet[2682]: I0620 19:30:19.093247 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3d1bf63f-703c-4f71-9501-590141af69dc-cilium-config-path\") pod \"cilium-z5q6s\" (UID: \"3d1bf63f-703c-4f71-9501-590141af69dc\") " pod="kube-system/cilium-z5q6s" Jun 20 19:30:19.093324 kubelet[2682]: I0620 19:30:19.093283 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3d1bf63f-703c-4f71-9501-590141af69dc-hostproc\") pod \"cilium-z5q6s\" (UID: \"3d1bf63f-703c-4f71-9501-590141af69dc\") " pod="kube-system/cilium-z5q6s" Jun 20 19:30:19.093324 kubelet[2682]: I0620 19:30:19.093318 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3d1bf63f-703c-4f71-9501-590141af69dc-xtables-lock\") pod \"cilium-z5q6s\" (UID: \"3d1bf63f-703c-4f71-9501-590141af69dc\") " pod="kube-system/cilium-z5q6s" Jun 20 19:30:19.093396 kubelet[2682]: I0620 19:30:19.093343 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3d1bf63f-703c-4f71-9501-590141af69dc-cilium-run\") pod 
\"cilium-z5q6s\" (UID: \"3d1bf63f-703c-4f71-9501-590141af69dc\") " pod="kube-system/cilium-z5q6s"
Jun 20 19:30:19.093396 kubelet[2682]: I0620 19:30:19.093368 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3d1bf63f-703c-4f71-9501-590141af69dc-lib-modules\") pod \"cilium-z5q6s\" (UID: \"3d1bf63f-703c-4f71-9501-590141af69dc\") " pod="kube-system/cilium-z5q6s"
Jun 20 19:30:19.093396 kubelet[2682]: I0620 19:30:19.093390 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3d1bf63f-703c-4f71-9501-590141af69dc-clustermesh-secrets\") pod \"cilium-z5q6s\" (UID: \"3d1bf63f-703c-4f71-9501-590141af69dc\") " pod="kube-system/cilium-z5q6s"
Jun 20 19:30:19.093467 kubelet[2682]: I0620 19:30:19.093413 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3d1bf63f-703c-4f71-9501-590141af69dc-host-proc-sys-kernel\") pod \"cilium-z5q6s\" (UID: \"3d1bf63f-703c-4f71-9501-590141af69dc\") " pod="kube-system/cilium-z5q6s"
Jun 20 19:30:19.093467 kubelet[2682]: I0620 19:30:19.093440 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrq4t\" (UniqueName: \"kubernetes.io/projected/3d1bf63f-703c-4f71-9501-590141af69dc-kube-api-access-hrq4t\") pod \"cilium-z5q6s\" (UID: \"3d1bf63f-703c-4f71-9501-590141af69dc\") " pod="kube-system/cilium-z5q6s"
Jun 20 19:30:19.093609 sshd[4484]: Connection closed by 10.0.0.1 port 51808
Jun 20 19:30:19.094176 sshd-session[4482]: pam_unix(sshd:session): session closed for user core
Jun 20 19:30:19.107243 systemd[1]: sshd@26-10.0.0.126:22-10.0.0.1:51808.service: Deactivated successfully.
Jun 20 19:30:19.109616 systemd[1]: session-27.scope: Deactivated successfully.
Jun 20 19:30:19.110574 systemd-logind[1547]: Session 27 logged out. Waiting for processes to exit.
Jun 20 19:30:19.115348 systemd[1]: Started sshd@27-10.0.0.126:22-10.0.0.1:51820.service - OpenSSH per-connection server daemon (10.0.0.1:51820).
Jun 20 19:30:19.116322 systemd-logind[1547]: Removed session 27.
Jun 20 19:30:19.178020 sshd[4491]: Accepted publickey for core from 10.0.0.1 port 51820 ssh2: RSA SHA256:YoQ4GiRtY5Hu7FaS/OnNYeCnsR+r8YS2g6Qh7XD/NPE
Jun 20 19:30:19.179490 sshd-session[4491]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:30:19.184594 systemd-logind[1547]: New session 28 of user core.
Jun 20 19:30:19.197090 systemd[1]: Started session-28.scope - Session 28 of User core.
Jun 20 19:30:20.191903 kubelet[2682]: E0620 19:30:20.191855 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 20 19:30:20.192585 containerd[1577]: time="2025-06-20T19:30:20.192544537Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-z5q6s,Uid:3d1bf63f-703c-4f71-9501-590141af69dc,Namespace:kube-system,Attempt:0,}"
Jun 20 19:30:20.211506 containerd[1577]: time="2025-06-20T19:30:20.211460405Z" level=info msg="connecting to shim 535663ae04f543c16ca050c7f7655d367f239158baf660de25cd92c7e517df37" address="unix:///run/containerd/s/db6717cbfead0e79de7a65ad274aa824955207d10f94d517f226a5af86b9f16e" namespace=k8s.io protocol=ttrpc version=3
Jun 20 19:30:20.241070 systemd[1]: Started cri-containerd-535663ae04f543c16ca050c7f7655d367f239158baf660de25cd92c7e517df37.scope - libcontainer container 535663ae04f543c16ca050c7f7655d367f239158baf660de25cd92c7e517df37.
Jun 20 19:30:20.267586 containerd[1577]: time="2025-06-20T19:30:20.267546096Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-z5q6s,Uid:3d1bf63f-703c-4f71-9501-590141af69dc,Namespace:kube-system,Attempt:0,} returns sandbox id \"535663ae04f543c16ca050c7f7655d367f239158baf660de25cd92c7e517df37\""
Jun 20 19:30:20.268350 kubelet[2682]: E0620 19:30:20.268308 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 20 19:30:20.271027 containerd[1577]: time="2025-06-20T19:30:20.270971049Z" level=info msg="CreateContainer within sandbox \"535663ae04f543c16ca050c7f7655d367f239158baf660de25cd92c7e517df37\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jun 20 19:30:20.280089 containerd[1577]: time="2025-06-20T19:30:20.280036322Z" level=info msg="Container 02f8fb2ffa68655187fbc0ffe157aed28548217b7d2f3a594a8dde6fc3f555cd: CDI devices from CRI Config.CDIDevices: []"
Jun 20 19:30:20.283838 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3819125857.mount: Deactivated successfully.
Jun 20 19:30:20.288446 containerd[1577]: time="2025-06-20T19:30:20.288409042Z" level=info msg="CreateContainer within sandbox \"535663ae04f543c16ca050c7f7655d367f239158baf660de25cd92c7e517df37\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"02f8fb2ffa68655187fbc0ffe157aed28548217b7d2f3a594a8dde6fc3f555cd\""
Jun 20 19:30:20.288929 containerd[1577]: time="2025-06-20T19:30:20.288903905Z" level=info msg="StartContainer for \"02f8fb2ffa68655187fbc0ffe157aed28548217b7d2f3a594a8dde6fc3f555cd\""
Jun 20 19:30:20.289717 containerd[1577]: time="2025-06-20T19:30:20.289682172Z" level=info msg="connecting to shim 02f8fb2ffa68655187fbc0ffe157aed28548217b7d2f3a594a8dde6fc3f555cd" address="unix:///run/containerd/s/db6717cbfead0e79de7a65ad274aa824955207d10f94d517f226a5af86b9f16e" protocol=ttrpc version=3
Jun 20 19:30:20.314013 systemd[1]: Started cri-containerd-02f8fb2ffa68655187fbc0ffe157aed28548217b7d2f3a594a8dde6fc3f555cd.scope - libcontainer container 02f8fb2ffa68655187fbc0ffe157aed28548217b7d2f3a594a8dde6fc3f555cd.
Jun 20 19:30:20.344515 containerd[1577]: time="2025-06-20T19:30:20.344474112Z" level=info msg="StartContainer for \"02f8fb2ffa68655187fbc0ffe157aed28548217b7d2f3a594a8dde6fc3f555cd\" returns successfully"
Jun 20 19:30:20.354188 systemd[1]: cri-containerd-02f8fb2ffa68655187fbc0ffe157aed28548217b7d2f3a594a8dde6fc3f555cd.scope: Deactivated successfully.
Jun 20 19:30:20.355932 containerd[1577]: time="2025-06-20T19:30:20.355804997Z" level=info msg="received exit event container_id:\"02f8fb2ffa68655187fbc0ffe157aed28548217b7d2f3a594a8dde6fc3f555cd\" id:\"02f8fb2ffa68655187fbc0ffe157aed28548217b7d2f3a594a8dde6fc3f555cd\" pid:4563 exited_at:{seconds:1750447820 nanos:355042970}"
Jun 20 19:30:20.356082 containerd[1577]: time="2025-06-20T19:30:20.355809125Z" level=info msg="TaskExit event in podsandbox handler container_id:\"02f8fb2ffa68655187fbc0ffe157aed28548217b7d2f3a594a8dde6fc3f555cd\" id:\"02f8fb2ffa68655187fbc0ffe157aed28548217b7d2f3a594a8dde6fc3f555cd\" pid:4563 exited_at:{seconds:1750447820 nanos:355042970}"
Jun 20 19:30:20.538276 kubelet[2682]: E0620 19:30:20.538105 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 20 19:30:20.541393 containerd[1577]: time="2025-06-20T19:30:20.541342495Z" level=info msg="CreateContainer within sandbox \"535663ae04f543c16ca050c7f7655d367f239158baf660de25cd92c7e517df37\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jun 20 19:30:20.599143 containerd[1577]: time="2025-06-20T19:30:20.599075737Z" level=info msg="Container 739eb0d44139d2e5740585a2246d0549ac689dc4379702ea1d6131210a98ff13: CDI devices from CRI Config.CDIDevices: []"
Jun 20 19:30:20.711626 containerd[1577]: time="2025-06-20T19:30:20.711560109Z" level=info msg="CreateContainer within sandbox \"535663ae04f543c16ca050c7f7655d367f239158baf660de25cd92c7e517df37\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"739eb0d44139d2e5740585a2246d0549ac689dc4379702ea1d6131210a98ff13\""
Jun 20 19:30:20.712775 containerd[1577]: time="2025-06-20T19:30:20.712697920Z" level=info msg="StartContainer for \"739eb0d44139d2e5740585a2246d0549ac689dc4379702ea1d6131210a98ff13\""
Jun 20 19:30:20.713843 containerd[1577]: time="2025-06-20T19:30:20.713779372Z" level=info msg="connecting to shim 739eb0d44139d2e5740585a2246d0549ac689dc4379702ea1d6131210a98ff13" address="unix:///run/containerd/s/db6717cbfead0e79de7a65ad274aa824955207d10f94d517f226a5af86b9f16e" protocol=ttrpc version=3
Jun 20 19:30:20.738027 systemd[1]: Started cri-containerd-739eb0d44139d2e5740585a2246d0549ac689dc4379702ea1d6131210a98ff13.scope - libcontainer container 739eb0d44139d2e5740585a2246d0549ac689dc4379702ea1d6131210a98ff13.
Jun 20 19:30:20.776437 containerd[1577]: time="2025-06-20T19:30:20.776397017Z" level=info msg="StartContainer for \"739eb0d44139d2e5740585a2246d0549ac689dc4379702ea1d6131210a98ff13\" returns successfully"
Jun 20 19:30:20.782461 systemd[1]: cri-containerd-739eb0d44139d2e5740585a2246d0549ac689dc4379702ea1d6131210a98ff13.scope: Deactivated successfully.
Jun 20 19:30:20.783041 containerd[1577]: time="2025-06-20T19:30:20.782967948Z" level=info msg="received exit event container_id:\"739eb0d44139d2e5740585a2246d0549ac689dc4379702ea1d6131210a98ff13\" id:\"739eb0d44139d2e5740585a2246d0549ac689dc4379702ea1d6131210a98ff13\" pid:4608 exited_at:{seconds:1750447820 nanos:782641700}"
Jun 20 19:30:20.783228 containerd[1577]: time="2025-06-20T19:30:20.783188432Z" level=info msg="TaskExit event in podsandbox handler container_id:\"739eb0d44139d2e5740585a2246d0549ac689dc4379702ea1d6131210a98ff13\" id:\"739eb0d44139d2e5740585a2246d0549ac689dc4379702ea1d6131210a98ff13\" pid:4608 exited_at:{seconds:1750447820 nanos:782641700}"
Jun 20 19:30:20.820011 kubelet[2682]: I0620 19:30:20.819862 2682 setters.go:602] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-06-20T19:30:20Z","lastTransitionTime":"2025-06-20T19:30:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jun 20 19:30:21.208386 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-02f8fb2ffa68655187fbc0ffe157aed28548217b7d2f3a594a8dde6fc3f555cd-rootfs.mount: Deactivated successfully.
Jun 20 19:30:21.546437 kubelet[2682]: E0620 19:30:21.546036 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 20 19:30:21.552116 containerd[1577]: time="2025-06-20T19:30:21.551643195Z" level=info msg="CreateContainer within sandbox \"535663ae04f543c16ca050c7f7655d367f239158baf660de25cd92c7e517df37\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jun 20 19:30:21.606012 containerd[1577]: time="2025-06-20T19:30:21.604324786Z" level=info msg="Container 63376eaa6cf436d129689850aae8895850bf7239699342fabb4901ed7937a297: CDI devices from CRI Config.CDIDevices: []"
Jun 20 19:30:21.618167 containerd[1577]: time="2025-06-20T19:30:21.618107509Z" level=info msg="CreateContainer within sandbox \"535663ae04f543c16ca050c7f7655d367f239158baf660de25cd92c7e517df37\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"63376eaa6cf436d129689850aae8895850bf7239699342fabb4901ed7937a297\""
Jun 20 19:30:21.618701 containerd[1577]: time="2025-06-20T19:30:21.618657725Z" level=info msg="StartContainer for \"63376eaa6cf436d129689850aae8895850bf7239699342fabb4901ed7937a297\""
Jun 20 19:30:21.620172 containerd[1577]: time="2025-06-20T19:30:21.620129428Z" level=info msg="connecting to shim 63376eaa6cf436d129689850aae8895850bf7239699342fabb4901ed7937a297" address="unix:///run/containerd/s/db6717cbfead0e79de7a65ad274aa824955207d10f94d517f226a5af86b9f16e" protocol=ttrpc version=3
Jun 20 19:30:21.645128 systemd[1]: Started cri-containerd-63376eaa6cf436d129689850aae8895850bf7239699342fabb4901ed7937a297.scope - libcontainer container 63376eaa6cf436d129689850aae8895850bf7239699342fabb4901ed7937a297.
Jun 20 19:30:21.693317 systemd[1]: cri-containerd-63376eaa6cf436d129689850aae8895850bf7239699342fabb4901ed7937a297.scope: Deactivated successfully.
Jun 20 19:30:21.693872 containerd[1577]: time="2025-06-20T19:30:21.693811319Z" level=info msg="StartContainer for \"63376eaa6cf436d129689850aae8895850bf7239699342fabb4901ed7937a297\" returns successfully"
Jun 20 19:30:21.695129 containerd[1577]: time="2025-06-20T19:30:21.695093309Z" level=info msg="TaskExit event in podsandbox handler container_id:\"63376eaa6cf436d129689850aae8895850bf7239699342fabb4901ed7937a297\" id:\"63376eaa6cf436d129689850aae8895850bf7239699342fabb4901ed7937a297\" pid:4652 exited_at:{seconds:1750447821 nanos:694411201}"
Jun 20 19:30:21.695235 containerd[1577]: time="2025-06-20T19:30:21.695109280Z" level=info msg="received exit event container_id:\"63376eaa6cf436d129689850aae8895850bf7239699342fabb4901ed7937a297\" id:\"63376eaa6cf436d129689850aae8895850bf7239699342fabb4901ed7937a297\" pid:4652 exited_at:{seconds:1750447821 nanos:694411201}"
Jun 20 19:30:21.719055 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-63376eaa6cf436d129689850aae8895850bf7239699342fabb4901ed7937a297-rootfs.mount: Deactivated successfully.
Jun 20 19:30:22.551129 kubelet[2682]: E0620 19:30:22.551080 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 20 19:30:22.552794 containerd[1577]: time="2025-06-20T19:30:22.552751412Z" level=info msg="CreateContainer within sandbox \"535663ae04f543c16ca050c7f7655d367f239158baf660de25cd92c7e517df37\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jun 20 19:30:22.563620 containerd[1577]: time="2025-06-20T19:30:22.563465257Z" level=info msg="Container 4caa330c32473f84e7a69b5705fbb2ce28db5b307e28268ae77994edbcba2f22: CDI devices from CRI Config.CDIDevices: []"
Jun 20 19:30:22.572777 containerd[1577]: time="2025-06-20T19:30:22.572713230Z" level=info msg="CreateContainer within sandbox \"535663ae04f543c16ca050c7f7655d367f239158baf660de25cd92c7e517df37\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"4caa330c32473f84e7a69b5705fbb2ce28db5b307e28268ae77994edbcba2f22\""
Jun 20 19:30:22.573864 containerd[1577]: time="2025-06-20T19:30:22.573249074Z" level=info msg="StartContainer for \"4caa330c32473f84e7a69b5705fbb2ce28db5b307e28268ae77994edbcba2f22\""
Jun 20 19:30:22.574349 containerd[1577]: time="2025-06-20T19:30:22.574303600Z" level=info msg="connecting to shim 4caa330c32473f84e7a69b5705fbb2ce28db5b307e28268ae77994edbcba2f22" address="unix:///run/containerd/s/db6717cbfead0e79de7a65ad274aa824955207d10f94d517f226a5af86b9f16e" protocol=ttrpc version=3
Jun 20 19:30:22.601979 systemd[1]: Started cri-containerd-4caa330c32473f84e7a69b5705fbb2ce28db5b307e28268ae77994edbcba2f22.scope - libcontainer container 4caa330c32473f84e7a69b5705fbb2ce28db5b307e28268ae77994edbcba2f22.
Jun 20 19:30:22.630626 systemd[1]: cri-containerd-4caa330c32473f84e7a69b5705fbb2ce28db5b307e28268ae77994edbcba2f22.scope: Deactivated successfully.
Jun 20 19:30:22.631644 containerd[1577]: time="2025-06-20T19:30:22.631586807Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4caa330c32473f84e7a69b5705fbb2ce28db5b307e28268ae77994edbcba2f22\" id:\"4caa330c32473f84e7a69b5705fbb2ce28db5b307e28268ae77994edbcba2f22\" pid:4692 exited_at:{seconds:1750447822 nanos:631331249}"
Jun 20 19:30:22.632697 containerd[1577]: time="2025-06-20T19:30:22.632649009Z" level=info msg="received exit event container_id:\"4caa330c32473f84e7a69b5705fbb2ce28db5b307e28268ae77994edbcba2f22\" id:\"4caa330c32473f84e7a69b5705fbb2ce28db5b307e28268ae77994edbcba2f22\" pid:4692 exited_at:{seconds:1750447822 nanos:631331249}"
Jun 20 19:30:22.642470 containerd[1577]: time="2025-06-20T19:30:22.642441292Z" level=info msg="StartContainer for \"4caa330c32473f84e7a69b5705fbb2ce28db5b307e28268ae77994edbcba2f22\" returns successfully"
Jun 20 19:30:22.658212 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4caa330c32473f84e7a69b5705fbb2ce28db5b307e28268ae77994edbcba2f22-rootfs.mount: Deactivated successfully.
Jun 20 19:30:23.556729 kubelet[2682]: E0620 19:30:23.556687 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 20 19:30:23.558880 containerd[1577]: time="2025-06-20T19:30:23.558750456Z" level=info msg="CreateContainer within sandbox \"535663ae04f543c16ca050c7f7655d367f239158baf660de25cd92c7e517df37\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jun 20 19:30:23.576486 containerd[1577]: time="2025-06-20T19:30:23.576425960Z" level=info msg="Container 78782100418341c3ffcec5ed798c0da8251e600a2777d1db23a2dcd899ec9695: CDI devices from CRI Config.CDIDevices: []"
Jun 20 19:30:23.579648 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount139461537.mount: Deactivated successfully.
Jun 20 19:30:23.695071 containerd[1577]: time="2025-06-20T19:30:23.695012512Z" level=info msg="CreateContainer within sandbox \"535663ae04f543c16ca050c7f7655d367f239158baf660de25cd92c7e517df37\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"78782100418341c3ffcec5ed798c0da8251e600a2777d1db23a2dcd899ec9695\""
Jun 20 19:30:23.695672 containerd[1577]: time="2025-06-20T19:30:23.695609570Z" level=info msg="StartContainer for \"78782100418341c3ffcec5ed798c0da8251e600a2777d1db23a2dcd899ec9695\""
Jun 20 19:30:23.696639 containerd[1577]: time="2025-06-20T19:30:23.696613963Z" level=info msg="connecting to shim 78782100418341c3ffcec5ed798c0da8251e600a2777d1db23a2dcd899ec9695" address="unix:///run/containerd/s/db6717cbfead0e79de7a65ad274aa824955207d10f94d517f226a5af86b9f16e" protocol=ttrpc version=3
Jun 20 19:30:23.725950 systemd[1]: Started cri-containerd-78782100418341c3ffcec5ed798c0da8251e600a2777d1db23a2dcd899ec9695.scope - libcontainer container 78782100418341c3ffcec5ed798c0da8251e600a2777d1db23a2dcd899ec9695.
Jun 20 19:30:23.767157 containerd[1577]: time="2025-06-20T19:30:23.767020087Z" level=info msg="StartContainer for \"78782100418341c3ffcec5ed798c0da8251e600a2777d1db23a2dcd899ec9695\" returns successfully"
Jun 20 19:30:23.769078 kubelet[2682]: E0620 19:30:23.769009 2682 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jun 20 19:30:23.840236 containerd[1577]: time="2025-06-20T19:30:23.839892431Z" level=info msg="TaskExit event in podsandbox handler container_id:\"78782100418341c3ffcec5ed798c0da8251e600a2777d1db23a2dcd899ec9695\" id:\"0c30423558043f21ae7000f0ddb5e999c3cc05629c700fbff501be43a654e3e7\" pid:4759 exited_at:{seconds:1750447823 nanos:839465859}"
Jun 20 19:30:24.209870 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx))
Jun 20 19:30:24.564030 kubelet[2682]: E0620 19:30:24.563861 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 20 19:30:25.536547 containerd[1577]: time="2025-06-20T19:30:25.536475369Z" level=info msg="TaskExit event in podsandbox handler container_id:\"78782100418341c3ffcec5ed798c0da8251e600a2777d1db23a2dcd899ec9695\" id:\"6a9071625039a0d60056e862d6c493a376598f84f3914bbe003ccdbc3fd95d57\" pid:4837 exit_status:1 exited_at:{seconds:1750447825 nanos:536029014}"
Jun 20 19:30:26.194478 kubelet[2682]: E0620 19:30:26.194174 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 20 19:30:26.300861 kubelet[2682]: E0620 19:30:26.300720 2682 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-2c4kh" podUID="d3592af3-cfdf-46f2-9805-86da86c425fc"
Jun 20 19:30:27.339452 systemd-networkd[1483]: lxc_health: Link UP
Jun 20 19:30:27.342855 systemd-networkd[1483]: lxc_health: Gained carrier
Jun 20 19:30:27.656527 containerd[1577]: time="2025-06-20T19:30:27.656385514Z" level=info msg="TaskExit event in podsandbox handler container_id:\"78782100418341c3ffcec5ed798c0da8251e600a2777d1db23a2dcd899ec9695\" id:\"40446635769f037de9d9d99347bbf13887769449addc0f36b471a8881b1a3981\" pid:5286 exited_at:{seconds:1750447827 nanos:656034564}"
Jun 20 19:30:28.193863 kubelet[2682]: E0620 19:30:28.193534 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 20 19:30:28.209844 kubelet[2682]: I0620 19:30:28.209535 2682 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-z5q6s" podStartSLOduration=10.209518646 podStartE2EDuration="10.209518646s" podCreationTimestamp="2025-06-20 19:30:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:30:24.589996531 +0000 UTC m=+96.373617972" watchObservedRunningTime="2025-06-20 19:30:28.209518646 +0000 UTC m=+99.993140087"
Jun 20 19:30:28.300010 kubelet[2682]: E0620 19:30:28.299931 2682 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-2c4kh" podUID="d3592af3-cfdf-46f2-9805-86da86c425fc"
Jun 20 19:30:28.300629 kubelet[2682]: E0620 19:30:28.300594 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 20 19:30:28.571503 kubelet[2682]: E0620 19:30:28.571360 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 20 19:30:28.937139 systemd-networkd[1483]: lxc_health: Gained IPv6LL
Jun 20 19:30:29.574248 kubelet[2682]: E0620 19:30:29.574172 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 20 19:30:29.783952 containerd[1577]: time="2025-06-20T19:30:29.783898648Z" level=info msg="TaskExit event in podsandbox handler container_id:\"78782100418341c3ffcec5ed798c0da8251e600a2777d1db23a2dcd899ec9695\" id:\"7ae8dde3d21da0e3b4ef6d1b08a5db78d16476e2494bfa9df6e273f12243d56c\" pid:5326 exited_at:{seconds:1750447829 nanos:782446630}"
Jun 20 19:30:30.301580 kubelet[2682]: E0620 19:30:30.301494 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 20 19:30:31.880701 containerd[1577]: time="2025-06-20T19:30:31.880623292Z" level=info msg="TaskExit event in podsandbox handler container_id:\"78782100418341c3ffcec5ed798c0da8251e600a2777d1db23a2dcd899ec9695\" id:\"fa3f8a2715a5bbf3fe2a9bf4c9b442f48584fdd1ae5346275053d15b740bf71b\" pid:5359 exited_at:{seconds:1750447831 nanos:880149931}"
Jun 20 19:30:33.300848 kubelet[2682]: E0620 19:30:33.300762 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 20 19:30:33.967395 containerd[1577]: time="2025-06-20T19:30:33.967323830Z" level=info msg="TaskExit event in podsandbox handler container_id:\"78782100418341c3ffcec5ed798c0da8251e600a2777d1db23a2dcd899ec9695\" id:\"c7edb0a9cf2c8d220eb918707913b8a58ad81b470edf19278ed9e90791360f71\" pid:5383 exited_at:{seconds:1750447833 nanos:966958413}"
Jun 20 19:30:33.986153 sshd[4496]: Connection closed by 10.0.0.1 port 51820
Jun 20 19:30:33.986672 sshd-session[4491]: pam_unix(sshd:session): session closed for user core
Jun 20 19:30:33.991660 systemd[1]: sshd@27-10.0.0.126:22-10.0.0.1:51820.service: Deactivated successfully.
Jun 20 19:30:33.993871 systemd[1]: session-28.scope: Deactivated successfully.
Jun 20 19:30:33.994720 systemd-logind[1547]: Session 28 logged out. Waiting for processes to exit.
Jun 20 19:30:33.996165 systemd-logind[1547]: Removed session 28.